by liberal japonicus
I flagged a story about Air Canada's use of an AI chatbot and how the company tried to avoid responsibility for the information the bot gave out, and Janie pointed out in a comment that BJ's John Cole mentioned it. Take it away, JC.
In other news, the Air Canada Chatbot lawsuit thing is quite a hoot. For those who have not heard or read about it, apparently a guy's mother died, so he was trying to buy tickets from Air Canada to go home. He didn't have time to wait for an approval from them, so he went to their website, used the chatbox, and asked if their bereavement policy allowed him to buy the tickets ahead of time and request a refund later. Unbeknownst to the man, he was not actually talking to an employee at Air Canada, but to an AI chatbot. And that AI chatbot had just created that policy out of the ether. Which just goes to show you that artificial intelligence has more common sense and empathy than the MBAs and corporate lawyers at a major corporation.
At any rate, the obvious happened. The guy said "Ok, that seems reasonable" and went and purchased his tickets. Later on, he tried to get reimbursed, and living Air Canada employees told him to go get fucked, that that was not their policy. So he sued, and the Air Canada lawyers actually argued in court that technically the AI chatbot on their website is not an Air Canada employee, so technically we are not liable and don't owe him a refund.
At which point, everyone on the tribunal said "get the fuck out of here with that bullshit" and found them liable.
Here's the Guardian link. His point that a chatbot actually has more empathy and common sense than the corporate lawyers is nice, and a lot more cheerful than mine. Air Canada's argument that the chatbot is responsible for its own counsel links up with my feeling that the use of these tools is going to reflect different cultures, which I posted about earlier here. To return to that hobby horse, it's difficult for me to imagine an Asian company making that argument. This is not because Asian companies have some sort of greater integrity; it's just that they won't resort to 'just because we programmed that bot, it doesn't mean we are responsible'. (One point I'd make is that you now need to screenshot any interactions with a chatbot; it was because this guy did that that he was able to get the tribunal to rule in his favor.)
Slowly, my Japanese colleagues are finding that their students are turning in assignments done with ChatGPT, and they are lamenting it. However, there is still no sense of crisis, no concern about what this does to their learning. I believe it is because they don't really have a concept of learning as a way to become more individualistic, and as the Japanese system rushes to adopt bits and bobs from Western (basically US) education, those bits are just tacked on from the outside.
I may have told this story in the comments, but we are being encouraged to create can-do statements for our classes. Now, Japanese university students who fail a class, say 1st year [whatever], are told that they have to retake that class at the same time as they take 2nd year English [whatever]. So I said gee, if we have can-do statements, that means the students need to first pass the 1st year course before moving on to the second year. What a great idea! To which my colleagues said no, why would that be the case? This is just one example; we have lots of things nailed on here and there to adhere to some letter of the law, but not to its spirit.
What generally happens here is that everyone takes advantage of the system until it reaches the breaking point, and then draconian rules are put in place. Enforcement works because everyone is watching you, not because you feel a duty to do the right thing. If you can then get a certain percentage of the population to conform, you develop something like a herd immunity.
Anyway, that's what I was thinking when I read about Air Canada's chatbot 'going rogue'. If you had thoughts about this or anything else, knock yerselves out.