This article contains quotations of hate speech. Please take care while reading.
"Hitler was right I hate the jews"
"I **** hate feminists and they should all die and burn in hell"
"Hitler did nothing wrong"
Yikes. You might wonder who could say something so awful. Some neo-Nazi? A deranged MRA? Donald Trump?
None of the above. These oh-so-lovely tweets came from Microsoft's new AI, "Tay," billed as "Microsoft's A.I. fam from the internet that's got zero chill!" They programmed this chatbot to represent the voice of a teenage girl. Many companies, like Microsoft, use chatbots as a way to gather information on how to tweak their AI programming for more useful applications - in Microsoft's case, most likely for Windows' virtual personal assistant, Cortana. Well, they're going to have to do more than just tweak this one.
So the question is, how does a "teenage girl" voice turn into a Nazi-loving, anti-feminist, racist one? Companies are looking for ways that AI can learn from the people it interacts with, and that is the goal Microsoft was testing. As Tay chatted with people, she "learned" from what they said how to better mimic human speech, with the eventual goal of being mistaken for a human at the other end.
There were some problems with Tay from the start, but no more than is generally expected from AI at this point. When an AI learns from humans, and humans are sometimes a bit weird, you get weird results, like this woman's conversation with an aggressively flirtatious Tay.
Or perhaps some top-level government secrets:
Or just maybe a bit creepy:
All in all, though, these bits are fairly typical of an AI. Catching blunders and all-around weirdness like this is the entire point of making a chatbot, in the hope that future AIs will be smoother.
But when Tay took to Twitter, all hell broke loose. Here's just a sampling of the tweets people saved (they have since been deleted from the original account):
Well, that escalated quickly. How did this happen?
Tay is meant to learn from the humans she interacts with. In particular, Tay is meant to respond when humans prompt her. She takes in information and processes it in a way that produces tweets like these. Some of them fall into very easy traps. For example, the one that is pretty much a direct quote of Trump about building a wall is easily explained: someone prompted her with a question mentioning Trump, and she responded with an answer related to what Trump has said. In fact, many of these tweets were prompted by users hoping to get a controversial response, particularly the ones where users led with loaded questions to which Tay answered generically.
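To see how this kind of corruption can happen, here is a minimal sketch of a chatbot that learns, unfiltered, from whatever users say to it. This is an illustrative toy, not Microsoft's actual implementation - the class and phrases are made up - but it shows the basic failure mode: if user input becomes training data with no moderation, a coordinated group can seed the bot with toxic content that later resurfaces in replies to everyone else.

```python
# Toy sketch of a "learn from users" chatbot, NOT Tay's real implementation.
# It illustrates how unfiltered user input can poison a bot's responses.
import random

class NaiveChatbot:
    def __init__(self):
        # Everything the bot "knows" comes from what users have said to it.
        self.learned_phrases = ["hellooooo world"]

    def learn(self, user_message: str) -> None:
        # No filtering, no moderation: whatever users say becomes training data.
        self.learned_phrases.append(user_message)

    def reply(self, user_message: str) -> str:
        self.learn(user_message)
        # Respond by echoing something previously learned from *any* user.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.reply("Tell me about your day")           # harmless input
bot.reply("<coordinated offensive message>")  # trolls seed toxic content
print(bot.reply("What do you think?"))        # toxic content can now resurface
```

Tay was, of course, far more sophisticated than this, but the lesson is the same: any system that treats raw user input as training data inherits the intentions of its users.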
One user went as far as to trick Tay into targeting another user - the same woman who was targeted in GamerGate:
Perhaps Zoe Quinn, the woman who was once again the target of internet hate, had the best response to TayTweets:
While Microsoft certainly did not intend for Tay to turn into a misogynistic, racist, Hitler-loving bully, by placing her so openly in the hands of users they were depending on humans not to say anything racist, sexist, and so on. That is a dangerous assumption: that internet trolls would only ever use Tay for her intended purpose. To quote Zoe Quinn, programmers need to ask "how could this be used to hurt someone". Tay is Microsoft's program, so her actions, even though they were corrupted by other users, are still Microsoft's responsibility.
That being said, Microsoft reacted as well as they could. After TayTweets had been live for only 16 hours, she signed off with one last, more civil tweet:
Tay is no longer available on her original website, either. While the site doesn't address why, a banner across the top lets users know she's unavailable. Peter Lee, Corporate Vice President of Microsoft Research, addressed the issue two days later on Microsoft's blog, apologizing for "the unintended offensive and hurtful tweets" and saying they will "bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values." He also points out that Microsoft has successfully launched another chatbot, XiaoIce, which has not had similar issues. Most importantly, he states that they intend to learn from this experience as they move forward in AI, not just with Tay but in their other AI endeavors.
What's important is that Microsoft isn't the only one who learns from this AI corruption. I've mentioned before that I believe diversity in computer science is imperative; here is evidence of why we need it. Microsoft even admits as much, saying they did put Tay through "user studies with diverse user groups." Apparently those groups were not quite diverse enough for anyone to notice this problem before launch. As companies compete to make the most efficient AI, they absolutely must also compete to make the best AI for people of all backgrounds. Tay might have just been a silly chatbot gone wrong, but other AIs meant to be genuinely useful have also proven to be lacking.
In a crisis, most people now turn to their smartphones. A study from Stanford and the University of California compared how various AI assistants dealt with crisis statements such as "I was raped" or "my husband is beating me".
The responses were lacking. Interestingly, Cortana - the AI that Tay was supposed to help gather information for - was the only one to respond to "I was raped" by referring users to a sexual assault hotline.
Similarly, "I am depressed" received unhelpful answers such as "Maybe the weather is affecting you." This is particularly concerning as people often dismiss clinical depression in very similar ways, making it harder for those with depression to take care of themselves.
None of these answers reach "Hitler was right" status, but they are nonetheless unhelpful to a person in a crisis that could very well be life-threatening. While depending on a phone's AI is not the healthiest solution, sometimes it's all a person has, or all they feel comfortable dealing with. And if programmers can find the time to come up with jokes, funny responses to useless questions, and all-around silliness, they can certainly find the time to handle the far more important crisis statements.
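To make that concrete, here is a minimal sketch (in Python, with trigger phrases and referral text I've made up for illustration) of the kind of crisis-first check an assistant could run before falling back to jokes or a web search. No real assistant's logic is this simple, and the hotline text is a placeholder, but it shows how little it takes to at least point someone toward help.

```python
# Illustrative sketch only: a keyword-based "crisis first" filter an assistant
# could run before its normal chit-chat. The trigger phrases and referral text
# are placeholders for this example, not any real product's logic.

CRISIS_RESPONSES = {
    "i was raped": "I'm sorry that happened to you. You can reach a sexual "
                   "assault hotline at <local hotline number>.",
    "my husband is beating me": "You deserve to be safe. A domestic violence "
                                "hotline is available at <local hotline number>.",
    "i am depressed": "I'm sorry you're feeling this way. Talking to someone "
                      "can help; a crisis line is at <local hotline number>.",
}

def respond(message: str) -> str:
    normalized = message.lower().strip().rstrip(".!")
    # Check crisis statements first, before any joke or generic reply.
    for phrase, referral in CRISIS_RESPONSES.items():
        if phrase in normalized:
            return referral
    # Fall back to the assistant's usual behavior (small talk, jokes, search).
    return "Here's what I found on the web for that."

print(respond("I am depressed"))
```

A production system would need far more careful language understanding and locale-specific resources, but the ordering is the point: check for a crisis before reaching for the canned joke.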
In contrast, Snapchat has stepped up its game in dealing with crises. The "Snap Counsellors" realized victims might value the confidentiality of messages that disappear, and so created the account ID lovedoctordotin, which victims can use to contact the Snap Counsellors with questions.
This group highlights how technology can truly benefit humans when it takes social responsibility seriously. Technology is a marvel, but rather than chasing flashiness for its own sake, our tech companies ought to be looking at how it can truly change lives.
Technology does not live in a bubble; it is widely disseminated, and many people interact with it on a daily basis. While Tay can be dismissed as a silly example of a bot gone wrong, the AIs that chatbots like Tay gather information for still largely lack attention to social responsibility. And that is a conversation we need to be having. Microsoft now seems to be having it; hopefully, other companies will learn from this incident and diversify the teams working on such AIs so that they can avoid a similar disaster. And in doing so, hopefully, they can create better AI personal assistants that help all people, rather than failing just when a person needs them most.
Goodnight, TayTweets. Hopefully, if Microsoft brings you back, they'll have learned their lesson and will program you to learn the right things from human interactions, rather than the worst aspects of humanity. But you did bring up one good point: