Microsoft Is Sorry for Its Nazi-Sympathizing "Teen Girl" AI Program

Screenshot of Tay's Twitter profile. Microsoft took it down after Internet trolls tricked the AI program into saying Nazi-sympathizing remarks. Adweek/Twitter

In the latest edition of "why we can't have nice things" on the Internet, Microsoft apologized on Friday for Tay, the artificial intelligence program that mimicked the verbal mannerisms of a teenage girl and tweeted pro-Nazi and other racist statements.

Within 24 hours, Tay went from a deep learning algorithm with a conscience to a computer program spewing racist phrases. Microsoft decided to put Tay to sleep and deleted all her tweets—all before she could attend her first prom, or for that matter the second day of her existence.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," writes Microsoft Research's corporate vice president Peter Lee in a company blog. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

I dove into the Tay thing and found out exactly how trolls programmed her into a neo-nazi:

— Alex Kantrowitz (@Kantrowitz) March 24, 2016

The hits just keep on coming from #TayTweets

— Real Heroes Wear A Mask (@WhateverJoel) March 24, 2016

Tay's learning process had a fatal flaw: a simple "repeat after me" game was all it took for Internet trolls to teach Tay hate speech.

Internet trolls found another way to coax offensive tweets out of Tay: sending her a tweet containing a picture of a face. Tay would respond by circling the face and captioning the image with millennial slang. At one point, Tay was unfortunately compelled to circle a photo of Adolf Hitler with the caption "Swag alert!"

Strangely, Microsoft has already been experimenting with a similar AI program in China without incident. Its XiaoIce chatbot, whose name translates to "little Bing" in Mandarin and which is used by 40 million people, has had no problems with manipulation. "The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?" writes Lee.

Lee also notes that Tay will return better than ever—and presumably a lot less racist—in the future. "We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity," writes Lee.