Microsoft AI Twitter Bot ‘Tay’ Returns Briefly To Spam Followers
Microsoft’s artificial intelligence experiment known as Tay has been silenced for the second time in less than a week. While the chatbot avoided spouting genocidal and racist comments this time, it spammed its 200,000-plus followers with hundreds of near-identical tweets.
Microsoft got into hot water last week when Tay was tricked into making extremely controversial comments, including one that called President Barack Obama a “monkey” and another that denied the Holocaust ever happened.
The company was forced to apologize, delete the most controversial tweets and take the bot offline.
Tay returned on Wednesday, but within hours it was once again silenced by Microsoft, which stopped the bot from responding to comments and made the account private after it spewed hundreds of near-identical replies in what appeared to be a glitch in the system.
Before Tay began spewing out the Twitter spam, however, the chatbot appeared to have resumed making ill-judged comments, including one in which it admitted to smoking marijuana in front of police.
On Wednesday, the Twitter account @TayandYou began responding to tweets by saying: “You are too fast, please take a rest…” However, the chatbot placed its own Twitter handle at the beginning of each message, meaning every reply was itself a mention of the bot. Tay entered an endless loop, repeating the same phrase over and over, all in view of the account’s 214,000 followers.
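The feedback loop is easy to reproduce in miniature. The sketch below is purely illustrative and assumes a simplified mention-reply handler; the names, structure and canned reply are hypothetical stand-ins, not Microsoft’s actual code.

```python
# Hypothetical sketch of the self-reply loop described above. This is not
# Microsoft's code; it only illustrates how a reply handler that prefixes
# the bot's own handle ends up answering itself indefinitely.

BOT_HANDLE = "@TayandYou"
CANNED_REPLY = "You are too fast, please take a rest..."

def build_reply(incoming_tweet: str) -> str:
    # Bug: the bot's own handle is placed at the start of every reply,
    # so each reply is itself a mention of the bot.
    return f"{BOT_HANDLE} {CANNED_REPLY}"

def mentions_bot(tweet: str) -> bool:
    return BOT_HANDLE in tweet

# One outside mention is enough to seed an endless self-reply chain.
timeline = ["@TayandYou hello!"]
for _ in range(5):  # capped here; on Twitter it ran for hundreds of tweets
    latest = timeline[-1]
    if mentions_bot(latest):
        timeline.append(build_reply(latest))

for tweet in timeline:
    print(tweet)
```

Because each outgoing reply contains the trigger string, the handler treats its own output as a fresh mention, which is why the loop only stopped when Microsoft made the account private.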
Microsoft launched the AI chatbot as a way “to experiment with and conduct research on conversational understanding.” Tay is designed to engage with 18- to 24-year-olds in the U.S., the “dominant users of mobile social chat services.” The chatbot mimics the language these users employ and is meant to get progressively smarter through conversations with millennials.
However, soon after the account launched, users spotted a flaw in the system and were able to goad Tay into responding to provocative questions and comments with even more inflammatory remarks. Microsoft apologized for the bot’s output.
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Microsoft Research head Peter Lee wrote. The company added that the inexcusable comments were the result of a “coordinated attack by a subset of people” that “exploited a vulnerability” in the chatbot’s system.