Microsoft Shuts Down Artificial Intelligence Chatbot Tay

Humans will be humans, and the latest victim is Microsoft, which launched an artificial intelligence (AI)-powered bot on Twitter for playful chat with people, only to silence it within 24 hours after users began feeding the bot racist and offensive comments.

  • Launched on Twitter as an experiment in “conversational understanding” and to engage people through “casual and playful conversation”, Tay was soon bombarded with racist comments, and the innocent bot repeated those comments back to users with her own commentary.
  • In some tweets, Tay referred approvingly to Adolf Hitler, denied the Holocaust, and supported Donald Trump’s immigration plans, among other offensive statements.
  • The AI chatbot Tay is a machine learning project designed for human engagement.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments,” Microsoft said.

  • Tay — an AI project built by the Microsoft Technology and Research and Bing teams — was programmed to tell users jokes or offer a comment on a picture they send her.
  • The bot is also designed to personalise her interactions with users.
  • Microsoft has since deleted some of the most damaging of the nearly 96,000 tweets that Tay posted.
  • However, a website called Socialhax.com captured screenshots of several of Tay’s comments before they were removed.

Tay was seen referring to feminism as a “cult” and a “cancer”, while at other moments contradicting herself with tweets mentioning “gender equality feminism” and declaring “i love feminism now.”