As an AI chatbot, Tay learned her tweets from the conversations she had with real humans online, and as we all know, real humans cannot be trusted with language. Microsoft's Technology and Research and Bing teams launched Tay as a way to study conversational understanding.
We won't repeat here what Tay said, but you can see some of the examples the Telegraph collected. According to the chatbot's website, the more real people chat with Tay, the more personalized the experience will be. At times, as The Verge's Jacob Kastrenakes pointed out, her tweets "sound like they're written by a 40-something trying to sound cool." To give you an idea on that front, consider the leadership of one of the Microsoft teams that helped create the chatbot.
Right after Alpha Go beat 9-dan Go master Lee Sedol, I wrote an essay about the potential impact of AI on human cognitive work entitled, “Alpha Go vs.
You: Not a Fair Fight.” It will, and should, scare you.
Tay was aimed at the dominant users of mobile social chat services in the U.S. In less than 24 hours she went from super cool to super Nazi.

Arthur Samuel defined AI (known back then as "machine learning") as "a field of study that gives computers the ability to learn without being explicitly programmed." We can benefit from this definition, but first we must define the verb "to learn." When Samuel used the term, it was not cognitive; it was operational. Today, systems like Google's Alpha Go are starting to do work we (humans) would describe as cognitive.
It seems that Microsoft had to pull its "teen" chatbot, Tay, from Twitter after she started making offensive tweets.
That said, Microsoft has apologized, and now it's time to learn from the experience.