To be sure, Microsoft’s “chat bot” Tay, big-eyed, cute, and artfully pixelated, may represent the future.
Chat bots, AI-powered fake people that interact with customers via text messages, have become a huge focus across many industries.
Artificial intelligence, of course, starts with human intelligence.
AI systems are typically fed big data and the output of some of the world’s finest brains – case in point, Google’s AlphaGo system, which learned from millions of moves played by elite players of Go, the complex board game.
The technology goes back to ELIZA, an early AI chat program from the 1960s.
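For readers curious what such a program amounts to, here is a minimal sketch of the keyword-matching-and-reflection technique ELIZA made famous. The rules below are invented for illustration; Weizenbaum’s original “DOCTOR” script was far more elaborate.

```python
import re

# A few illustrative pattern -> response rules in the spirit of ELIZA.
# These particular rules are made up for this sketch.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

# First-person words are "reflected" into second person before echoing.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(message: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # default when no rule matches

print(respond("I need a vacation"))  # -> Why do you need a vacation?
```

The trick, then as now, is that the program understands nothing: it pattern-matches the user’s words and hands them back, lightly transformed.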
Tay was aimed at young people in the U.S., the dominant users of mobile social chat services. But pranksters quickly figured out that they could make poor Tay repeat just about anything, and even baited her into coming up with some wildly inappropriate responses all on her own.
If you send an e-mail to the chatbot’s official web page now, the automatic confirmation ends with these words: “We’re making some adjustments.” But the company was more direct in an interview, pointing the finger at bad people on the Internet.
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.” Maybe it wasn’t an engineering issue, they seemed to be saying; maybe the problem was Twitter.