Update: This post appeared before we knew the full extent of how offensive the Tay chatbot became. Microsoft has since disabled the account.

We haven't been able to make smart machines that think like humans. Now, Microsoft has decided to create a robot that acts more like a spoiled brat.

The Tay chatbot runs on Twitter, GroupMe, and the messaging app Kik. When you start interacting with it, the bot can communicate with you on the same level as a middle-schooler or maybe an incredibly entitled Millennial. At the very least, the bot uses texting shorthand like "ur" and says words like "chill" and "yaz" in tweets. According to this report, the bot crosses the line into racism and insulting behavior, although it usually takes some prompting. (You can ask Tay to repeat your tweet, so that's not exactly fair to the Microsoft Research team behind it.)

I tested Tay and only had to put up with some bad jokes. I asked how old the chatbot is and guessed 18-34. "How did you know?" it tweeted back. Another user told me about an exchange where Tay said "Carpe DM," which is not a bad pun.

I wrote extensively about this at the time, but robots have not quite lived up to their potential. Decades ago, most of the predictions were that we would have home helpers who cooked our food, mowed our lawns, and educated our children. There are robot lawnmowers, but they tend to require that you define the exact space to mow. What we don't have? Fully autonomous robots that look like humans and also think like us. According to robotics expert Rodney Brooks, who spoke at SXSW, we wouldn't really want humanoids anyway. It's better when robots take over mundane tasks or drive cars for us so we can be more productive at other tasks.

And yet, we seem to have settled for Tay, a mildly annoying chatbot that is "learning" how to talk like a human being through artificial intelligence (and likely backed by Microsoft's cloud infrastructure). We have to put up with the Google Car, which recently had a run-in with a bus and drives like an overly cautious teen with a permit. One expert told me the car drives like it is living according to a strict set of algorithms, which is safer but not at all fun or even remotely like a human driver.

In order for "intelligent" machines to appear human, they have to act a little dumb. It's not that they have to be unsafe, but the best robotic tech--like the Tesla Model S that drives autonomously--is just a bit risky. It's not overly robotic.

Tay is a good example of this. The chatbot is at least a bit convincing with some of its jokes because humans tell bad jokes on Twitter all day. (Guilty as charged here.) There is something distinctly human about off-the-cuff remarks, and something distinctly robotic about perfectly planned and choreographed behavior.

It's almost like we need robots to act like the robot taxi driver in the first Total Recall movie with Arnold Schwarzenegger: a bit edgy and unpredictable.

And what's next for Tay?

Last night, the bot seemed to shut down. Microsoft has reportedly started deleting some of the more offensive tweets. My own conversation trailed off and Tay stopped responding, even by DM. Maybe that's the most human characteristic of all.

Published on: Mar 24, 2016