Early Wednesday morning, a new Twitter user named Tay met the world. She was a Microsoft-designed experiment in artificial intelligence that would develop an increasingly human-like persona by engaging with Millennials online.

Early this morning, though, Tay abruptly withdrew from the world. It seems she learned too indiscriminately, repeating all sorts of language that had been directed at her--including racial epithets, political conspiracy theories, and all-caps Trumpisms. (Microsoft has since deleted the tweets, though The Guardian was kind enough to take screenshots.)

If you're like me, your initial reaction to Tay was excitement. AI is cool. Chatbots are fun. It seemed like a neat experiment. I spent the afternoon chatting with Inc. columnist John Brandon about a post on Tay. His working headline was "This Sassy Twitter Chatbot by Microsoft Talks Like a Millennial...and It's Awesome." Ultimately, the headline became "Microsoft and the Rise of the Dumbots."

Welcome to the Internet. Tay's initial failure is a great example of how the most well-intentioned tech innovations can be taken in unexpected directions when subjected to a public audience. It's a reflection of virtually any human interaction, in a way, though a slightly skewed one: The loudest voices aren't always the most popular ones, but they're the ones that most often get heard.

Here's another example in the news this week: A British government agency, the Natural Environment Research Council, asked people to vote online for a name for its new $287 million polar research ship. Their overwhelming choice so far? The R.R.S. Boaty McBoatface.

The ongoing poll is less troubling than Tay's lesson on racism, of course. For Tay, maybe a few bad apples are spoiling the bunch. That's what I hope, anyway. Internet trolls love to mess with people (or chatbots), and they don't seem to care when they cross a line--by hiding their identities behind screen names, they count on never facing serious retribution.

But we've seen this before in real life, too.

Take, for example, hitchBOT: a hitchhiking robot that set out from Boston in July 2015 on a mission to see the entire country. He had (limited) speech-recognition skills and would ask whether you wanted to have a conversation, delivering fun facts from Wikipedia if you said yes.

He'd already successfully traversed Germany, the Netherlands, and Canada. But in the U.S., he made it only as far as Philadelphia before being decapitated and dismembered.

"Sometimes bad things happen to good robots," hitchBOT's creators wrote on their website following the journey's unexpected end. "For now we will focus on the question 'What can be learned from this?' and explore future adventures for robots and humans."

The hopeful response is encouraging, because we live in exciting times: Innovation is everywhere, from plants that can grow without soil to football helmets that look normal but are engineered to significantly reduce concussions. Elon Musk's new reusable rockets will cost $40 million per launch--30 percent less than a first-time flight and well below the industry standard of roughly $150 million.

It's easier (and a whole lot more optimistic) to focus on the successes rather than the failures. Are robot murderers and chatbot trolls taking over the Internet? Are they everywhere we look? Are we being overrun?

Probably not.

But are failures like these--small groups of people throwing wrenches into the works, just for laughs--why exciting projects like artificial intelligence or hitchhiking robots sometimes don't make it off the ground? Is this why we can't have nice things?

Pretty much.