Roman died in a car accident.
Kuyda used the neural language platform of her AI startup Luka to resurrect a semblance of him, based on years of text messages and emails. In May this year, she launched "Roman" as a chatbot memorial. "It's still a shadow of a person -- but that wasn't possible just a year ago, and in the very close future we will be able to do a lot more," Kuyda predicts.
Meet an AI version of Roman--a digital ghost.
You can download the Luka app from the Apple App Store to meet Roman for yourself. In its current incarnation, the app prompts you to form questions, and Roman responds--with pictures from his life, his music, and his opinions. What you notice quickly, though, is that you're in charge of the interaction; Roman-as-bot only responds. It feels a bit like a sensitive, slightly spooky search tool for Roman's text archive.
An experience like Roman, however, opens a potentially creepy, potentially incredible door to the possibilities of interacting with digital personalities. What if chatbots got emotion right? Then they'd be able to engage, and persuade, not just respond.
To do this, though, the AI would need these things:
Access, hopefully with permission, to your real-time data.
A bot like Roman draws on a static reservoir of data. A bot with access to your viewing history, your purchasing history, or even your maps data could ask you questions with current meaning.
Like, "So, what did you think of that first ep of Black Mirror you saw last night?"
In response, chatbots will need a way to score your emotional affect, not just simulate one of their own. Let's call that reaction monitoring.
"I thought Black Mirror was kinda dark."
Now the chatbot assigns Black Mirror two (out of five) stars in its ongoing profile of my taste. That's one way my reaction could be monitored. Facial/emotional recognition might be another.
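The star-rating step above can be sketched in a few lines. This is a toy illustration, not anything Luka has described: the sentiment cues, the `score_reaction` helper, and the taste profile are all invented for the example.

```python
# A minimal sketch of reaction monitoring: map sentiment cues in a
# reply to a 1-5 star score and fold it into a running taste profile.
# The cue table and function names here are hypothetical.

SENTIMENT_CUES = {
    "loved": 5, "great": 4, "fine": 3, "kinda dark": 2, "hated": 1,
}

def score_reaction(reply: str) -> int:
    """Return a star score for the first cue found; default to neutral."""
    reply = reply.lower()
    for cue, stars in SENTIMENT_CUES.items():
        if cue in reply:
            return stars
    return 3  # no cue matched: treat the reaction as neutral

profile: dict[str, list[int]] = {}  # title -> star scores over time

def monitor(title: str, reply: str) -> None:
    profile.setdefault(title, []).append(score_reaction(reply))

monitor("Black Mirror", "I thought Black Mirror was kinda dark.")
```

After that exchange, the profile holds two stars for Black Mirror, ready to feed later suggestions.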
An AI having purpose, and our lack of knowledge about how to tune that purpose, ranks high on the list of things people like Elon Musk, Stephen Hawking, and Bill Gates fear.
Roman's mission, for example, is simply to respond as Roman would. It answers when it has high confidence that its answer resembles previous answers. Driving software, such as Tesla's or Google's, balances purposes, weighing safety against efficiency along the route. Spooky or not, digital personalities in the near future will have missions that prompt them to reach out, instead of simply responding to requests the way Alexa or Siri do now.
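The kind of purpose-balancing the driving example describes is, at its simplest, a weighted score over competing goals. The routes, scores, and weights below are invented for illustration; real driving software is vastly more complex.

```python
# A hypothetical sketch of purpose balancing: rank candidate routes by
# a weighted blend of safety and efficiency. All values are made up.

routes = [
    {"name": "highway",  "safety": 0.7, "efficiency": 0.9},
    {"name": "backroad", "safety": 0.9, "efficiency": 0.6},
]

SAFETY_WEIGHT, EFFICIENCY_WEIGHT = 0.7, 0.3  # safety dominates

def utility(route: dict) -> float:
    return (SAFETY_WEIGHT * route["safety"]
            + EFFICIENCY_WEIGHT * route["efficiency"])

best = max(routes, key=utility)
print(best["name"])  # prints "backroad": safer wins under these weights
```

Tuning those weights is exactly the open problem the previous paragraph points at: the software's behavior follows directly from numbers someone chose.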
Imagine my chatbot coming back to me the next evening with a suggestion tuned by my reaction to Black Mirror. "Hey, I found a series you might like--Stranger Things. Want me to put it on?"
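That proactive move can be sketched as a tiny rule: keep the genre the viewer chose, but switch the tone when the reaction was lukewarm. The catalog, its tags, and the two-star threshold are all assumptions made up for this example.

```python
# A toy sketch of "drive": the bot picks a follow-up suggestion from a
# hypothetical catalog instead of waiting to be asked.

CATALOG = {
    "Black Mirror":    {"tone": "dark",      "genre": "sci-fi"},
    "Stranger Things": {"tone": "nostalgic", "genre": "sci-fi"},
    "True Detective":  {"tone": "dark",      "genre": "crime"},
}

def suggest(last_title: str, stars: int) -> str:
    """Keep the genre; switch the tone if the reaction was lukewarm."""
    last = CATALOG[last_title]
    for title, traits in CATALOG.items():
        if title == last_title or traits["genre"] != last["genre"]:
            continue
        if stars <= 2 and traits["tone"] != last["tone"]:
            return title  # lukewarm: same genre, different tone
        if stars >= 4 and traits["tone"] == last["tone"]:
            return title  # loved it: more of the same
    return last_title     # nothing better found

print(suggest("Black Mirror", 2))  # prints "Stranger Things"
```

Two stars for a dark sci-fi series yields a nostalgic sci-fi series: exactly the Stranger Things move in the paragraph above.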
The trifecta of real-time insights, reaction monitoring, and drive, on top of today's strong language-processing and machine-learning capabilities, will soon achieve something so close to personality that it will make no difference. If you have an ongoing interaction with a digital personality that knows you well, usefully suggests things you'll like, and is driven to learn your preferences, is that spooky--or spectacular?