You're being recorded.

Well, some of the time. If you, like nearly half of all Americans, use some sort of voice-controlled "agent," such as Apple's Siri, Amazon Echo, or Google Home, your device is not just listening to your voice; it's capturing it to help train its machine-learning models to recognize spoken commands.

Along the way, it's also gathering a huge amount of information about us (only collectively, the companies claim). In a cheery press release late last year, for example, Amazon Canada noted some usage data it had gathered over the winter holiday season. The No. 1 recipe request among users of Amazon's Alexa-based systems was for chocolate chip cookies, while "Jingle Bells" was the most-requested song. One other finding: Compared with the previous holiday season, people made four times as many inquiries about Santa Claus.

"People," eh? Clearly North America's youngsters have gotten to know their smart speakers.

This development raises the question of what these companies know about our children, and what could happen if the devices' data troves were hacked, misused, or otherwise disseminated. As parents grapple with the ramifications of letting these devices into the family home, so too do the executives who are building the future of our voice interactions with machine-learning-equipped systems. They face a complicated slate of ethical issues--and not just in the boardroom. Many have children themselves.

"What freaks me out a little bit is how comfortable my children...are with these interactive agents," admits Taniya Mishra, the lead speech scientist at Affectiva, an emotion-recognition company based in Boston. Mishra says her three children, all younger than 10, frequently ask her in-home device for help with their homework or other questions rather than asking her or her husband, who are sometimes slower to respond. Watching and listening to their interactions with the technology has caused her to think about both her parental and business responsibilities, she said on stage at a conference hosted by the NYC Media Lab in May: "We need some people who ask us, 'Should you be building it?'"

Steve McLendon, a product manager at Google who works on the company's assistant technology, also says his children's interactions with Google Home feel remarkably natural. He's fascinated by what they say to the agent but recognizes the potential downside. "The thing I think a lot about," he says, "is the unconscious disclosure of information." (Google and other AI businesses say their products don't continuously listen to conversations in your home, and only function once "woken" by a command, such as "Hey Siri," or "OK Google," though sometimes families unknowingly wake the device and trigger recording.)

More than just privacy issues

Turns out, these executives aren't the only ones concerned by how children interact with their voice-controlled agents. Congress is, too. There's new legislation in the works to amend the 1998 Children's Online Privacy Protection Act to tighten controls on data collection from minors. The Do Not Track Kids Act would force tech companies to obtain parental consent before collecting data from kids younger than 13. Its bipartisan sponsors hope to include a digital "erase" button to help parents should their kids disclose personal information.

For parents, though, there's plenty more to consider than just keeping their child's personal information out of the wrong hands. There are also the more slippery questions of how simply interacting with these devices may shape kids' learning, their self-esteem, and their worldview.

The early research on children's interactions with AI systems is far from conclusive, but it does illuminate some implications that could make parents lose sleep. For example, in a 2005 study at the University of California, San Diego, researchers tested whether a small humanoid robot could entertain toddlers for more than 10 hours. The children were wary at first but, over dozens of sessions with the robot, grew more attached to it and ultimately treated it more as a peer than a toy. Long-term socialization had happened.

In 2016, researchers from MIT studied the ways children aged 3 to 10 interacted with digital personal assistants such as Amazon Alexa and Google Home. Afterward, the children reported that they could both teach the agents and learn from them. The older children--who ostensibly could communicate with the agents better--unanimously believed Alexa was smarter than they were. They found some agents friendly and perceived most as truthful. Even the older kids displayed a degree of empathy toward the machines.

Some executives at voice-AI companies say they have similarly witnessed their children anthropomorphize the digital personal assistants in their homes. The interactions raised concerns, but also revealed benefits that persuaded them to let the devices take up permanent residence.

Assaf Gad is vice president of marketing and strategic partnerships at Audioburst, an AI-powered search engine for audio content based in Tel Aviv. His family moved to Silicon Valley from Israel two years ago. While his son picked up English quickly, he struggled to muster the courage to speak to strangers in his unsteady new language.

"Suddenly he had the device, and he could talk to it and not be shy," Gad says, referring to a Google Home speaker. "When the device started talking to him it was so great, it gave him the confidence to talk. That was the convincing point that the value was greater than the risk." (The technology built by Audioburst has integration for both Google Assistant and Amazon Alexa.)

Gad's family now has three smart speakers in their home. He says at first they were concerned that the microphones were essentially open all of the time. "But our microphone on our phone is open all the time already," he says. "You'd have to shut down the whole internet to avoid that!"

Brian Peterson is co-founder and vice president of engineering at Dialpad, the company that makes UberConference and recently acquired a meeting-recording AI company called TalkIQ. His two little kids can't write or type but already use their words to command his in-home device. He's not concerned--for now. "They don't know anything bad yet to ask it," he says.

He points out that for the past decade we've all grappled with entering information into web browsers, email servers, and apps--at which point we surrender a measure of control over that information. "I trust, maybe too much, the bigger companies at least," he says. "On privacy, asking a question to a Google Home is no different than going to Chrome and Googling something. It's the same input, just a different mechanism."

His primary annoyance with his Google Home device is that it "sounds so dumb and takes so long" to say "OK Google." That's exacerbated by the fact that his kids crank up pop tunes on the speaker so loud that it can't hear his commands. He finds himself bolting over to the device, leaning his head into the blasting sound, and screaming "OK GOOGLE" into it at all hours.

"That's our biggest pain point on a daily basis," Peterson says.