Siri and all her artificially intelligent peers are taking on more and more tasks for people at home and at work. But as impressive as these digital assistants can be, they're still very much in their infancy.

"I don't think anyone has nailed the user experience yet," warns Jason Cornwell, who leads the team at Google tasked with designing exactly that.

Still, the biggest players in artificial intelligence (A.I.) agree on some important design lessons. Cornwell shared a stage with execs from Slack, Facebook, and Microsoft on Thursday at Fast Company's Innovation Festival in New York City. The group discussed what each respective company has learned about what works--and what doesn't--when building consumer-focused, A.I.-based products.

Here's what the panelists had to say:

1. A.I. should add to the conversation...

Electronic assistants can be extremely useful for saving people time on menial tasks. But to become a can't-live-without tool, an assistant "needs to be interesting enough to add to a group conversation," says Lili Cheng, general manager of Microsoft's Future Social Experiences labs. In other words, the A.I. needs its own personality, instead of being purely reactive. That necessary extra bit of oomph is why Google has hired a team of comedy writers from places like The Onion and Pixar to write dialogue for Google Assistant.

2. ...But it shouldn't be intrusive.

Whether it's a friend, an overbearing presidential candidate, or a computer, nobody likes an interrupter. The group agrees that if A.I. is part of a conversation, it shouldn't butt in out of turn; if it's on a computer or phone, it shouldn't take over your screen or dismiss whatever you're working on. A small pop-up that can be addressed in a moment will suffice. It's a delicate balance, but an important one.

3. A.I. should feel natural.

"If we're going to spend our lives around these things," Cheng says, "they should be designed around the way we naturally talk." For example, the back button on a browser is useful, but users will (rightly) have higher expectations for something they converse with out loud--so saying "back" to get to a previous option won't fly. Exchanges should flow like regular conversation.

4. It should be clear about what it's doing and what it's capable of.

Several years ago, Gmail introduced a feature that automatically sorted mail into "important" and "unimportant" folders. The A.I. was very accurate, Cornwell says, but users weren't comfortable letting a computer make those decisions if they didn't know the criteria it was using. So Google applied that very same A.I. in a different way, sorting mail into folders for work, personal messages, spam, and so on--and users loved it.

Similarly, an A.I. system should set expectations: If users expect a digital assistant to be all-knowing and it turns out not to be, they'll be disappointed. Slack's chatbot, meant to serve as an office assistant, lets you know its limitations when it doesn't understand something: "Sometimes I have an easier time with a few simple keywords."

5. Creators need to be aware of any biases.

"Artificial intelligence is only as smart as the data you feed it," Cheng says. As such, A.I. will reflect the biases of the information it reads. Microsoft learned a tough lesson about this when it created Tay, a chatbot that quickly began spewing racist and profane answers back at the Twitterverse after some users taught it to do just that.

Much of the time, the biases won't be so obvious. An A.I. might reflect subtle biases based on the region, culture, gender, or likes and dislikes of the people who created it. It's important to keep this in mind when building something intended for a wide audience. In general, the more input, the better--but keep a watchful eye on what the system learns.