I stood stiffly on stage at a writing conference quite a few years ago. With more than 800 people in attendance and only a sketchy grasp of how to speak to a large group, I clicked through a few slides and made a few comments that probably didn't sound too profound.

After my talk, I collected some comment cards. (This was before Twitter.) One theme came up way too often: I didn't move around enough.

I was not exactly Steve Jobs in his prime back then, but maybe a new app--which debuted at a tech conference over the weekend--would have helped me.

Vocalytics uses machine learning to analyze any existing video. It can tell when you strike a "power pose" or gesture with some nice emphasis. If you stand motionless the entire time, it will know. For now, the app reads hand gestures and body posture, but the development team says it might expand to eye movement and facial expressions in the future.

This type of machine learning is not new. For several years, Microsoft has offered libraries of machine learning algorithms that analyze body movements, and the Vocalytics team uses some of this code. What is new is the idea of analyzing body language in a way that is helpful to someone who might not know what to do. As any expert in emotional intelligence will tell you, people see your expressions, gestures, and movements almost as much as they hear what you say.
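To make the idea concrete: Vocalytics hasn't published its internals, but the core of "it will know if you stand motionless" can be sketched in a few lines. Assume a pose-estimation library (Microsoft's body-tracking SDKs or any similar tool) has already extracted keypoint coordinates — wrists, elbows, and so on — for each video frame; the function below, a hypothetical illustration rather than the app's actual code, then scores how much those keypoints move over time.

```python
import math

def movement_score(keypoint_frames):
    """Average per-frame displacement of tracked keypoints.

    keypoint_frames: list of frames, each a list of (x, y) keypoint
    positions (e.g. wrists, elbows) produced by a pose estimator.
    Returns the mean distance a keypoint travels between consecutive
    frames; a value near zero means the speaker is standing still.
    """
    total, steps = 0.0, 0
    for prev, curr in zip(keypoint_frames, keypoint_frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, curr):
            total += math.hypot(x1 - x0, y1 - y0)
            steps += 1
    return total / steps if steps else 0.0

# A speaker frozen in place: every keypoint stays put across 30 frames.
still = [[(0.5, 0.5), (0.6, 0.5)]] * 30

# A speaker gesturing: one wrist keypoint sweeps steadily sideways.
moving = [[(0.5 + 0.01 * t, 0.5), (0.6, 0.5)] for t in range(30)]

print(movement_score(still))   # 0.0
print(movement_score(moving))  # roughly 0.005 per keypoint per frame
```

A real coach (human or bot) would of course look at far more than raw displacement — which gestures, when, and for how long — but a threshold on a score like this is enough to flag the "stood motionless the entire time" feedback I got on those comment cards.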

You can't hold the attention of a crowd if you sit idle. In another talk recently--one that didn't exactly go swimmingly--I did decide to move around a lot more in front of a group of 50 or so people. I tried to make gestures and show a demo on a computer several times, to break up the tedium of just standing there and talking. (Truth be told, I'm a better writer than speaker.)

Not too long from now, we'll see more and more AI-powered apps that can analyze everything from our voice inflections to our body language during a meeting. In our cars, a bot will know how we drive and, after we get home, make suggestions on how to improve. We'll even use AI to read through our articles and make corrections that improve readability and comprehension.

And these AI engines will make mistakes. From what I understand about coaching people in public speaking, it is not a simple matter of moving your hands around a little. For a while yet, a bot likely won't know which gestures have the most impact on a given audience. There are so many variables--the subject matter, the crowd's level of expertise, even the time of day. (My recent talk to college students took place early in the morning. You might say it was doomed to failure from the start. My writing conference talk was right after lunch. Everyone should have been clued in.)

Of course, even if the AI bots don't quite know how to give us perfect advice, every tip helps a little. I like what Vocalytics is doing and how it works.

I wish it had been available way back when.