It's tempting to think of artificial intelligence as nothing more than a voice assistant like Amazon's Alexa that can tell you the weather forecast. You speak, it listens, then it responds. A.I. seems innocuous because, for now, we have a pretty good idea of how it works. In an Audi A8, for example, the car can sense that you are not stopping in time and will apply the brakes. A sensor scans the road ahead, notices you are not reacting, and "thinks" for you.

It all seems harmless, and for now it is. The A.I. of today is safe and helpful, and could even save your life if you happen to own an Audi A8 or a similar high-end car. My house is outfitted with sensors as well. One camera points at the backyard and can detect an intruder, whistle to make the visitor look up, and snap a photo. Brilliant, right?

The problem, of course, is that even these "simple" A.I. routines rely on thousands of lines of code and fast connections to sensors and databases, and they run in a split second.

The question I have asked about the future of A.I. for several years now is: What happens when we are not even marginally aware of what the A.I. is doing? And more importantly, what happens when A.I. that seems helpful and innocuous is actually malicious?

My favorite example of this, though I no longer remember who first proposed the thought experiment, has to do with dispensing medication. What if an A.I. slowly introduced an illness over a long period of time? We would be blissfully unaware of the problem. Or what if an A.I. bot suggested a diet that makes us all obese? You could argue that is already happening if you use the McDonald's app.

I've come to realize, though, that these examples of A.I. miss the mark by a mile. This week, Neuralink, a company founded by Elon Musk, is expected to announce a new brain-computer interface at a virtual press conference on Tuesday. No one knows for sure what the company will reveal, but experts expect a way to augment human thinking, or at least a brain-computer interface of some kind.

If that doesn't scare you, I'm not sure what will.

I know it scared Elon Musk, who has publicly advocated for more A.I. oversight. I'm with him on that, but he also seems to be pushing for more A.I. advancements at the same time. It's a bit like someone developing an innovative new drug that lets us perform outstanding physical feats while simultaneously developing the antidote. I'm not even sure about chip implants, especially the kind inserted under your skin.

I'm also not sure what to think about brain-computer interfaces.

The idea is for us to connect at faster speeds and with greater precision. Someday you will plug into that Audi and connect directly to your house. What could possibly go wrong? No wonder Musk wants to see more controls in place. If the interface only goes one way, with us issuing commands to the car and the house, we're all fine, I guess.

If the car and house send us commands, start worrying now.