At the Google I/O developer conference this month, one message came through loud and clear: Alphabet is now in the artificial intelligence business. This has big implications for lots of people. There are 2 billion active Android devices, 800 million Google Drive users, and 500 million Google Photos users who upload 1.2 billion photos every day.

"We spoke last year about this important shift in computing, from a mobile-first to an AI-first world," said Google CEO Sundar Pichai in the opening keynote, setting the stage for how Google is adding AI to everything.

"We are excited about designing better machine learning models. What better way to do this than getting neural nets to design better neural nets? . . . Whenever I spend time with a team and think about neural nets building neural nets, it reminds me of one of my favorite movies, Inception. I tell them, we must go deeper--across a wide range of disciplines."

Here are three examples that show you some of the breadth of Alphabet's big AI bet.

1. Google Assistant: a virtual digital assistant that can see, identify, schedule and pay for you

Ever said you wanted an extra pair of hands? Well, Google heard you. (It's listening all the time, right?) The company is fast improving Google Assistant with a number of updates rolling out this year. Now it's available on iOS as well as its native Android.

Google scientist Fernanda Viegas says, "Soon, the Google Assistant will be able to have a conversation in your native language about what you are seeing through your phone or device. Google Assistant will be able to place orders for you, and that will disrupt the payments industry. Google Assistant SDK allows any device manufacturer to build Google Assistant into whatever they're building."

With on-device AI, Google made it clear that it's listening hard to the individual user. Your Google Assistant will customize itself and learn from your data, meaning it grows more personalized over time.
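To make the SDK idea concrete: the integration pattern is essentially an event loop, in which a device streams audio to the assistant service and reacts to the events it sends back. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the class name, event names, and handler are placeholders for illustration, not the real Assistant SDK API.

```python
# Hypothetical sketch of an assistant-style event loop on a device.
# FakeAssistant and the event names are illustrative placeholders,
# not the real Google Assistant SDK.

class FakeAssistant:
    """Stands in for a cloud assistant client that yields events."""
    def start(self):
        # A real client would stream microphone audio and yield events
        # as the service recognizes speech and responds.
        yield {"type": "ON_CONVERSATION_TURN_STARTED"}
        yield {"type": "ON_RECOGNIZING_SPEECH_FINISHED",
               "text": "turn on the lights"}
        yield {"type": "ON_CONVERSATION_TURN_FINISHED"}

def run_device(assistant):
    """Consume events and collect recognized commands for the device to act on."""
    commands = []
    for event in assistant.start():
        if event["type"] == "ON_RECOGNIZING_SPEECH_FINISHED":
            commands.append(event["text"])  # e.g. dispatch to device hardware
    return commands

print(run_device(FakeAssistant()))  # ['turn on the lights']
```

The appeal for manufacturers is that the device itself stays simple: it forwards audio and handles a small set of events, while the speech recognition and language understanding live in Google's cloud.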

2. Google Lens: seeing your world and taking action

Good old Google Photos, often seen as storage, is getting overhauled with a significant number of upgrades, starting with a new product that integrates with Google Albums. It's called Google Lens.

"Google Lens is a set of vision based computing capabilities that can understand what you're looking at and help you take action based on that information," explains Pichai. "For example, if you run into something and want to know what it is, you can invoke Google Lens from Assistant, point your phone at it, and we can tell you what it is," said Pichai.

"Thanks to the machine learning in Google Photos, we'll suggest the photos and people you need to share with. By activating Google Lens, you can identify the cool landmarks in your photos," too, saysAnil Sabharwal, Google head of mobile project management. In other words, Google will know not just where you are, but will also recognize what you are seeing through your phone. It will make its best AI-on-board stab at knowing to whom those images might be significant, too, and help you reach out to them.

3. Google.AI and new hardware for artificial intelligence learning

Fei-Fei Li, Chief Scientist of Google Cloud AI and head of Stanford's AI Lab, had a number of announcements that underscore the breadth of Alphabet's aspiration to become the world's AI platform.

"AI is transforming everything Google does," she said.

For example, "There's no getting around that AI requires enormous computation resources, and this represents one of the steepest barrier to entries. To address this, Sundar announced this morning that we have announced a second generation TPU, Tensor Processing Unit," she said. The new chip accelerates AI's processing-intensive learning phase. It's built specifically to support Google's open source machine learning language, Tensorflow. "Our new language processing model takes a full day to train on 32 of the world's best commercially available GPUs, while only 1/8 of our new TPU pods can do this in just an afternoon," Li continued.

You'll be able to rent Google's new Cloud TPUs on an as-needed basis, paying for what you use. Top machine learning researchers get a free allocation of Cloud TPUs through Google's new TensorFlow Research Cloud.

"It's just the beginning," promises Li. "Every single industry is going through a transformation because of data, because of AI and machine learning. And this is what I see as the historical moment that AI is going to transform the field."

She says, "the tools and the technologies we have developed in the field of AI are really the first few drops of water in the vast ocean of what AI can do. We cannot overpromise this but there should be tremendous excitement that we can do a lot more work to make this AI in vivo happen."

Did you hear the word 'search' anywhere?

Exactly--Alphabet and Google have completed their pivot into AI from mobile search, and there's a new frontier ahead for how companies get found.

Published on: May 25, 2017