Google goes AI-first at I/O: What dev and ops need to know

Sharon Gaudin Writer, Independent

Enterprise developers and IT shops got a mixed bag at Google I/O last week, but a few of the announcements are worth watching closely, and even worth starting to experiment with.

This year’s Google I/O wasn’t the hotbed of splashy announcements it has been in years past. There were no skydivers wearing Google Glass, as in 2012, and no talk of robots. Instead, the Google developer conference, which attracted more than 7,000 attendees and was live-streamed to 85 countries, was all about building on last year’s promise to make artificial intelligence the center of everything the company does.

Patrick Moorhead, an analyst with Moor Insights & Strategy, said that while Google I/O 2017 was primarily consumer- and consumer-developer-focused, enterprises and their dev teams should pay attention to key announcements. “Google I/O was primarily about AI and the many ways Google is using AI to improve its own services,” he said.

“We’ve been focused evermore on our core mission of organizing the world's information. We are doing it for everyone, and we approach it by applying deep computer science and technical insights to solve problems at scale,” said Sundar Pichai, Google’s CEO, in the first day’s keynote address. 

“Computing is evolving again. We spoke last year about this important shift in computing from a mobile-first to an AI-first approach. … In an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems.” —Sundar Pichai

Google is bringing AI to the whole spectrum of its products and services. Search, Maps, Google Photos, and the company’s cloud platform, for instance, all use machine learning in different ways, Pichai said.

Here are the key takeaways from Google I/O 2017 for dev and ops pros.

AI gets new hardware

With AI front and center, one of the biggest announcements was Google’s AI chip. The chip, which users will be able to access through a cloud-computing service, is designed to work on deep neural networks.

In his keynote, Pichai explained that the processor and the service should propel the advancement of everything from robotics to autonomous systems to image recognition.

The new second-generation AI chip, the Tensor Processing Unit, or Cloud TPU, is built specifically to handle the computationally intensive algorithms behind machine learning and AI. The first generation of the chip, which Google announced last year and has kept largely out of the spotlight since, focused on inference, that is, running models that had already been trained. The new chip is designed to both run and train deep neural networks.

Developers should, by the end of the year, be able to build software that takes advantage of hundreds of Google’s new AI chips, all working in the cloud.
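
For a sense of the kind of workload the Cloud TPU is built to accelerate, here is a minimal TensorFlow training sketch. It uses TensorFlow's Keras API with random placeholder data rather than anything Google announced; it only illustrates the define-compile-train loop that this class of hardware is meant to speed up.

```python
# A minimal sketch of the training workload Cloud TPUs target, using
# TensorFlow's Keras API. The data is random placeholder input, not a real
# dataset; it only illustrates the define/compile/train loop.
import numpy as np
import tensorflow as tf

# Placeholder training data: 1,000 examples, 20 features, binary labels.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

# A small feed-forward network; training deep networks like this is the
# computationally intensive step the TPU is designed to accelerate.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
```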

What AI in the cloud means to developers

Rob Enderle, principal analyst of the Enderle Group, said developers and IT shops should be inspired to look into the future and think about what this will mean for them. Where could artificial intelligence help the company? What problems could it help solve?

Dev and ops teams should also meet with the business side to talk about where AI can be the most useful in the enterprise. It will be key for business and DevOps to work together to figure out where AI should take them, he said.

Dan Olds, an analyst with OrionX, said this is a good time for developers to get their feet wet with AI, so they’re ready when they can actually build with it.

“This is a very solid entry into the AI chip fray.”—Dan Olds

He said the first step was for dev and ops teams to get friendly with the TPU and learn Google’s AI framework. The second: figure out how to apply the capabilities the chip gives developers—in other words, come up with ways to put this tool to work in business applications.

However, Enderle offered a word of caution: “AI is still a tad over the horizon for most IT shops.”

“If they are implementing AI now, they are using a partner or a packaged product, like IBM Watson, anyway. This will change over time, but Google’s announcement on the topic last week was more a statement of direction than anything usable near term by developers.”—Rob Enderle

As AI gets real, so does the competition

Google is also looking at solid competition from Nvidia, which last year released its Tesla P100 GPU, designed for deep neural networks. Moorhead isn’t sold on how well Google’s TPU will perform compared to Nvidia’s processor. “I’m a bit skeptical on the superiority of a TPU on machine learning training, but I think it could be interesting to tire-kick the new service,” he said.

The biggest downside to the approach, Moorhead said, is its inflexibility. “The developer must use the Google Cloud. If you want to use other frameworks, such as Caffe, Caffe2, CNTK, MxNet, or Torch 7, you are out of luck. If you want to run your workloads on AWS, Azure, IBM, or on-prem, you are also out of luck.”

Olds is more optimistic. He said he believes the new chip will give developers and IT shops a lot to consider when they begin moving toward building AI-focused applications and systems.

Google will flex its muscles

Google’s AI chip also works hand in hand with TensorFlow, the company’s machine-learning software library, which should make it easier for developers to build and train powerful neural networks, Olds added.

“TensorFlow is going to be important for both Google and developers. It’s Google’s preferred AI framework and, as such, is going to get a lot of attention in the market.”—Dan Olds

Can it compete with the other frameworks? “Time will tell, but because it’s from Google, it already has a lot of momentum in the market,” Olds said.
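
For teams that want to kick the tires, the path Olds describes amounts to writing ordinary TensorFlow code and then pointing it at Google's cloud-hosted TPUs. The sketch below uses TensorFlow's TPU distribution-strategy APIs, which shipped in releases well after this announcement, and assumes it is running in a Google Cloud environment with a TPU attached.

```python
# Sketch: targeting a Cloud TPU from TensorFlow via the distribution-strategy
# APIs (these arrived in TensorFlow releases after the announcement covered
# here). Assumes a Google Cloud environment with a TPU attached.
import tensorflow as tf

# Locate and initialize the attached Cloud TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables created inside the strategy scope are placed on the TPU, so the
# same Keras model definition trains on TPU cores instead of CPU/GPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) would then run the training loop on the TPU cores.
```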

Join the conversation: Is Google's AI chip with TensorFlow a game-changer? How are you planning to tap the potential?

Google's new AI chip, top. Image courtesy of Google.
