Google’s AI chief on AutoML, autonomous weapons, and the future

Google AI chief Jeff Dean speaks with members of the AI community and press at the I/O developer conference on May 8, 2018.
Image Credit: Khari Johnson / VentureBeat

Roughly one month ago, news broke that Apple had poached Google AI chief John Giannandrea, and he was quickly replaced by Jeff Dean, leader of the Google Brain research division. The longtime Googler took over as AI chief at a time when AI continues to spread to all Google products and services and the company is making AI into its own division.

Google Research recently even changed its name to Google AI.

On Tuesday, as the company rolled out dozens of new features and updates — including ML Kit for mobile app developers, plans for a third generation of Tensor Processing Unit (TPU) chips, and AI for Google Assistant that can make phone calls on your behalf — Dean shared his vision of the future.

On the horizon, Dean sees opportunities for AI to create new products and find solutions to problems humans haven’t even considered before.


He also sees challenges emerging from AutoML, an AI model that can create other AI models, and he does not believe Google should be in the business of making autonomous weapons.

The rise of generally intelligent AI

Much of the AI in the world today was made to accomplish a single, narrow use case, like translating a sentence from one language to another, but Dean said he wants Google to create more AI models that can accomplish multiple tasks and achieve a kind of “common sense reasoning about the world.”

“I think in the future you’re going to see us move more towards models that can do many, many things and then build on that experience of doing those many, so that when we want to train a model to do something else, it can build on that set of skills and expertise that it already has,” he said.

For example, if a robot is asked to pick something up, it will understand things like how a hand works and how gravity works, among other facts about the world.

“I think that’s going to be an important trend that you’ll see in the next few years,” he said.

AutoML’s bias and opacity challenges

Depending on whom you ask, AutoML, Google’s AI that can create other AI models, is either exciting or terrifying.

Machines that train machines surely frighten AI naysayers. But AutoML, said Google Cloud chief scientist Fei-Fei Li, lowers barriers to creating custom AI models for everyone from high-end developers to a ramen shop owner in Tokyo.

Dean finds it exciting because it’s helping Google “automatically solve problems,” but the use of AutoML also presents unique issues.

“Because we’re using more learned systems than traditional sort of hand-coded software, I think that raises a lot of challenges for us that we’re tackling,” he said. “So one is if you learn from data and that data has biased decisions in it already, then the machine learning models who learn can themselves perpetuate those biases. And so there’s a lot of work that we’re doing, and others in the machine learning community, to figure out how we can train machine learning models that don’t have forms of bias.”

Another challenge: how to properly design safety-critical systems with AutoML to create AI for industries like health care. Decades of computer science best practices have been established for hand-coding such systems, and the same must be done for machines making machines.

It’s one thing to get something wrong when you’re classifying the species of a dog, Dean said; it’s another thing entirely to make mistakes in safety-critical systems.

“I think that’s a really interesting and important direction for us to apply, particularly as we start to get machine learning in more safety-critical kinds of systems, things that are making decisions about your health care or an autonomous car,” he said.

Safety-critical AI needs more transparency

Alongside news that Google Assistant will soon make phone calls for you and the beta release of Android P, CEO Sundar Pichai on Tuesday talked about how Google is applying AI to health care to predict the readmission of patients based on information drawn from electronic health records.

An article by Google researchers published Tuesday in the Nature journal npj Digital Medicine includes examples of why the AI made certain decisions about a patient, so that doctors can see the reasoning behind a recommendation in medical records. In the future, Dean hopes, a developer or doctor who wants to know why an AI made a specific decision will be able to simply ask the AI model and get a response.

Today, the implementation of AI in Google products goes through an internal review process, Dean said. Google is currently developing a set of guidelines for how to assess whether or not an AI model contains bias.

“What you want is essentially, just like security review or privacy review for new features in products, you want an ML fairness review that’s part of integrating machine learning into our products,” he said.

Humans should also be part of the decision-making process, Dean said, when it comes to AI implemented by developers through tools like ML Kit or TensorFlow, which has been downloaded more than 13 million times.

Drawing the line at AI weaponry

In response to a question, Dean said he does not believe Google should be in the business of making autonomous weaponry.

In March, news broke that Google was working with the Department of Defense to improve its analysis of footage gathered by drones.

“I think there are a number of interesting ethical questions about machine learning and AI as we as a society start to develop more powerful techniques,” he said. “I personally have signed a letter, an open letter about six or nine months ago — don’t know exactly when — saying that I was opposed to using machine learning for autonomous weapons. I think obviously there’s a continuum of what decisions we want to make as a company, so should we offer Gmail to military services that want to use it? That seems fine to me. I think most people have qualms about using autonomous weapons systems.”

Thousands of Google employees, according to the New York Times, have signed a letter stating that Google should stay out of the creation of “warfare technology,” which they argue could cause irreparable damage to Google’s brand and to trust between the company and the public. Dean did not specify whether he signed the letter referenced in the New York Times reporting.

AI drives new projects and products

Alongside patient readmission AI and a Gboard designed to understand Morse code, Pichai also highlighted a previously released study of AI that accurately detected diabetic retinopathy and predicted problems as well as highly trained ophthalmologists did.

AI models with that level of intelligence are beginning to do more than imitate human activity. They’re helping Google discover new products and services.

“By training these models on large amounts of data, we can actually make systems that can do things that we didn’t know we could do, and that’s a really fundamental advance,” Dean said. “We’re now creating entirely new kinds of tests and products proven by AI, rather than using AI to do things we think we want to be able to do but just need the training system.”