
Google Cloud chief scientist: ‘AI doesn’t belong to just a few tech giants in Silicon Valley’

Google Cloud chief scientist Fei-Fei Li (left) speaks with former White House CTO Megan Smith and Foundation Capital partner Joanne Chen about the democratization of AI at SXSW in Austin, Texas, on March 13, 2018.
Image Credit: Khari Johnson / VentureBeat



Silicon Valley may be behind much of the development of AI in the modern world, but it’s vital that everyone feel included in the technology, said Fei-Fei Li, Google Cloud chief scientist for AI.

“It’s time to bring AI together with social science, with humanities, to really study the profound impact of AI to our society, to our legal system, to our organizations, to our society, to democracy, to education, to our ethics,” Li said. “Again I stress: AI doesn’t belong to just a few tech giants in Silicon Valley, and these few companies in Silicon Valley have a responsibility to harness AI for the good of everyone, but they also have the responsibility to work with everybody, recognize we don’t know it all, and to include everybody.

“This is a historical moment, and we have a tremendous opportunity and responsibility to really think about how to remedy this problem.”

Mentoring the next generation

Li delivered her remarks today in a discussion with former White House CTO and Shift7 CEO Megan Smith. The talk was moderated by Foundation Capital partner Joanne Chen.


Her words recall the major points of a New York Times op-ed Li published last week arguing in favor of a more human-centered approach to artificial intelligence.

Precisely because AI is a young discipline that has nonetheless grown into one of the most powerful tools humanity has ever created, Li said, it’s critical that its democratization be defined in large part by tomorrow’s technologists.

In addition to her work at Google, Li is a Stanford University associate professor and director of the Stanford Computer Vision Lab who said she thinks of herself as an educator and technologist first. She’s also founder of AI4All, a nonprofit that brings students from groups underrepresented in AI today into leading universities and companies in the AI field.

Last month, in a story told exclusively by VentureBeat, AI4All launched a mentorship program that pairs high school students with engineers at companies working with AI, such as IBM, Accenture, and OpenAI.

Expanding AI across disciplines

Li may say AI doesn’t belong to a few tech giants in Silicon Valley, but behemoths like Facebook and Google lock down much of the available human capital in the AI world, acquire many AI startups, supply popular open-source frameworks for machine learning (like TensorFlow), and are investing billions in the advancement of AI across a range of industry verticals and use cases.

Partly for these reasons, Future Today Institute founder Amy Webb, also speaking at SXSW on Sunday, called Google one of nine companies that control the future of artificial intelligence.

The best-case scenario for AI in the next decade, Li argued, is to use AI for good and to abandon the idea that AI is a standalone discipline. Instead, AI practitioners need to start working with professionals in other fields, such as social scientists, humanists, lawyers, artists, and policymakers.

“Whether it’s manufacturing, energy, health care, education — AI can be used everywhere with various data, and the clear applications area that makes product better, increases productivity, and all this, so I think that is hopefully going to happen in a more full, fair way,” she said.

The panel did not get to questions about the worst-case scenario, but it did explore what could happen if things remain the same.

Democratization of technology

Li’s words echoed some fairly consistent themes heard at AI-related panel discussions at SXSW this year.

The dangers of a lack of diversity, both in the AI field and in the datasets used to train AI models, were at the center of Monday’s conversations on facial recognition and women in robotics, and the need for a more human-centered approach came up in a chat with Google Empathy Lab founder Danielle Krettek earlier in the festival.

Tech democratization initiatives include TechHire, which aims to fill hundreds of thousands of vacant tech jobs, and Computer Science for All, an initiative Smith supports to make computer science part of the K-12 curriculum. Kaggle, the machine learning competition platform acquired by Google last year, was also referenced in the conversation as a way to attract people from backgrounds beyond computer science degrees.

When planning for the future of AI, sometimes referred to as the fourth industrial revolution or credited with ushering in a third age of computing, Smith said, it’s imperative that we “not constrain ourselves to the mistakes of the past.”

To make her point, Smith referenced an MIT Media Lab study released in February that found facial recognition technology from companies like IBM, Microsoft, and Face++ fails to recognize people of color, as well as research from the University of Southern California that used facial recognition and natural language processing to measure how much speaking time characters get on screen in movies. Historically, the study found, white men are overrepresented in films.

“This is not just from movies, but this is happening in every meeting you’re in, this is happening in every textbook and history book you learn; we really have a really very biased world,” Smith said.