The dangers of AI challenge even the experts

A panel of experts discusses the ethics of artificial intelligence at the Re-Work Deep Learning Summit in Boston on May 24, 2018.
Image Credit: Kyle Wiggers / VentureBeat

At the Re-Work Deep Learning Summit in Boston today, a panel of ethicists and engineers discussed some of the biggest challenges facing artificial intelligence: algorithmic biases, ethics in AI, and whether the tools to create AI should be made widely available.

The panel included Simon Mueller, cofounder and vice president of think tank The Future Society; Cansu Canca, founder and director of the AI Ethics Lab; Gabriele Fariello, a Harvard instructor in machine learning, researcher in neuroinformatics, and chief information officer at the University of Rhode Island; and Kathy Pham, a Google, IBM, and United States Digital Service alum who’s currently researching ethics in artificial intelligence and software engineering at the Harvard Berkman Klein Center and the MIT Media Lab.

Mueller kicked off the discussion with a thorny question: Is ethics the most pressing problem for the progress of AI?

“It’s always an ‘engineering first and solve the tech problem first’ attitude [when it comes to AI],” Pham said. “There are a lot of experts out there who have been thinking about this, [but] those voices need to be recognized as just as valuable as the engineers in the room.”


Canca agreed that ethics aren’t discussed among product leads and designers as often as they should be. “[Engineers] should think about AI from the very beginning all the way until they commercialize a product,” she said. “It seems that [policy] is usually separated from the developers and separate from the discussion.”

Fariello echoed Pham’s sentiments, adding that the de-emphasis of ethics in AI is negatively impacting the lives of people across the country. “There are some significant … problems,” he said. “There are real decisions being made in health care, in the judicial system, and elsewhere that affect your life directly.”

He gave a recent Wisconsin Supreme Court decision as an example. In 2016, the court ruled against Eric Loomis, a Wisconsin man who was sentenced to six years in prison based in part on recommendations from Northpointe’s Compas software, which uses proprietary algorithms to predict the likelihood that a defendant will commit more crimes. A report from ProPublica found that Compas was far more likely to incorrectly flag black defendants as being at high risk of recidivism than white defendants.

Biased algorithms aren’t just influencing the judicial system. Personalized, machine learning-powered news feeds like Google News “divide people,” Canca said. “As the [diversity] of information shrinks, engagement with the world becomes engagement with people like you or with people whose opinions you share.”

Fariello pointed out that algorithm-driven companies like Facebook are almost incentivized to show content that “affirm[s] [users’] beliefs,” creating a positive feedback loop. “We become polarized, and we don’t see the alternative views.”

Ignorance on the part of policymakers is contributing to the problem, he said. “It’s incumbent on us that, when we do vote and contact our representatives, if they’re not well-versed in this, they consult with someone who is.”

Canca agreed. “We need to take a proactive approach at the policy level,” she said. “Most of the time, [the government] is playing catch-up … Usually, the policy follows what’s going on in the technology [sector].”

Despite the problems associated with AI and its misuses, though, the panelists were in agreement on the issue of democratizing AI: All three said that AI development tools and libraries should be made publicly available and open-sourced. “Let’s open it up and have people use it,” Canca said. “That’s the direction to go.”

The tools themselves aren’t the problem, Fariello said. Rather, it’s secrecy and a lack of transparency. “Large neural networks are inherently black boxes,” he said. “If we add a lock around the black box, we’re in big trouble.”