The European Commission will launch a pilot project this summer designed to test ethical guidelines it has developed for the use of artificial intelligence.
Companies, public agencies, and other organizations can now join the European AI Alliance, which will notify members when the pilot officially begins.
“The ethical dimension of AI is not a luxury feature or an add-on,” said Vice-President for the Digital Single Market Andrus Ansip in a statement. “It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
While the U.S. has a more free-market approach driven by corporate giants such as Google and Amazon, and China has taken a centralized approach, Europe is trying to carve out a different path toward developing AI by emphasizing the role of ethics.
With AI expected to disrupt a growing range of industries, remaining competitive in developing the technology has taken on a growing urgency in Europe. Over the past two years, the European Commission has been trying to develop a comprehensive approach that balances the need to develop AI with the desire to craft protections around its use.
Last summer, the commission appointed a group of independent experts to help develop a set of ethical guidelines. That group created seven general guidelines, which were officially presented today and will be reviewed at a forum scheduled for tomorrow:
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit, or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills, and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to foster positive social change and enhance sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
The commission is seeking partners to test these guidelines and offer feedback. Details of how the pilots will work have yet to be announced.
But the pilot phase is expected to last until early next year, when the AI expert group will review the results and further refine its proposals.
Meanwhile, this fall, the commission plans to launch a network of AI research centers and digital innovation hubs, and create new proposals around data sharing for European Union member states.