EU AI experts urge nations to avoid mass surveillance

Image Credit: Håkan Dahlström

Establish an AI Awareness Day on Alan Turing’s birthday and avoid government mass surveillance. These were two of the investment and policy recommendations a group of more than 50 AI experts from across the European Union offered today.

“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the report released today reads.

The potential for AI systems to harm humans may require governments to “provide appropriate safeguards to protect individuals and society,” the authors warned.

Altogether, the independent investment and policy recommendations report includes 33 recommendations aimed at making Europe competitive on the global AI stage while guiding the creation of trustworthy, sustainable AI systems.
The European Commission formed the AI High-Level Expert Group (HLEG) in June 2018, and today’s report follows the April release of the group’s AI ethics guidelines.

The report takes a closer look at what some European leaders have referred to as a third way, a path different from approaches being taken in the United States, where privacy concerns abound, and China, where facial recognition use has been called dystopian and earned international condemnation.

The report suggests B2B AI technology will likely be more important to Europe overall than the consumer AI currently originating from tech giants in the United States and China.

“Europe can distinguish itself from others by developing, deploying, using, and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being,” the document reads.

Other key recommendations:

  • Closely follow data collection practices of institutions and businesses
  • Require self-identification of AI systems in human-machine interactions
  • Support challenges to address climate change and hold an annual “AI for good” challenge
  • Include workers whose jobs are impacted by AI in the AI design process
  • Map skills shortages to identify AI opportunities
  • Support the development of AI testing systems that let civil society organizations conduct independent quality verification
  • Support elementary AI education courses for all EU citizens
  • Fund government employee AI training and assess potential privacy and personal data risks of AI systems before government agencies procure them
  • Create monitoring mechanisms to track the impact of AI on European member states and across the EU
  • Fund additional research into the impact of AI on individuals and society, including on the rule of law, democracy, jobs, and social systems and structures

AI Now Institute cofounder Meredith Whittaker stressed the need to study the impact of AI in areas like criminal justice, hiring, and education during a U.S. congressional hearing held today on the ethical implications of AI.

In the business sector, the EU AI expert report calls for companies to partner with training programs to teach employees how to work with AI systems, and it says governments should create incentives for companies to provide skills training and updates for their workforce. The report also encourages funding and training support for startups and small businesses, as well as additional InvestEU funding to support the growth of more AI companies in Europe.

Beyond B2B and B2C, the report acknowledges a third sector, P2C, or public-to-citizen services, establishing a category for AI systems used by governments.

“The P2C context, or Digital Government, is emerging very rapidly, leading to a potential revolution in the role and structure of government and its relationship with individuals and businesses,” the report reads.

The investment and policy document is the second and final piece expected from the group and was introduced as part of a European AI Alliance Assembly that took place today in Brussels.

At a gathering held in Helsinki last month, foreign ministers of EU member nations agreed to move toward creating a legal framework for AI system design based on the Council of Europe’s standards for the rule of law. The Council of Europe introduced ethical standards for the use of AI in criminal justice in December 2018.

Beyond reports commissioned by the European Commission, a number of European nations signed the OECD’s AI principles in France, together with the United States, Australia, Japan, and dozens of other countries.

The European continent continues to boast the largest number of AI researchers in the world. But if growth rates persist, China will surpass Europe in the years ahead, according to a December 2018 analysis by Dutch business analytics company Elsevier.