AI – Overview of the EU Regulation on Artificial Intelligence

The rapid advancement of Artificial Intelligence has sparked significant debate about its regulation. The European Union is among the first global actors to develop a comprehensive policy framework for AI. This post outlines the key aspects of the EU’s AI regulation.

The cornerstone of the EU’s AI regulation is the AI Act (Regulation (EU) 2024/1689), which entered into force on August 1, 2024. The regulation applies to AI systems placed on the market or put into service in the EU, regardless of where the provider is established, with exceptions for systems used exclusively for military or national security purposes, for scientific research, or for purely personal, non-professional activity. It imposes different obligations on AI systems depending on the level of risk they pose to society, and it contains specific provisions for General Purpose AI models and deepfakes. In May 2024, the European Commission established the AI Office to oversee the implementation of the AI Act.

The European Commission published the first draft of the AI Act on April 21, 2021. The political debate surrounding the Act focused on balancing the protection of society from high-risk AI systems against the concern that regulation might stifle AI innovation in the EU. During this debate, the Act was amended to add rules for General Purpose AI models and a broader ban on systems posing unacceptable risks, and several measures were adopted to promote AI innovation within the EU.

Turning to the content of the regulation, the AI Act classifies AI systems into four categories based on the level of risk they pose to society (a schematic sketch of this tiering follows the list):

  1. Unacceptable Risk: AI systems in this category are banned in the EU. These include real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), social scoring, and systems designed to manipulate human behavior.
  2. High Risk: AI systems that operate in critical areas such as critical infrastructure, healthcare, education (e.g., exam scoring), job recruitment, credit scoring, migration and border management, and the administration of justice and democratic processes fall under this category. These systems are permitted but must meet stringent requirements before they can be placed on the market, including transparency of training data, traceability of results, human oversight, comprehensive technical documentation, and robust data protection measures. Additionally, all high-risk AI systems must be registered in an EU database. The obligations for high-risk AI systems take effect in August 2026.
  3. Limited Risk: This category includes systems such as chatbots and AI-generated content published to inform the public on matters of public interest. Providers of such systems must ensure that users know they are interacting with an AI or viewing AI-generated content.
  4. Minimal or No Risk: All remaining AI systems fall into this category and can be used freely within the EU. The Commission notes that the vast majority of AI systems currently used in the EU fall into this category.
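For readers who prefer to see the tiering as logic, the sketch below expresses the four categories as a simple triage function in Python. It is purely illustrative: the tier names and examples paraphrase the Act, but the keyword sets and the classify helper are hypothetical simplifications, not a legal test.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned in the EU"
    HIGH = "permitted, subject to strict requirements and EU database registration"
    LIMITED = "permitted, with transparency obligations"
    MINIMAL = "freely usable"

# Hypothetical keyword sets paraphrasing the Act's examples;
# not exhaustive and not an official taxonomy.
BANNED_PRACTICES = {
    "social scoring",
    "behavioral manipulation",
    "real-time remote biometric identification",
}
HIGH_RISK_AREAS = {
    "critical infrastructure",
    "healthcare",
    "exam scoring",
    "job recruitment",
    "credit scoring",
    "migration and border management",
    "administration of justice",
}
TRANSPARENCY_CASES = {"chatbot", "ai-generated public-interest content"}

def classify(use_case: str) -> RiskTier:
    """Map a described use case to its risk tier (illustrative only)."""
    use_case = use_case.lower()
    if use_case in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_CASES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

if __name__ == "__main__":
    for case in ("credit scoring", "chatbot", "spam filter"):
        tier = classify(case)
        print(f"{case}: {tier.name} ({tier.value})")
```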

The AI Act includes special provisions for General Purpose AI (GPAI) models, which are capable of performing a wide range of tasks. GPAI models trained with more than 10^25 floating-point operations of cumulative compute are presumed to pose systemic risk; their providers must notify the European Commission, conduct standardized model evaluations, assess and mitigate systemic risks, track and report serious incidents, and ensure adequate cybersecurity protection. These rules come into force in August 2025. Notably, GPAI models used for research or non-professional purposes do not fall under the obligations described above. Additionally, Article 50 of the Act mandates that deepfakes be clearly labeled as AI-generated or manipulated content.
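To make the 10^25 threshold concrete, the following back-of-the-envelope check uses the widely cited scaling heuristic of roughly 6 × parameters × training tokens FLOPs for a training run. Both the heuristic and the example model sizes are assumptions for illustration; the Act itself counts the actual cumulative compute used in training.

```python
# Threshold from the AI Act: a GPAI model trained with more than 1e25 FLOPs
# of cumulative compute is presumed to pose systemic risk.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Estimate cumulative training compute with the common heuristic
    FLOPs ~= 6 * parameters * training tokens (an approximation, not
    the Act's own definition)."""
    return 6 * parameters * tokens

if __name__ == "__main__":
    # Hypothetical training runs, not figures from the regulation.
    runs = [("7B-parameter model, 2T tokens", 7e9, 2e12),
            ("1.8T-parameter model, 13T tokens", 1.8e12, 13e12)]
    for label, params, tokens in runs:
        flops = estimate_training_flops(params, tokens)
        presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
        print(f"{label}: ~{flops:.1e} FLOPs -> "
              f"systemic-risk presumption: {presumed}")
```

Under these assumptions, a 7-billion-parameter model trained on 2 trillion tokens lands around 8.4e22 FLOPs, well below the threshold, while a 1.8-trillion-parameter model trained on 13 trillion tokens would exceed it.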

In May 2024, the European Commission established the AI Office to oversee the implementation of the AI Act and support the development of AI products within the EU. The EU plans to support AI companies through a financial package expected to generate an additional €4 billion in public and private investment in generative AI by 2027. The EU also aims to boost AI-related education, training, and reskilling initiatives.

Overall, the EU’s AI policy aims to position the region as a global leader in AI regulation. By imposing high standards on both domestic and foreign AI companies wishing to operate in Europe, the EU hopes to set a global benchmark for AI transparency and security. However, the policy has attracted criticism, with concerns that excessive regulatory burdens may stifle AI development in the EU. The EU already trails the US and China in AI innovation, largely because it lacks the substantial private investment in AI research that tech giants provide in those countries, and experts fear the AI Act could widen this gap further. The stringent rules could also fragment the AI market, with companies offering only limited versions of their AI products to EU consumers.


For further information, please consult the following sources:

https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal

https://www.ey.com/en_ch/forensic-integrity-services/the-eu-ai-act-what-it-means-for-your-business

https://digital-strategy.ec.europa.eu/en/policies/ai-office

https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383

https://www.euronews.com/next/2024/03/22/could-the-new-eu-ai-act-stifle-genai-innovation-in-europe-a-new-study-says-it-could

https://www.technologyreview.com/2024/03/19/1089919/the-ai-act-is-done-heres-what-will-and-wont-change/

https://www.akingump.com/en/insights/alerts/political-deal-on-the-eu-ai-act-a-milestone-but-the-journey-continues

https://www.brusselstimes.com/1068691/eu-is-lagging-behind-us-and-china-in-investments-in-artificial-intelligence-says-audit-report