EU AI Act: What does it mean for your business?

The European Union Artificial Intelligence Act (EU AI Act) has already begun taking effect. It is neither the first nor the last government regulation related to AI.

In May 2023, Clearview AI, a facial recognition technology company, agreed to a $20 million settlement with the American Civil Liberties Union (ACLU) for violating the Illinois Biometric Information Privacy Act. Its facial recognition software collected data from social media without authorization, and the company must now forfeit shares to the subjects whose rights were infringed.

Unfortunately, these cases are all too common. While AI technologies offer immense potential, they can lead to significant consequences without proper regulation. Companies must be aware of these regulations and use AI safely and transparently, enabling them to leverage AI's benefits while protecting personal rights and public safety.

As a modern and competitive enterprise, you are already looking into the benefits of AI models for your business. To do so effectively, you must build your models on the best data possible to achieve AI readiness and be aware of all relevant EU AI regulations that could impact your project. Read on to learn about the European Union AI Act and why it's important to your business.

What is the EU AI Act? (executive summary)

The EU AI Act is the world's first comprehensive AI legislation from a major regulatory body. Its goal is to establish a unified regulatory framework for AI across the European Union, and it takes a risk-based approach.

The act classifies AI systems into four risk categories:

  • Unacceptable risk: Completely prohibited, as these systems pose a significant risk to personal rights and safety.
    • E.g., social scoring, compiling facial recognition databases, etc.
  • High-risk: Systems deemed potentially dangerous but permitted under appropriate scrutiny. Subject to specific regulations and legal requirements.
    • E.g., systems that determine access to education, AI used in critical infrastructure such as transportation, etc.
  • Limited risk: AI systems that don't necessarily cause concern but still must comply with specific transparency obligations.
    • E.g., chatbots, AI-generated text intended to inform the public on matters of public interest, etc.
  • Minimal risk: Any actions falling outside the unacceptable and high-risk categories.
    • E.g., AI-enabled video games, spam filters, etc.
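As a rough illustration (and emphatically not legal advice), the four-tier structure can be sketched as a simple lookup table. The tier names follow the act, but the example systems, obligation labels, and helper function below are our own simplification:

```python
# Hypothetical sketch of the EU AI Act's four risk tiers (illustrative only,
# not a compliance tool). Tier names follow the act; the example systems and
# obligation labels are simplified assumptions.
RISK_TIERS = {
    "unacceptable": {"allowed": False,
                     "examples": ["social scoring", "untargeted face-scraping"]},
    "high":         {"allowed": True,
                     "examples": ["exam-access scoring", "transport infrastructure"],
                     "obligations": ["conformity assessment", "post-market monitoring"]},
    "limited":      {"allowed": True,
                     "examples": ["chatbots"],
                     "obligations": ["transparency disclosures"]},
    "minimal":      {"allowed": True,
                     "examples": ["spam filters", "video games"],
                     "obligations": []},
}

def obligations_for(tier: str) -> list[str]:
    """Return the (simplified) obligations for a risk tier; raise if banned."""
    entry = RISK_TIERS[tier]
    if not entry["allowed"]:
        raise ValueError(f"'{tier}' systems are prohibited outright")
    return entry.get("obligations", [])
```

The key design point the act encodes: obligations scale with risk, and the top tier is not regulated but banned.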

It will be applicable at both the national and EU levels. National governments will be responsible for creating and implementing governing authorities for market surveillance and conformity assessments. At the EU level, authorities will focus on general-purpose AI (GPAI) and foundation models while providing support at the national level via the European AI Office.

Most obligations fall on "High-risk" AI vendors (developers). They will be subject to "conformity assessments," needing to conduct internal checks (in some cases via a third party) for compliance purposes (read more on these checks and their requirements below).

They'll also have to conduct "post-market monitoring," continuously monitoring and reporting any serious incidents or malfunctions. Companies complying with these checks and assessments will receive the "CE Marking," indicating their conformity with the act.

The act entered into force on August 1st, 2024, and becomes generally applicable on August 2nd, 2026. Some important dates to keep in mind:

  • February 2nd, 2025 - Ban on AI systems with unacceptable risk
  • May 2nd, 2025 - Codes of practice are due
  • August 2nd, 2025 - Governance rules and obligations for GPAI become applicable
  • August 2nd, 2026 - The AI Act becomes generally applicable to AI systems
  • August 2nd, 2027 - The act applies in full, across all categories

What AI systems are prohibited by the EU AI Act?

As stated above, some AI systems are explicitly banned under the EU AI Act. Most of these systems are prohibited because they infringe upon safety, fundamental rights, or European Union values. Some of the more prominent examples are:

  • Subliminal techniques. AI systems designed to manipulate individual behavior via deceptive techniques that can damage informed decision-making and cause significant harm
    • E.g., Algorithms on social media that influence political beliefs.
  • Social scoring. Using AI to evaluate or classify individuals based on behavior or personal traits leading to unfavorable treatment of specific groups.
    • E.g., The social scoring system employed by the Chinese government.
  • Compiling facial recognition databases. Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
    • E.g., Scraping images from social media without specific user consent.

You can find a comprehensive list of all the "unacceptable risk" AI systems here.

What are the requirements for high-risk AI system providers under the EU AI Act?

High-risk AI systems are classified via a set of rules defined in the act's legislation. These systems are typically ones that profile individuals or process personal data. You can learn more about how they're defined here.

Providers (developers) of high-risk AI systems must adhere to several rules to reduce public risk. You can find a complete list of them here (Compliance with Requirements) and here (Quality Management System). Find a summary of the requirements below:

  • Risk management system. An established system to monitor risk throughout the AI system's lifecycle.
  • Data governance. Ensure the quality of training, validation, and testing data sets, with consistent monitoring, to guarantee that data sets are representative, as free of errors as possible, and fit for purpose.
  • Technical documentation. Provide technical evidence of compliance and provide authorities with the necessary information to assess that compliance.
  • Record-keeping. Enable the AI system to automatically record events relevant to identifying risks and significant changes throughout the system lifecycle.
  • Instructions for use. Provided to deployers to ensure the system is used compliantly.
  • Human oversight. Must be built into the AI system so that humans can effectively monitor and intervene.
  • Accuracy, robustness, and cybersecurity. AI systems must adhere to these principles to justify results and prevent data breaches.
  • Quality management system. To ensure compliance with quality standards.

What are the requirements for GPAI model providers under the EU AI Act?

GPAI (General-Purpose Artificial Intelligence) models, like OpenAI's GPT-3, are trained on vast data sets and can perform a wide range of tasks across industries.

Under the EU AI regulation, developers of these models have their own set of obligations, including:

  • Model evaluations. Developers must perform regular evaluations, such as adversarial testing, to identify and deal with risks.
  • Risk. Must assess and mitigate potential risks in both the sources of data and the models themselves.
  • Incident reporting. If any issues, risks, or problems arise, the developer is responsible for tracking, documenting, and reporting them immediately.
  • Cybersecurity protection. Security concerns must be addressed at an adequate level.

Why should businesses care about the EU AI Act?

Beyond compliance and legal obligations, businesses need to address the EU act sooner rather than later. It's not just about avoiding hefty fines, which can reach 35 million Euros or 7% of global annual turnover for the most serious violations.

Businesses that comply with the EU AI regulation can demonstrate their dedication to responsible AI practices. They can get a head start on risk mitigation that would affect them regardless of regulation. Finally, businesses can develop and deploy their models more confidently knowing what regulations are in place and on the horizon, growing their global leadership and influence.

How Ataccama helps

Don't let new regulations deter your AI journey! Download the end-to-end data quality framework today to address the most important element of AI model development: high-quality data.

The journey doesn't end with high-quality data! Adopt our data quality software to help with everything from data discovery, metadata management, and governance to master data management. Schedule a call to learn more about our platform today.

Written by Anja Duricic

Anja is our Product Marketing Manager of Data Quality and Data Governance at Ataccama. She is passionate about the human experience, learning about real-life companies, and helping them with their real-life needs.