
The global ripple effect of the EU AI Act


Understanding the challenges and opportunities for international companies

  • The EU AI Act is the first comprehensive AI law focusing on risk management and data governance. It impacts any company selling AI products or services in the EU, even those outside Europe.
  • Data quality is crucial. Companies must ensure their AI data is accurate, representative, and traceable, with strict documentation requirements.
  • Compliance can be complex. Companies face challenges navigating different global AI regulations and the logistical burden of meeting the Act's requirements.
  • International cooperation is key. The Act may encourage global alignment on AI governance but risks fragmenting the market.
  • Responsible AI is essential. Companies must prioritize ethical AI development, transparency, and accountability to navigate the evolving AI landscape.

Data is the foundation on which companies build AI technologies. It shapes how algorithms learn, make decisions, and improve over time. However, quality, governance, and overall data management remain the biggest challenges for companies building AI systems.

According to Gartner, 39% of companies cite data-related challenges as the biggest obstacle to AI adoption. Poor data quality (DQ) can lead to inaccurate or biased outputs, while poor governance can lead to regulatory violations and security risks.

As AI becomes more pervasive, regulators are increasingly concerned about questions surrounding these systems' ethics, privacy, security, and transparency. In response to these challenges, the EU has introduced the EU AI Act.

The EU AI Act is landmark legislation: the world’s first comprehensive attempt to regulate AI systems. It addresses the safe and ethical development and use of AI, as well as the data that powers these systems and how that data is governed.

Read more about the EU AI Act and what it means for your business in this initial blog post. The Act’s objectives extend beyond safety and ethics, seeking to create a harmonized regulatory framework for AI across the EU.

However, the Act’s implications extend beyond Europe. It influences companies worldwide, forcing them to rethink how they approach their AI development cycle. Let's get started.

What are the implications for companies outside the EU?

Although the EU AI Act is designed to establish a regulatory framework for companies in the EU, the legislation applies to any company (regardless of location) providing its services or products to the EU market. It has a significant impact on AI governance worldwide.

1. Extraterritorial reach of the EU AI Act

Output-based jurisdiction is a key aspect of the AI Act’s extraterritorial scope: the Act covers providers and deployers of AI systems that are established or located in a third country whenever the output produced by those systems is used in the EU. This means businesses outside the EU must also ensure that their AI systems comply with the new regulations to maintain access to European customers.

For example, a US-based healthcare company offering diagnostic tools to hospitals in Europe would be subject to the Act’s strict risk management and transparency requirements. Since data is a big part of the regulation, it would have to ensure that its data sets are accurate, bias-free, and representative. This requires a robust data governance framework, including DQ checks, audit trails, and mechanisms for identifying and tracing bias.
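In practice, such checks are often automated. Below is a minimal sketch, assuming Python and pandas, of the kind of data quality checks a provider might run before training or updating a model; the column names (such as `patient_age`) are hypothetical placeholders, not fields mandated by the Act.

```python
# A minimal sketch of automated data quality checks run before a training job.
# Column names and thresholds are hypothetical placeholders.
import pandas as pd

def run_basic_dq_checks(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Return a simple report of completeness, validity, and uniqueness issues."""
    report = {}

    # Completeness: flag required columns with missing values above a tolerance.
    missing = df[required_columns].isna().mean()
    report["columns_over_missing_threshold"] = missing[missing > 0.01].to_dict()

    # Validity: flag out-of-range values for a known numeric field.
    if "patient_age" in df.columns:
        out_of_range = (df["patient_age"] < 0) | (df["patient_age"] > 120)
        report["invalid_age_rows"] = int(out_of_range.sum())

    # Uniqueness: duplicated records can silently skew training data.
    report["duplicate_rows"] = int(df.duplicated().sum())

    return report

# Example usage (training_df is assumed to exist):
# report = run_basic_dq_checks(training_df, ["diagnosis", "patient_age", "image_id"])
# Failing checks would be logged to an audit trail and block the training run.
```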

Some of the specific obligations include:

  • Maintaining technical documentation.
  • Providing information to AI system providers who intend to integrate the model.
  • Complying with EU copyright law.
  • Publishing a summary of training content.
  • Undergoing conformity assessments.

The EU AI Act places special attention on the data that drives AI models. AI applications are classified based on the risks they pose, many of which are tied to data governance.


Systems must adhere to rigorous DQ, transparency, and documentation requirements. This includes:

  • Accurate, representative, and relevant data for training AI models.
  • Data of good quality that carries minimal risk of biased or harmful outcomes.
  • Records of all data sources and collection methods.
  • Traceable data, documented processing techniques, and clearly outlined data governance practices.

All of these data requirements also apply to companies outside the EU.
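What traceability can look like in practice is structured metadata that travels with each data set. The sketch below shows one illustrative way to record sources, collection methods, and processing steps in Python; the fields and values are hypothetical examples, not a structure prescribed by the Act.

```python
# An illustrative provenance record for one training data set.
# Field names and values are examples, not requirements from the Act.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DatasetProvenance:
    name: str
    source: str                      # where the data came from
    collection_method: str           # how it was gathered
    collected_on: date
    processing_steps: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

record = DatasetProvenance(
    name="chest-xray-train-v3",
    source="Partner hospitals (anonymized exports)",
    collection_method="Routine clinical imaging, consented secondary use",
    collected_on=date(2024, 3, 1),
    processing_steps=["de-identification", "resolution normalization", "label review"],
    known_limitations=["under-represents patients under 18"],
)

# Store this alongside the data set so auditors and downstream teams can trace it.
print(json.dumps(asdict(record), default=str, indent=2))
```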

2. Compliance challenges for non-EU companies

Companies outside the EU face two main obstacles: a shifting regulatory landscape and the logistics of compliance.

Dealing with Different Regulatory Landscapes

The US, UK, Canada, and other countries and regions are adopting their own AI rules, as we can already see with legislation such as:

  • The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (USA).
  • The AI bill (UK).
  • The Artificial Intelligence and Data Act (Canada).

While the EU AI Act takes a strict, risk-based approach that emphasizes data quality and accountability, countries like the US and UK are taking a more flexible, innovation-focused path. The US leans toward industry-led standards and voluntary guidelines, while the UK is developing a framework centered on technological freedom and economic growth.

This diversity of regulations presents a challenge for companies operating in different markets. They must not only comply with the EU AI Act but also balance it against other, sometimes conflicting, regulations and standards.

However, since the EU AI Act is the strictest of these regimes, companies (even those based outside the EU) can adopt it as the standard for their AI operations and model development globally. Doing so simplifies operations and reduces both complexity and operational costs.

“The Brussels effect,” the phenomenon by which EU regulations end up setting standards for the rest of the world, may also lead other nations to quickly follow in the EU’s footsteps. So the global regulatory landscape may end up less fragmented than some fear today, but that remains to be seen.

Adhering to Compliance Logistics

Companies must also consider the logistical burden of compliance. Organizations must adhere to the obligations mentioned above and be able to provide information to regulatory agencies when needed.

This involves implementing robust data collection and management practices and regular audits to ensure data integrity. For many companies, this can mean overhauling their current data management practices to meet the EU's stringent requirements.
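One small, practical building block for such audits is fingerprinting data set snapshots so a later review can verify that the data behind a deployed model has not silently changed. Below is a minimal sketch, assuming Python and hypothetical file paths.

```python
# A small sketch of an integrity check for periodic audits: hash each data set
# snapshot so a later audit can confirm the training data is unchanged.
# The file path and manifest location are hypothetical.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hash of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_snapshot(data_file: Path, manifest: Path) -> None:
    """Append the file's hash to an audit manifest for later verification."""
    entries = json.loads(manifest.read_text()) if manifest.exists() else []
    entries.append({"file": str(data_file), "sha256": fingerprint(data_file)})
    manifest.write_text(json.dumps(entries, indent=2))

# Example usage:
# record_snapshot(Path("data/train_2024_06.parquet"), Path("audit/manifest.json"))
```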

Companies may need to hire additional data or legal professionals or invest in new tools. For smaller companies, the associated costs may be prohibitive, limiting their access to the EU market.

For any company, this could involve significant changes to its algorithms, documentation, and compliance processes, adding both cost and complexity to its operations.

3. Impact on international cooperation

Divergent frameworks could create friction in international trade, stifle innovation, and complicate cross-border collaboration in AI research and development. Compliance costs, additional legal requirements, and significant operational investments could force some companies to exit the European market or deprioritize it in favor of regions with less stringent regulations.

For example, smaller companies developing AI apps for autonomous vehicles may find it difficult to meet the EU’s requirements for high-risk systems, leading them to scale back their European operations.

Another relatively recent example is Meta, which chose not to release its most advanced multimodal AI model in the EU because of “unpredictable” EU regulations around AI. We might even see companies developing EU-specific versions of their products to stay compliant, which further complicates matters by fragmenting their product portfolios.

The EU AI Act, with its strict, risk-based framework, risks creating tensions with other regulations, potentially limiting international trade and cross-border data flows. These flows are crucial for training and developing AI models and for collaboration on AI research and development.

However, the EU AI Act also presents an opportunity to foster international cooperation and multilateral initiatives. Initiatives such as the Global Partnership on AI (GPAI) could serve as a platform for countries to align their AI regulations or establish common principles. Moreover, trade agreements and regulatory dialogues could help mitigate the risks of fragmentation and encourage a more harmonized approach to AI governance.

4. Responsible stewardship and good governance of AI

In light of all these developments, companies must adopt responsible governance and data stewardship of AI. This means establishing robust, future-proof mechanisms to ensure AI technologies are developed and deployed ethically and transparently.

Placing high importance on data quality is at the core of this effort. That includes rigorous data quality processes, mechanisms for identifying and avoiding bias, and transparent documentation of data sources and processing methods.
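One simple way to operationalize "identifying bias" is to compare the distribution of a sensitive attribute in the training data against a reference population. The sketch below assumes Python and pandas; the attribute name, reference shares, and tolerance are placeholders, not thresholds defined by the Act.

```python
# A minimal representativeness check: compare how often each group appears in
# the training data versus a reference population. All values are placeholders.
import pandas as pd

REFERENCE_SHARES = {"female": 0.51, "male": 0.49}   # hypothetical population baseline
TOLERANCE = 0.05                                     # acceptable absolute deviation

def representativeness_gaps(df: pd.DataFrame, column: str) -> dict[str, float]:
    """Return groups whose share in the data deviates from the reference."""
    observed = df[column].value_counts(normalize=True)
    gaps = {}
    for group, expected in REFERENCE_SHARES.items():
        gap = abs(observed.get(group, 0.0) - expected)
        if gap > TOLERANCE:
            gaps[group] = round(gap, 3)
    return gaps

# Example usage (training_df is assumed to exist):
# gaps = representativeness_gaps(training_df, "sex")
# A non-empty result would trigger review, re-sampling, or documented justification.
```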

Companies should bring together technical and data teams with legal and compliance experts to create a comprehensive governance framework. By embedding governance and stewardship from the start, companies can improve compliance and foster a culture of innovation that aligns with corporate values and societal expectations.

5. The path forward: navigating the global AI landscape

Businesses have to stay agile, monitor emerging trends, and adapt accordingly. Balancing innovation with regulation is clearly one of the biggest issues. The pace of AI development can outrun the speed at which regulations emerge.

Overall, a few key trends are evident, and we expect them to remain prominent: transparency, explainability, and accountability of AI systems.

Here are the strategies that can help companies stay ahead and navigate this evolving landscape:

  1. Risk assessment. Conduct an audit of your current AI systems, assess risk levels, and identify gaps and compliance requirements.
  2. Goal prioritization. Identify high priorities and create a roadmap for tackling the areas that need improvement.
  3. Data governance. Invest in data governance with embedded data quality to ensure the integrity and security of the data feeding your AI systems and to eliminate bias (a minimal lineage-tracking sketch follows this list).
      1. Audit data sources regularly.
      2. Implement data management policies.
      3. Ensure the data governance strategy is robust and adaptable to future technological and regulatory changes.
      4. Implement data management practices such as data cataloging, data lineage tracking, and data access controls, all of which support the Act's data governance requirements.
      5. Establish governance roles such as data stewards or owners to oversee data management processes and enforce compliance with data governance policies.
  4. AI process documentation. Ensure AI is explainable, communicate changes to your employees and system users, and be transparent about how AI is used.
  5. Ongoing risk management. Implement risk management and monitoring with continuous feedback loops.
  6. Regulatory awareness. Keep your company and teams updated on regulatory changes, and continue educating yourself and your staff on best practices for safe AI use.
  7. External engagement. Work with regulators, legal experts, and external advisors, and prepare for regulatory engagement.
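As referenced in step 3 above, lineage tracking means being able to answer, for any training data set, which sources and transformations produced it. Below is a minimal sketch, assuming Python and purely illustrative data set names; in practice this is usually handled by a data catalog or lineage tool rather than hand-rolled code.

```python
# A minimal sketch of data lineage tracking: record, for each derived data set,
# which sources and transformations produced it. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class LineageEdge:
    source: str          # upstream data set or system
    target: str          # derived data set
    transformation: str  # what happened in between

lineage = [
    LineageEdge("crm.customers_raw", "analytics.customers_clean", "deduplication + consent filter"),
    LineageEdge("analytics.customers_clean", "ml.churn_training_v2", "feature engineering"),
]

def upstream_of(target: str, edges: list[LineageEdge]) -> list[str]:
    """Walk the lineage graph to list every data set feeding a target."""
    direct = [e.source for e in edges if e.target == target]
    return direct + [s for d in direct for s in upstream_of(d, edges)]

print(upstream_of("ml.churn_training_v2", lineage))
# -> ['analytics.customers_clean', 'crm.customers_raw']
```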

Final thoughts on the EU AI Act’s global ripple effect

The EU AI Act is poised to impact the global AI landscape significantly. Its strict risk-based approach and emphasis on data quality and governance will force companies worldwide to rethink their AI development and deployment strategies.

By prioritizing data governance, investing in robust data management practices, and fostering a culture of transparency and accountability, companies can navigate the evolving regulatory landscape and ensure AI's ethical and responsible use.

Ataccama provides comprehensive software for data quality and governance that can help organizations comply with the EU AI Act and other emerging AI regulations.

We were also pleased to collaborate with Camelot, which can assist companies in assessing their compliance readiness and developing strategies to meet the Act's requirements.


