The AI Act – An Introduction

The upcoming European Union (EU) Artificial Intelligence (AI) Act will become one of the most comprehensive legal frameworks for AI globally. With the aim to foster innovation and ensure safe use of AI, the AI Act will have potentially huge impacts on firms leveraging AI for value creation. This article delves into the anticipated implications of the AI Act, and how firms can adapt to the changing regulatory landscape.

Regulating AI Applications

The proposed AI Act is designed to establish a legal framework that governs the development, deployment, and use of AI systems across the EU throughout their lifecycle. The Act categorizes AI systems into four risk categories—unacceptable risk, high risk, limited risk, and minimal risk. Each category comes with its own set of requirements and restrictions.

Unacceptable risk AI systems are those that pose a clear threat to the safety, rights, and freedoms of individuals. Such systems will be banned.

High-risk AI systems, which include specific applications of AI, will not be banned outright but will be subject to regulatory oversight. This category includes AI used in:

  • critical infrastructure, where failures could put the life and health of citizens at risk;
  • educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
  • safety components of products (e.g. AI used in robotic surgery);
  • employment, management of workers and access to self-employment (e.g. software for recruitment and resume sorting);
  • essential private and public services (e.g. credit scoring);
  • law enforcement, where it may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • migration, asylum and border control management (e.g. verification of the authenticity of travel documents);
  • administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

Limited-risk AI systems will face minimal regulatory constraints, mainly concerning transparency. For instance, chatbots must disclose that they use AI, so that end users know they are interacting with a machine.

Minimal-risk AI systems fall outside the scope of the AI Act; instead, they are governed by other legal acts such as the GDPR. Examples include spam filters and AI engines used in video games.
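To make the four-tier structure concrete, the classification logic can be sketched as a simple lookup. The tier names and example use cases below follow the categories described above; the mapping itself is a hypothetical illustration for this article, not a legal assessment, and real classification of an AI system requires case-by-case legal review.

```python
# Hypothetical sketch: mapping example AI use cases to the AI Act's
# four risk tiers. Illustrative only -- not a legal assessment.

RISK_TIERS = {
    "unacceptable": {"social scoring by public authorities"},
    "high": {"exam scoring", "resume sorting", "credit scoring"},
    "limited": {"chatbot"},
    "minimal": {"spam filter", "video game ai"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case, or 'unknown'."""
    case = use_case.lower()
    for tier, examples in RISK_TIERS.items():
        if case in examples:
            return tier
    return "unknown"
```

In practice, firms would extend such an inventory with every AI use case in the organization and revisit the mapping as the Act's annexes and guidance evolve.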

Impact on Firms using AI for Value Creation

1. Compliance and Implementation Costs

Firms using, or contemplating using, AI for their core business will need to adhere to the new regulatory framework, which may result in increased compliance and technical overhead. Businesses selling, and in some cases importing or distributing, high-risk AI systems will be required to conduct conformity assessments, maintain transparency through documentation, and ensure data quality. Small and medium-sized enterprises (SMEs) may find it particularly challenging to bear these costs.

2. Innovation Opportunities

On the other hand, the AI Act aims to create opportunities for innovation by providing a clear and predictable legal environment. The idea is that firms that develop and utilize AI systems in compliance with the new regulations will gain a competitive advantage in the market. Conforming systems will carry the CE marking, and the EU’s ambition is for this product “safety” label to generate trust; in that sense, the regulation is comparable to product safety legislation. Moreover, the Act encourages the development of AI systems that respect fundamental rights and ethical values, which can inspire novel, human-centric AI applications.

Whereas the AI Act focuses on the development of safe AI systems, it is supplemented by other legal acts governing the types of data used as well as the application of the AI system. Examples include the GDPR, which covers personal data, profiling, and automated decision-making, as well as anti-discrimination legislation.

3. International Market Access

The AI Act aims to harmonize AI regulations across the EU, ensuring that AI systems developed in one Member State can be seamlessly deployed in another. The idea is that this will facilitate cross-border trade and enable firms to access a broader market. Additionally, the AI Act’s comprehensive approach may serve as a model for other jurisdictions, potentially easing the process of international expansion for compliant firms. Time will tell whether this works in practice.

4. Data Protection and Privacy

The Act strengthens data protection and privacy by mandating that AI systems comply with the General Data Protection Regulation (GDPR). Firms using AI for value creation need to continue implementing robust data governance practices, or improve existing routines, to avoid substantial fines. In addition, data-hungry AI algorithms bring new challenges tied to anonymization and synthetic data, access, data transfer, and traceability.

5. Liability and Accountability

The AI Act places a strong emphasis on accountability, requiring firms to establish clear lines of responsibility for AI systems. This means that firms will need to assess their liability and insurance policies to ensure they are adequately covered in the event of AI-related accidents or harm.

Recommendations

As the AI Act transforms the regulatory landscape, we recommend that firms:

1. Assess their AI systems to determine their risk classification and ensure compliance with the corresponding requirements. Establish a process to capture new experimentation/implementation of AI in the organization.

2. Implement robust data governance practices that align with GDPR and other relevant data protection regulations.

3. Establish clear lines of responsibility and update liability and insurance policies accordingly. Consider including AI in ethical guidelines, the company’s privacy policy and/or other corporate governance.

4. Monitor developments in the AI Act and similar legislation in other jurisdictions to stay abreast of regulatory changes.

Conclusion

The AI Act is set to change the way firms use AI. The timing is uncanny, with large language models stirring the public’s imagination with what is possible to achieve with machine learning and neural networks.

While the new regulation may increase compliance and implementation costs, it also presents opportunities for innovation, market expansion, and enhanced trust. However, competing regulations create an unclear regulatory landscape in the EU and at the national level in the short to medium term.

In summary, the AI Act is a first attempt to regulate AI, and will likely shape the future of AI in Europe and beyond. No matter the AI Act’s short-term implications, there are some immediate compliance implications: Firms must proactively assess their AI systems, data governance practices, and liability coverage to ensure compliance and capitalize on the opportunities created by the new regulatory framework. By staying abreast of legislative changes and investing in responsible AI innovation, businesses can continue to harness the transformative power of AI for value creation while fostering trust and transparency in the rapidly evolving AI landscape.

For more information please contact:

Carsten Maartmann-Moe

Head of Cyber & Digital Risk
