EU AI Act Compliance Roadmap: 6 Steps

Published on 05 June 2024

The EU AI Act sets rules to ensure AI systems are safe, respect fundamental rights, and align with EU values. It applies to organizations operating in or serving the EU market, wherever they are based. Complying demonstrates responsible AI practices and builds customer trust.

Here's a 6-step roadmap to comply with the EU AI Act:

  1. Identify AI Systems and Roles

    • Determine which systems qualify as AI systems
    • Identify roles: provider, deployer, importer, distributor, manufacturer
  2. Assess Risk Levels

    • Categorize AI systems into risk levels: unacceptable, high-risk, limited-risk, minimal-risk
    • Understand requirements for each risk level
  3. Implement High-Risk AI Requirements

    • Conduct risk assessments, manage data, provide human oversight
    • Document systems, establish quality management processes
  4. Address Limited-Risk AI

    • Inform users when interacting with limited-risk AI systems
    • Clearly label AI-generated content and explain limitations
  5. Establish Governance and Monitoring

    • Set up an oversight body and develop policies
    • Implement incident reporting and corrective actions
    • Continuously improve AI oversight practices
  6. Stay Updated and Prepare for Changes

    • Monitor regulatory updates and adapt to new requirements
    • Provide ongoing training and encourage continuous learning

Implementation timeline:

| Timeline | Requirement |
| --- | --- |
| 6 months after entry into force | Ban on prohibited AI systems starts |
| 12 months | Rules for general-purpose AI (GPAI) governance apply |
| 24 months | Act fully in force, including high-risk AI obligations |
| 36 months | Act applies to products needing third-party conformity assessments |
| Additional period | High-risk AI systems already on the market are only regulated if they undergo major design changes; GPAI systems already on the market have two more years to comply |

1. Identify AI Systems and Their Roles

The first step is to identify every AI system used in your organization and determine which of the Act's roles you play for each.

What is an AI System?

An AI system is a machine-based system that operates with some degree of autonomy and infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence its environment. Examples include:

  • Chatbots
  • Virtual assistants
  • Image recognition systems
  • Predictive analytics models

Roles Defined by the Act

The EU AI Act defines these roles:

| Role | Description |
| --- | --- |
| Provider | Develops an AI system, or has one developed, and places it on the market under its own name |
| Deployer | Uses an AI system under its authority in its operations |
| Importer | Places an AI system from a supplier established outside the EU on the EU market |
| Distributor | Makes an AI system available on the EU market, other than the provider or importer |
| Manufacturer | Places an AI system on the market together with its product, under its own name or trademark |

Conducting an Internal Audit

To identify your AI systems and their roles, follow these steps:

  1. Review system documentation: Gather and review technical specifications, system design, and algorithms used.
  2. Assess system functionality: Evaluate if each system meets the AI system definition.
  3. Identify stakeholders: Determine the providers, deployers, importers, distributors, and manufacturers involved with each AI system.
  4. Document findings: Create a list of AI systems, their roles, and the stakeholders involved (a minimal inventory sketch follows this list).
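
To make the inventory concrete, here is a minimal sketch in Python of what an audit record could look like. The class and field names are illustrative assumptions, not terms prescribed by the Act.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for the internal audit. The role vocabulary
# mirrors the Act's roles; everything else is an illustrative assumption.
ROLES = {"provider", "deployer", "importer", "distributor", "manufacturer"}

@dataclass
class AISystemRecord:
    name: str                     # internal system name, e.g. "support-chatbot"
    description: str              # what the system does and what outputs it generates
    roles: set[str]               # which of the Act's roles your organization holds
    stakeholders: list[str] = field(default_factory=list)  # vendors, teams, integrators

    def __post_init__(self) -> None:
        unknown = self.roles - ROLES
        if unknown:
            raise ValueError(f"Unknown roles: {unknown}")

# Example: one entry in the audit inventory
inventory = [
    AISystemRecord(
        name="support-chatbot",
        description="Generates answers to customer support queries",
        roles={"deployer"},
        stakeholders=["VendorX (provider)", "Customer Support team"],
    )
]
```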

2. Assess Risk Levels

The EU AI Act categorizes AI systems into four risk levels: unacceptable, high-risk, limited-risk, and minimal-risk. Each level has specific requirements you must follow.

Risk Categories

| Risk Level | Description | Requirements |
| --- | --- | --- |
| Unacceptable | Poses a clear threat to safety or fundamental rights | Banned |
| High-risk | Significant impact on health, safety, or fundamental rights | Strict obligations: risk assessment, documentation, human oversight |
| Limited-risk | Systems that interact with humans | Transparency obligations |
| Minimal-risk | Basic systems like spam filters | No specific requirements |

Determining Risk Levels

Here's how the risk levels are defined:

  • Unacceptable risk: AI systems that could seriously endanger safety or fundamental rights, such as social scoring or real-time remote biometric identification in public spaces (subject to narrow exceptions).
  • High-risk: AI systems that could significantly impact health, safety, or fundamental rights, like AI-assisted medical diagnosis or AI-powered autonomous vehicles.
  • Limited-risk: AI systems that interact with humans, such as chatbots or virtual assistants.
  • Minimal-risk: Basic AI systems that pose little or no risk to people, like spam filters or AI features in video games.

Assessing Your AI Systems

To determine the risk level of your AI systems, follow these steps:

  1. Review system details: Gather and review technical specifications, system design, and algorithms.
  2. Evaluate capabilities: Assess the system's functions and potential impact on humans.
  3. Identify stakeholders: Determine the providers, deployers, importers, distributors, and manufacturers involved.
  4. Document findings: Create a list of AI systems, their risk levels, and stakeholders (a triage sketch follows this list).
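
A simple way to operationalize this triage is a first-pass classification function, sketched below. The boolean flags are simplified assumptions; the real determination depends on the Act's annexes and legal review.

```python
# Illustrative first-pass triage into the Act's four risk levels.
# The flags are simplified stand-ins for a proper legal assessment.
def classify_risk(
    is_prohibited_practice: bool,   # e.g. social scoring, manipulative techniques
    is_listed_high_risk_use: bool,  # e.g. employment screening, credit scoring
    interacts_with_humans: bool,    # e.g. chatbots, content generators
) -> str:
    if is_prohibited_practice:
        return "unacceptable"
    if is_listed_high_risk_use:
        return "high-risk"
    if interacts_with_humans:
        return "limited-risk"
    return "minimal-risk"

# A customer-facing chatbot with no prohibited or listed high-risk use:
print(classify_risk(False, False, True))  # -> "limited-risk"
```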

Meeting Requirements

Once you've assessed the risk level, you must comply with the specific requirements for each level. For high-risk AI systems, this includes conducting risk assessments, documenting system design and development, and implementing human oversight. For limited-risk AI systems, you must meet transparency obligations, such as informing users about AI-generated content.

3. Implement Requirements for High-Risk AI

High-risk AI systems face strict rules under the EU AI Act. To comply, you must understand and implement specific requirements.

High-Risk AI Requirements

For high-risk AI systems, you must:

  • Assess risks: Identify potential risks to rights, health, and safety.
  • Manage data: Ensure data quality, integrity, and transparency.
  • Provide oversight: Implement human oversight to detect and correct issues.
  • Document systems: Maintain detailed records of design, development, and deployment.
  • Manage quality: Establish a quality management system for continuous improvement.

Setting Up Quality Management

To set up a quality management system:

  1. Define objectives: Establish clear goals aligned with the EU AI Act.
  2. Identify processes: Determine key processes for development, deployment, and maintenance.
  3. Assign responsibilities: Designate teams or individuals for each process.
  4. Monitor and evaluate: Establish mechanisms to monitor and evaluate system performance.
  5. Continuously improve: Regularly review and update the quality management system (a configuration sketch follows this list).
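
One lightweight way to keep these processes accountable is a configuration that maps each process to an owner and a review cadence, as in the sketch below. Process names, owners, and cadences are illustrative assumptions.

```python
# Hypothetical quality-management configuration: each key process has an
# owner and a review cadence in days. All values are illustrative.
qms_processes = {
    "risk-assessment":   {"owner": "AI Governance Board", "review_every_days": 90},
    "data-governance":   {"owner": "Data Engineering",    "review_every_days": 30},
    "model-monitoring":  {"owner": "ML Platform team",    "review_every_days": 7},
    "incident-handling": {"owner": "Compliance Office",   "review_every_days": 30},
}

def overdue(process: dict, days_since_last_review: int) -> bool:
    """Flag a process whose scheduled review has lapsed."""
    return days_since_last_review > process["review_every_days"]

for name, proc in qms_processes.items():
    if overdue(proc, days_since_last_review=45):
        print(f"{name}: review overdue (owner: {proc['owner']})")
```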

Ensuring Transparency and Traceability

To maintain transparency and traceability:

  • Document clearly: Provide clear and concise documentation of system design, development, and deployment.
  • Track changes: Establish audit trails to track system updates and modifications (see the sketch after the summary table below).
  • Explain decisions: Ensure AI systems are explainable and their decision-making is transparent.
  • Establish accountability: Identify and address errors, biases, or unintended consequences.

These requirements are summarized below:

| Requirement | Description |
| --- | --- |
| Risk assessments | Identify potential risks to fundamental rights, health, and safety |
| Data governance | Ensure data quality, integrity, and transparency |
| Human oversight | Implement mechanisms to detect and correct biases, errors, or unintended consequences |
| Documentation | Maintain detailed records of system design, development, and deployment |
| Quality management | Establish a system for continuous improvement and monitoring |
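
For the "track changes" point above, an append-only audit trail is one way to make modifications traceable. The sketch below hash-chains each entry to the previous one so later tampering is detectable; it is an illustrative pattern, not a mechanism prescribed by the Act.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal append-only audit trail for model and system changes.
def append_entry(log: list[dict], actor: str, change: str) -> None:
    prev_hash = log[-1]["hash"] if log else ""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "change": change,
        "prev_hash": prev_hash,  # chaining makes silent edits detectable
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

audit_log: list[dict] = []
append_entry(audit_log, "ml-team", "Retrained credit-scoring model on Q2 data")
append_entry(audit_log, "ml-team", "Raised decision threshold from 0.70 to 0.75")
```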

4. Address Limited-Risk AI Systems

Transparency for Limited-Risk AI

The EU AI Act requires transparency for limited-risk AI systems, such as chatbots and generative AI models. Businesses using these systems must inform users that they are interacting with AI, explain in plain terms how the system works, and describe what data it uses.

Informing Users About AI

To comply with the EU AI Act, businesses must clearly inform users about AI interactions. This can be done by:

  • Providing a simple explanation of the AI system's functionality
  • Informing users about the data used to operate the AI system
  • Ensuring users understand how the AI system makes decisions
  • Offering users the option to consent to using the AI system (a disclosure sketch follows this list)
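
A disclosure flow could look like the sketch below: the user sees an AI notice and opts in before the session starts, and every reply is visibly marked. The wording and flow are assumptions, not text mandated by the Act.

```python
# Illustrative transparency wrapper for a chatbot session.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. It generates answers from "
    "your messages and our documentation, and it can make mistakes."
)

def start_session(user_consents: bool) -> bool:
    print(AI_DISCLOSURE)           # inform the user before any interaction
    return user_consents           # proceed only if the user agrees

def reply(model_output: str) -> str:
    return f"[AI-generated] {model_output}"  # mark every AI response

if start_session(user_consents=True):
    print(reply("Your order shipped on Monday."))
```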

Labeling AI-Generated Content

The EU AI Act requires businesses to label AI-generated content, such as images, audio, or video. The requirements are summarized below, and a labeling sketch follows the table:

| Requirement | Description |
| --- | --- |
| Marking content | Clearly mark AI-generated content in a machine-readable format |
| Effective labeling | Ensure labeling is effective, interoperable, robust, and reliable |
| Providing information | Inform users about the AI system used to generate the content |
| Explaining limitations | Ensure users understand the limitations and potential biases of AI-generated content |
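
One simple form of machine-readable marking is a JSON sidecar file stored next to the generated asset, sketched below. The field names are illustrative; production systems would more likely embed provenance metadata using a standard such as C2PA content credentials.

```python
import json

# Hypothetical sidecar label for a generated asset; field names are illustrative.
def write_content_label(asset_path: str, generator: str, limitations: str) -> None:
    label = {
        "ai_generated": True,
        "generator": generator,      # which AI system produced the asset
        "limitations": limitations,  # known caveats and potential biases
    }
    with open(asset_path + ".ai-label.json", "w") as f:
        json.dump(label, f, indent=2)

write_content_label(
    "hero-image.png",
    generator="internal image model v3",
    limitations="May render text and fine details inaccurately.",
)
```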

5. Establish Governance and Monitoring

Ongoing Oversight is Key

Continuous oversight is crucial for complying with the EU AI Act. It ensures AI systems are developed and used responsibly, minimizing risks of non-compliance and potential harm. Effective oversight involves clear policies, procedures, and accountability to govern AI development, deployment, and use. This includes ensuring AI systems are designed and trained to avoid biases, discrimination, and other ethical issues.

Setting Up an Oversight Framework

To establish an oversight framework, organizations should:

  1. Define oversight goals: Identify goals for AI oversight, including ensuring EU AI Act compliance.
  2. Establish an oversight body: Set up a committee responsible for overseeing AI development and use.
  3. Develop policies and procedures: Create guidelines for AI development, deployment, use, data management, model training, and testing.
  4. Assign roles and responsibilities: Clearly define roles and accountability for AI development, deployment, and use.
  5. Establish incident reporting and corrective actions: Develop processes for reporting and addressing AI system incidents, including errors, biases, or other issues.

Incident Reporting and Corrective Actions

Incident reporting and corrective actions are essential for AI oversight. Organizations should establish processes for the following (a reporting sketch follows the summary table):

  1. Systematic incident reporting: Report incidents related to AI systems, such as errors, biases, or other issues.
  2. Corrective actions: Take actions to address incidents, such as updating AI models, retraining them on corrected data, or implementing new procedures.
  3. Continuous improvement: Continuously review and improve AI oversight policies, procedures, and practices to ensure ongoing EU AI Act compliance.

The components of the oversight framework are summarized below:

| Oversight Component | Description |
| --- | --- |
| Oversight body | Committee responsible for overseeing AI development and use |
| Policies and procedures | Guidelines for AI development, deployment, use, data management, model training, and testing |
| Roles and responsibilities | Clearly defined roles and accountability for AI activities |
| Incident reporting | Process for reporting AI system incidents, errors, biases, or other issues |
| Corrective actions | Actions taken to address incidents, such as updating models, retraining them on corrected data, or implementing new procedures |
| Continuous improvement | Ongoing review and improvement of AI oversight policies, procedures, and practices |
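
As a starting point for systematic reporting, the sketch below captures an incident as a structured record and logs it. Severity levels and field names are assumptions; align them with your oversight body's policy and persist records to your incident tracker.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-incidents")

# Illustrative structured incident record; fields are assumptions.
def report_incident(system: str, severity: str, description: str,
                    corrective_action: str) -> dict:
    incident = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "severity": severity,  # e.g. "low" | "medium" | "high"
        "description": description,
        "corrective_action": corrective_action,
        "status": "open",
    }
    logger.info("AI incident reported: %s", json.dumps(incident))
    return incident

report_incident(
    system="cv-screening-model",
    severity="high",
    description="Disparate rejection rates detected across age groups",
    corrective_action="Suspend model; retrain on rebalanced data",
)
```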

6. Stay Updated and Prepare for Changes

Monitor Regulatory Updates

To maintain compliance with the EU AI Act, it's crucial to stay informed about any updates or changes to the regulation. Here's how you can do this:

  • Regularly check the European Commission's website for announcements related to the EU AI Act
  • Subscribe to newsletters and alerts from industry associations and legal firms
  • Attend industry events and conferences to learn about the latest developments and best practices
  • Assign a team or person to monitor regulatory updates and ensure compliance

Adapt to New Requirements

As the EU AI Act evolves, new requirements and guidelines may emerge. To adapt, you should:

  • Establish a process to review and update your AI governance frameworks and policies
  • Identify areas where your current practices need modification or improvement
  • Develop a plan to implement new requirements and guidelines
  • Continuously assess and refine your AI systems to ensure ongoing compliance

Continuous Learning and Improvement

Maintaining compliance requires ongoing learning and improvement. Here's what you can do:

  • Provide regular training and education for employees involved in AI development and deployment
  • Encourage a culture of continuous learning and improvement
  • Stay up-to-date with the latest research and developments in AI governance and ethics
  • Regularly review and refine your AI systems to align with the latest best practices and guidelines

These actions are summarized below:

| Action | Description |
| --- | --- |
| Monitor updates | Regularly check official sources, subscribe to newsletters, and attend industry events to stay informed about changes to the EU AI Act |
| Adapt to changes | Review and update your AI governance frameworks, policies, and practices to comply with new requirements and guidelines |
| Continuous learning | Provide training, encourage a culture of learning, and stay up-to-date with the latest research and best practices in AI governance and ethics |

Conclusion

The EU AI Act sets clear rules for businesses using AI systems in the EU market. This 6-step guide helps ensure compliance:

1. Identify AI Systems and Roles

  • Determine which systems qualify as AI systems under the Act's definition.
  • Identify the roles involved (provider, deployer, importer, distributor, manufacturer).

2. Assess Risk Levels

  • Categorize AI systems into risk levels: unacceptable, high-risk, limited-risk, or minimal-risk.
  • Understand the specific requirements for each risk level.

3. Implement High-Risk AI Requirements

  • For high-risk AI systems, conduct risk assessments, manage data, provide human oversight, document systems, and establish quality management processes.

4. Address Limited-Risk AI

  • Inform users when interacting with limited-risk AI systems like chatbots or virtual assistants.
  • Clearly label AI-generated content and explain its limitations.

5. Establish Governance and Monitoring

  • Set up an oversight body and develop policies for AI development, deployment, and use.
  • Implement incident reporting and corrective action processes.
  • Continuously improve AI oversight practices.

6. Stay Updated and Prepare for Changes

  • Monitor regulatory updates and adapt to new requirements.
  • Provide ongoing training and encourage a culture of continuous learning.

FAQs

What is the timeline for the EU AI Act?


The European Parliament adopted the EU AI Act on 13 March 2024. The Act enters into force 20 days after its publication in the Official Journal of the European Union. Here's the implementation timeline:

| Timeline | Requirement |
| --- | --- |
| 6 months after entry into force | The ban on prohibited AI systems starts. |
| 12 months after entry into force | Rules for general-purpose AI (GPAI) governance apply. |
| 24 months after entry into force | The AI Act is fully in force, including all obligations for high-risk AI systems. |
| 36 months after entry into force | The Act applies to products that need third-party conformity assessments. |
| Additional period | High-risk AI systems already on the market are only regulated by the Act if they undergo major design changes. GPAI systems already on the market have an additional two years to comply. |
