
EU AI Act: The Complete Guide for Businesses

Dariusz Zalewski
Founder & CEO
January 27, 2026 · 18 min read

Key Takeaways

  • The EU AI Act is the world's first comprehensive AI regulation, and it will shape global AI governance
  • AI systems are classified into 4 risk categories: Unacceptable, High, Limited, and Minimal risk
  • Fines can reach €35 million or 7% of global turnover, higher than GDPR's ceilings
  • The Act applies to any company serving EU users, regardless of where you're based
  • Key compliance deadlines start in February 2025, so preparation must begin now

What is the EU AI Act?

The EU Artificial Intelligence Act (AI Act) is a landmark regulatory framework that establishes comprehensive rules for the development, deployment, and use of artificial intelligence systems within the European Union. Adopted in March 2024, it represents the world's first major attempt to create horizontal legislation governing AI technology.

Think of it as "GDPR for AI": just as GDPR transformed how organizations handle personal data globally, the AI Act will fundamentally reshape how companies develop and deploy AI systems, with ripple effects far beyond Europe's borders.

🌍

Global Reach

Applies to any AI system used in the EU, regardless of where the provider is based

⚖️

Risk-Based

Requirements scale based on the potential harm an AI system could cause

🔬

Innovation-Friendly

Includes regulatory sandboxes and exemptions for research and open-source

Who Does the AI Act Apply To?

The AI Act has an extraterritorial scope similar to GDPR. It applies to:

Providers (Developers)

Organizations that develop AI systems, or have AI systems developed for them, to be placed on the market or put into service under their name or trademark, regardless of whether they're based in the EU.

Deployers (Users)

Organizations that use AI systems under their authority, except for purely personal, non-professional activities. This includes companies using third-party AI tools in their operations.

Importers & Distributors

Organizations that bring AI systems from outside the EU into the European market or make them available on the market.

⚠️ Important: The "Brussels Effect"

Even if you're a US or Asian company with no EU presence, if your AI system is used by EU residents or affects EU citizens, you likely fall under the AI Act's scope. Many companies will need to comply globally to avoid maintaining separate systems.

The Risk Classification System

The cornerstone of the AI Act is its risk-based approach. AI systems are classified into four categories, with requirements escalating based on the potential for harm:

🚫 UNACCEPTABLE RISK

Banned completely

⚠️ HIGH RISK

Strict requirements, conformity assessment, registration

📋 LIMITED RISK

Transparency obligations

✅ MINIMAL RISK

No specific requirements (voluntary codes of conduct)
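To make the four tiers concrete for an internal AI inventory, here is a minimal Python sketch. The tier names follow the Act, but the use-case labels and the mapping are hypothetical examples, not a legal classification; defaulting unknown systems to high risk pending review is a conservative choice of ours, not a rule from the Act.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The AI Act's four tiers, ordered by severity (illustrative encoding)."""
    MINIMAL = 0       # no specific requirements (voluntary codes of conduct)
    LIMITED = 1       # transparency obligations
    HIGH = 2          # strict requirements, conformity assessment, registration
    UNACCEPTABLE = 3  # banned outright

# Hypothetical mapping of internal use-case labels to tiers, for illustration
# only; real classification requires legal analysis against the Act's annexes.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Look up a system's tier; unreviewed systems default to HIGH."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Encoding the tiers as an ordered enum lets an inventory tool sort systems by severity and flag anything at `HIGH` or above for the compliance workstream.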

Unacceptable Risk (Prohibited AI)

These AI practices are completely banned as they pose an unacceptable threat to fundamental rights:

🚫
Social scoring systems

Government classification of citizens based on behavior leading to detrimental treatment

🚫
Manipulative AI

Systems using subliminal techniques or exploiting vulnerabilities to manipulate behavior

🚫
Real-time biometric identification in public spaces

By law enforcement (with limited exceptions for serious crimes)

🚫
Emotion recognition in workplace/education

Inferring emotions of employees or students (except for medical/safety purposes)

🚫
Biometric categorization for sensitive attributes

Using biometrics to infer race, political opinions, religious beliefs, sexual orientation

🚫
Predictive policing (individual)

Predicting individual criminal behavior based solely on profiling

🚫
Untargeted facial recognition databases

Scraping facial images from internet or CCTV to build recognition databases

High-Risk AI Systems

High-risk AI systems are permitted but subject to strict requirements. They fall into two groups: AI used as a safety component of products already regulated under EU law (Annex I), and AI used in the specific areas listed in Annex III, summarized below:

Biometrics: Remote biometric identification, biometric categorization, emotion recognition
Critical Infrastructure: AI managing safety in water, gas, electricity, heating, or traffic
Education & Training: Determining access to education, evaluating learning outcomes, detecting cheating
Employment: CV screening, job advertising, interview analysis, performance monitoring
Essential Services: Credit scoring, insurance pricing, emergency dispatch prioritization
Law Enforcement: Risk assessment, lie detection, evidence analysis, crime analytics
Migration & Border: Visa/asylum application assessment, identity verification, risk screening
Justice & Democracy: Assisting judicial decisions, influencing election outcomes

Limited Risk AI

These systems carry transparency obligations: users must be informed they're interacting with AI:

Chatbots & Virtual Assistants

Must clearly disclose AI interaction unless obvious from context

Deepfakes & Synthetic Content

AI-generated images, video, or audio must be labeled

Emotion Recognition

Users must be informed when emotion detection is used

Biometric Categorization

Users must be notified of categorization systems

Requirements for High-Risk AI Systems

If you're developing or deploying high-risk AI, you must meet these requirements:

1

Risk Management System

Implement a continuous risk management process throughout the AI system's lifecycle. Identify, analyze, estimate, and evaluate risks. Adopt risk mitigation measures.

2

Data Governance

Training, validation, and testing datasets must be relevant, representative, free of errors, and complete. Implement data governance practices addressing data collection, bias examination, and gap identification.

3

Technical Documentation

Maintain comprehensive technical documentation demonstrating compliance. This must be updated throughout the system's lifecycle and available for regulatory inspection.

4

Record-Keeping (Logging)

Enable automatic logging of events throughout the system's operation. Logs must enable traceability and monitoring, with retention periods appropriate to the system's purpose.

5

Transparency & Instructions

Provide clear instructions for use, including intended purpose, performance levels, limitations, and human oversight requirements. Information must be accessible to deployers.

6

Human Oversight

Design systems to enable effective human oversight. Humans must be able to understand capabilities and limitations, monitor operation, and intervene or override when necessary.

7

Accuracy, Robustness & Cybersecurity

Achieve appropriate levels of accuracy, robustness, and cybersecurity. Systems must be resilient to errors, faults, inconsistencies, and potential adversarial attacks.
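Of the seven requirements, record-keeping (requirement 4) is the most directly implementable in code. Here is a minimal sketch of append-only, traceable event logging; the JSON Lines format and the field names are our own choices for illustration, since the Act requires traceability but does not prescribe a log format.

```python
import json
import uuid
from datetime import datetime, timezone

def log_event(log_file, system_id: str, event: str, details: dict) -> str:
    """Append one traceable event as a JSON line and return its id.

    Illustrative only: field names (system_id, event, details) are ours,
    not mandated by the AI Act.
    """
    event_id = str(uuid.uuid4())
    record = {
        "event_id": event_id,                                 # unique, for traceability
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it happened
        "system_id": system_id,                               # which AI system
        "event": event,                                       # what happened
        "details": details,                                   # context for auditors
    }
    log_file.write(json.dumps(record) + "\n")  # append-only JSON Lines
    return event_id
```

In production this would write to tamper-evident storage with a retention period appropriate to the system's purpose, as the requirement itself notes.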

General-Purpose AI (GPAI) & Foundation Models

The AI Act includes special provisions for general-purpose AI models (like GPT, Claude, Gemini) that can be adapted for many uses:

All GPAI Models Must:

  • Maintain technical documentation
  • Provide information for downstream providers
  • Implement copyright compliance policies
  • Publish training data summaries

GPAI with Systemic Risk Must Also:

  • Perform model evaluations
  • Assess and mitigate systemic risks
  • Report serious incidents
  • Ensure adequate cybersecurity

💡 Systemic Risk Threshold

GPAI models are presumed to have systemic risk if trained with compute power exceeding 10^25 FLOPs. This currently affects only the largest foundation models from major AI labs.
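To see what the threshold means in practice, training compute for dense transformer models is often approximated as 6 × parameters × training tokens. That rule of thumb is a community estimate, not something the Act defines, and the model sizes below are hypothetical:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # the AI Act's presumption threshold

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D rule of thumb
    for dense transformers (covers forward and backward passes)."""
    return 6 * params * tokens

# Hypothetical model sizes, for illustration only:
midsize = training_flops(7e9, 2e12)      # 7B params, 2T tokens -> ~8.4e22 FLOPs
frontier = training_flops(1.8e12, 15e12) # 1.8T params, 15T tokens -> ~1.6e26 FLOPs

assert midsize < SYSTEMIC_RISK_FLOPS   # well below the presumption
assert frontier > SYSTEMIC_RISK_FLOPS  # presumed to carry systemic risk
```

The gap of several orders of magnitude between a typical mid-size model and the threshold is why, today, only the largest frontier training runs are presumed to carry systemic risk.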

Compliance Timeline

The AI Act entered into force on August 1, 2024, with a phased implementation:

1

February 2, 2025 (6 months)

Prohibitions on unacceptable risk AI practices take effect

2

August 2, 2025 (12 months)

GPAI model obligations apply; Governance structure established; Penalties framework active

3

August 2, 2026 (24 months)

Full AI Act application; High-risk AI systems in Annex III must comply; Transparency obligations

4

August 2, 2027 (36 months)

High-risk AI systems covered by specific EU legislation (Annex I) must comply

Penalties and Enforcement

The AI Act introduces significant penalties, with ceilings even higher than GDPR's for the most serious violations:

Prohibited AI practices: up to €35 million or 7% of global annual turnover, whichever is higher
High-risk AI non-compliance: up to €15 million or 3% of global annual turnover, whichever is higher
Incorrect information to authorities: up to €7.5 million or 1% of global annual turnover, whichever is higher

⚠️ For SMEs and Startups

The regulation includes proportionate fines for SMEs and startups: the lower of the two amounts (fixed fine or percentage) applies instead of the higher.
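The dual-cap structure (a fixed amount versus a percentage of turnover, with the higher applying to most companies and the lower to SMEs and startups) reduces to a simple calculation. The function name and interface below are ours, for illustration:

```python
def fine_ceiling(fixed_eur: float, pct: float, turnover_eur: float,
                 sme: bool = False) -> float:
    """Maximum fine under the AI Act's dual caps for one violation tier.

    Most companies face the *higher* of the fixed amount and the percentage
    of global annual turnover; SMEs and startups face the *lower* of the two.
    Example tier: prohibited practices = EUR 35m fixed, 7% of turnover.
    """
    pct_amount = pct * turnover_eur
    return min(fixed_eur, pct_amount) if sme else max(fixed_eur, pct_amount)

# A company with EUR 1bn global turnover breaching a prohibition:
standard_cap = fine_ceiling(35e6, 0.07, 1e9)            # 7% of 1bn = 70m > 35m
sme_cap = fine_ceiling(35e6, 0.07, 1e9, sme=True)       # lower cap: 35m
```

For a small startup with modest turnover the percentage figure is usually the smaller one, so the SME rule effectively caps its exposure at that percentage rather than the headline fixed amount.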

How to Prepare Your Organization

Whether you're an AI developer or deployer, here's your roadmap to compliance:

Phase 1: Discovery & Assessment (Now - Q1 2025)

AI Inventory

  • Catalog all AI systems in use or development
  • Document purpose, functionality, and data used
  • Identify who developed each system

Risk Classification

  • Assess each system against risk categories
  • Identify prohibited practices
  • Determine high-risk systems

Phase 2: Gap Analysis & Planning (Q1-Q2 2025)

For High-Risk Systems

  • Evaluate against Articles 8-15 requirements
  • Identify documentation gaps
  • Assess human oversight capabilities

Governance Setup

  • Define AI governance structure
  • Assign roles and responsibilities
  • Establish AI ethics oversight

Phase 3: Implementation (Q2 2025 - Q2 2026)

Technical Measures

  • Implement risk management systems
  • Establish data governance practices
  • Build logging and monitoring
  • Design human oversight interfaces

Documentation

  • Create technical documentation
  • Prepare conformity assessments
  • Develop user instructions
  • Document bias testing results

Phase 4: Operationalize (Q2 2026+)

Ongoing Compliance

  • Register high-risk systems in EU database
  • Conduct regular audits and reviews
  • Monitor for regulatory updates

Culture & Training

  • Train staff on AI Act requirements
  • Embed responsible AI practices
  • Establish incident response procedures

AI Act Compliance Checklist

  • Complete AI system inventory across the organization
  • Classify each AI system by risk level
  • Eliminate or modify any prohibited AI practices
  • Establish AI governance framework and assign accountability
  • Implement risk management for high-risk systems
  • Document data governance and bias testing procedures
  • Create comprehensive technical documentation
  • Implement logging and monitoring capabilities
  • Design effective human oversight mechanisms
  • Prepare conformity assessment procedures
  • Add transparency disclosures for limited-risk AI
  • Train staff on AI Act requirements and responsibilities
  • Review and update vendor/supplier AI contracts

The Bottom Line

The EU AI Act is not just another compliance checkbox: it represents a fundamental shift in how organizations must approach AI development and deployment. Companies that embrace it proactively will gain a competitive advantage through increased trust, better risk management, and readiness for the global regulatory landscape that's sure to follow Europe's lead.

The time to act is now. With prohibitions taking effect in February 2025 and full compliance required by August 2026, organizations need to start their AI Act journey today.

Ready to tackle AI Act compliance?

Meewco helps you inventory AI systems, assess risks, and build compliant AI governance frameworks.

Integrated with ISO 27001, SOC 2, and other frameworks you're already managing.


About Dariusz Zalewski

Founder and CEO of Meewco. With over 15 years of experience in information security and compliance, Dariusz helps organizations build robust security programs and achieve their compliance goals.
