The AI Act is the European Union's landmark legislation governing artificial intelligence. Its primary focus is high-risk AI applications, but its scope is broad, affecting AI systems across various risk categories. It introduces strict requirements for transparency, accountability, and compliance.
As of February 2, 2025, several key parts of the AI Act are now in effect:
- Definitions of AI systems: The AI Act establishes clear definitions of what constitutes an AI system.
- AI literacy initiatives: Organizations must promote awareness and understanding of AI systems.
- Prohibited AI practices: AI applications that pose an “unacceptable risk” become unlawful under Article 5.
While the AI Act originates from the EU, its reach extends far beyond European borders. The legislation has extraterritorial scope, meaning non-EU companies offering AI systems in the EU or affecting EU individuals must comply with these rules. This includes companies that "place on the market" or "put into service" AI systems in the EU, even without a physical EU presence. For example, if a US company uses AI to screen EU job applicants or provides AI-powered services to European customers, it must follow the same requirements as EU-based companies. Notably, providers of high-risk AI systems based outside the EU must appoint an authorised EU representative, similar to GDPR requirements. Compliance is backed by substantial fines and coordinated cross-border enforcement, making it essential for global AI developers to understand and prepare for these regulations.
Disclaimer: This blog post is intended for informational purposes only and does not constitute legal advice. I am not a lawyer, and this summary is based on my review of available sources. While the guidance provided aligns with general best practices for machine learning, it is important to consult with a legal professional for advice specific to your situation.
How Does the AI Act Categorize AI Systems?

The AI Act classifies AI systems by risk level, imposing different regulatory requirements depending on their impact. The categories developers need to be aware of are unacceptable, high, and limited risk. Below these sits a 'minimal risk' category, which is exempt from most of the requirements.
Prohibited AI Systems
The AI Act's Article 5 bans AI systems that:
- Manipulate users subliminally or deceptively to cause harm.
- Exploit vulnerabilities of specific groups (e.g., children, elderly, disabled individuals) to distort behavior.
- Facilitate government-run social scoring systems.
- Enable predictive policing that profiles individuals based on AI-driven risk assessments.
- Collect facial recognition data indiscriminately (e.g., scraping images from the internet or CCTV without consent).
- Deploy real-time biometric identification in public spaces, such as live facial recognition by law enforcement, except in narrowly defined exceptions (e.g., searching for a missing child).
- Use biometric categorization systems to infer sensitive characteristics like race, political orientation, or religious beliefs.
- Deploy AI-driven emotion recognition in workplaces, schools, or law enforcement settings.
High-Risk AI Systems
High-risk AI systems fall into two main categories:
1. AI in regulated products
These include AI systems used as safety components in existing EU-regulated sectors such as:
- Medical devices
- Automotive systems (e.g., AI-assisted driving technologies)
- Aviation and aerospace
- Machinery (e.g., industrial automation AI)
- Toys (e.g., interactive learning AI)
2. AI in sensitive areas
These systems significantly impact individuals' rights and safety and are explicitly listed in Annex III of the AI Act, including:
- Education and vocational training (e.g., automated exam grading, university admissions screening)
- Employment and HR (e.g., CV screening, AI-assisted employee monitoring)
- Access to essential services (e.g., credit scoring, AI-driven welfare eligibility decisions)
- Law enforcement (e.g., predictive crime analytics, automated evidence evaluation tools)
- Border control and migration (e.g., AI-powered risk assessment, automated visa processing, lie detection at border crossings)
- Justice administration (e.g., AI-assisted legal decisions, predictive case outcome analysis)
- Biometric identification and categorization (e.g., AI-powered face recognition for authentication, sorting individuals by biometric characteristics)
These systems must meet stringent requirements for transparency, documentation, and human oversight.
Limited-Risk AI Systems
Between high-risk and minimal-risk systems, the AI Act defines a category of AI systems that have transparency obligations but do not require extensive oversight. These include:
- AI that interacts with humans: Users must be informed that they are engaging with an AI system (e.g., chatbots, voice assistants).
- AI-generated synthetic media: AI-generated images, videos, or audio that could mislead users must be labeled as artificial (e.g., deepfake disclosures), with lighter disclosure obligations for clearly artistic, creative, or satirical works.
- Emotion recognition or biometric categorization AI (where not outright prohibited): Organizations must disclose when AI analyzes individuals' emotions or biometric data in permissible settings.
Failure to meet transparency obligations can lead to regulatory penalties.
Penalties for Non-Compliance

The EU AI Act establishes a comprehensive penalty framework with substantial financial consequences for violations. For the most severe infractions, particularly the deployment of prohibited AI systems or deliberate circumvention of the Act's restrictions, organisations face fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher. This unprecedented penalty cap, which exceeds even the GDPR's 4% maximum, reflects the EU's serious stance on AI regulation and its commitment to preventing harmful AI practices.
The Act implements a tiered penalty structure based on violation severity. Failures to comply with high-risk AI system requirements, such as inadequate risk management, insufficient documentation, or deploying non-compliant systems, can result in fines up to €15 million or 3% of global turnover. Less severe violations, such as providing misleading information to regulators or notified bodies during certification processes, may incur penalties up to €7.5 million or 1% of turnover.
Regulatory authorities will assess each violation individually, considering factors such as the infringement's nature and gravity, intent or negligence, number of affected individuals, mitigation efforts, and previous violations. The Act emphasises that penalties should be "effective, proportionate, and dissuasive." Companies demonstrating wilful non-compliance or causing significant harm are likely to face maximum penalties, while good-faith errors may receive more lenient treatment.
Beyond monetary penalties, regulators possess broad enforcement powers. They can issue remedial orders requiring the withdrawal of non-compliant AI systems from the market, mandate the deletion of problematic datasets, or impose periodic penalty payments for ongoing non-compliance. These daily fines serve as a powerful deterrent against persistent violations.
The Act's penalties extend beyond regulatory fines to encompass broader legal and liability risks. Non-compliance can expose organisations to civil litigation, particularly under the forthcoming AI Liability Directive, which will facilitate legal action for AI-related harms. Evidence of non-compliance may support negligence claims in court proceedings. In severe cases involving prohibited AI practices that cause harm, criminal liability might arise under Member State laws. Furthermore, AI deployments that violate other regulations, such as the GDPR, can trigger additional penalties from relevant authorities.
Actionable Steps for Engineers and ML Teams
Engineering and platform teams must take several concrete technical steps to ensure compliance with the AI Act. First and foremost, teams must immediately cease using any AI systems that fall under the prohibited categories. This includes reviewing and potentially shutting down existing applications and pipelines that may have used data collected through now-prohibited methods, such as indiscriminate facial recognition data collection or emotion recognition systems in workplace settings.
Teams should implement robust safeguards and monitoring systems to prevent future violations. This includes establishing technical controls that can flag and prevent the deployment of prohibited AI use cases. A critical component is implementing comprehensive model versioning and lineage tracking capabilities that can demonstrate the provenance of training data and document the methods used throughout the model development lifecycle.
For data governance, teams should conduct thorough reviews of existing datasets, paying particular attention to any biometric data. This includes auditing historical data collection methods and ensuring they align with the Act's requirements. Organizations should update their internal AI governance policies to reflect the Act's immediate requirements, implementing mandatory compliance reviews for new AI applications and establishing clear protocols for handling sensitive data.
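As a starting point for such a dataset review, a lightweight audit script can surface columns that look like biometric or otherwise sensitive data before a dataset is reused for training. Below is a minimal sketch using pandas; the keyword list and the CSV path are hypothetical and should be adapted to your own schemas and naming conventions.

```python
import pandas as pd

# Hypothetical keywords that often indicate biometric or sensitive fields;
# adjust these to your own column naming conventions.
SENSITIVE_KEYWORDS = ["face", "fingerprint", "voice", "iris", "gait", "emotion"]


def flag_sensitive_columns(df: pd.DataFrame) -> list[str]:
    """Return column names whose headers suggest biometric content."""
    return [
        col for col in df.columns
        if any(keyword in col.lower() for keyword in SENSITIVE_KEYWORDS)
    ]


if __name__ == "__main__":
    # Audit a hypothetical HR dataset before reusing it for training.
    candidates = pd.read_csv("hr_candidates.csv")  # placeholder path
    flagged = flag_sensitive_columns(candidates)
    if flagged:
        print(f"Manual review required - possible biometric columns: {flagged}")
```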
To support these compliance efforts, teams should leverage appropriate technical tools:
- Metadata tracking systems to document model and data characteristics
- Experiment tracking platforms to maintain detailed records of model development
- Model registries to version and catalog AI assets
- MLOps platforms like ZenML that combine these capabilities with governance features
These tools can help teams maintain the necessary documentation and traceability required for compliance while streamlining the implementation of governance controls within existing development workflows.
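To make this concrete, here is a minimal sketch of what automatic lineage capture can look like using ZenML's `@step` and `@pipeline` decorators (available in recent releases). The CSV path, the `label` column, and the scikit-learn classifier are placeholders; the point is that typed step outputs are stored as versioned artifacts, so every run links a specific data version to the model it produced.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from zenml import pipeline, step


@step
def load_training_data() -> pd.DataFrame:
    """Load the training set; the returned DataFrame is versioned as an artifact."""
    return pd.read_csv("training_data.csv")  # placeholder path


@step
def train_model(data: pd.DataFrame) -> LogisticRegression:
    """Train a simple classifier; the model artifact is tracked automatically."""
    features, labels = data.drop(columns=["label"]), data["label"]
    return LogisticRegression(max_iter=1000).fit(features, labels)


@pipeline
def compliance_aware_training():
    data = load_training_data()
    train_model(data)


if __name__ == "__main__":
    # Each run records which data version produced which model version.
    compliance_aware_training()
```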
Upcoming AI Act Milestones
- May 2, 2025: EU Commission releases AI governance guidance.
The European Commission will issue detailed guidance and codes of practice specifically targeting providers of general-purpose AI (GPAI) models. This guidance will primarily impact organisations developing and training foundation models, providing clarity on compliance requirements and best practices.
- August 2, 2025: General-purpose AI (GPAI) model governance rules apply.
GPAI providers must begin complying with comprehensive governance rules enforced by EU member states. This includes providing detailed technical documentation, ensuring copyright compliance during model training, delivering transparent summaries of training data, and conducting thorough assessments of potential systemic risks. Member states will establish specific penalties for non-compliance with these obligations.
- August 2026: High-risk AI obligations become enforceable.
This marks the full enforcement date for high-risk AI systems, encompassing areas like biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration control, and justice administration. Systems must undergo rigorous conformity assessments and meet extensive requirements including: implementing comprehensive risk management, using high-quality unbiased training data, enabling operation logging for traceability, maintaining detailed technical documentation, providing clear user instructions, ensuring human oversight capabilities, and maintaining robust security measures. The Act's penalty regime becomes fully operational, with significant fines for non-compliance. Additionally, limited-risk AI systems must meet transparency obligations, such as disclosing AI interactions and labeling AI-generated content.
- August 2027: Compliance deadlines for pre-existing AI systems.
Final compliance deadline for GPAI systems deployed before August 2, 2025, and high-risk AI systems embedded in products requiring third-party certification under other EU regulations (such as medical devices, automobiles, aviation equipment, and toys). This represents the last major milestone in the Act's implementation timeline, after which all AI systems within scope must be fully compliant.
How to Be Proactive About Compliance
As AI regulation evolves globally, implementing robust governance and lineage tracking capabilities is becoming essential - not just for compliance, but for building trustworthy AI systems that serve users effectively. Here's how to prepare your organisation for AI Act compliance, with a focus on practical technical implementations:
For High-Risk AI Applications
If your AI system falls into the high-risk category, you'll need to implement comprehensive controls including:
- A documented risk management system
- High-quality training data with bias mitigation
- Extensive logging and traceability
- Detailed technical documentation
- Human oversight mechanisms
- Robust security measures
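For the logging and traceability item above, one pragmatic pattern is an append-only, structured audit log that captures enough context to reconstruct any individual prediction later. The sketch below uses only the Python standard library; the field names and log path are illustrative, and the prediction value is assumed to be JSON-serializable.

```python
import json
import logging
import uuid
from datetime import datetime, timezone
from typing import Optional

# Append-only, structured log of every prediction for later audit.
audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("prediction_audit.jsonl"))


def log_prediction(model_version: str, input_hash: str, prediction,
                   reviewer: Optional[str] = None) -> str:
    """Record one prediction with enough context to reconstruct it later."""
    record_id = str(uuid.uuid4())
    audit_logger.info(json.dumps({
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # e.g. a model registry tag
        "input_hash": input_hash,        # hash of the input payload, not the raw data
        "prediction": prediction,
        "human_reviewer": reviewer,      # populated when human oversight applies
    }))
    return record_id
```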
Technical Implementation Steps
1. Audit and Categorise Your AI Systems
- Inventory existing AI applications
- Assess risk categories
- Identify compliance gaps
2. Implement Governance Infrastructure
- Deploy data versioning and metadata tracking
- Establish model registries
- Create standardised MLOps pipelines
- Implement explainability reporting
- Set up monitoring and incident response
3. Enhance Development Practices
- Embed compliance checks in CI/CD (a bias-check sketch follows this list)
- Add bias detection and guardrails
- Improve data cleaning processes
- Implement comprehensive logging
- Enable human oversight for high-risk cases
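As an example of a compliance check embedded in CI/CD, the sketch below computes a simple demographic parity gap on exported evaluation predictions and fails the job when the gap exceeds a threshold. The column names, file path, and 0.10 threshold are illustrative assumptions, not values prescribed by the AI Act.

```python
import sys

import pandas as pd

# Hypothetical threshold - your own risk assessment should determine this value.
MAX_PARITY_GAP = 0.10


def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())


if __name__ == "__main__":
    # Evaluation predictions exported by the training pipeline (placeholder path).
    predictions = pd.read_csv("eval_predictions.csv")
    gap = demographic_parity_gap(predictions, group_col="gender", outcome_col="selected")
    print(f"Demographic parity gap: {gap:.3f}")
    if gap > MAX_PARITY_GAP:
        sys.exit("Bias check failed - blocking deployment.")  # non-zero exit fails the CI job
```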
How ZenML Supports Compliance
ZenML provides comprehensive technical infrastructure that makes implementing AI Act requirements straightforward and efficient. At its core, ZenML offers robust traceability features that maintain complete lineage of data, models, and deployments throughout the ML lifecycle. This traceability is automatic and seamless - as teams work within their normal development workflows, ZenML captures and stores detailed metadata about every step of the process, from initial data processing to final model deployment.

The platform's lineage tracking capabilities go beyond simple version control. When data scientists and engineers use ZenML to develop and deploy models, the system automatically documents the relationships between different components of the ML pipeline. This means you can trace any prediction back to its source, understanding exactly which version of the model made it, what data that model was trained on, and what preprocessing steps were applied. This level of detail is crucial for meeting the AI Act's documentation requirements, particularly for high-risk systems.
In terms of governance and compliance, ZenML integrates sophisticated tools directly into the development workflow. The platform includes built-in bias detection capabilities that can analyze training data and model outputs for potential fairness issues. These checks can be automated as part of your CI/CD pipeline, ensuring that problematic models don't make it to production. ZenML also facilitates the generation of standardized model cards (using the model control plane), which document model characteristics, limitations, and intended uses - a key requirement for transparency under the AI Act.
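Independent of the specific tooling, a model card can be as simple as a generated markdown file capturing intended use, limitations, training data, and evaluation results. The sketch below is a hand-rolled illustration with placeholder values; it is not ZenML's model card format.

```python
from pathlib import Path


def write_model_card(path: Path, name: str, version: str, intended_use: str,
                     limitations: str, training_data: str, metrics: dict) -> None:
    """Render a minimal model card as markdown for review and audit."""
    lines = [
        f"# Model Card: {name} (v{version})",
        f"**Intended use:** {intended_use}",
        f"**Known limitations:** {limitations}",
        f"**Training data:** {training_data}",
        "## Evaluation metrics",
        *[f"- {metric}: {value}" for metric, value in metrics.items()],
    ]
    path.write_text("\n".join(lines))


# Example call - every value here is a placeholder.
write_model_card(
    Path("model_card.md"),
    name="cv-screening-ranker",
    version="1.4.0",
    intended_use="Shortlisting support only; final decisions rest with a human reviewer.",
    limitations="Not validated for roles outside the talent pool it was trained on.",
    training_data="anonymised-applications-2024-q3 (internal dataset)",
    metrics={"accuracy": 0.87, "demographic_parity_gap": 0.04},
)
```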
For production environments, ZenML implements comprehensive monitoring systems that track model performance in real-time. This goes beyond basic accuracy metrics to include custom fairness metrics, data drift detection, and automated alerting when models behave unexpectedly. The platform maintains detailed audit logs of all system interactions, making it possible to reconstruct exactly what happened in any given situation. This is particularly valuable for high-risk applications where human oversight is required - ZenML can integrate with human-in-the-loop workflows, ensuring that critical decisions receive appropriate review.
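One common way to detect data drift is the population stability index (PSI), which compares the distribution of a feature in live traffic against its training-time distribution. The sketch below is a plain NumPy implementation on synthetic data; the 0.2 alert threshold is a widely used rule of thumb, not a regulatory requirement.

```python
import numpy as np


def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training) feature distribution and live traffic."""
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    # Clip live values into the reference range so outliers fall into the outer bins.
    current = np.clip(current, edges[0], edges[-1])
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the fractions to avoid division by zero and log(0).
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    training_scores = rng.normal(0.0, 1.0, size=5_000)    # stand-in for a training feature
    production_scores = rng.normal(0.3, 1.1, size=1_000)  # stand-in for shifted live traffic
    psi = population_stability_index(training_scores, production_scores)
    if psi > 0.2:  # conventional rule-of-thumb threshold
        print(f"Drift alert: PSI={psi:.3f} - schedule a model review.")
```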
One of ZenML's key strengths is its pipeline templating system. Teams can create standardized pipeline templates that embed compliance requirements directly into the development process. These templates can enforce data quality checks, require specific documentation steps, and implement mandatory review phases for high-risk applications. By standardizing these processes, organizations can ensure consistent compliance across all their AI development efforts while still maintaining flexibility for different use cases.
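As an illustration of the templating idea, a pipeline definition can route every run through a mandatory quality gate before any training happens. The sketch below assumes ZenML's `@step` and `@pipeline` decorators; the missing-value threshold, the CSV path, and the empty training step are placeholders for your own checks and logic.

```python
import pandas as pd
from zenml import pipeline, step


@step
def load_raw_data() -> pd.DataFrame:
    """Placeholder data source - swap in your own loader."""
    return pd.read_csv("raw_data.csv")


@step
def data_quality_gate(data: pd.DataFrame) -> pd.DataFrame:
    """Mandatory check: fail the run rather than train on low-quality data."""
    if data.isnull().mean().max() > 0.05:  # illustrative threshold
        raise ValueError("Data quality gate failed: more than 5% missing values.")
    return data


@step
def train_model(data: pd.DataFrame) -> None:
    """Stand-in for the team's existing training logic."""


@pipeline
def governed_training_template():
    # Every pipeline derived from this template runs the gate before training.
    train_model(data_quality_gate(load_raw_data()))
```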

The platform's artifact management system provides a secure, centralized repository for all ML assets. This includes not just models and datasets, but also evaluation results, preprocessing functions, and deployment configurations. Each artifact is versioned and its lineage is tracked, making it possible to demonstrate compliance with data quality requirements and prove that models were developed using approved methodologies.
ZenML's approach to supporting compliance isn't just about meeting regulatory requirements - it's about building better, more trustworthy AI systems. The platform's features help teams implement best practices that improve model quality and reliability while simultaneously satisfying regulatory obligations. This means that investing in ZenML's compliance capabilities pays dividends beyond just regulatory conformance: it results in more robust, maintainable, and transparent AI systems that better serve their users.
Remember to maintain proportionality in your compliance efforts: focus the most stringent controls on high-risk systems while keeping bureaucracy minimal for low-risk applications.
Practical Next Steps
While the AI Act introduces comprehensive regulations for high-risk AI systems, developers working on limited or minimal risk applications can take a proportionate approach to compliance. The key takeaways are that you need to understand your system's risk category, implement appropriate transparency measures, and maintain good engineering practices around documentation and monitoring.
For teams building lower-risk AI applications, focus on implementing foundational best practices: maintain clear documentation of your AI systems, implement basic monitoring and logging, and ensure transparency about AI use in your user interfaces. Consider adopting tools like ZenML to help automate these practices. While you may not need the full compliance infrastructure required for high-risk systems, having these basics in place will make it easier to adapt if requirements change or if you later develop higher-risk applications.
Some practical next steps include: auditing your current AI systems to confirm their risk category, documenting your AI development processes, implementing basic model and data versioning, and setting up monitoring for key metrics. Remember that even for lower-risk systems, maintaining good engineering practices around AI development isn't just about compliance - it helps build user trust and creates more reliable, maintainable AI applications.
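A simple way to begin that audit is a machine-readable inventory of your AI systems and their assessed risk categories, kept under version control alongside your code. The sketch below uses only the Python standard library; the example entries and field names are placeholders.

```python
import json
from dataclasses import asdict, dataclass
from enum import Enum


class RiskCategory(str, Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_category: RiskCategory
    owner: str
    transparency_notice: bool  # does the UI disclose that AI is involved?


# Example inventory - entries are placeholders for your own systems.
inventory = [
    AISystemRecord("support-chatbot", "Customer support triage",
                   RiskCategory.LIMITED, "platform-team", True),
    AISystemRecord("cv-ranker", "CV screening assistance",
                   RiskCategory.HIGH, "hr-ml-team", True),
]

with open("ai_inventory.json", "w") as f:
    json.dump([asdict(record) for record in inventory], f, indent=2, default=str)
```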