  • Alberto Job, Director
  • Manousos Theodosiou, Expert |

Explore the EU AI Act's intricacies through our practical guide, decode the legal complexities and receive actionable insights for a seamless implementation. From simplified governance principles to real-world examples, empower yourself to not only understand the rules but effectively apply them in the dynamic landscape of AI.

Our recently published report “Decoding the EU AI Act” dives into the way we will use and regulate artificial intelligence in the future.

This blog helps you navigate through the complex legal requirements of the EU AI Act and sheds light on tangible measures helping you with implementing it in your organization. Know the rules and how to apply them.

Know the rules – legal implications of the EU AI Act

Companies need to consider three main questions to assess the implications of the EU AI Act on their operations.

1. Scope – which role applies to me?

The EU AI Act applies to organizations that place AI systems on the EU market or put them into service there, and to cases where a deployed AI system affects people in the EU. The EU AI Act distinguishes between four different roles:

  • Provider/manufacturer: a natural or legal person or public authority, agency or other body that develops an AI system and intends to put it on the EU market.
  • Importer: a natural or legal person located or established in the EU that places an AI system on the market under the trademark of a natural or legal person established outside the EU.
  • Distributor: a natural or legal person in the supply chain, other than the provider or importer, who makes an AI system available on the EU market.
  • Deployer: any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

It is important to note that roles are not fixed. Importers, distributors and deployers can be considered providers if:

  • the AI system is marketed under their own name or trademark
  • the intended purpose of the AI system is modified
  • substantial modifications are made to the AI system.

Therefore, assess your role: Are you a provider, importer, distributor or deployer?

2. AI classification – what category does my AI system fall into?

The EU AI Act follows a risk-based approach. Depending on the risk classification, different obligations apply:

  • Prohibited systems: these AI systems are prohibited because they pose a threat to people and their fundamental rights. Social scoring systems are an example of a prohibited system.
  • High-risk systems: these AI systems can negatively affect the safety or fundamental rights of people and therefore need to comply with a comprehensive set of rules before being placed on the market.
  • Limited-risk systems: these AI systems must comply with transparency requirements to ensure that users can make informed decisions.
  • Low-risk systems: these AI systems don’t pose considerable risks and therefore don’t need to be formally regulated.

For this reason, it is critical to assess which risk category your AI system falls into – there are different legal obligations to comply with for each category.

3. Compliance requirements – which ones must I meet?

Depending on the role and the classification of the AI system being deployed, different compliance obligations apply. For high-risk AI systems, the key obligations are:

  • Establishment of a risk management system
  • Data and data governance
  • Technical documentation
  • Transparency and provision of information to deployers
  • Human oversight
  • Accuracy, robustness and cybersecurity
  • Quality management system
  • Documentation keeping
  • Automatically generated logs
  • Conformity assessment
  • EU declaration of conformity
  • Registration obligation
  • Information of national competent authority upon request
  • Affix CE marking (Article 49)
  • Corrective actions and duty of information (Article 21)
  • Demonstrate conformity upon request
  • Comply with instructions for use
  • Consider relevance & quality of input data
  • Monitor operation of the system
  • Execution of data protection impact assessment

Therefore, assess the legal requirements you need to meet by combining your role and the classification of your AI system.

Note: If your AI system qualifies as “limited risk”, only the transparency requirement has to be fulfilled. “Low-risk” AI systems can even be used freely.

By now, you should be aware of the rules you need to abide by. But how do you actually go about following these rules in practice?

4. Applying the rules – How to implement the requirements

To give some insight into how these compliance requirements are applied, this article will continue by focusing on the accuracy requirements in the EU AI Act. Article 15 dictates that “AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy... The level of accuracy and accuracy metrics should be communicated to the users”.

Ensuring accuracy involves a scientific and methodical approach grounded in statistical theory and best practices in machine learning. This approach can be segmented into three key phases: model development and validation, performance metric selection, and continuous evaluation and adaptation.

a. Model Development and Validation

In line with the EU AI Act, our Model Development and Validation processes are meticulously designed to ensure that AI systems achieve “an appropriate level of accuracy”, critical to making AI applications trustworthy and effective in vital sectors.

  • Data Pre-processing: adhering to the principle of “garbage in, garbage out”, it is vital to understand that the quality of input data directly affects the model’s performance. This includes tasks such as handling missing values, normalization, encoding categorical variables and selecting features that are most relevant to the prediction task. Note that pre-processing activities also relate to the EU AI Act’s data and data governance requirements.
  • Model Selection: in model selection, we scrutinize various algorithms to discern which best reveals the patterns inherent in the data, spanning from simple models such as logistic regression to complex deep neural networks, adapted to the problem's complexity and data structure. This entails fine-tuning hyperparameters to find a model that not only adheres to Occam's Razor for simplicity but also effectively balances the bias-variance trade-off, ensuring robust generalization. Another consideration in model selection is model explainability, which ties into the Transparency requirements of the EU AI Act.

b. Performance Metric Selection

In scenarios involving imbalanced datasets, typical in fraud detection or rare disease identification, relying solely on traditional accuracy metrics can provide a skewed view of an AI model's efficacy. Accuracy is indeed one measure, but not the only one, as we will explore here. For instance, a model tasked with diagnosing a rare disease in a dataset composed predominantly of negative instances (95% negative) may falsely appear proficient if it only predicts the majority class, thereby neglecting its critical objective.

Therefore, selecting appropriate performance metrics is critical to fully evaluate how a model performs. These metrics must align with the application’s specific objectives, ensuring a more accurate and holistic assessment of the model’s effectiveness. It’s important to note that, for the sake of brevity, this discussion will focus on a few key metrics from supervised learning – namely classification and regression. There are a variety of metrics for unsupervised learning, reinforcement learning, and even within supervised learning itself, which, while vital, aren’t included in this focused examination:

  • Classification Problems: for classification problems, adding precision and recall to the accuracy metric offers a more detailed view of model performance. These metrics give insight into the trade-off between false positives and false negatives, providing deeper insights into model reliability.
  • Regression Problems: in tasks predicting continuous outcomes, mean squared error (MSE) and mean absolute error (MAE) are commonly used for quantifying the difference between the predicted values and the actual values. Alternatively, the R-squared value describes how well the variability in the target variable is explained by the model.
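The rare-disease example above can be reproduced in a few lines (scikit-learn is again an assumed tooling choice): a “model” that always predicts the majority class reaches 95% accuracy on a dataset with 95% negative instances, yet its recall for the rare positive class is zero.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 100 patients, 5 of whom actually have the rare disease (label 1).
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100  # naive predictor that always outputs the majority class

print(accuracy_score(y_true, y_pred))                    # 0.95 - looks proficient
print(recall_score(y_true, y_pred, zero_division=0))     # 0.0 - misses every real case
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0 - no correct positives
```

This is why reporting precision and recall alongside accuracy gives a far more honest picture of model reliability on imbalanced data.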

c. Continuous Evaluation and Adaptation

For AI models to match the “consistent performance throughout their lifecycle” requirement for model accuracy, a disciplined approach to continuous evaluation and adaptation is essential. This approach also enables “communication of accuracy and accuracy metrics”, as specified in the EU AI Act.

  • Automated Retraining: implementing automated retraining frameworks allows models to be periodically updated with new data, ensuring they evolve in line with changing data trends. ML-as-a-service products, such as Azure Machine Learning, facilitate this by automating the retraining process, simplifying maintenance of models and model performance.
  • Performance Monitoring and Data Drift Detection: continuous monitoring of model performance and detection of shifts in underlying data distributions are critical for timely intervention. This includes setting up alerts on performance metrics and data drift, enabling a rapid response to maintain model accuracy. ML-as-a-service products store these monitoring results as part of the logs that address the automatically generated logs the EU AI Act requires.
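As a sketch of what such a drift check can look like (managed services like Azure Machine Learning provide built-in monitors; this is one assumed do-it-yourself approach), the snippet below compares the training-time distribution of a single feature against recent production data with a two-sample Kolmogorov–Smirnov test and raises an alert when the distributions diverge. The threshold `DRIFT_ALPHA` and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Feature distribution observed at training time vs. in production,
# where the production data has drifted (shifted mean).
training_feature = rng.normal(loc=0.0, scale=1.0, size=1_000)
production_feature = rng.normal(loc=0.5, scale=1.0, size=1_000)

stat, p_value = ks_2samp(training_feature, production_feature)
DRIFT_ALPHA = 0.01  # assumed alerting threshold

drift_detected = p_value < DRIFT_ALPHA
if drift_detected:
    print(f"Data drift detected (KS statistic={stat:.3f})")
else:
    print("No significant drift")
```

Logging the test statistic and decision on every run also contributes to the automatically generated logs the EU AI Act requires.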


Knowing the general rules for using AI in the EU is important. Just as important, however, is knowing how to comply with these rules in practice. Compliance may be more challenging than people expect, as we tried to show with the example of the “accuracy” requirement for “high-risk” AI systems. Therefore, the assistance of technical experts is essential.
