ISO/IEC 42001: The latest AI management system standard

10-04-2024
Unlock Trusted AI by navigating the ISO/IEC 42001 standard. Manage risk and use AI responsibly while balancing innovation, governance, and ethics.

ISO/IEC 42001 is the latest standard for AI management systems.

Leading the way for other regulations, it covers the requirements for organizations to build a trustworthy AI management system, including risk management, AI system impact assessment, system lifecycle management and the management of third-party suppliers.

In December 2023, ISO introduced the 42001 International Standard, which offers valuable guidance for organizations to develop trustworthy AI management systems.

This standard supports the responsible development, deployment and operation of AI systems, which is critical for successful AI adoption and broader digital transformation.

Many jurisdictions are now drafting AI laws, including the EU AI Act, and ISO/IEC 42001 serves as a cornerstone for these regulations while providing essential guidance for compliance.

Reto P. Grubenmann

Director, Head of Certification & Attestation

KPMG Switzerland

Flavia Masoni

Expert, Cyber Security

KPMG Switzerland

A first look at ISO/IEC 42001

The AI management system standard, ISO/IEC 42001, provides guidance for organizations to address AI challenges such as ethics, transparency and continuous learning.

This methodical approach helps businesses balance innovation and governance while managing risks and opportunities.

The standard's rigorous structure is in line with that of other management system standards, notably the Information Security Management System (ISO/IEC 27001) and the Privacy Information Management System (ISO/IEC 27701).

ISO/IEC 42001 covers all phases of the Plan-Do-Check-Act cycle with respect to AI:

  • It requires organizations to determine the scope of applicability of the AI management system and to produce a statement of applicability that includes the necessary controls (a simple illustration follows this list).
  • It requires organizations to support the AI system development process, to maintain high standards for continual improvement and maintenance, and to monitor the performance of the AI management system.
  • Finally, it requires organizations to improve the system based on previous observations and to implement corrective actions.
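The standard does not prescribe a format for the statement of applicability. Purely as an illustration, and assuming hypothetical control names and fields, it could be captured as structured data along these lines:

```python
# Illustrative only: ISO/IEC 42001 does not prescribe a format for the
# statement of applicability. Control names and fields here are assumptions.
statement_of_applicability = [
    {
        "control": "AI risk assessment process",
        "applicable": True,
        "justification": "In-house models are developed and deployed to customers.",
    },
    {
        "control": "Third-party AI supplier evaluation",
        "applicable": True,
        "justification": "External foundation models are consumed via an API.",
    },
    {
        "control": "Safeguards for autonomous control of physical equipment",
        "applicable": False,
        "justification": "No AI system operates physical machinery.",
    },
]

def excluded_controls(soa):
    """Return controls marked not applicable, so exclusions can be reviewed."""
    return [entry["control"] for entry in soa if not entry["applicable"]]

print(excluded_controls(statement_of_applicability))
```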

Inside ISO/IEC 42001: a closer look at key controls

Among the numerous controls included in the standard, several key elements stand out and help to clarify its focus:

  • Risk management: Organizations are required to implement processes to identify, analyze, evaluate and monitor risks throughout the management system's lifecycle (a simple sketch follows this list).
  • AI impact assessment: Organizations must define a process to assess the potential consequences of the AI system for its users. An impact assessment can be performed in different ways, but it must consider the technical and societal context in which the AI is developed.
  • System lifecycle management: Organizations must take care of all aspects of the development of the AI system, including planning, testing and remediating findings.
  • Performance optimization: The standard also places a strong emphasis on performance, requiring organizations to continuously improve the effectiveness of their AI management system.
  • Supplier management: The controls cover not only the organization's internal processes but also extend to suppliers, who must be aligned with the organization's principles and approach.
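To make the risk management and impact assessment controls more tangible, here is a minimal sketch of a risk register in Python; the fields, the 1–5 scoring scale and the example risks are illustrative assumptions rather than requirements of the standard:

```python
from dataclasses import dataclass, field

# Sketch of a risk register entry; fields, scoring scale and example risks
# are illustrative assumptions, not text taken from ISO/IEC 42001.
@dataclass
class AIRisk:
    description: str
    likelihood: int                     # 1 (rare) .. 5 (almost certain)
    impact: int                         # 1 (negligible) .. 5 (severe)
    affected_parties: list[str] = field(default_factory=list)
    treatment: str = "not yet defined"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    AIRisk("Credit-scoring model treats a demographic group unfairly",
           likelihood=3, impact=5, affected_parties=["loan applicants"],
           treatment="bias testing before each release"),
    AIRisk("Chatbot gives incorrect answers based on outdated information",
           likelihood=4, impact=3, affected_parties=["customers", "support staff"],
           treatment="answers grounded in a curated, versioned knowledge base"),
]

# Monitoring step: surface the highest-scoring risks for management review.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[score {risk.score:>2}] {risk.description} -> {risk.treatment}")
```

In practice, such a register would be reviewed and updated throughout the lifecycle, in line with the monitoring and continual improvement requirements described above.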

Why companies should adopt the ISO/IEC 42001 approach

It is worthwhile for organizations to start researching and implementing the standard. Compliance with ISO/IEC 42001 requirements can bring several benefits to companies:  

  • Within the organization, risk management becomes more rigorous and efficient, mitigating potential harm. This includes addressing AI-specific risks, such as treating individuals unfairly or making incorrect decisions based on inaccurate information, and other challenges unique to the AI landscape.
  • The company's reputation can also benefit from increasing trust in the products it develops, a crucial factor when selling AI products to third parties. It's equally important to manage the risks associated with using third-party AI products, ensuring a comprehensive approach to trust and reliability in the broader AI ecosystem.
  • Being compliant with standards provides a competitive advantage by instilling confidence in customers and stakeholders. It demonstrates a commitment to quality, ethical practices and adherence to industry-recognized benchmarks. This differentiates the organization from its competitors and fosters trust in its products and services.

Compliance also prepares companies for additional regulations that will be introduced in the coming years, including the EU AI Act published in 2024.

Practical steps for the implementation of ISO/IEC 42001

To advance toward robust AI governance, organizations should: 

  • Familiarize themselves with the standard to understand the requirements of ISO/IEC 42001. 
  • Communicate with key stakeholders to get them on board.
  • Conduct a readiness assessment to evaluate current AI practices against the requirements of ISO/IEC 42001 (a simple gap-analysis sketch follows this list).
  • Develop a detailed roadmap to implement the requirements effectively and efficiently.
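As a rough illustration of the readiness assessment, the snippet below compares a hypothetical inventory of current practices against requirement areas paraphrased from this article; none of the area names or statuses are taken from the standard itself:

```python
# Illustrative gap analysis for a readiness assessment. The requirement areas
# paraphrase themes discussed in this article, not the standard's clauses,
# and the current-practice statuses are hypothetical.
requirement_areas = [
    "AI policy and scope of the management system",
    "Risk management across the AI lifecycle",
    "AI system impact assessment procedure",
    "Lifecycle controls: planning, testing, remediation",
    "Supplier management for third-party AI",
    "Performance monitoring and continual improvement",
]

current_practices = {
    "Risk management across the AI lifecycle": "partial",
    "Supplier management for third-party AI": "in place",
}

for area in requirement_areas:
    status = current_practices.get(area, "missing")
    print(f"{status:>9}  {area}")
```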

Recognizing the gaps

While ISO/IEC 42001 provides the basis for AI management systems, it acts as an umbrella standard.

For in-depth technical details, including aspects such as AI model validation, organizations may need to explore more specific standards that address individual components of a robust AI system.

To confirm that AI models are working as intended, they must be validated against rigorous standards.

Such additional controls could assess the bias of AI models and test the robustness of the system, helping organizations build a more trustworthy AI system.
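As an illustration of what such validation checks might look like in code, the sketch below runs a simple bias metric (demographic parity difference) and a robustness probe (prediction stability under small input noise) against a stand-in model; the metric choice, threshold and model are assumptions for demonstration, not requirements of ISO/IEC 42001 or any specific companion standard:

```python
import random

# Two toy validation checks: a bias metric and a robustness probe.
# Metric, threshold and stand-in model are illustrative assumptions only.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-outcome rate across groups."""
    rates = []
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(outcomes) / len(outcomes))
    return max(rates) - min(rates)

def stability_rate(model, inputs, noise=0.01, trials=100):
    """Fraction of inputs whose prediction is unchanged under small perturbations."""
    stable = 0
    for x in inputs:
        baseline = model(x)
        stable += all(model(x + random.uniform(-noise, noise)) == baseline
                      for _ in range(trials))
    return stable / len(inputs)

# Stand-in "model": approve whenever the score exceeds a fixed threshold.
model = lambda score: int(score > 0.5)

scores = [0.2, 0.7, 0.55, 0.9, 0.4, 0.6]
groups = ["A", "A", "A", "B", "B", "B"]
predictions = [model(s) for s in scores]

print("Bias check (demographic parity difference):",
      demographic_parity_difference(predictions, groups))
print("Robustness check (stable prediction rate):",
      stability_rate(model, scores))
```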