Ensuring compliance when using AI-based tools

30-06-2023
Ensuring responsible AI use: key steps for companies to navigate the evolving AI landscape and avoid potential risks.

Companies must take control of AI usage to prevent harm and ensure compliance. Establishing governance, raising awareness, and ensuring data protection, transparency, ethics, control frameworks, and documentation are vital to safe and ethical AI use in line with regulations.

Everyone needs AI today - but who is in control of its use?

AI technology is evolving rapidly and will soon permeate almost every aspect of our lives. Accelerators such as the AI-based "Microsoft Copilot", which will soon be available to anyone using Microsoft technologies, will bring AI applications into every company, exposing even companies that have so far had little to do with AI. Given their convenience and usefulness, AI tools are already widely used, and with them the risk of AI-induced errors, so-called "AI hallucinations", is growing very rapidly. Consider the recent case of a US lawyer who used ChatGPT to find six precedent cases involving passengers injured by serving trolleys on airplanes: the court determined that the cited cases had never existed.

Accordingly, companies should think about their use of AI now and bring it under control before the many benefits are joined by real harm. When using AI-based tools, compliance with legal, ethical, and regulatory requirements is key; that is why various authorities, first and foremost the EU, are currently drafting guidelines and laws.

To keep AI under control, proactive measures must be established that take effect across the entire lifecycle of an AI implementation. These are:

  1. Establishing appropriate governance, with clearly defined roles and responsibilities for the use of AI.
  2. Raising awareness and providing training for everyone who comes into contact with AI.
  3. Complying with data protection and data security laws and regulations.
  4. Ensuring transparency and explainability: AI systems must be designed to deliver transparent and understandable results. Decisions made by AI algorithms must be verifiable, explainable, and justifiable, especially when they affect people or important processes (see the first sketch after this list).
  5. Implementing ethics and fairness: AI models and datasets must take ethical principles into account; both the models and their results must therefore be checked against ethics and fairness requirements (second sketch below).
  6. Providing data governance: the quality of the data processed by AI determines the quality of the results. Data governance is therefore required that specifies data quality requirements, data provenance, and compliance with data protection regulations (third sketch below).
  7. Implementing control frameworks and audits: AI systems must be integrated into a control framework, not only to ensure their compliance but also to systematically verify the accuracy of their results (fourth sketch below). Regular audits are mandatory in view of the rapid pace of development.
  8. Keeping records: for accountability and quality assurance purposes, documentation must be retained covering the development process and the decisions made in it, the data sources and data quality specifications used, the trained model, the pre-processing steps, and the testing and validation procedures (fifth sketch below).
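
What "explainable" means in practice depends heavily on the tooling in use. As a minimal, purely illustrative sketch in Python (assuming scikit-learn and a tabular classification model; the dataset and model choice are hypothetical), permutation importance is one model-agnostic way to show which inputs drive a model's results:

```python
# Illustrative only: rank input features by their influence on predictions,
# one simple building block of an explainability review.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops -- a model-agnostic view of what drives the results.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```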
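
One common fairness check, named here as one example among many, is demographic parity: comparing outcome rates across groups defined by a protected attribute. A minimal sketch with entirely made-up decisions and group labels:

```python
import numpy as np

# Hypothetical data: binary model decisions (1 = approved) and a protected
# attribute splitting the applicants into two groups.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

# Demographic parity compares the approval rates of the two groups; a large
# gap signals that the model's outcomes warrant an ethics and fairness review.
rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```

In practice, several such metrics would be tracked, with acceptable thresholds agreed as part of the governance framework.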
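
Data quality specifications can often be expressed as small, automated checks. A sketch assuming pandas and a hypothetical customer table:

```python
import pandas as pd

# Hypothetical customer table containing a few typical quality problems.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],          # duplicate ID
    "age": [34, -5, 41, None],            # implausible and missing values
    "country": ["CH", "DE", "CH", "FR"],
})

# Declarative checks of the kind a data governance policy might specify.
checks = {
    "customer IDs are unique": df["customer_id"].is_unique,
    "no missing ages": df["age"].notna().all(),
    "ages are plausible (0-120)": df["age"].dropna().between(0, 120).all(),
}
for name, passed in checks.items():
    print(f"{name}: {'OK' if passed else 'FAILED'}")
```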
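
One building block of such a control framework is an automated audit step that re-scores the model against a fixed, human-verified reference ("golden") dataset. A minimal sketch; the threshold, model interface, and data are all hypothetical:

```python
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.95  # agreed as part of the control framework

def audit_model(model, golden_inputs, golden_labels):
    """Re-score the model on a fixed, human-verified 'golden' dataset and
    fail loudly if accuracy falls below the agreed threshold."""
    accuracy = accuracy_score(golden_labels, model.predict(golden_inputs))
    if accuracy < ACCURACY_THRESHOLD:
        raise RuntimeError(f"audit failed: accuracy {accuracy:.3f} < {ACCURACY_THRESHOLD}")
    return accuracy

# Trivial stand-in model so the sketch runs end to end.
class EchoModel:
    def predict(self, inputs):
        return inputs

print(audit_model(EchoModel(), [0, 1, 1, 0], [0, 1, 1, 0]))  # 1.0 -- audit passes
```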
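
Finally, the required documentation can be captured as a structured, machine-readable record kept alongside the model. A minimal sketch; all field names and values are hypothetical, and real documentation would cover considerably more:

```python
import json
from dataclasses import asdict, dataclass

# Hypothetical minimal training record for accountability purposes.
@dataclass
class ModelRecord:
    model_name: str
    training_date: str
    data_sources: list
    preprocessing_steps: list
    validation_procedure: str
    test_accuracy: float

record = ModelRecord(
    model_name="credit-risk-v3",
    training_date="2023-06-01",
    data_sources=["crm_export_2023q1", "payment_history"],
    preprocessing_steps=["deduplication", "missing-value imputation"],
    validation_procedure="5-fold cross-validation",
    test_accuracy=0.94,
)

# Persist the record so development decisions stay traceable and auditable.
with open("model_record.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```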

Ensuring compliance when using AI-based tools is therefore a multi-faceted task that requires a comprehensive approach. Done well, however, it enables companies to safely leverage the benefits of AI while complying with legal and ethical standards.

Thomas Bolliger

Director, Information Management & Compliance

KPMG Switzerland