What the EU's Artificial Intelligence Act means for Swiss businesses
As artificial intelligence (AI) influences more and more aspects of our daily lives, the ethics surrounding the use of our data move further into the spotlight. New EU legislation aims to establish an ethical framework ensuring that businesses consider the impact on people, companies, the environment and many other aspects of our lives. With the law applying outside the EU, too, and with financial penalties for non-compliance, what are the implications for companies in Switzerland?
What do we mean by AI?
Oxford Languages defines artificial intelligence (AI) as "the development of computer systems able to perform tasks normally requiring human intelligence".
To substitute for human intelligence, AI uses algorithms that derive rules or recognize patterns from enormous data sets. These algorithms make it possible to identify behavioral patterns or deviations that can lay the foundation for automated decisions. The existing and potential applications of AI-generated findings in business are huge, as we probably all recognize.
From assessing creditworthiness to making selections in hiring processes to carrying out medical diagnoses, the list is almost endless. Humans are being evaluated by, or exposed to, technical processes in more and more areas of their lives.
What are the challenges and risks of AI?
AI facilitates human existence in many ways. Self-driving cars, robots and the voice recognition software in our smartphones are already part of our lives. However, the enormous power attributed to AI also triggers fears and calls for control and transparency. The question is whether, how and to what extent an automated process can be compatible with society's expectations with respect to ethics and morals.
Predicting the future isn't magic, it's artificial intelligence.
How to ensure that ethical aspects are taken into account?
In April 2021, the European Commission published a legislative proposal known as the "Artificial Intelligence Act". The act has an extraterritorial scope, meaning it is likely to apply to Swiss companies too. It is the first European legal framework for AI solutions, and it is notable that it also addresses ethics.
The proposal aims to establish a framework for trustworthy AI. It requires AI systems to be legally, ethically and technically robust, and to respect democratic values, human rights and the rule of law.
The fact that legal and technical requirements are addressed is neither new nor surprising. What is striking, however, is the emphasis placed on ethical criteria. The proposed AI regulation refers to a complete framework of ethical compliance, demanding a coherent set of principles rather than stand-alone guidelines.
The following values are explicitly addressed:
- respect for human autonomy
- prevention of harm
- fairness
- explicability
In other words, if an AI system could potentially pose a high risk to the health, safety or fundamental rights of natural persons, its providers are required to perform an ex ante conformity assessment.
This assessment is based on internal checks and aims to ensure compliance with the EU regulation. Providers of less-invasive AI systems are encouraged to apply these requirements voluntarily. The prerequisites for the development process or the system itself include establishing safeguards against potential biases in data sets, making use of adequate data governance and management systems, and ensuring acceptable levels of transparency, among other things. As the requirements must be compliant with the entire Artificial Intelligence Act, however, the ethical framework must be established with care.
How can companies achieve ethical readiness?
Effectiveness, feasibility and budgets are all considerations in this regard. Following a practicable approach, the key values as stated by the EU should be considered at every stage of the data lifecycle.
In the first stage, collection and acquisition, commitments such as the following are needed: personal weaknesses will not be exploited in order to obtain data, intuitive and selectable forms of consent will be offered, and no "dark patterns" will be used.
In the second stage, storage and management, the following principles are considered a minimum: sufficient cyber security for all types of data storage, communication of changes in data management and transparency over data protection and access.
In the third stage, analysis and decision-making, an explanation of the logic of the models used should be provided, along with a justification of the appropriateness of the algorithms employed and a definition of measures intended to eliminate discrimination.
The last stage should deal with the set-up of an effective monitoring system, including, for instance, KPI-based assessment of compliance with ethical standards throughout the data lifecycle, an opportunity for customers to provide negative feedback, and regular checks of automated decision-making systems for discrimination.
By considering these and similar ethical issues, principles and commitments as well as safeguarding and monitoring them throughout the entire lifecycle, it should be possible to adhere to the values required by the EU – namely respect for human autonomy, prevention of harm, fairness and explicability.
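One practicable way to keep track of such commitments is to treat them as a checklist spanning the four lifecycle stages. The sketch below is purely illustrative: the stage names and example items are taken from this article, while the data structure and the hypothetical `open_items` helper are assumptions, not anything prescribed by the AI Act.

```python
# Illustrative sketch only: the AI Act does not prescribe this structure.
# Stage names and example commitments follow the article; the review
# mechanics are a hypothetical convenience for internal compliance checks.
LIFECYCLE_COMMITMENTS = {
    "collection and acquisition": [
        "do not exploit personal weaknesses to obtain data",
        "offer intuitive, selectable forms of consent",
        "avoid 'dark patterns'",
    ],
    "storage and management": [
        "sufficient cyber security for all data storage",
        "communicate changes in data management",
        "transparency over data protection and access",
    ],
    "analysis and decision making": [
        "explain the logic of the models used",
        "justify the appropriateness of the algorithms",
        "define measures to eliminate discrimination",
    ],
    "monitoring": [
        "KPI-based assessment of ethical compliance",
        "channel for negative customer feedback",
        "regular discrimination checks on automated decisions",
    ],
}


def open_items(review):
    """Return (stage, commitment) pairs not yet evidenced in a review.

    `review` maps stage names to the set of commitments already met.
    """
    return [
        (stage, item)
        for stage, items in LIFECYCLE_COMMITMENTS.items()
        for item in items
        if item not in review.get(stage, set())
    ]
```

A compliance review could then iterate over `open_items({})` to list everything still outstanding, shrinking the list as evidence is gathered per stage.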
What happens if the ethical aspirations are not met?
As with GDPR, these rules also apply extraterritorially to providers and users outside the EU in specific cases. That means Swiss companies could be asked to pay up. Non-compliant behavior is punishable by a fine of up to EUR 30 million or six percent of annual global turnover, whichever is higher. Setting the moral imperative aside, that means complying with the EU's ethical requirements is advisable, otherwise your latest AI solution could become immorally expensive.