Artificial Intelligence (AI) can enrich our lives in many ways – increased efficiency and lower costs, major improvements in healthcare and research, safer vehicles, and everyday convenience are just some of its promises. At the same time, the use of personal data in AI applications is drawing significant attention to privacy and other ethical issues. As with any new technology, the opportunities of AI come with an array of challenges for society and the law.

AI solutions raise significant concerns - particularly regarding privacy, data protection, and ethics - ranging from questions of fairness and hidden biases in big data all the way to the possibility of truly autonomous machines that may directly or indirectly harm individuals. The purpose of this article is to highlight some of the challenges that AI presents in relation to data protection and ethics, and to show how KPMG can help your organization meet regulatory requirements and maintain customer trust and loyalty.

Privacy is a fundamental human right, which may be threatened by some types of AI applications

The “Artificial Intelligence Act” drafted by the European Commission aims to establish an ethical framework to ensure that organizations consider the impact of their AI systems on people, other businesses, the environment, and many other aspects of our lives. Much like the GDPR, the EU AI Act has the potential to set global standards for the positive use of AI in our daily lives.

As personal data collection and creation become ever easier, AI increases the possibilities for tracking and analyzing people's daily habits, potentially leading to breaches of EU data protection rules. When organizations are not transparent about why and how data is collected and stored, privacy is at risk.

Control over personal information is an inherently important aspect of data protection, and AI brings new threats to it. Many features of the European General Data Protection Regulation (GDPR) are directly relevant to AI, among them breach notifications, controller responsibilities, hefty financial penalties, data protection impact assessments, privacy by design, and the right to be forgotten. The GDPR not only applies to the processing of personal data but also influences the development and deployment of AI systems. Users have very limited obligations under the AI Act (e.g., Article 29), but they remain liable and accountable under the GDPR regime for any data processing that results from their use of AI systems. Providers of AI systems may even qualify as "processors" under the GDPR if they provide support or maintenance services for AI systems that involve processing personal data on behalf of users. [1]

Privacy rights must be safeguarded by data governance models that build in the GDPR's core principles. Indeed, appropriate personal data protection helps foster trust in data sharing and facilitates the uptake of data-sharing models. Data minimization and data protection should never be leveraged to hide bias or avoid accountability; bias and accountability should be addressed without harming privacy rights. Importantly, ethical issues can arise not only when processing personal data but also when an AI system uses non-personal data (e.g., racial bias).[2]
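One way to make these principles concrete in engineering practice is to tie data collection to a declared purpose. The sketch below (in Python, with invented field names and purposes) illustrates a simple purpose-based minimization filter; it is an illustration of the principle, not a compliance mechanism.

```python
# Minimal sketch of purpose-based data minimization: keep only the fields
# a declared purpose actually needs and drop everything else before
# processing. Field names and purposes are invented for illustration.
PURPOSE_FIELDS = {
    "credit_scoring": {"income", "existing_debt", "payment_history"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of the record restricted to the purpose's allowed fields."""
    allowed = PURPOSE_FIELDS[purpose]
    return {field: value for field, value in record.items() if field in allowed}

applicant = {
    "name": "A. Sample",
    "income": 52_000,
    "existing_debt": 4_000,
    "payment_history": "good",
    "postcode": "12345",
}
print(minimize(applicant, "credit_scoring"))
# {'income': 52000, 'existing_debt': 4000, 'payment_history': 'good'}
```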

The European Union's AI Act also responds to ethical dilemmas raised by AI

It is crucial for organizations to familiarize themselves with the ethical concepts surrounding AI and to recognize that ensuring the ethical use of advanced or AI-powered algorithms has become a vital concern. As technology evolves, new AI models and methods emerge continuously and uptake is on the rise, yet these systems are often deployed without proper understanding or regulation, leading to unethical outcomes despite attempts to minimize biases in the underlying systems.[3]

One of the main concerns related to AI is the potential for bias in the design of systems and the use of biased data sets. The concern is not so much data accuracy as the potential for AI systems to perpetuate and amplify existing societal biases present in the data used to train the algorithms, leading to unfair outcomes. Credit scoring in the United States illustrates this: individuals who rent and do not have a credit card may receive a poor credit score and be offered loans with higher interest rates and stricter terms.
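To see how such bias propagates mechanically, consider the simplified sketch below. All data is synthetic and the features, thresholds, and model choice are invented for illustration: because the historical approval labels penalize renters without credit cards, a model trained on them penalizes the same group even at identical incomes.

```python
# Simplified sketch of bias propagation: a model trained on historically
# biased approval decisions reproduces that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

income = rng.normal(50, 15, n)    # annual income in thousands (invented)
renter = rng.integers(0, 2, n)    # 1 = rents, no mortgage history
has_card = rng.integers(0, 2, n)  # 1 = holds a credit card

# Hypothetical historical decisions: renters without a card were denied
# regardless of income. That human bias becomes the training label.
approved = (income > 45) & ~((renter == 1) & (has_card == 0))

X = np.column_stack([income, renter, has_card])
model = LogisticRegression(max_iter=1_000).fit(X, approved)

# Two applicants with identical income, differing only in housing/card status.
applicants = np.array([
    [60, 0, 1],  # homeowner with a credit card
    [60, 1, 0],  # renter without a credit card
])
print(model.predict_proba(applicants)[:, 1])
# The renter scores markedly lower despite identical income, because the
# labels encoded the historical bias.
```

Note that simply deleting the renter and credit-card columns is not a complete fix, since correlated features can act as proxies for them; this is part of why the concern goes beyond data accuracy.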

As organizations continue to implement AI solutions and applications, it is crucial that they understand and comply with the EU's AI Act.

Although the GDPR was welcomed as a major step forward for the protection of individual privacy, it will be substantially challenged as digital technologies continue to evolve. Cloud computing already demonstrates this: in practice it blurs the roles and oversight of data controllers, and it makes it increasingly difficult for data subjects to exercise the rights enshrined in the Regulation.

Under the GDPR, Data Protection Impact Assessments (DPIAs) must be conducted for processing that is likely to result in a high risk to individuals' rights and freedoms, such as large-scale data processing. However, a DPIA may not fully address the specific ethical considerations that AI systems raise, and in practice a user may deploy a high-risk AI system without conducting any form of impact assessment in situations where a DPIA is not required. To remedy this, impact assessments and audits should be conducted on high-risk AI systems, such as those used in self-driving cars and in decision-support systems in education, immigration, and employment. Organizations can also use the "Assessment List for Trustworthy Artificial Intelligence" as a practical guide for assessing their AI systems.
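This gap can be pictured with a deliberately simplified triage sketch. The domain list and trigger conditions below are illustrative assumptions, not a legal test; the point is that the AI Act's risk trigger can fire even where the GDPR's DPIA trigger does not.

```python
# Simplified triage sketch: an AI system can be high-risk (and warrant an
# impact assessment) even when no GDPR DPIA is triggered. The domains and
# trigger conditions are illustrative, not a legal determination.
HIGH_RISK_DOMAINS = {"education", "immigration", "employment", "transport"}

def assessment_needed(domain: str,
                      processes_personal_data: bool,
                      large_scale: bool) -> bool:
    dpia_trigger = processes_personal_data and large_scale  # GDPR (simplified)
    ai_act_trigger = domain in HIGH_RISK_DOMAINS            # AI Act (simplified)
    return dpia_trigger or ai_act_trigger

# A small-scale hiring tool: no DPIA under this simplification, yet still
# high-risk under the AI Act and therefore still worth assessing.
print(assessment_needed("employment", processes_personal_data=True,
                        large_scale=False))  # True
print(assessment_needed("marketing", processes_personal_data=True,
                        large_scale=False))  # False: neither trigger fires
```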

What are the risks for companies?

Much as under the GDPR, non-compliance with the AI Act will be punishable by a fine of up to EUR 30 million or six percent of annual global turnover, whichever is higher. Beyond fines, AI-related ethical issues may carry broad and long-term reputational and strategic risks. Complying with the EU's ethical requirements is therefore highly advisable; otherwise, the latest AI solution implemented within an organization may prove not only unethical but also expensive.
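How the “whichever is higher” ceiling works is easiest to see with a quick calculation; the turnover figures in the sketch below are invented examples.

```python
# Penalty ceiling in the Commission's draft AI Act: the higher of
# EUR 30 million or 6% of annual global turnover. Turnover figures invented.
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * annual_global_turnover_eur)

print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 120,000,000 (6% applies)
print(f"EUR {max_fine_eur(100_000_000):,.0f}")    # EUR 30,000,000 (floor applies)
```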

What can companies do?

It is essential to address the potential biases and ethical issues associated with AI to ensure that it remains a force for good in society. Privacy plays a crucial role in making ethical choices about how we use AI. Balancing technological innovation and privacy considerations promotes the development of socially responsible AI that can assist in the long-term creation of public value.

Companies must understand that safeguarding ethics in the use of advanced or AI-powered algorithms has become one of the most important aspects of introducing new technologies. The pressure to implement AI ethically is not only internal; it also comes from outside investors, who do not want their investments tarnished by the perception of unethical AI use. Companies should take this seriously, as AI-related ethical issues may carry broad and long-term reputational, financial, and strategic risks. It is therefore prudent to engage the board when addressing AI risks. Ideally, the task should fall to a technology or data committee composed of board members or, if no such committee exists, to the entire board.

KPMG helps organizations develop a clear vision of their business principles and create a governance framework to ensure that their use of AI technology is aligned with their values. Our specialists can help develop frameworks and tools for the ethical-risk due diligence process and ensure that any new risk mitigation plan is compatible with, and not redundant to, existing risk mitigation practices. We can offer tailor-made guidelines with minimum ethical standards. These are important for gaining the trust of customers and clients, and for demonstrating that the company’s due diligence has been performed should regulators investigate whether the organization has deployed a discriminatory model.


Authors: Kamila Kaczmarek and Johanna Vandervorst