While the promotion of innovation and digitisation is seen as an important objective within Europe, European lawmakers aim to ensure that the fundamental rights of citizens are not affected. With existing and future legislation – such as the GDPR and the AI Act – the EU is trying to strike the right balance. In this third part of our three-part blog series on 'free' AI applications, we examine the implications in terms of privacy and AI surveillance.
The GDPR and the AI Act
The GDPR (“General Data Protection Regulation”) is a regulation that has been in force since 25 May 2018. Its purpose is to protect people's privacy and to regulate how organisations handle personal information. The GDPR includes rules for processing personal data, such as the need for a legal basis before the processing of personal data is allowed. There are six legal bases, which can be summarily described as: consent, performance of a contract, legal obligations, protecting vital interests, the public interest and a legitimate interest. Parties that do process personal information have to be clear and transparent about how and why they do so. The persons whose information is processed retain various rights, such as the right to object, the right to rectification and the right to erasure of their personal information.
The GDPR applies to all organisations situated within the European Union that process personal data, and also to organisations outside the EU that process the personal data of EU citizens. Processing means anything done in relation to personal data. Examples are collecting, using, transferring, hosting, storing, maintaining, collating and deleting personal data. As such, the processing of personal data also includes the collection of large amounts of data by AI (“Artificial Intelligence”) applications such as ChatGPT. As has become very clear, Italy banned ChatGPT for a while, citing privacy concerns relating to the model. These concerns have since been addressed and ChatGPT can be used again in Italy. However, this does not mean that all is well. Various privacy watchdogs have already expressed other concerns, with the Dutch Data Protection Authority (“Autoriteit Persoonsgegevens” or “AP”) addressing a letter to OpenAI inquiring about various aspects of its handling of data and safeguarding of privacy.
The GDPR is not the only hurdle to be overcome for the free use of AI applications like ChatGPT. Just this week, European lawmakers voted in favour of the AI Act. This new regulation aims to control the use of AI within all sectors. The aim of the AI Act is to promote security, innovation and trust related to AI. The regulation looks at AI applications at the product level: an AI application will therefore have to meet the requirements arising from the AI Act. This ultimately means that AI applications within all sectors will be regulated by one and the same regulation. In this blog, we highlight two areas of concern regarding the GDPR and the AI Act that could have a major impact on the availability of AI applications like ChatGPT.
Information gathering by ChatGPT
The manner in which OpenAI, the developer of ChatGPT, collects data is still not completely clear. Presumably it is largely done through so-called 'data scraping', possibly in combination with buying data and using data from partners. Data scraping is a technique whereby automated software collects information from websites and restructures it into a usable format. It is a particularly effective way of quickly collecting large amounts of data, but it is not readily permitted. A good example is Clearview AI, a facial recognition service that collects images of human faces from the internet via data scraping. Investigative agencies can deploy Clearview AI by uploading a picture of a suspect to find out who that person is. The Italian, French and Greek privacy regulators, among others, ruled that this was a gross invasion of the privacy of anyone whose photo was unlawfully collected. Taken together, their decisions resulted in fines totalling €60 million for the US parent company of Clearview AI.
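To illustrate the mechanics described above – and not the pipeline of OpenAI, Clearview AI or any other specific company – a minimal data-scraping sketch in Python could look as follows. It parses HTML and restructures the hyperlinks it finds into a plain list. The page snippet and link targets are hypothetical; in practice the HTML would be fetched over the network and run against many pages.

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collects every hyperlink target found in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # For each <a> tag, record its href attribute if present.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# In a real scraper this HTML would be downloaded (e.g. with
# urllib.request); a fixed snippet keeps the example self-contained.
page = """
<html><body>
  <a href="/profile/alice">Alice</a>
  <a href="/profile/bob">Bob</a>
</body></html>
"""

scraper = LinkScraper()
scraper.feed(page)
print(scraper.links)  # the 'restructured' result: a plain Python list
```

Even this toy example shows why regulators take an interest: nothing in the technique itself distinguishes between data that may lawfully be collected and data – such as personal information – that may not.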
Just because information is available online does not mean it can be used legally. Many websites prohibit this in their own terms and conditions. In addition, the GDPR needs to be taken into account. The GDPR, for example, contains the principle of data minimisation: as an organisation, you should not collect more data than is strictly necessary for the purpose for which you are collecting it. The idea behind this is that the less data you hold, the less likely you are to lose it. AI applications like ChatGPT, on the contrary, collect as much data as possible in order to train their models with it. Each piece of data can be seen as a practice exercise: the more of these exercises the model performs, the better the results. More data therefore leads to better results. While well-functioning AI applications can offer many benefits, the end does not necessarily justify the means. Unbridled information gathering is undesirable given the risks it poses to privacy and other fundamental rights. It is therefore highly doubtful whether ChatGPT's method of operation is permissible (within the EU) in its current form. This will become even more questionable with the arrival of the AI Act.
ChatGPT and the AI Act
The free use of AI applications like ChatGPT is not only restricted by the GDPR. As mentioned above, European lawmakers have recently voted in favour of the AI Act. The objective of the AI Act is to promote security, innovation and trust related to AI. One way the AI Act tries to achieve this is by classifying AI applications into different risk categories. In the current version of the AI Act, text-generating AI applications such as ChatGPT are qualified as generative foundation models, which are subject to additional transparency requirements. If the AI Act goes into effect in its current form, ChatGPT would have to comply with requirements such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training. These additional requirements prompted OpenAI CEO Sam Altman to say that OpenAI would cease operating in the EU if complying with the new regulation proved too complicated and costly. Even though Sam Altman has since retracted that statement, it shows how impactful the AI Act may be for applications like ChatGPT.
Regulatory pressure and innovation
That there is a certain level of constraint on innovation – insofar as it benefits the protection of our fundamental rights – is quite understandable. At the same time, it is important not to unnecessarily hamper innovation by taking an (overly) strict view of new developments. Limiting the possibilities for unrestrained data collection through the GDPR seems a necessary measure to protect our fundamental rights. Additionally, having a framework ready for the emergence of (commercially viable) AI applications can hopefully help prevent the misuse of this technology. However, the way applications like ChatGPT will be regulated in the future might still lead to them being unavailable or less accessible in the EU, which would clearly be a negative side-effect of the otherwise positive aims of the AI Act. The exact effects of the regulation and the further development of AI technology remain to be seen. What is clear is that a lot is going to change.
Want to learn more?
With our blogs and other publications, we aim to address relevant legal issues in a pragmatic way. We always do so from a regulatory, intellectual property and contractual perspective. If you want to know more or have (other) questions, please feel free to contact us. For more information about our team and our services, please also visit our website.
Cagla Tagi
Senior Consultant, TechLaw
KPMG Netherlands
Anouk Atema
Manager, TechLaw
KPMG Netherlands
Jasper Oomen
Consultant, TechLaw
KPMG Netherlands