Risks from within
Intellectual property
Among the top risks around the use of generative AI are those to intellectual property. Generative AI technology uses neural networks that can be trained on large existing data sets to create new data or objects such as text, images, audio or video based on patterns it recognizes in the data it has been fed.1 That includes the data entered by its various users, which the tool retains to continually learn and build its knowledge. That data, in turn, could be used to answer a prompt entered by someone else, possibly exposing private or proprietary information to the public. The more businesses use this technology, the more likely it becomes that their information could be accessed by others.
Amazon has already sounded the alarm with its employees, warning them not to share code with ChatGPT.2 A company lawyer specifically stated that their inputs could be used as training data for the bot and that its future output could include or resemble Amazon’s confidential information.3
Also, generative AI content created in response to an organization’s prompts could contain another company’s IP. That could create ambiguity over the authorship and ownership of the generated content, raising possible allegations of plagiarism or the risk of copyright lawsuits.
Organizations will need to figure out how to protect their intellectual property while still gaining the benefits of generative AI. Given organizations’ desire to use AI for competitive advantage and to harvest their existing data, they should specifically evaluate how their data may be used for training and exposed to the public. One solution is data anonymization and de-identification, but that would have to be a service offered by the vendor, or applied before data is sent to the vendor, and governed by contractual agreements. The downside is that businesses could be limited in reaping the benefits of other organizations’ data.
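As a rough illustration of what applying de-identification before data leaves the organization might look like, the sketch below uses simple pattern matching to redact email addresses and account-style identifiers from a prompt before it would be sent to an external generative AI service. The patterns, placeholder tokens and example identifiers are illustrative assumptions only, not any particular vendor’s API or a complete de-identification solution.

import re

# Illustrative patterns only; a real deployment would rely on a vetted
# de-identification service or library and cover far more data types.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[ACCOUNT_ID]": re.compile(r"\bACCT-\d{6,}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace recognizable identifiers with neutral placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com about ACCT-0012345."
sanitized = deidentify(prompt)
# sanitized == "Summarize the complaint from [EMAIL] about [ACCOUNT_ID]."
# Only the sanitized prompt would then be shared with the external vendor.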
Employee misuse
Using generative AI offers businesses great efficiencies but also powerful temptations for misuse by employees. Educators have voiced concern that students could use generative AI to write their essays and other assignments; there have already been cases in which a teacher alleged that a student used ChatGPT to write an essay for a class.4 Employees, too, might be tempted to use generative AI and pass off the result as their own.
A related misuse would be for contract workers to pass off generative AI work as their own and bill the company for hours of work they did not actually perform.
A more serious example of employee misuse is using generative AI to automate legal confirmations or reviews in ways that skirt appropriate ethics and compliance, independence or other programs, which may affect regulatory culpability.