The report Generative AI models — the risks and potential rewards in business examines what the future holds for ChatGPT and other generative artificial intelligence (AI) applications, including how they work and the risks and potential benefits.
We believe that generative AI models have the potential to transform businesses by automating and executing certain tasks with unprecedented speed and efficiency. This is particularly true when human expertise and ingenuity are paired with a deep understanding of how to use these programs and harness their capabilities effectively.
However, it will take time and human expertise to unlock their full potential in a way that’s responsible, trustworthy and safe. If you’re considering using generative AI applications, it’s important to establish a set of internal processes and controls for everyone in your organisation to follow.
With generative AI use expected to grow rapidly this decade, there’s no time like the present to start these conversations and put processes in place. Read the full report to discover potential use cases and opportunities, as well as what to consider if you’re thinking of using generative AI applications in your organisation.
10 things to know about generative AI
The most common generative AI solutions can roughly be divided into five categories: content generators, information extractors, smart chatbots, language translators and code generators.
Generative AI models can summarise articles, draft emails and produce images and videos. Trained by humans, some generative AI models have the conversational skills to, for example, answer follow-up questions, admit mistakes, challenge incorrect assumptions and filter or reject inappropriate requests.
ChatGPT is a chatbot trained on human instructions. Its initial underlying large language model, GPT-3.5, had 175 billion parameters and was trained on more than a million datasets, or some 500 billion tokens (words or word fragments). GPT-3.5 was not connected to the Internet and was trained on data up to September 2021. GPT-4, OpenAI’s new large multimodal model, evolved from its earlier large language model.
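To make the idea of tokens concrete, the toy splitter below chops text into words and word fragments. It is only an illustrative sketch: real large language models use learned subword vocabularies (such as byte-pair encoding), not fixed-size chunks, and the function name and fragment length here are arbitrary choices for the example.

```python
# Toy illustration of "tokens (words or word fragments)".
# Real models use a learned subword vocabulary (e.g. byte-pair
# encoding); this naive fixed-width splitter only sketches the idea.

def toy_tokenize(text: str, max_len: int = 4) -> list[str]:
    """Split text into words, then chop long words into fragments."""
    tokens = []
    for word in text.split():
        # Short words become a single token; longer words are cut
        # into fixed-size pieces to mimic subword units.
        for i in range(0, len(word), max_len):
            tokens.append(word[i:i + max_len])
    return tokens

print(toy_tokenize("Generative models learn from tokenized text"))
# A 6-word sentence becomes 12 token fragments, which is why token
# counts in model specifications exceed raw word counts.
```

A sentence of six words yields twelve fragments here, hinting at why training corpora are measured in hundreds of billions of tokens rather than words.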
Generative AI models have uses across various business functions, from IT, human resources and operations to finance, audit, legal and marketing. Suitable applications include drafting proposals, developing and testing code, and extracting and summarising complex information.
Generative AI applications learn and build knowledge from the data users feed into them. Unless you explicitly restrict the application provider from doing so, that data may then be used to answer a prompt from someone else, possibly exposing an organisation’s proprietary information to the public. Depending on the application, you may also be signing over your copyright. The application’s terms and conditions should give you an idea of what happens to user-inputted data.
Depending on what you use generative AI for and how you implement it, your activities could expose intellectual property or trade secrets and open your organisation to fraud risk. It’s important to be vigilant and make sure your organisation isn’t using AI in a way that contravenes applicable laws (including privacy laws), client agreements or professional standards.
Copying AI-produced information or code into any deliverable or product may constitute copyright or other intellectual property infringement. This could potentially cause your organisation legal and reputational harm.
We expect both open-source and boutique versions of generative AI to continue to be integrated into many common applications, systems and processes, ranging from internet browsers to AI-connected technology that organisations license, such as cloud-based software and instant messaging programs.
Creating safe usage guidelines within your organisation is key to helping ensure proper and effective use of generative AI applications. Your organisation should also upskill its people, as the human in the loop brings unique insights and understanding that generative AI alone can’t replicate.
KPMG takes a responsible approach to designing, building and deploying AI systems that are safe, trustworthy and ethical. This approach helps companies accelerate value for consumers, organisations and society.