With generative AI evolving faster than the controls designed to manage it, organizations need to clearly define how they’re using it and how they’re readying their workforce for it.

Organizations are starting to experiment with generative artificial intelligence, which can quickly produce content, including text, images, and video, with limited human intervention. It shows promise in summarizing meetings, drafting emails, and creating code. Yet by feeding intellectual property into these tools, organizations could inadvertently expose confidential information and open themselves up to the risk of fraud or theft.

Creating proprietary generative AI tools takes time, money, and resources. As a result, most organizations rely instead on third-party solutions from providers such as OpenAI (ChatGPT) and Stability AI. But understanding the risks involved with generative AI is critical to guarding against its misuse, and to finding the areas where it can lead to greater productivity and innovation.

In time, we may reach the ‘singularity,’ the point at which AI becomes smarter than humans. While it’s difficult to plan for that scenario, business leaders would do well to adopt mid- to long-term strategies that account for how generative AI will change their organizations. Short-term strategies won’t work because they will quickly become outdated.

Given these potential risks, some organizations are banning certain AI tools or quickly educating their workforce on “best practices.” Many in the industry are also calling for a pause on further development so that we can collectively reckon with the implications of generative AI as it goes mainstream at unprecedented speed. While banning generative AI tools might serve as a temporary measure while the technology evolves, a ban isn’t a permanent solution: employees will always find ways around it. Here are some key considerations for adopting a longer-term strategy and readying your workforce for generative AI.

Develop a policy—even if you’re not yet using generative AI

A well-defined policy with specific use cases and examples is a must. Organizations should understand how employees are using third-party generative AI solutions, and then create appropriate policies and procedures to protect sensitive data. At the same time, they need to build trust that employees will use the technology wisely.

Educate your workforce

As more people become comfortable experimenting with AI tools, it's important that the euphoria and excitement do not lead to “high-risk” actions such as accidentally leaking sensitive information. Policy can impose consequences after the fact; education is a proactive mechanism that helps organizations start on the right foot.

Understand the impact on regulatory compliance

Leaders need to fully understand the impact of generative AI on data privacy, consumer protection, copyright, and other compliance or legal requirements. That means training AI models on legally obtained data and doing so in compliance with laws such as the EU's General Data Protection Regulation (GDPR). Even if users are only working with internal data, you don’t want them to inadvertently expose private or proprietary information to the public, or to your competitors.
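
To make that last point concrete, here is a minimal, illustrative sketch of redacting obvious personal data from a prompt before it leaves the organization for a third-party generative AI service. The patterns and sample text are hypothetical and deliberately simplistic; a production approach would use a vetted PII-detection tool and legal review.

```python
import re

# Illustrative (not exhaustive) patterns for common personal data fields.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable personal data with placeholders before the text
    is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this email from jane.doe@example.com, phone 555-123-4567."
    print(redact(prompt))
    # -> Summarize this email from [REDACTED EMAIL], phone [REDACTED PHONE].
```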

Protect against security and privacy risks

An AI engine is constantly learning, so there’s a danger it could ingest confidential IP and make it available to other parties. At Samsung, for example, developers using ChatGPT inadvertently uploaded proprietary source code to OpenAI’s servers. It is therefore important to safeguard the data used to train AI models by implementing security protocols, such as access controls, encryption, and secure storage. Policies should also define which datasets can be fed into an AI engine, to ensure they don’t violate any privacy or IP laws.
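
As one illustration of those safeguards, the sketch below encrypts a training dataset at rest before it is staged for an AI engine. The file names are hypothetical and the key handling is deliberately simplified; a real deployment would rely on a managed key service plus access controls on who can decrypt.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_dataset(src: Path, dst: Path, key: bytes) -> None:
    """Write an encrypted copy of `src` to `dst` so the raw data never sits
    unprotected in shared storage."""
    dst.write_bytes(Fernet(key).encrypt(src.read_bytes()))

def decrypt_dataset(src: Path, key: bytes) -> bytes:
    """Return the decrypted contents, e.g., inside an approved training job."""
    return Fernet(key).decrypt(src.read_bytes())

if __name__ == "__main__":
    key = Fernet.generate_key()          # in practice, fetched from a key vault
    raw = Path("training_data.csv")      # hypothetical file name
    raw.write_text("customer_id,notes\n42,example row\n")
    encrypt_dataset(raw, Path("training_data.enc"), key)
    print(decrypt_dataset(Path("training_data.enc"), key).decode())
```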

Test for bias and inaccuracies

As AI engines ingest data and ‘learn,’ they can inadvertently introduce bias into the process. Identify which applications are least vulnerable to bias (such as a chatbot that routes calls) and start your generative AI journey there. Designated team members should be responsible for evaluating the output to help control for bias. This can involve analyzing the training data for potential sources of bias, testing the system on diverse populations and use cases to ensure it performs accurately and fairly for all groups, and reviewing the system’s design to identify and address any remaining problems. Ultimately, testing for bias is an essential step in ensuring that AI systems are fair, equitable, and work for everyone.
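
For illustration only, the following sketch shows one very simple check of that kind: comparing the rate of positive model outcomes across groups (a demographic-parity style measure) on synthetic data. A genuine bias audit would go much further, but a check like this can flag gaps worth investigating.

```python
from collections import defaultdict

def selection_rates(records):
    """Rate of positive model outcomes per group.
    `records` is a list of (group_label, model_decision) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Difference between the highest and lowest group selection rates
    (one simple demographic-parity style check, not a complete audit)."""
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Synthetic, illustrative outcomes only.
    outcomes = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    rates = selection_rates(outcomes)
    print(rates, "gap:", round(parity_gap(rates), 2))  # flag gaps above an agreed threshold
```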

Upskill your workforce

Organizations also need to consider how they plan to upskill their workforce for a future enabled by generative AI. For example, virtual training environments that simulate real-world scenarios will allow learners to practice and apply their skills in a safe and controlled setting. Will an AI-generated, adaptive micro-credential be more valuable than a university degree? If so, how do you adapt to that mindset?

What’s next?

It’s possible to foster a culture of experimentation while keeping your business objectives top of mind through protected sandboxes, which use an isolated environment for testing. While sandboxing is not a new concept, it still requires a careful approach in terms of who has access and which datasets the AI engine can draw upon. However, it allows users to start training AI engines with datasets that can be bounded, managed, and controlled.
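
As a minimal sketch of that “bounded, managed, and controlled” idea, the snippet below only lets sandbox experiments load datasets from an approved allowlist. The dataset names and directory are hypothetical; real controls would sit in infrastructure (network isolation, IAM policies) rather than application code alone.

```python
from pathlib import Path

# Hypothetical allowlist of datasets cleared for sandbox experimentation.
APPROVED_DATASETS = {
    "anonymized_support_tickets.csv",
    "public_product_docs.txt",
}

def load_for_sandbox(name: str, sandbox_dir: Path = Path("sandbox_data")) -> str:
    """Return dataset contents only if the dataset has been approved,
    so the AI engine never sees data that hasn't been cleared."""
    if name not in APPROVED_DATASETS:
        raise PermissionError(f"{name} is not approved for sandbox experiments")
    return (sandbox_dir / name).read_text()
```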

Understanding the risks and ensuring protections are in place can help leaders focus on the potential benefits of generative AI, such as process improvements or enhanced customer experiences. The sweet spot for generative AI in its current iteration will be finding business opportunities with limited ethical or regulatory consequences, such as helping chatbots better route customer calls.

AI opens the door to many opportunities, and organizations must carefully balance the pace of adoption with enterprise readiness. By setting realistic expectations, leaders can ready their workforce for generative AI and reap the benefits while mitigating risk. Find out how KPMG can help.

Our eight core principles guide our approach to responsible AI across the AI/ML lifecycle:

  1. Fairness: Ensure models are free from bias and deliver equitable outcomes.
  2. Explainability: Ensure AI can be understood, documented and open for review.
  3. Accountability: Ensure mechanisms are in place to drive responsibility across the lifecycle.
  4. Security: Safeguard against unauthorized access, corruption or attacks.
  5. Privacy: Ensure compliance with data privacy regulations and appropriate use of consumer data.
  6. Safety: Ensure AI does not negatively impact humans, property, or the environment.
  7. Data integrity: Ensure data quality, governance and enrichment steps embed trust.
  8. Reliability: Ensure AI systems perform at the desired level of precision and consistency.

Connect with us
