How government agencies can protect themselves from generative AI risk
There’s no denying that generative AI solutions such as ChatGPT and DALL·E have grabbed the attention of many. Their allure can be difficult to resist, but they’re not without their risks—especially for government agencies.
Many organizations and individuals have already started using generative AI to create content for websites, social media posts, research papers, cover letters, and emails, as well as to summarize text and generate software source code. But without the proper safeguards and governance structures in place, agencies can open themselves up to embarrassment, manipulation—or worse.
In the article "Addressing the siren call of generative AI," we look at those risks and what government agencies can do to help address them.