Optimizing AI in State and Local Government | Challenges, security, and governance

Webcast

Webcast overview

Listen as executives from KPMG LLP, Microsoft, and Cranium discuss the utilization of AI in the state and local government sector. This conversation covers a range of current struggles, pain points, and common myths surrounding AI.

During this webinar, our expert panel covers the following key areas:

  1. How agencies can get started and find the assistance they need.
  2. Clarifying and dispelling common myths surrounding AI usage in the SLG sector.
  3. Developing an AI usage strategy: Learn how to secure your organization with the proper technology and a comprehensive strategy for long-term success.

Our panel

  • Moderator: Joe Morris – CIO, e.Republic
  • Panelist: Katie Boswell – Managing Director, KPMG Advisory
  • Panelist: Jonathan Dambrot – CEO, Cranium AI Inc.
  • Panelist: Michael Mattmiller – Senior Director, State Government Affairs at Microsoft Corporation

Transcript

Joe Morris   0:19

Hello, and welcome to today's event, Optimizing AI in the State and Local Government Sector: Challenges, Security, and Governance.
My name is Joe Morris, Chief Innovation Officer with Government Technology and e.Republic, and I'm excited to serve as today's moderator for the event.
I want to say thank you for joining us.
I know we're in for an informative session today.
Let's take a look at our agenda.
We're going to kick things off with the technology outlook, talking about the trends and priorities across state and local government, and then we're going to dig into what we're seeing with AI.
We'll cover some of the challenges and pain points, and demystify some of the myths that we're hearing circulating in the marketplace.
We're going to talk about responsible AI, dig into security and compliance, and talk about some of the guardrails that need to be present today for your success, and then we're going to answer probably the most important question, which is: how the heck do you get started, and how do you leverage AI in a meaningful way in the public sector?
Joining me is an esteemed panel of experts today.
Katie, why don't you take a moment to introduce yourself?


Boswell, Katie  
1:58
Yeah.
Thanks so much, Joe.
Hi everybody, I'm Katie Boswell.
I'm a managing director in KPMG's cybersecurity practice, and I lead AI security, which means helping clients establish and maintain trust in their AI systems.


Joe Morris  
2:13
Jonathan.


Jonathan Dambrot  
2:14
Hey everyone, thanks for having me here today.
Jonathan Dambrot.
I'm the CEO of Cranium.
Cranium is a platform that we built and incubated inside of the KPMG Studio.
I was formerly a partner, and last year we had the grand opportunity to spin it out.
We're also a Microsoft Pegasus partner, and we focus on helping organizations really think about how to get visibility into their AI systems.
How to secure them?
How to look at the exposure of those systems against key vulnerabilities, and then, ultimately, how to drive compliance and third-party risk?


Joe Morris  
2:45
Right.
And Michael.


Michael Mattmiller (CELA)  
2:48
Great.
Great to be with everyone today.
Michael Mattmiller. I lead a team within Microsoft Government Affairs that supports our state and local public sector business.
Prior to joining the company, I served as the CTO for the City of Seattle.
I'm very excited for the conversation today.


Joe Morris  
3:02
Well, thank you for joining us today.
We're going to be digging into the evolving landscape of AI and generative AI across state and local government.
This period promises to be fairly significant, with the advancements that we're already seeing and the transformative impact that AI is having, and is going to continue to have, on how governments operate and serve their constituents.
From what we've seen in our research, the rapid adoption of generative AI is a testament to the transformative potential of this technology.
We're seeing states and local governments rapidly move from experimentation to finding more enterprise use cases for the technology, in such a rapid timeframe.
In fact, our market forecasts show that by 2027, generative AI alone will account for 10% of the IT budget, underscoring its importance to the overall IT and government agenda.
This technology is moving quite rapidly, though, and I think back to my personal experience and the cloud adoption that we saw a decade ago. We went from discussing cloud and its place in government, to saying it doesn't have a place in government, to where we find ourselves today, but that unfolded over a decade.
It feels like this adoption, particularly of AI and generative AI, has been much more compressed, moving much more rapidly from shiny object to implementation.
So I'm super excited to dig into this conversation today and look at how these technologies are going to address some of the challenges that we see in state and local government, from cybersecurity to the workforce to legacy technology modernization.
What is the role that AI can play in solving government's most pressing challenges?
And I think, from our perspective at e.Republic, in the coming years we expect to see governments leveraging AI to create smarter cities and enhance public safety.
But I'm super excited to hear from the experts we've assembled on this panel to dig into the trends you all are seeing.
What impact are you seeing AI have today?
Maybe, Michael, we can kick it off with you: in terms of some of those major technology trends, whether it's workforce or cybersecurity, what are you seeing now, what do you anticipate states and localities are going to deal with over the next five years, and how do you anticipate AI playing a role here?


Michael Mattmiller (CELA)  
5:15
It's a great question, Joe.
And you hit on many of the issues that we see governments facing in your opening remarks. We're in a period of time, coming out of the pandemic, where there is still far more demand for government services and government technology than governments can deliver; they can't hire the people they need to deliver on their mission.
Compared to five years ago, in 2019, state and local governments have about 920,000 fewer employees across the country.
And while many positions are posted, we're also seeing structural vacancy rates increasing.
So while we see fewer people to do government work, the public's expectations continue to grow.
And it's interesting, much as we talked about 10 years ago, when Amazon was able to ship a package overnight or in two days and people asked the question, why can't my government serve me the same way Amazon can?
Why can't I use an app to get my service?
We see those same expectations coming up in our conversations with lawmakers and with executives.
In that light, we also see budgets tightening; pandemic-era funds that have sustained many government programs and innovations over the past few years expire this September.
We also know governments are more concerned about cybersecurity than ever, and especially in this AI conversation, that raises questions about whether we can use AI if we don't know what our data protection story is.
All of that is to say we think AI offers tremendous potential.
If you can't hire enough new people, how do you help support the people you have, and make sure they don't feel burnt out, that they have the support they need to automate some of their work or hand off work to an AI personal assistant like Copilot?
Those are the types of things that we see coming to fruition quickly, opportunities to enhance customer service and ultimately to make sure that we are serving constituents when they need help from their government and to be able to achieve more outcomes more quickly.
So that's what we see in the landscape over the next five years and we're just getting started.


Joe Morris  
7:09
Absolutely.
It feels like if you take a day off from following the latest developments in AI, you're like, what happened?
What did I miss?
How do I keep up?
Katie, I want to dig in.
You've got, you know, your background in security.
One of the things that I often see come up is the fear and anxiety related to AI and security.
So what challenges and opportunities do you see in terms of the public sector's integration of AI?
And then how are you seeing them mitigate some of those challenges today?


Boswell, Katie  
7:41
Yeah.
I think Michael pointed out a lot of great opportunities for state and local government to bring AI in to help them enable their business.
It gives them the opportunity to do things much more efficiently.
I think it can also make up for some of the gaps in head count that we see, which are definitely a struggle.
Everybody is being asked to do more with less, and that's definitely challenging.
I think what's unique for state and local government is that you are not just serving your customers, you're serving your citizens.
And I think that means we have an obligation to think very carefully about the ethical considerations of how we are using AI.
Obviously, we're always thinking about what is the value, right?
What is the cost that this AI is going to have?
What are the risks that it's going to bring with it?
You need to be thinking about things like transparency, security, and privacy, and in order to mitigate those risks, you need to bring together a group of stakeholders who are going to enable your organization.
You have to have understanding at the top about why AI adoption is important to the business.
They have to have an understanding of what the real risks are, and that's going to be very different from organization to organization, even within state and local government.
I work with one organization within state and local government whose use cases are all about enabling nurses, and that's going to be very different from other areas.
So understanding what those unique needs are and what your ecosystem is, getting that full picture, and getting leadership alignment allows you to move forward, align to a particular framework, establish those controls, and get them into place.
And then, when your organization is looking at different business cases and trying to decide, is this something we want to move forward with, are these risks we are willing to undertake, you have a set process and a group of people who are aligned and understand what risk you are willing to undertake.
And I think that's where businesses really can enable themselves to move forward with AI adoption with a truly risk-based mindset.


Joe Morris  
9:49
There are a tremendous number of myths out there around AI.
I think back to the work that we did pre-pandemic on AI, Michael, and, aligned to what you were saying, there was a lot of fear and anxiety that AI was going to come and take away jobs and impact people.
Then we went through the pandemic, came back, and did the study again, and all of a sudden the mood in the room changed, right?
It was, how do I free my people up?
How do I create an environment where they can do more rewarding work?
And, to both of your points, I'm not going to be able to hire my way out of this, so how can I optimize more?


Michael Mattmiller (CELA)  
10:24
Yeah.


Joe Morris  
10:26
But there still are a number of myths circulating out there today, and I imagine, as our elected leaders and appointed leaders are traveling the country, they're probably going into event sessions, and there's probably an AI track or two at the event.
And they're coming back to home base, and maybe they're saying, hey, let's go, or maybe they have some anxiety.
Michael, maybe we could start with you, and then the group can feel free to weigh in here.
How are our elected leaders and appointed leaders feeling about AI?


Michael Mattmiller (CELA)  
10:58
Yeah, many are still in learning mode.
I'm constantly surprised when I go out and meet with our elected officials.
How many have yet to meaningfully use AI technology themselves in their daily work?
At a time when these tools are becoming very quickly available and adding real value to their people, there are some very legitimate questions that I always receive about deepfakes.
How is AI going to affect our democracy?
How is it going to affect elections? What we saw across legislative sessions this year was a number of bills related to deepfakes leading the regulatory space for AI.
And yet we all know that this technology is so much broader.
And what we like to focus on is the workforce piece.
How do we make sure that your government is being efficient and responsive?
And let's look at some really great low risk, high value pilots that can set the example of what good looks like.
And I think examples like New York City debuting their MyCity chatbot have been a really good proof point.
Chatbots are something many of us have seen before generative AI, but now they can do so much more. In this particular use case, entrepreneurs and small business owners who had to interact with six city departments to get answers to basic questions about how to start a business can now use one interface, and the city has been very transparent.
This is a beta; we're learning.
They've had to refine the technology based on feedback and some of the mistakes that it's made, but it's been a journey that has set the example of how this technology can deliver at the low end.
We're also seeing great examples we like to point out to our elected leaders in the benefits space.
Companies like Young Williams have built a chatbot to answer questions about SNAP benefits, and what's interesting is that after several months of a tool like this operating, the chatbot is demonstrating that it can return results faster and be more accessible.
And here's what's interesting.
It can be more accurate than when an individual calls a call center. So while the technology might not be perfect, what we're trying to do is build trust that it can be as accurate as a human, and that there are ways, Katie, as you were talking about, of building control frameworks and other types of risk evaluation scores.
If we educate our people on how to start deploying these technologies in ways where it can add incremental value, that's how we'll get the confidence to continue to do more.
And so my call to action for anyone watching today is: if you do work with elected or appointed officials, show them how you're using the technology.
Put it in front of them and let them try it out, so that they understand firsthand what AI is and what it's not in the context of government service.


Joe Morris  
13:31
That's great, Katie.
I imagine you're out there hearing similar things and working with, you know, similar appointed officials in the public sector.
There can also be this myth that it's easy, right?
These tools exist today; you can just go on to the tool of your choice and ask it questions, and people can bring their tools of choice to the workplace.
Absent a policy, perhaps, what are some of the challenges that exist, or even the opportunities, to further integrate this into your public sector operations? And if you're sitting and listening to today's session, what are some of the things that you might want to mitigate on the challenge front or on the governance of AI?


Boswell, Katie  
14:13
Yeah, I'm going to build off of what Michael was saying, which is that it's all about the workforce.
I recently saw a study conducted by Microsoft and LinkedIn showing that 75% of knowledge workers are using AI, but 78% of them are doing it without their employers being aware of it.
So what I take away from that is: even if you think that you don't have AI-related risk yet, because you have not been an early adopter or have not yet brought AI into your organization, you do have AI risk there.
So I think really one of the best things that you can do is to educate your workforce about the dangers and challenges of using AI.


Joe Morris  
14:51
Umm.


Boswell, Katie  
15:00
What happens when you take some code and put it into an LLM prompt?
What does that mean?
Having them have an understanding is very important.
You know what, sorry, were you going to say something, Joe?
OK, so in addition to educating the workforce: I'm a security professional, so I think a lot about the controls that we put in place, and the tooling is evolving rapidly to help enable organizations and bring visibility.
So once you have an educated workforce, and once you understand the risk of the AI that you have within your organization, then you can work on getting visibility into that risk, testing those AI systems, and making sure they're going to perform the way that you're expecting them to.
And that's critical.


Joe Morris  
15:46
You know, our survey findings in the public sector show exactly that: whether you've got the policies, whether you've got the guardrails, whether you think that you've made that effort, these tools exist in your environment today, whether or not they're approved, whether or not they're the ones that you want your employees to use. People are bringing their own AI with them.
And Michael, I don't know, from what you've been seeing, are you seeing similar things in the government space in terms of some of the additional security risks or challenges?


Michael Mattmiller (CELA)  
16:16
Yeah, I mean, Katie, I think you hit the nail on the head with our latest Work Trend Index survey: people are using these tools at work, whether it's their personal ChatGPT or free versions of Copilot or Gemini, whatever might be out there.
And it's because people find value in generative AI to get their work done a little faster, or to have time to focus on more valuable work.
And so how governments begin to plan for and capitalize on that interest I think is really important.
The other trend that I see from governments right now is some really interesting work around getting people excited for generative AI.
I think the work that Governor Doug Burgum in North Dakota has done, where he actually challenged his senior leaders, the manager level and up in government, to learn about AI and form cohorts, and then held a pitch competition where each of these cohorts of government managers could say, here's a use case we think is really great for AI, and they picked winners, really champions how their employees are thinking proactively about better serving their public.
We've also seen great examples across the 13 governors who have issued AI executive orders saying that our government will benefit from this technology.
We will do so responsibly and we'll do so inclusively, and I think Governor Newsom down in California really led the charge when he put out an executive order that said my departments will develop use cases that we will use to seed potential pilots and other interest.
We've seen other executive orders from leaders like Governor Youngkin in Virginia and Governor Stitt in Oklahoma that talked about sandboxes: making sure we are creating responsible environments where employees or staff can test AI and know, Katie, to your point, that they are following the security rules and won't break out of jail, as it were.
So those are some of the exciting innovations we're seeing in the government space right now.


Joe Morris  
18:03
Great.
Jonathan, you've had the luxury of being patient, listening to the conversation so far.
With the plethora of tools that are available to your government worker, available to that person working in the education space, how do you, from your area of expertise and what you do at Cranium, look to help them secure it?
Either individually or holistically.

Jonathan Dambrot  
18:28
I think what Michael said is right on. I mean, we've been working with a couple of those states that were just mentioned, as well as in New Jersey with Governor Murphy and their executive order there, New York, California.
And you know, when you look at it, it's almost a competition, right, Michael?
They all want to own and drive AI into the state, because it really is going to be a competitive differentiator for each state to really start to carve out their niche in AI.


Michael Mattmiller (CELA)  
18:42
Yeah.


Jonathan Dambrot  
18:53
And I think when we look at how AI actually starts to come in, you're seeing a tremendous number of RFIs, right?
People want to understand, from a state level, what the technologies are, what those use cases are, and how they are going to apply.
And I think there's such an interest from those states, and the agencies within them, to really drive this appropriately.
I think there's a willingness to listen to the use cases, and an understanding that they need to think about the data and the security aspects of this for the citizens who are going to be served by it, but they want to go faster than they normally do.
So you're seeing a lot of frameworks starting to get developed.
California, I think, has done a great job of building a generative AI framework and a risk framework that enables the agencies to understand how to bring generative AI systems and use cases through to get that visibility.
New Jersey opened up an AI hub at Princeton to really start to think about and look at the key risks and challenges we need to examine across areas like financial services and life sciences, for use cases that are really important to the state.
And so I think really understanding what's important to the state as you're going through that process, and building RFIs that make sure you align to the executive order there, as well as, in a lot of cases, to the presidential executive order and the NIST AI Risk Management Framework, have been paramount to going at a pace that most states and local governments are not used to, with technologies that are starting to get embedded into every application.
So I think this is a super exciting time, and I think we're going to see tremendous challenges that we've never been able to solve before get addressed.
But I think people are a little nervous still.
How do we really do this securely and safely, and meet all of the legal obligations? None of those have gone away, right?
So how do we deal with bias?
How do we deal with the legal implications?
How do we help support small business and minority business?
And then how do we not create an energy crisis?
I think these things are all on the top of people's minds and it's all coming out at once.


Joe Morris  
20:57
It's very interesting, and I'll stick with that for a second, because we'll come back to security and compliance in a moment.
But you brought up the competition and the speed, and I think that's an important thing.
What's very different from cloud, mobility, and some of the other equally transformative technologies we've seen in the public sector is that this has gone, especially on the generative AI front, from not being on anyone's radar at all, not that long ago, to now: hurry up and let's go.
Why haven't we moved?
Do you liken that to Michael's comments earlier, that changing of expectations coming out of the pandemic and the move to contactless government, where people found, hey, government can move quickly now, so can they sustain it?
Do you liken it to the workforce challenges, now at a level where the only way forward may be through these technologies?
What do you attribute that hurry-up-and-move mentality to?


Jonathan Dambrot  
21:58
You know, I think it's the promise, right?
Obviously, people talk about this as a hype cycle.
I've never seen something go this fast before, even in a hype environment, where you're starting to embed technologies throughout the ecosystem at the pace we're going.
So I think there is definitely a look toward the promise of solving these really gnarly problems that have been created over decades, with capabilities that we've never seen.
It feels like magic, in a way, when you look at these use cases and people show you how they've been able to solve these problems.
So on the back of that, I think you'll see these smaller projects start up, and they'll solve specific problems. I've seen roadway projects, which are really interesting, using computer vision on the roadway to see how to solve pothole issues, things that are costing states money, that they can do pretty easily today and go attack.
And so we think about that, and then you start looking at the byproducts: OK, we can solve that, but what's the next problem that creates downstream, and how do we think about that?
You know, we look at it also from a privacy perspective.
So if I have a medical use case, a med tech use case, where we're taking patient information and citizen information for these states and putting them through these generative models, how do we build things like retrieval-augmented generation capabilities and infrastructure to support the use of that data with these tremendously large language models, like Azure OpenAI or others, that we can start to leverage?
But how do we do that in a way so that we are not actually training those models with our citizens' data, and how do we make sure that's set up correctly so we can continue to get massive leverage there?
So I think it's the promise that's creating it, but it's also that now, when people are testing, they're actually seeing the benefit, and now it's, how do we go faster?
How do we get the return on investment?
How do we go do that?
And it just creates these downstream questions.
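[Editor's note: the retrieval-augmented generation pattern Jonathan describes can be sketched roughly as follows. This is a toy illustration, not any specific product API: the keyword-overlap retriever and the `answer` stub stand in for a real embedding store and hosted LLM call. The point is that sensitive records are supplied as per-query context, never as training data.]

```python
# Minimal RAG sketch: records stay in a local store and are retrieved per
# query; the language model is never trained or fine-tuned on them.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query (a stand-in
    for vector similarity search in a real system)."""
    scored = sorted(documents,
                    key=lambda d: len(tokenize(d) & tokenize(query)),
                    reverse=True)
    return scored[:k]

def answer(query, documents):
    context = " ".join(retrieve(query, documents))
    # A real system would send `context` plus `query` to a hosted LLM here;
    # the records travel as ephemeral prompt context only.
    return f"Answer based on: {context}"

records = [
    "SNAP benefit renewals are due within 30 days of notice.",
    "Business licenses require approval from six city departments.",
    "Pothole repairs are prioritized by roadway traffic volume.",
]

print(answer("When do SNAP renewals have to be filed?", records))
```

Because the model only ever sees retrieved snippets at inference time, updating or deleting a citizen record immediately changes what the system can say, which is much harder to guarantee once data has been baked into model weights.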


Michael Mattmiller (CELA)  
23:59
And Jonathan, if I could just add on: I think yes to everything you said, and part of what is so transformative is that the intractable problems that have existed in government service delivery, or society at large, that can now be tackled with generative AI are just eye-opening. Sticking with medical for a second.


Jonathan Dambrot  
24:13
Yeah.


Michael Mattmiller (CELA)  
24:15
One of the challenges we have in this country is a nursing shortage.
By some measures, I think we're 200,000 nurses short in this country, and you wouldn't necessarily say AI is going to fix that.
It's a high touch industry.
You need to have that professional personal encounter with the nurse practitioner.
And yet, if you think about where an NP spends their time: for every 30-minute encounter with a patient, they spend 15 minutes doing charting and notes.
That's a 50% overhead on the value and what they actually enjoy doing.
Using a technology like Microsoft Nuance DAX, which can ambiently listen in on your encounter and essentially scribe it, can also generate the first draft of the encounter notes, so those 15 minutes that were spent typing, remembering what happened, and looking back at some tests are now drafted and ready to go.
So all the nurse has to do is review the notes, make some edits, and hit submit, which saves 10 minutes per 30-minute encounter.
If you think about eight encounters a day, you're almost creating room for three more patient visits, and now you're saving real time, and that 200,000-nurse shortage, by mental math, goes down to, what, like 160,000 or 170,000, just by using AI.
So it's not going to solve every problem, but that way of thinking differently changes the problem space, changes what we can now solve together in partnership with government.


Joe Morris  
25:32
That's great.
Yeah, you're right.
It is that promise, right?
You dip your toes in with a pilot, and all of a sudden you're like, oh my gosh, it's doing things I never thought were possible; how do we scale?
And I think that's a good transition for us: if I'm sitting and listening to today's session, it's, all right, where do I start?
How do I get started?
What are the right steps that I can take?
The wonderful thing about the public sector is you can steal the good ideas, right?
So whether it's California's policy or what a city or county is doing, you can lift that, you can borrow things from it, you can be inspired by it.
But Katie, there are some things that maybe you can't just lift and borrow, at least not as easily, and some of those are the things Jonathan just talked about: security and compliance, and the real nitty-gritty around making sure the right guardrails are in place.
So how, and where, should people get started, based on their existing environment today?


Boswell, Katie  
26:33
Yeah, that's a great question, Joe.
I think first, organizations need to take stock of where AI might already be used within their organization, so don't assume that somebody is not already using AI to do interesting things.
Understand how your workforce, and somebody mentioned this earlier, is interested in using AI, taking stock of ideas they have that you can help make possible.
Jonathan mentioned the executive orders and regulation: make sure you understand what it is you need to be compliant with.
Most organizations that I work with have an understanding of the different environments or restrictions they might have.
Do you need to be strictly creating things that are on premises?
Are you only going to be looking at investing in AI through third parties?
Third parties are definitely an area of additional risk that you need to understand, so understand where your third parties are using AI.
So really taking stock of the AI, the environments, the ecosystems, and the regulations will help you get started and show where you can move forward without introducing large amounts of risk.


Joe Morris  
27:51
That's great.
Jonathan, you're out there; we just talked, getting ready for this session, about where you've been, who you're working with, and how you're working with them.
Maybe you can take those experiences from the places and organizations you're working with and share with us a little detail on what good looks like. What does a proper implementation include to make sure that you're checking all the right boxes?


Jonathan Dambrot  
28:15
Yeah.
And I'll start with the people piece of this first, because I think it's really important we talk about AI and the technology, but I I do think you know on top of what Katie said and and a really good understanding comes with a structure for governing that.
And I think you're starting to see AI governance really becoming part of that conversation where you have an AI governance leader.
That may be the Chief AI Officer, who's going to be part of that discussion now around the executive orders.
How do we think about having somebody who's responsible, and then bringing the other stakeholders around the table, whether that be privacy, legal, the AI teams, the people who are responsible for policy? How do we bring them together?
The goal is to have a coherent conversation around the responsibility that each agency, or each state, has.
And then how do we start to put things in place to automate?
So we're talking about exponential technologies.
In many cases I've seen states with hundreds of use cases trying to push that through their pipelines.
They are ill prepared to do that at scale, so they need to figure out how to put those pieces in place. Once we have an understanding of the policy, have written our own requirements, understand the use cases, and understand the people who are now responsible internally, how do we scale this up so that we can very quickly guide the key stakeholders who want to use these technologies against those use cases and enable them to go faster?
And until we start seeing all of those pieces together, I think it's going to be a bit of a slog, but there's gonna be a lot of learning.
And then once you start to see the coordination of those pieces, you'll start to see these flow through, and just the amazing opportunities that they bring.


Joe Morris  
29:49
That's tremendous.
You look at what we've got across our organization: a tremendous amount of editorial every single day coming across the radar on a city launching a new AI chatbot, or a government organization testing the latest and greatest technology.
You know, Michael, you had many of the examples throughout your comments, but things around program integrity, fraud, waste and abuse, obviously a lot of focus on the constituent engagement space.
Our latest survey findings, though, particularly around AI, which we just completed in May, show some very interesting things.
Our audience, again, the community that's gonna be consuming this webinar, right, the CIO community, the IT implementer community, generally unanimously agrees that AI is a critical priority for their organization.
Today, they're in favor of implementing it.
But where we see though is a lot of work still to be done, a lot of work to create the policies needed, a lot of work around the ethics and security.
What I do find very interesting: the same survey results show that, like cybersecurity, AI is a top potential investment area.
There are concerns around security, but they're also saying, hey, a major pain point for us is that we're not gonna be able to hire at the rate we need to, and maybe there's real potential there.
So you see that, not only are we still very much in the pilot phase, we're still very much in the policy creation phase.
What advice would you have for those that fall in that bucket?
Don't hold me to the exact percentage, but I want to say it was nearly 50% of our survey audience (we'll make sure to link it for you) that are in that bucket today of drafting a policy or thinking about drafting a policy.
Any resources for them, places to go, whether it's another state, another city, another county, or some specific things that, hey, you've gotta make sure you're thinking about today? Because two years from now, it's gonna be very different.


Michael Mattmiller (CELA)  
31:54
Yeah.
I mean, the first thing is: well done to the governments that realize it is important to have governance and to have a policy.
And I think the important thing, top-lined, is to balance the perfect against the good. As Katie pointed out with those stats earlier, 75% of knowledge workers are already using generative AI at work, and 78% of them are bringing their own tools, which is what's putting government at risk.
So if government doesn't put options in place for responsible use of tools, it's not that generative AI won't get used in government.
It will get used, but in less controllable, less monitored ways.
So I would say take a look at the state of California.
Jonathan, you're right.
What they have done with the AI toolkit there has been really solid and I think is a great model that governments can scale down to fit their needs.
Also out of California, what Mayor Mahan and the city of San Jose have done with their generative AI coalition of government has produced some tremendous model policies, model use cases, other things to consider.
So I just encourage governments to start there and scale to their needs. Then, if they are developing something authentic to themselves, take a principle-based approach.
Don't try to control for a specific technology.
Say: here's what's important to us in our government as we use this technology, around fairness, transparency, accountability, and accessibility, those foundational principles that will continue to be applicable as the technology evolves.
The next most important thing is to focus on data.
With cybersecurity being top of mind, we know that governments likely have a data classification policy designed around their particular regulatory needs.
Rather than saying generative AI is incapable of meeting our data protection and classification needs, look specifically for technologies that can comply.
Companies like Microsoft have a tremendous history serving government customers, have obtained FedRAMP certifications, and have already attested to implementation of the NIST AI Risk Management Framework.
Those are the companies you should be able to ask: can you meet these data protection requirements? Not sharing our data, not using our data to train your models, making sure there is accountability so that we can comply with public disclosure requests.
Those are the questions that should be embedded within a policy to help it move forward more quickly.
And many of my colleagues here at Microsoft, as well as on the call, are always happy to jump in and be resources to governments as they're thinking through this governance process.


Joe Morris  
34:27
That's great.
Yeah, one of the other pieces in the same survey that stuck out to me: the biggest challenge our audience reported wasn't data security.
The primary challenge wasn't technology modernization or not having the foundation; those things are still present. The top challenge was really AI literacy, an understanding of it, trying to clear up some of those myths. Any advice?


Michael Mattmiller (CELA)  
34:51
Yeah.


Joe Morris  
34:57
You know, Katie, Jonathan, Michael, let's go across the group: for people who find themselves in that space, what resources can they turn to to increase their understanding, and maybe that helps alleviate some of the fear and anxiety around taking that first step?


Michael Mattmiller (CELA)  
35:15
I'll just add on real quick: Microsoft, through LinkedIn, makes available a free generative AI learning path at opportunity.linkedin.com, a five-hour foundational course that comes with a micro-credential on completion and helps anyone understand the potential benefits, how to write a good prompt, as well as how to mitigate potential risks of using AI technology.


Jonathan Dambrot  
35:15
So.


Michael Mattmiller (CELA)  
35:38
And I believe we also have about 200 other courses that governments can assign to their employees to get to the more specific sectoral or outcome based uses of AI.
So that's a great training resource that we hope folks are leveraging.


Jonathan Dambrot  
35:51
Joe, we have an AI security fundamentals course that we're happy to make available here as well, to get some of those fundamentals down, if anybody coming through the webinar wants to see it.
I do think getting literate is actually one of the major requirements for the states that we've worked with; they're really concerned that there's going to be a group of people displaced by this technology.
Really driving literacy around AI is one of those things that I think is in all of the executive orders and requirements for states to really think about as they use this.
So I think using the resources Michael just mentioned, but also trying to figure out how AI is going to be used for your specific use case and thinking about your own personal use no matter where you are in the government.
If you're listening to this, think about: OK, how am I going to use AI effectively so that I'm not negatively impacted, but instead use it in an impactful way?
I think that's just as important as learning the fundamentals and then figuring out how to apply them, you know, securely.
So I look at that and I go: I think there's a tremendous opportunity.
I think the people who look at lifelong learning as an opportunity will love this AI experience.
Those that generally don't take that opportunity will get displaced.
So I think people need to be aware of that.


Boswell, Katie  
37:08
Maybe if I can add on to that, there are definitely some nuances around securing AI.
I've heard a number of people kind of allude to the idea that securing AI is no different from application security, and I think we might get there one day.
But right now there are some really unique differences.
So educate yourself, whether it's through peer-sharing forums or published guidance. Jonathan and I both contributed to the Global Resilience Federation's white papers on securing AI, for which there is both a leadership guide and a practitioner's guide.
So there are publicly available documents out there that can really help, especially cyber leaders and privacy leaders, to get a good understanding of what those differences are, so that you can uplift your current governance and security controls to meet the different needs of AI.


Joe Morris  
37:58
I'm going.


Jonathan Dambrot  
37:59
The last thing I'll say here, Joe, is what's interesting to me.


Boswell, Katie  
37:59
And I'm happy to link to those as well.


Jonathan Dambrot  
38:03
When I look at state leadership, it is sort of a competitive environment, but people are so excited to help each other in the ecosystem, and I think there are massive opportunities for that, whether it be through conferences or otherwise.


Boswell, Katie  
38:09
OK.


Jonathan Dambrot  
38:13
I've seen just great conversations happening, so we'd love to link up with anyone who wants to do that, with Microsoft and KPMG, and think about how we can help support and serve that across all of these great states.


Joe Morris  
38:25
Well, that's a great point.
I mean, our survey findings even show that the first place a lot of governments are turning to is their industry partners, and they're saying, hey, help us see around that corner.
What are you doing within your organizations?
What are you seeing?
What's the latest research saying?
My God, Michael, I know you and Microsoft have been doing a ton of research out there.
So what do some of the surveys you've conducted indicate around public sector readiness, and how does that inform them?


Michael Mattmiller (CELA)  
38:53
Yeah, I would encourage anyone who is looking for data on how AI is being used in the workplace to go to aka.ms/worklab.
That's the part of Microsoft where we take a look at how trends in business and government and elsewhere are shaping productivity in the workplace.
And I know we've talked a little bit about the study we just put out that showed 75 to 76% of people are already using generative AI in the workplace.
I'll add one interesting note to that: folks may assume it is the Gen Zers or the millennials at work who are using these generative AI tools, and our data shows that is not true.
It is quite consistent across generations.
Even baby boomers, in the high 60s percent, are using generative AI tools at work.
The other types of data that we've been seeing is how generative AI tools are adding business impact.
75% of employees say they would not give up the generative AI tools that their employer has been providing.
Tools like Copilot are returning up to an hour a day, because emails can be glanced over and summarized without someone having to go through their inbox, and the first draft of a memo can be written based on the types of documents that already exist within an organization.
But some of the most impactful stats that I've seen so far on how AI is affecting the workplace pertain to a product we released over a year ago now called GitHub Copilot.
For those who may not be familiar with GitHub, it is the online code repository service and integrated development environment that many coders use. With GitHub Copilot, based on simple instructions about what a developer wants to achieve, Copilot can draft the code necessary for that use case.
Since it's been released, we have learned that coders using GitHub Copilot now use AI to generate 48% of their code, and that may sound like a lot.
But if you think about what coding is, there's a lot of manual braces and writing regular expressions, really tedious, repetitive work that AI is very good at doing. Because 48% of code is now being generated by Copilot, these coders can work 55% faster to produce that first working version of a tool for feedback from their peers, from their internal customers, from the public.
But here's what's really standing out to me.
The developers using this functionality report being 75% more satisfied in their jobs because they're not having to do that mundane, repetitive work.
They can now focus on higher-value strategic thinking and solving more problems, instead of thinking about how to debug their code.
And that's just the starting point.
So we think that with experiences like that, there are ways for governments to benefit from this technology, make their workforce happier and more satisfied with their tools, and serve the public, and we're just on the front end of more experiences.
And as we've all been talking about, that education piece, bringing employees with us, and building trust through responsible use and governance will be the key factors that ensure this technology continues to benefit government.
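As a concrete picture of the "manual braces and regular expressions" drudgery described above, here is a minimal hypothetical sketch in Python; the prompt comment, pattern, and `is_valid_phone` helper are illustrative assumptions, not actual Copilot output.

```python
import re

# Hypothetical sketch: the kind of tedious, boilerplate-heavy pattern
# work an assistant like GitHub Copilot can draft from a short comment
# such as:
#   "validate a US phone number like (206) 555-0100 or 206-555-0100"
PHONE_RE = re.compile(r"^\(?\d{3}\)?[\s.-]?\d{3}[-.\s]?\d{4}$")

def is_valid_phone(number: str) -> bool:
    """Return True if `number` looks like a 10-digit US phone number."""
    return bool(PHONE_RE.match(number.strip()))
```

The point is not this particular pattern, but that this class of repetitive validation code is exactly the work the panelists say assistants draft well from a one-line description, freeing developers for higher-value tasks.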


Joe Morris  
42:01
Those are insightful findings, for sure.
Now, this is probably gonna be the hardest question I'm gonna hit you with.
So you're all gonna need to take out your crystal ball, and we're gonna do a little bit of prediction and forward-looking perspective here. Based on where you all find yourselves today, and the survey findings that you've all heard about or conducted, what do you think is on the horizon?
Where are we going with AI utilization in the public sector? I'm not going to give you a five-year horizon; I think things are changing far too quickly for us to maybe even look that far ahead sometimes.
But let's just say, over the next 18 to 24 months, where do you think the public sector is gonna be?


Boswell, Katie  
42:41
Well, maybe I'll start with.


Joe Morris  
42:41
Who wants to go first?


Boswell, Katie  
42:42
I think there will be a lot more incoming regulation and executive orders.
Again, Jonathan mentioned those earlier. They're going to, I think, drive a lot of accountability around AI adoption, but also remove a lot of the myths that you talked about earlier, because as we see more of that come forward and there's alignment as to how organizations can adopt AI successfully, they will be empowered to do so.
And that's my big prediction.


Jonathan Dambrot  
43:16
So it's kind of like, I heard this joke: you either believe AI is going to solve cancer in three years, right, or it's gonna kill us all in two years.
So depending on where you are on that spectrum, I think of myself as a utopian here.
I really believe strongly that in the next 18 months, all of these use cases and these investments, and you're seeing massive investment here, right, are going to start to yield really, really huge opportunities to solve real problems for the citizens of these states.
I just think, you know, it's probably gonna take a little more time, just given the nature of where we are, to get those at scale, but I think you're gonna start to see those come into reality.
Michael talked about a lot of those on a personal basis, how we're actually seeing them happen today, whether you're a developer or somebody who just wants to get your mailbox cleaned up; I think those are immediate opportunities.
And I think the very large opportunities, in medicine and in solving things in states that have just been really gnarly, are probably a little further out, Joe.
But I think we're going to start to see all of those investments take form, and every state is making major investments, right, whether it be off the balance sheet or fundamentally building opportunities to bring AI professionals into these states, to really bring their talent and the right opportunities. That competition will be fierce.
But I think you'll start to see those benefits in that time frame.
And I think it's going to be really exciting.
So I'm a utopian.
I don't know, Michael probably is too.
But you know, I think if we do this the right way, the opportunities are limitless.


Michael Mattmiller (CELA)  
44:54
Completely agree with both Katie and Jonathan.
And this is why I think it's so important that we have such a strong and committed technology leadership ecosystem in state and local government, because realizing those benefits is going to be very dependent on building trust in the technology, both with the public and with our elected leaders who, let's be honest, want to see some form of regulation over this technology.
And I think we saw more than 700 bills related to AI in state legislatures this year.
So the more we can do to take those first steps to do so responsibly, to demonstrate the success this technology can have while focusing on how we mitigate potential risks, will allow us to achieve all of the outcomes that were just described.
And so very excited for the path we're on.
And for KPMG, Microsoft, and all of us in the ecosystem to be resources that help governments use this technology right.


Joe Morris  
45:43
What a wonderful end to our conversation today. I think all three of you wouldn't be here if you weren't optimists, right?
You're engaging every day to help make government a little bit better, and in a very important conversation.
And I think this conversation is just the beginning. Michael, to your point, we're in the first inning of what I think will be a very exciting turn of events across state and local government.
I do wanna be respectful of our time commitment today, so we're gonna wrap it up here.
I'd like to thank our three expert speakers for joining me in this very exciting conversation today and sharing their insights.
If you want to connect with them, or with us, our LinkedIn profiles are linked below, along with our corresponding websites.
In addition, we wouldn't be here today if it weren't for KPMG, Microsoft, and Cranium making this important conversation a reality.
So I want to thank the three organizations for bringing this conversation to light.
So with that said, thank you for joining us on this conversation.


Jonathan Dambrot  
46:52
Thank you very much.


Michael Mattmiller (CELA)  
46:53
Thank you.


Boswell, Katie  
46:54
Thank you.

Meet our webcast team

Jim Booth
Advisory Managing Director, Platforms, KPMG LLP
