- Sponsored
Bringing bold, responsible and secure generative AI to the mission
Generative AI is a new approach to artificial intelligence that has the potential to create more efficient ways for government organizations to engage with citizens, deliver relevant information and provide more timely services.
“The primary reason generative AI is so exciting is because it’s driving so many new opportunities, really focused on operational efficiencies, cost savings and value creation,” shared Raj Rana, specialist and customer engineering leader for Google Public Sector, in a recent panel discussion produced by Scoop News Group and underwritten by Google for Government.
And earlier this year, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework, the first-ever AI framework to advise the government on private sector delivery of AI to mission operations.
“What the NIST AI risk management framework does is allow us all to address risk in a similar taxonomy—in a similar manner—and really, because risk management is a shared community ecosystem, it’s necessary to address the risks in that context as well,” added Addie Cooke, global AI public policy lead at Google Cloud in the panel.
The value this common framework adds, explained Cooke, is “ensuring that the public understands how you’re using the technology and that you’re able to clearly communicate the value that AI is bringing to you as a receiver of public services.”
Rana noted that at a recent Washington, D.C., event, he met with several agency CIOs who shared a wide range of responses to generative AI capabilities: some are allowing employees to actively build proof-of-concept or pilot programs, while others have paused activities entirely.
“What was the common theme across all of [their responses] is a recognition that…we can’t stop this trend, we have to enable it. But there’s really a critical importance around doing it in a way that is responsible and that protects the agency’s data,” said Rana.
A key concern is ensuring security and compliance support for the infrastructure underpinning AI capabilities and the data it accesses.
Google has been a leader in AI for years, said Rana, and the company takes the lessons it learns and implements them into the services it develops.
“[Google’s] AI framework takes six core elements, including expanding strong security foundations, extending detection and response to bring AI into the threat universe, automating defenses to keep pace with existing and new threats, harmonizing platform-level controls, adapting controls to adjust mitigations and then contextualizing AI risks around the business processes. It’s a very comprehensive approach that aligns very closely with the NIST framework.”
For agency leaders just getting started with a generative AI project, working with a partner whose capabilities align with the NIST framework will be a key factor in the project’s success.
“The NIST AI Risk Management Framework is user agnostic. And I say that because it’s applicable to the private sector, it’s applicable to any level of public sector, and that includes state and local. I think that’s important because, again, one of the things that we really strive for is interoperability. We want our products to work as well in Texas and California as they do in D.C.”
Watch the full discussion to learn more about bringing bold, responsible and secure generative AI to government missions and hear more from our government leaders on Accelerating the Mission with Artificial Intelligence.
This video panel discussion was produced by Scoop News Group for DefenseScoop, and underwritten by Google for Government.