Senators propose new DOD-led prize competition for tech to detect and watermark generative AI
Spotlighting concerns about potential threats posed by artificial intelligence, lawmakers want the Defense Department to create and run a new prize competition to assess — and potentially deploy — technological applications, tools and models that can detect and watermark any use of generative AI.
A provision included in the Senate Armed Services Committee’s approved language for the fiscal 2024 National Defense Authorization Act would mandate the new competition, to be steered by the undersecretary of defense for research and engineering through 2025.
“The term ‘detection’ means a technology that can positively identify the presence of generative artificial intelligence in digital content” and the “term ‘watermarking’ means embedding a piece of data onto detected artificial intelligence generated digital content, conveying attribution to the source generation,” the bill text states.
It also notes that identified technologies could support each military department and the combatant commands — and possibly be transitioned into operational prototypes.
Gaining popularity and interest over the last nine months, generative AI underpins the making of large language models that can generate realistic and high-quality images and videos, sophisticated software code, entirely new datasets and more. The models continue to get more “intelligent” as humans train and use them.
In a report accompanying their NDAA bill, SASC lawmakers acknowledged the “tremendous” possibilities AI offers for breakthroughs that could transform healthcare, education, cybersecurity, defense and scientific research.
“However, the committee is concerned about present and unaddressed challenges to, and from, generative AI, including deepfakes, misinformation, malicious code, and harmful or biased content. These areas must be addressed as generative AI continues to advance and be used in a militarized fashion,” the report noted.
One of their top areas of apprehension involves the potential outputs of, and the lack of transparency around, this technology’s existing and future capabilities.
“The committee received testimony stating the risks that generative AI presents, including the application of some large models to develop very capable cyber weapons, very capable biological weapons, and disinformation campaigns at scale. Being able to quickly identify and label AI generated content will be critical in enabling real-time accountability, attribution, and public trust in government and Department of Defense systems,” per the report.
The prize competition lawmakers envision could “provide benefits far beyond the specific technologies delivered, and also provide an opportunity to leverage the widest network of innovation providers possible to unearth new, innovative, or less-well-known techniques to address a less well-understood challenge,” they wrote.
According to SASC’s NDAA proposal, the secretary of defense would need to brief congressional committees on the department’s framework for implementing the competition within 120 days of the bill’s enactment, and the initial event would be hosted within 270 days of passage.
Each year by Oct. 1 — until the project’s termination in 2025 — DOD would also need to supply appropriate defense committees with a full report on the results of the competition.
Sen. Angus King, I-Maine, is a key proponent of this provision and of a new DOD-led competition to better detect generative AI.
As “a member of the Senate’s informal AI Task Force, and with his experience and expertise as Co-Chair of the Solarium Commission, he saw this as a useful way to have the government’s capabilities be as current and innovative as the private sector’s thinking,” an aide for King told DefenseScoop this week.
Beyond the SASC, others in the government are pushing for solutions as well. The Biden administration’s top cyber advisor recently urged industry leaders in closed-door meetings to consider watermarking to help combat risks of AI-generated disinformation.
A reconciled version of the NDAA must be passed by the Senate and House and signed by the president before becoming law.