The US Department of Defense has invested $2.4 million over two years in deepfake detection technology from a startup called Hive AI. It’s the first contract of its kind for the DOD’s Defense Innovation Unit, which accelerates the adoption of new technologies for the US defense sector. Hive AI’s models can detect AI-generated video, image, and audio content.
Although deepfakes have been around for the better part of a decade, generative AI has made them easier to create and more realistic-looking than ever before, which makes them ripe for abuse in disinformation campaigns or fraud. Defending against these sorts of threats is now crucial for national security, says Captain Anthony Bustamante, a project manager and cyberwarfare operator for the Defense Innovation Unit.
“This work represents a significant step forward in strengthening our information advantage as we combat sophisticated disinformation campaigns and synthetic-media threats,” says Bustamante. Hive was chosen out of a pool of 36 companies to test its deepfake detection and attribution technology with the DOD. The contract could enable the department to detect and counter AI deception at scale.
Defending against deepfakes is “existential,” says Kevin Guo, Hive AI’s CEO. “This is the evolution of cyberwarfare.”
Hive’s technology has been trained on a large amount of content, some AI-generated and some not. It picks up on signals and patterns in AI-generated content that are invisible to the human eye but can be detected by an AI model.
“Turns out that every image generated by one of these generators has that sort of pattern in there if you know where to look for it,” says Guo. The Hive team constantly keeps track of new models and updates its technology accordingly.
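The mechanics of such a detector are not disclosed, but the general approach Guo describes, training a classifier on a mix of real and AI-generated content so it learns generator artifacts invisible to the eye, can be sketched in a few lines. The sketch below is illustrative only: the dataset layout, backbone, and hyperparameters are assumptions, not Hive's actual system.

```python
# Minimal sketch of a deepfake-image detector: a binary classifier trained
# on real vs. AI-generated images. Illustrative only; not Hive's model.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Hypothetical dataset layout: data/real/... and data/generated/...
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A standard CNN backbone; subtle generator artifacts (e.g. upsampling
# patterns) are the kind of signal such a model can learn to pick up.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # generated vs. real

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training pass over the data.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Keeping such a model current, as Guo notes Hive does, amounts to continually retraining on samples from each new generator as it appears.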
In a statement, the DOD said the tools and methodologies developed through this initiative could be adapted for broader use, not only addressing defense-specific challenges but also safeguarding civilian institutions against disinformation, fraud, and deception.
Hive’s technology provides state-of-the-art performance in detecting AI-generated content, says Siwei Lyu, a professor of computer science and engineering at the University at Buffalo. He was not involved in Hive’s work but has tested its detection tools.
Ben Zhao, a professor at the University of Chicago who has also independently evaluated Hive AI’s deepfake detection technology, agrees but points out that it is far from foolproof.
“Hive is certainly better than most of the commercial entities and some of the research techniques that we tried, but we also showed that it is not at all hard to circumvent,” Zhao says. The team found that adversaries could tamper with images in a way that bypassed Hive’s detection.
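The article does not specify how Zhao’s team tampered with images, but one well-known evasion technique is an adversarial perturbation in the style of the fast gradient sign method (FGSM): a small, visually imperceptible change that nudges the detector toward the wrong answer. A hedged sketch, assuming a PyTorch classifier like the one above:

```python
# One standard evasion technique (not necessarily the method Zhao's team
# used): an FGSM-style perturbation that pushes a generated image toward
# the detector's "real" class while leaving it visually unchanged.
import torch

def fgsm_evade(detector, image, epsilon=0.01, real_label=1):
    """Perturb `image` (a CxHxW tensor in [0, 1]) so `detector` leans
    toward `real_label`. All names here are illustrative assumptions."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(
        detector(image.unsqueeze(0)),       # add batch dimension
        torch.tensor([real_label]),
    )
    loss.backward()
    # Step *against* the gradient to reduce the evidence of generation.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Defending against this kind of attack typically requires adversarial training or ensembling, which is part of why no off-the-shelf detector is foolproof.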
And given the rapid development of generative AI, it is not yet certain how Hive’s tools will fare in the real-world scenarios the defense sector might face, Lyu adds.
Guo says Hive is making its models available to the DOD so that the department can use the tools offline and on its own devices, which keeps sensitive information from leaking.
But when it comes to protecting national security against sophisticated state actors, off-the-shelf products are not enough, says Zhao: “There’s very little that they can do to make themselves completely robust to unforeseen nation-state-level attacks.”
Source: Technology Review