Rijul Gupta, founder and CEO of Deep Media, said the disruption of truth due to deepfakes and other unethical uses of generative artificial intelligence presents a systemic risk to the U.S. military and other government agencies.
In an article published on Carahsoft.com, Gupta wrote that the ease of use and accessibility of generative AI tools have transformed the nature of disinformation.
The Deep Media chief executive cited how the company’s AI models help detect deepfakes and other media manipulations.
“Such technology will never be 100% accurate because that’s not how it works, but we regularly achieve more than 95% accuracy on identifying the use of generative AI in images, audio and videos. That alone is a force multiplier for analysts,” he noted.
Gupta also discussed the company’s partnerships with academia and government agencies, including the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology, to promote the ethical use of AI.
“Ensuring the ethical use of AI is a complex challenge that can’t be resolved by one organization, so we’re doing our best to build a community to address it,” he added.
He also mentioned the company’s collaboration with various partners to integrate its technology into open source intelligence platforms and advance the use of AI to analyze images, videos and audio in support of analysts and other users.