Reality Defender
Stopping deepfakes before they become a problem
When deepfake photos, voices, or videos succeed at twisting reality, the deception can disrupt targets as large as countries and as small as individuals. In 2021 a deepfake voice was used to dupe the branch manager of a Japanese company into transferring $35 million to fraudsters. In 2022 a Canadian family gave $9,800 to a phone scammer who used their son’s cloned voice to convince them he needed bail money. And now that even smartphone software can generate deepfakes, the amount of intentionally misleading online content is truly exploding — the World Economic Forum estimates that the number of online deepfake videos is growing by 900 percent per year.
Of course, many deepfakes fail to deceive anyone. But even so, the rise of synthetic media — text, images, video, or audio generated by AI — makes it harder to distinguish reality from disinformation and undermines public trust in government, media, and other institutions. In Gabon in 2019, widespread suspicion that a video address recorded by the country’s president was a deepfake sparked a coup attempt, even though the video was authentic. (People think they can spot deepfakes on their own, but the evidence shows they can’t.)
Some observers, such as Tufts cognitive scientist Daniel Dennett, have proposed counteracting future waves of AI-generated “counterfeit people” by mandating a global high-tech watermarking system in which AI-generated content would carry indelible signals, similar to the EURion constellation pattern used to prevent counterfeiting of banknotes. This may be a good and useful idea, but criminals and scammers won’t watermark what they spread. Media companies, financial institutions, governments, and other organizations know they need to expand their concept of cybersecurity to include deepfake detection. Research firm HSRC says spending on deepfake detection systems is growing at a compound annual growth rate of 42 percent, from a 2020 base of $3.86 billion.
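To make that growth rate concrete, here is a back-of-the-envelope sketch of what constant 42 percent compounding from the $3.86 billion 2020 base implies; the yearly figures below are our illustration, not HSRC’s own projections.

```python
# Illustrative projection only: assumes HSRC's 42% CAGR compounds
# unchanged from the $3.86B 2020 base, which real markets rarely do.
BASE_YEAR = 2020
BASE_SPEND_BILLIONS = 3.86
CAGR = 0.42

def projected_spend(year: int) -> float:
    """Deepfake-detection spending in $B under constant compounding."""
    return BASE_SPEND_BILLIONS * (1 + CAGR) ** (year - BASE_YEAR)

for year in range(2020, 2027):
    print(f"{year}: ${projected_spend(year):.1f}B")
# At this rate, spending roughly doubles every two years,
# passing $30B by 2026.
```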
The world needs stronger protection from the tsunami of AI-generated fraud, immediately. So we are delighted to announce that we’ve led a $15 million Series A funding round for Reality Defender. The company has the best technology we’ve seen for screening digital content and identifying images, video, audio, or text created by generative AI systems such as diffusion models, large language models, and generative adversarial networks. See TechCrunch’s coverage of the news.
Reality Defender stands out for its ability to use an ensemble of the latest AI detection models to quickly judge whether a video, image, voice recording, or text passage is likely to be fake. And Ben Colman, the company’s co-founder and CEO, has not only built the industry-leading multimodal platform for deepfake detection but is also at the forefront of explaining the deepfake problem to the business community and government, and of improving literacy about potential solutions.
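Reality Defender hasn’t published the details of its ensemble, so as a rough illustration of the general technique, here is a minimal sketch of how ensemble deepfake scoring can work: several independent detectors each return a probability that the same piece of media is fake, and those scores are combined into a single verdict. The detector names, scores, and threshold below are hypothetical, not the company’s implementation.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

# Hypothetical sketch of ensemble scoring; Reality Defender's actual
# models and aggregation logic are not public.

@dataclass
class Detector:
    name: str
    score: Callable[[bytes], float]  # returns P(fake) in [0, 1]

def ensemble_verdict(media: bytes, detectors: list[Detector],
                     threshold: float = 0.5) -> tuple[float, str]:
    """Average each detector's fake-probability and apply a threshold."""
    p_fake = mean(d.score(media) for d in detectors)
    return p_fake, ("likely fake" if p_fake >= threshold else "likely real")

# Stand-in scorers; real detectors would be trained models, e.g. one
# tuned to GAN artifacts and another to diffusion-model artifacts.
detectors = [
    Detector("gan_artifacts", lambda m: 0.82),
    Detector("diffusion_artifacts", lambda m: 0.74),
    Detector("face_blend_boundaries", lambda m: 0.61),
]
print(ensemble_verdict(b"...image bytes...", detectors))  # ~0.72, 'likely fake'
```

One appeal of averaging several specialized detectors is robustness: a new generator that evades one artifact signature is less likely to evade all of them at once.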
Our own tests show that Reality Defender’s algorithms spot synthetic media with high accuracy from very little input: just a few characters of text or a few frames of video. Customers have tested it, too, including some very large organizations in finance, media, and government. They’ve used it to detect and respond to deepfake content in the form of customer phone calls, social media images, and political propaganda and misinformation.
DCVC has long invested in companies working on the creation side of generative AI, and we believe the newest generative AI models have the potential to make us vastly more productive in the workplace, the classroom, and many other environments. Our investment in Reality Defender is a counterweight to that, representing the safety side of our thesis about AI. Deepfakes are an alarming new problem for the world, and Ben and Reality Defender are making sure organizations both understand it and have the tools to solve it.
Ali Tamaseb is a General Partner at DCVC.