Resemble AI has secured 13 million US dollars in a new strategic investment round, strengthening its position in the fast-growing field of AI-driven deepfake detection and bringing the company’s total venture funding to 25 million US dollars. The round drew backing from Berkeley CalFund, Berkeley Frontier Fund, Comcast Ventures, Craft Ventures, Gentree, Google’s AI Futures Fund, IAG Capital Partners, and other technology-focused investors. The raise reflects a broader industry push to address the rising risks of synthetic media and AI-generated impersonation.
The timing of this funding is closely tied to increasing pressure on organisations to verify the authenticity of digital content. Advances in generative AI have significantly lowered the barrier for producing realistic fake audio, video, images, and text. As a result, fraud driven by deepfakes and AI-assisted scams has escalated rapidly, contributing to losses exceeding 1.56 billion US dollars in 2025. Industry analysts warn that if current trends continue, generative AI could enable tens of billions of dollars in fraud losses in the coming years, pushing enterprises to rethink how they protect trust and identity in digital interactions.
Recent real-world incidents underline how sophisticated these threats have become. In Singapore, multiple victims collectively lost a large sum after scammers impersonated a telecommunications provider and financial authorities. The attackers combined caller ID spoofing, voice deepfakes, and social engineering tactics designed to create urgency and exploit public trust in well-known institutions. Such cases demonstrate how deepfakes are no longer experimental risks but practical tools used in coordinated fraud campaigns.
Resemble AI’s approach to deepfake detection
Resemble AI focuses on real-time verification technologies that help enterprises identify AI-generated content across multiple formats. Its solutions are designed to detect synthetic audio, video, images, and text as they appear, enabling faster response and mitigation. The company plans to use the new funding to expand global access to its platform and further develop its detection capabilities for large-scale enterprise use.
Two recent product releases illustrate this strategy. DETECT-3B Omni is a deepfake detection model built for enterprise environments that the company reports achieves high accuracy across more than 38 languages, a multilingual capability that matters for global organisations operating across diverse regions. Alongside it, Resemble Intelligence provides explainability for multimodal and AI-generated content, helping security and compliance teams understand why specific content is flagged. The platform uses large language and multimodal models to make detection decisions transparent, which is increasingly important in regulated industries.
According to the company, these tools are already in use in sectors such as entertainment, telecommunications, and government. Public benchmark results indicate strong performance in both image and speech deepfake detection, with lower average error rates than many competing models. This performance has helped position Resemble AI as a practical option for organisations seeking to integrate verification directly into their digital workflows.
Industry perspective and enterprise demand
Industry stakeholders see the rapid improvement of generative AI as forcing a fundamental shift in how enterprises approach content trust and identity systems. Representatives from investment partners such as Google’s AI Futures Fund, Sony Ventures, and Okta have pointed out that organisations are increasingly adopting verification layers to protect authentication processes and maintain confidence in digital communications. Rather than relying solely on traditional security controls, enterprises are looking for AI-native solutions that can adapt as synthetic media techniques evolve.
How deepfake risks may shape enterprise planning in 2025
Resemble AI’s outlook suggests several trends that could influence how organisations prepare for deepfake-related risks throughout 2025. One expected shift is the move toward making deepfake verification standard for official communications. As incidents involving impersonation of public officials and executives continue to surface, real-time verification for video conferencing and recorded messages may become a baseline requirement, particularly in the public sector.
Another key factor is organisational readiness. With AI regulations emerging across multiple regions, companies that invest early in governance frameworks, staff training, and compliance processes are likely to be better positioned to respond to both regulatory and operational challenges. Identity is also becoming a central focus in AI security strategies, as many AI-enabled attacks rely on impersonation. This is driving interest in identity-centric security models, including zero-trust approaches that apply to both human users and AI agents.
There is also growing attention on the impact of deepfakes on cyber insurance. As corporate incidents increase, insurers may reassess coverage terms, potentially raising premiums or limiting policies for organisations that lack adequate detection and verification tools. This adds another financial incentive for enterprises to proactively address deepfake risks.
A growing focus on trust in the AI era
The latest investment in Resemble AI highlights how generative AI is reshaping enterprise risk landscapes. Across industries, organisations are evaluating how content verification, identity protection, and incident readiness can be integrated into broader security and compliance strategies. As AI-generated media becomes more convincing and widespread, solutions that help distinguish real from synthetic content are moving from optional safeguards to essential infrastructure for maintaining trust in the digital economy.