87 deepfake scam rings taken down across Asia in Q1 2025: Bitget report
The rise of AI technology has also fueled a rise in AI-powered fraud. In Q1 2025 alone, 87 deepfake-related scam rings were dismantled. This alarming statistic, revealed in the 2025 Anti-Scam Month research report co-authored by Bitget, SlowMist and Elliptic, underscores the growing danger of AI-driven scams in the crypto space.
The report also reveals a 24% year-over-year increase in global crypto scam losses, which reached a total of $4.6 billion in 2024. Nearly 40% of high-value fraud cases involved deepfake technologies, with scammers using increasingly sophisticated impersonations of public figures, founders and platform executives to mislead users.
Related: How AI and deepfakes fuel new cryptocurrency scams
Gracy Chen, CEO of Bitget, told Cointelegraph: “The speed at which scammers can now generate synthetic videos, coupled with the viral nature of social media, gives deepfakes a unique advantage in both reach and credibility.”
Defending against AI-driven scams goes beyond technology; it requires a fundamental shift in mindset. At a time when synthetic media such as deepfakes can convincingly imitate real people and events, trust must be carefully earned through transparency, constant vigilance and rigorous verification at every stage.
Deepfakes: an insidious threat in modern crypto scams
The report breaks down the anatomy of modern crypto scams, pointing to three dominant categories: AI-generated deepfake impersonations, social engineering schemes and Ponzi-style fraud disguised as DeFi or GameFi projects. Deepfakes are particularly insidious.
AI can simulate text, voice messages, facial expressions and even actions. For example, fake video endorsements of investment platforms by public figures such as the Prime Minister of Singapore and Elon Musk are a tactic used to exploit public trust via Telegram, X and other social media platforms.
AI can even simulate real-time reactions, making these scams increasingly difficult to distinguish from reality. Sandeep Nailwal, co-founder of the blockchain platform Polygon, raised the alarm in a May 13 post on X, revealing that bad actors were impersonating him over Zoom. He mentioned that several people had contacted him on Telegram, asking whether he was on a Zoom call with them and whether he had asked them to install a script.
Related: AI scammers are now impersonating senior US government officials, says the FBI
The CEO of SlowMist also issued a warning about Zoom deepfakes, urging people to pay close attention to the domain names of Zoom links to avoid falling victim to these scams.
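That kind of link hygiene can be partially automated. Below is a minimal, illustrative Python sketch, not taken from the report, that accepts a meeting URL only if it uses HTTPS and actually resolves to Zoom's own zoom.us domain; the allowlisted suffix and the function name looks_like_official_zoom_link are assumptions made for this example.

```python
# Illustrative sketch, not from the report: reject meeting links whose
# hostname is not Zoom's own domain. The allowlist below is an assumption.
from urllib.parse import urlparse

def looks_like_official_zoom_link(url: str) -> bool:
    """Return True only for HTTPS links on zoom.us or a *.zoom.us subdomain."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    return parsed.scheme == "https" and (host == "zoom.us" or host.endswith(".zoom.us"))

# Example: a genuine-looking invite vs. a lookalike domain.
print(looks_like_official_zoom_link("https://us02web.zoom.us/j/1234567890"))     # True
print(looks_like_official_zoom_link("https://zoom-us.meet-invite.example/j/1"))  # False
```

A suffix check like this catches lookalike domains such as zoom-us.example, which is exactly the trick the warning describes.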
New scam threats call for smarter defenses
As AI-powered scams grow more advanced, users and platforms need new strategies to stay safe. Deepfake videos, fake job offers and phishing links make it harder than ever to identify fraud.
For institutions, regular security training and strong technical defenses are essential. Companies are advised to run phishing simulations, protect email systems and monitor for code leaks. Building a security-first culture, where employees verify before they trust, is the best way to stop scams before they start. A simple example of such monitoring is sketched below.
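As one illustration of what monitoring for code leaks can look like in practice, here is a small, hedged Python sketch that flags obvious hard-coded secrets (hex private keys, API keys) in source files before they are pushed; the regex patterns and the scan_file helper are simplified assumptions for this example, not a tool named in the report.

```python
# Illustrative only: a tiny regex-based scan for obvious secrets in source files.
# Real teams typically rely on dedicated secret-scanning tools in CI instead.
import re
import sys
from pathlib import Path

# Assumed example patterns: 32-byte hex private keys and hard-coded API keys.
SECRET_PATTERNS = [
    re.compile(r"\b0x[0-9a-fA-F]{64}\b"),                    # hex private key
    re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*\S+"),  # hard-coded credential
]

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for file_path in sys.argv[1:]:
        for lineno, line in scan_file(Path(file_path)):
            print(f"{file_path}:{lineno}: possible secret -> {line}")
```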
For everyday users, Gracy Chen suggests a simple approach: “Verify, isolate and slow down.” She also said:
“Always verify information through official websites or trusted social media accounts, and never rely on links shared in Telegram chats or Twitter comments.”
She also highlighted the importance of isolating risky activity by using separate wallets when exploring new platforms.
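As a minimal illustration of that wallet-isolation advice, the sketch below generates a fresh, disposable Ethereum account for trying out an unfamiliar platform. It assumes the third-party eth_account Python library; the report does not prescribe any specific tool, and most users would simply create a new account in their wallet app instead.

```python
# Illustrative only: create a throwaway wallet so the main wallet's funds and
# token approvals are never exposed to an untested platform.
# Assumes the third-party eth_account package (pip install eth-account).
from eth_account import Account

def create_burner_wallet():
    """Generate a fresh keypair to use only with the new, unverified platform."""
    acct = Account.create()
    # Store the private key securely (password manager or hardware device),
    # never in plain text alongside application code.
    return acct.address, acct.key.hex()

if __name__ == "__main__":
    address, private_key = create_burner_wallet()
    print("Burner address:", address)
    print("Private key length (chars):", len(private_key))  # avoid printing the key itself
```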
Magazine: Baby boomers worth $79 trillion are finally getting on board with Bitcoin