Deepfakes Emerge as Major Tool for Political Manipulation and Fraud

Severity: High (Score: 72.5)

Sources: identifai.net, Securitybrief, Biometricupdate

Summary

A report by identifAI reveals that deepfakes are increasingly used as tools for political manipulation and financial fraud, with the U.S. accounting for 46.9% of incidents from 2020 to 2026. An analysis of over 10,000 deepfake cases shows that political manipulation constitutes 24.6% of the threat landscape, while financial fraud represents 20.1%. Video deepfakes are the most common format, making up 45.6% of incidents, followed by mixed media and images. The primary distribution platform is X, which accounts for 51.2% of propagation. The report emphasizes the growing sophistication of deepfakes, which are now designed to bypass verification checks and spread rapidly on social media. As geopolitical tensions rise, particularly between Israel and Iran, the use of deepfakes in state-sponsored cyber operations is expected to increase. The findings call for enhanced digital provenance standards and technical controls to combat the misuse of synthetic media.

Key Points

  • The U.S. is the most targeted country for deepfake incidents, accounting for 46.9%.
  • Political manipulation and financial fraud are significant categories, comprising 24.6% and 20.1% respectively.
  • X is the leading platform for deepfake propagation, with 51.2% of incidents distributed through it.

Key Entities

  • Phishing (attack_type)
  • Australia (country)
  • India (country)
  • Iran (country)
  • Israel (country)
  • South Korea (country)
  • E-commerce (industry)
  • Financial Services (industry)
  • Telecommunications (industry)
  • T1566 - Phishing (mitre_attack)
  • Instagram (platform)
  • Telegram (platform)
  • TikTok (platform)
  • WhatsApp (platform)
  • X (company)
  • YouTube (company)