Cyber Threats · 3 min read · January 27, 2026 · HD Intelligence Desk

Urgent Security Warning: AI Deepfake Phishing Campaign Targeting Bitcoin Holders

A sophisticated, active social-engineering campaign is targeting Bitcoin users through AI-generated deepfake video calls, designed to bypass even security-aware individuals by impersonating trusted contacts in real time.

Tags: deepfake phishing, Bitcoin, social engineering, AI security

[Image: a shadowy figure overlaid with digital hack symbols]

The attack has been publicly confirmed by Martin Kuchař, co-founder of BTC Prague, and Ed Juline, Bitcoin treasury strategist, after a near-compromise involving a highly convincing impersonation attempt. See the original report on X.

How Does the Attack Work?

According to the report, attackers combine trusted relationships, real-time video deepfakes, and urgency to gain full system access:

  1. Victims receive a Telegram message initiating contact
  2. A follow-up Zoom or Microsoft Teams call is initiated
  3. The caller appears on video as a trusted, known individual using AI deepfake technology
  4. Attackers claim audio issues and instruct the victim to install a “plugin” or “update”
  5. The plugin provides full system access, enabling account compromise
  6. Compromised accounts are then weaponized to propagate the attack further

Ed Juline narrowly avoided compromise after receiving what appeared to be a legitimate call from Martin Kuchař. Despite prior awareness of deepfake threats, the realism of the video and the contextual trust nearly led to a full system compromise. The attack was stopped only after an urgent warning to physically unplug the computer.

Why Is This Different From Other Phishing Attacks?

This is not a theoretical threat. These attacks are:

  • Live — happening right now
  • Targeted — aimed at specific high-value individuals
  • Professionally executed — using cutting-edge AI video synthesis
  • Optimized — focused on high-value Bitcoin holders, executives, and public figures

Even experienced operators are vulnerable when:

  • Familiar faces appear on live video
  • Requests seem routine and time-sensitive
  • Trust is exploited instead of software vulnerabilities

Traditional security awareness is no longer sufficient when identity itself can be convincingly forged in real time.

What Should You Do Immediately?

  • Do not accept Zoom or Teams calls initiated via Telegram
  • Treat all Telegram messages as untrusted, even from known contacts
  • Never install plugins, updates, or audio/video fixes during a call
  • Establish out-of-band verification (phone call, Signal message, known safe channel)
  • Prefer safer meeting platforms and pre-scheduled calls
  • Use dedicated devices for wallet operations where possible
  • Ensure endpoint protection and OS-level security monitoring are active

This attack vector will only accelerate as AI tooling improves and spreads.

Key Takeaways

  • AI deepfake video calls are actively being used to impersonate trusted contacts in real time
  • Even security-savvy individuals can be deceived when visual identity is convincingly forged
  • Out-of-band verification is now essential — never trust a video call alone to confirm identity
  • Never install software at someone’s request during a call, regardless of who appears to be asking
  • Bitcoin holders, executives, and public figures are the primary targets of this campaign
