Cyber Threats | 4 min read | February 2, 2026 | Brandon Thomas, Managing Partner

Insider Threat: Google Engineer Convicted of AI Secrets Theft

A former Google engineer was convicted of stealing thousands of confidential AI files to support an AI startup in China — part of a broader pattern of insider-driven economic espionage targeting U.S. technology firms.


A former Google software engineer, Linwei Ding, was convicted by a 12-person federal jury in San Francisco of stealing thousands of confidential AI files to support the creation of an AI startup in China, according to an article in the New York Times. The stolen material related to Google’s AI supercomputer infrastructure — among the most strategically sensitive technologies in the industry.

Prosecutors described the case as part of a broader pattern in which China uses insiders at U.S. technology firms to obtain sensitive intellectual property, a pattern also reflected in cases the U.S. attorney’s office has brought against former Apple developers.

What Are the Key Risk Themes?

  • Insider Threat: Trusted employees with deep system access can exfiltrate high-value IP over time without immediate detection
  • Geopolitical Risk: The case sits squarely within escalating U.S.–China technology competition, especially around AI compute and infrastructure
  • Regulatory & Legal Exposure: U.S. authorities are aggressively prosecuting economic espionage tied to strategic technologies
  • Data Exfiltration Methods: Use of legitimate cloud tools (personal cloud accounts) to move sensitive data highlights detection gaps
  • Talent Programs as Risk Vectors: Foreign government “talent plans” can act as incentives for IP transfer

What Does This Mean Depending on Your Role?

If You Build, Use, or Invest in AI

  • AI IP is now a national-security-level asset. Scrutiny is intensifying, not easing.
  • Founders and investors face heightened diligence. The origin of technology, training data, and architectures will be questioned more aggressively.
  • Cross-border collaboration risk is rising. Even legitimate international teams may trigger reviews.

If You Run or Advise a Company

  • Insider risk must be addressed now. Engineers and others with legitimate access can quietly move massive value.
  • Cloud does not equal safe by default. Personal accounts and sanctioned tools can still be used for theft.
  • Compliance expectations are rising. Prosecutors and shareholders expect proactive safeguards, not reactive apologies.

If You’re an Employee or Technical Leader

  • Personal liability is real. This case shows that “I helped build it” is not a defense.
  • Exit behavior is scrutinized. Sudden departures, travel patterns, and unusual data access are all signals investigators and security teams use to detect insider threats.

What Should You Do Next?

Immediate Actions (0-30 days)

  • Audit access controls for AI-related systems and IP
  • Lock down cloud exfiltration paths (personal cloud accounts, USB, email forwarding)
  • Review employee onboarding and offboarding security checks
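
The exfiltration-path lockdown above usually starts with visibility: knowing when large volumes of data leave for personal cloud services. The sketch below is a minimal illustration of that idea, not a production control; the log format, the domain list, and the 50 MB threshold are all assumptions you would replace with your own proxy or CASB data.

```python
# Minimal sketch: flag large uploads to personal cloud-storage domains
# in proxy-style egress logs. The log format, domain list, and size
# threshold are illustrative assumptions, not a vendor's schema.

PERSONAL_CLOUD_DOMAINS = {
    "drive.google.com", "dropbox.com", "onedrive.live.com", "mega.nz",
}

UPLOAD_METHODS = {"POST", "PUT"}
SIZE_THRESHOLD_BYTES = 50 * 1024 * 1024  # 50 MB per request (tunable)

def flag_uploads(log_lines):
    """Yield (user, domain, bytes_sent) for suspicious upload events.

    Each log line is assumed to look like: 'user method domain bytes_sent',
    e.g. 'jdoe POST dropbox.com 73400320'.
    """
    for line in log_lines:
        try:
            user, method, domain, size = line.split()
            size = int(size)
        except ValueError:
            continue  # skip malformed lines rather than crash the scan
        base = ".".join(domain.split(".")[-2:])  # crude registrable-domain guess
        if (method in UPLOAD_METHODS
                and (domain in PERSONAL_CLOUD_DOMAINS
                     or base in PERSONAL_CLOUD_DOMAINS)
                and size >= SIZE_THRESHOLD_BYTES):
            yield (user, domain, size)

if __name__ == "__main__":
    sample = [
        "jdoe POST dropbox.com 73400320",    # large upload: flagged
        "asmith GET drive.google.com 1024",  # download, ignored
        "jdoe PUT mega.nz 104857600",        # large upload: flagged
        "bwu POST intranet.corp 999999999",  # corporate host, ignored
    ]
    for user, domain, size in flag_uploads(sample):
        print(f"ALERT: {user} uploaded {size} bytes to {domain}")
```

A real deployment would also aggregate per-user volume over time, since the Ding indictment describes data moved gradually rather than in one burst; per-request thresholds alone miss slow exfiltration.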

Near-Term Actions (30-90 days)

  • Formalize insider-threat programs with clear governance and reporting
  • Update training for employees handling sensitive IP
  • Conduct investor and partner diligence on technology provenance

Strategic Actions (90+ days)

  • Treat AI IP like crown-jewel infrastructure with dedicated protection
  • Scenario-plan for geopolitical escalation affecting talent and IP
  • Align legal, security, and engineering leadership on insider risk posture

Key Takeaways

  • A Google engineer was convicted of stealing AI supercomputer secrets for a China-based startup — using legitimate cloud tools to exfiltrate data undetected
  • This is part of a pattern — U.S. authorities are prosecuting multiple cases of insider-driven IP theft tied to Chinese government talent programs
  • AI intellectual property is now treated as a national security asset with intensifying scrutiny on origins and access
  • Personal cloud accounts and sanctioned tools are primary exfiltration vectors that most organizations aren’t monitoring
  • Every organization building or using AI should formalize insider-threat programs and audit access controls immediately

Begin a Confidential Conversation