
A sophisticated AI-powered phishing campaign targeting senior US government officials exposes how rapidly deceptive messaging can spread through trusted networks, threatening national security while the FBI scrambles to contain the ongoing attack.
Story Highlights
- FBI confirms malicious actors have been impersonating senior federal and state officials with AI voice cloning and text messages since April 2025
- Attackers exploit trusted government contacts to deliver malicious links, enabling chain attacks across official networks
- DOD has restricted unapproved commercial AI tools since 2023 amid data-spillage concerns, highlighting vulnerabilities Biden-era policies failed to address
- White House AI National Security Memorandum mandates testing for impersonation risks, but the active campaign remains unresolved as of May 2025
AI Impersonation Campaign Targets Government Insiders
The FBI’s Internet Crime Complaint Center issued a public service announcement in May 2025 warning that malicious actors launched a smishing and vishing campaign in April 2025, impersonating senior US officials through text messages and AI-generated voice calls. These attackers contact government officials and their associates, building rapport through fake conversations before delivering malicious links disguised as secure platforms. Once victims click these links, their accounts become compromised, allowing attackers to access trusted contact lists and expand their reach across government networks. This represents a dangerous evolution from traditional phishing, exploiting AI technology to create highly believable impersonations that erode trust in digital communications.
Fake DOD memo about ‘compromised’ apps shows swift spread of deceptive messaging https://t.co/NvXprcbKFd via @DefenseOne
— Ghost Dansing 🐀☠️ 👻👽🐸 ghostdansing.bsky.social (@ghostdansing) March 2, 2026
Pentagon Scrambles to Prevent Data Leaks Through Commercial AI
The Department of Defense has maintained strict policies against unapproved commercial AI use since September 2023, when the Navy Chief Information Officer issued warnings about data retention and hacking vulnerabilities in platforms like ChatGPT. Lt. Col. Thomas Hong from Army JAG emphasized the urgent need for DOD-specific AI policies to prevent sensitive information spillage into commercial systems that could be exploited by hostile actors. The Pentagon is developing its own internal generative AI models to counter these threats while waiting for comprehensive official guidance. This cautious approach stands in stark contrast to the previous administration’s lax oversight of emerging technologies, which left critical national security gaps that adversaries now exploit with alarming efficiency.
Swift Spread Exploits Trusted Networks
The campaign’s effectiveness stems from its ability to weaponize trust within government circles. After compromising an official’s account, attackers gain access to authentic contact lists containing colleagues, subordinates, and associates who recognize the compromised official’s name or phone number. Spoofed caller IDs and AI-generated voices mimicking real officials make detection extremely difficult, even for security-conscious targets. The FBI acknowledges that current AI detection capabilities struggle to identify these sophisticated fakes in real-time. This chain-attack methodology allows rapid spread across interconnected government networks, creating cascading security failures that threaten classified information and national security operations. The economic costs of potential data theft compound with social impacts as deepfake technology undermines confidence in legitimate communications.
National Security Framework Falls Behind Evolving Threats
The White House issued an AI National Security Memorandum in October 2024 requiring agencies to test AI systems for misuse risks including impersonation and cyber operations, with specific mandates for the Department of Energy on nuclear risks and the NSA on cybersecurity threats. However, this reactive policy framework emerged only after adversaries had already weaponized AI capabilities against government targets. The FBI’s May 2025 guidance urges officials to verify unexpected contacts through known numbers and to enable multi-factor authentication, yet the campaign continues unabated with no reported takedown. These defensive measures place the burden on individual officials rather than addressing systemic vulnerabilities. This pattern reflects failures from previous years when emerging technologies outpaced government preparedness, leaving patriots who serve our nation exposed to sophisticated attacks while bureaucrats draft memos instead of implementing robust protections.
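For readers unfamiliar with how the multi-factor authentication the FBI recommends actually resists impersonation, the most common second factor is a time-based one-time password (TOTP): both parties share a secret, and a code derived from that secret and the current time proves possession of the secret, something a voice clone or spoofed caller ID cannot fake. The following is a minimal illustrative sketch of the standard TOTP algorithm (RFC 6238) using only the Python standard library; the function name and the secret shown are examples, not part of any agency system, and a real deployment would use a vetted authenticator app rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 -- shared secret, base32-encoded (as in authenticator-app QR codes)
    at         -- Unix timestamp to compute the code for (defaults to now)
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Sanity check against the published RFC 6238 test vector:
# secret "12345678901234567890" (base32 below), time 59, 8 digits -> "94287082".
demo_secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(demo_secret, at=59, digits=8))
```

Because the code changes every 30 seconds and depends on a secret never sent over the channel, an attacker who has cloned an official’s voice or spoofed their number still cannot produce a valid code, which is why the FBI pairs MFA advice with out-of-band verification through known numbers.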
The ongoing nature of this threat underscores broader concerns about government overreliance on commercial technology platforms without adequate security vetting. As Trump administration officials work to secure federal systems, the lessons from this campaign highlight why strong cybersecurity measures and American-controlled AI development matter for protecting constitutional government functions. The FBI continues accepting victim reports through IC3.gov while agencies coordinate responses, but citizens deserve leadership that anticipates threats rather than reacting after adversaries strike trusted officials who safeguard our freedoms and national interests.
Sources:
Lawfare – White House Releases Memo on AI and National Security
Inside Government Contracts – White House Issues New Cybersecurity Executive Order
Covington & Burling LLP – White House Issues National Security Memorandum on Artificial Intelligence
Department of the Navy Chief Information Officer – Generative AI Policy Guidance


