By David Klepper
Washington (AP) – The phone rings. It’s the secretary of state calling. Or is it?
For Washington insiders, seeing and hearing is no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating top officials in President Donald Trump’s administration.
Digital fakers are coming for corporate America, too, as criminal gangs and hackers linked to adversaries including North Korea use synthetic video and audio to impersonate CEOs and low-level job candidates to gain access to critical systems and business secrets.
Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, causing security problems for governments, businesses and individuals and making trust the most valuable currency of the digital age.
Meeting the challenge will require laws, better digital literacy and technical solutions that fight AI with AI.
“As humans, we are remarkably susceptible to deception,” said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. But he believes a solution to the deepfake challenge may be within reach: “We are going to fight back.”

AI deepfakes pose national security threats
This summer, someone used AI to impersonate Secretary of State Marco Rubio in an attempt to reach foreign ministers, US senators and governors over text, voicemail and the Signal messaging app.
In May, someone impersonated Trump’s chief of staff, Susie Wiles.
Another phony Rubio had popped up in a deepfake earlier this year, claiming he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. The Ukrainian government later rebutted the false claim.
The national security implications are huge. People who think they are chatting with Rubio or Wiles, for example, might discuss sensitive information about diplomatic negotiations or military strategy.
Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations: “You’re either trying to extract sensitive or competitive information, or you’re trying to gain access.”
Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state’s upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden, but it was actually created using AI.
Their ability to deceive makes AI deepfakes a potent weapon for foreign actors. Both Russia and China have used disinformation and propaganda directed at Americans to undermine trust in democratic alliances and institutions.
Steven Kramer, the political consultant who admitted to sending the fake Biden robocalls, said he wanted to send a message about the dangers deepfakes pose to the American political system. Kramer was acquitted last month of charges of voter suppression and impersonating a candidate.
“I did what I did for $500,” Kramer said. “Can you imagine what would happen if the Chinese government decided to do this?”
Scammers are targeting the financial industry with deepfakes
The greater availability and sophistication of these programs mean deepfakes are increasingly being used for corporate espionage and garden-variety fraud.
“The financial industry is right in the crosshairs,” said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. “Even people who know each other have been convinced to transfer vast sums of money.”
In the context of corporate espionage, deepfakes can be used to impersonate CEOs who ask employees to hand over passwords or routing numbers.
Deepfakes also allow scammers to apply for jobs, and even do them, under an assumed or false identity. For some, this is a way to access sensitive networks, steal secrets or install ransomware. Others may just want the work and may be juggling several similar jobs at different companies at the same time.
US authorities say thousands of North Koreans with information technology skills have been dispatched to live abroad, using stolen identities to get jobs at tech companies in the US and elsewhere. The workers gain access to company networks as well as a paycheck. In some cases, they install ransomware that can later be used to extort even more money.
Such schemes have generated billions of dollars for the North Korean government.
Research by cybersecurity firm Adaptive Security predicts that within three years, one in four job applications will be fake.
“We’ve entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,” said Brian Long, CEO of Adaptive. “It’s not about hacking systems anymore. It’s about hacking trust.”
Experts deploy AI to fight back against AI
Researchers, public policy experts and technology companies are currently investigating the best ways to address the economic, political and social challenges posed by deepfakes.
New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those who use digital technology to deceive others, if they can be caught.
Greater investment in digital literacy could boost people’s immunity to online deception by teaching them how to spot fake media and avoid falling prey to scammers.
The best tool for catching AI may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person.
Systems like Pindrop’s analyze millions of data points in any person’s speech to quickly identify irregularities. The system can be used during job interviews and other video conferences to detect, for example, whether a participant is using voice cloning software.
Similar programs may one day be commonplace, running in the background as people chat online with colleagues and loved ones. Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Balasubramaniyan, Pindrop’s CEO.
“You can take the defeatist view and say we’re going to be subservient to disinformation,” he said. “But that’s not going to happen.”
Originally published: July 28, 2025, 11:42 a.m. EDT