China is pilfering U.S.-developed artificial intelligence (AI) technology to advance its own AI ambitions and to conduct foreign influence operations, senior FBI officials said Friday.
The officials said China and other U.S. adversaries are targeting American businesses, universities and government research facilities to get their hands on cutting-edge AI research and products.
“Nation-state adversaries, particularly China, pose a significant threat to American companies and national security by stealing our AI technology and data to advance their own AI programs and enable foreign influence campaigns,” a senior FBI official said during a background briefing call with reporters.
China has a national plan to surpass the U.S. as the world’s top AI power by 2030, but U.S. officials say much of its progress is based on stolen or otherwise acquired U.S. technology.
“What we’re seeing is efforts across multiple vectors, across multiple industries, across multiple avenues to try to solicit and acquire U.S. technology … to be able to re-create and develop and advance their AI programs,” the senior FBI official said.
The briefing was intended to lay out the FBI’s view of the threat landscape, not to respond to any recent events, officials said.
FBI Director Christopher Wray sounded the alarm about China’s AI intentions at a cybersecurity summit in Atlanta on Wednesday. He warned that after “years stealing both our innovation and massive troves of data,” the Chinese are well-positioned “to use the fruits of their widespread hacking to power, with AI, even more powerful hacking efforts.”
China has denied the allegations.
The senior FBI official briefing reporters said that while the bureau remains focused on foreign acquisition of U.S. AI technology and talent, it is also concerned about future threats from foreign adversaries who exploit that technology.
“However, if and when the technology is acquired, their ability to deploy it in an instance such as [the 2024 presidential election] is something that we are concerned about and do closely monitor,” the official said.
With the recent surge in AI use, the U.S. government is grappling with its benefits and risks. At a White House summit earlier this month, top AI executives agreed to institute guidelines to ensure the technology is developed safely.
Even as the technology evolves, cybercriminals are actively using AI in a variety of ways, from creating malicious code to crafting convincing phishing emails and carrying out insider trading of securities, officials said.
“The bulk of the caseload that we’re seeing now and the scope of activity has in large part been on criminal actor use and deployment of AI models in furtherance of their traditional criminal schemes,” the senior FBI official said.
Violent extremists and traditional terrorist actors are experimenting with various AI tools to build explosives, the official warned.
“Some have gone as far as to post information about their engagements with the AI models and the success which they’ve had defeating security measures in most cases or in a number of cases,” he said.
The FBI has observed a wave of fake AI-generated websites, some with millions of followers, that carry malware designed to trick unsuspecting users, he said. The bureau is investigating the sites.
Wray cited a recent case in which a Dark Net user created malicious code using ChatGPT.
The user “then instructed other cybercriminals on how to use it to re-create malware strains and techniques based on common variants,” Wray said.
“And that’s really just the tip of the iceberg,” he said. “We assess that AI is going to enable threat actors to develop increasingly powerful, sophisticated, customizable and scalable capabilities — and it’s not going to take them long to do it.”