FBI Warns About China Theft of US AI Technology

China is pilfering U.S.-developed artificial intelligence (AI) technology to advance its own AI ambitions and to conduct foreign influence operations, senior FBI officials said Friday.

The officials said China and other U.S. adversaries are targeting American businesses, universities and government research facilities to get their hands on cutting-edge AI research and products.

“Nation-state adversaries, particularly China, pose a significant threat to American companies and national security by stealing our AI technology and data to advance their own AI programs and enable foreign influence campaigns,” a senior FBI official said during a background briefing call with reporters.

China has a national plan to surpass the U.S. as the world’s top AI power by 2030, but U.S. officials say much of its progress is based on stolen or otherwise acquired U.S. technology.

“What we’re seeing is efforts across multiple vectors, across multiple industries, across multiple avenues to try to solicit and acquire U.S. technology … to be able to re-create and develop and advance their AI programs,” the senior FBI official said.

The briefing was aimed at giving the FBI’s view of the threat landscape, not at reacting to any recent events, officials said.

FBI Director Christopher Wray sounded the alarm about China’s AI intentions at a cybersecurity summit in Atlanta on Wednesday. He warned that after “years stealing both our innovation and massive troves of data,” the Chinese are well-positioned “to use the fruits of their widespread hacking to power, with AI, even more powerful hacking efforts.”

China has denied the allegations.

The senior FBI official briefing reporters said that while the bureau remains focused on foreign acquisition of U.S. AI technology and talent, it is also concerned about future threats from foreign adversaries that exploit that technology.

“However, if and when the technology is acquired, their ability to deploy it in an instance such as [the 2024 presidential election] is something that we are concerned about and do closely monitor.”

With the recent surge in AI use, the U.S. government is grappling with its benefits and risks. At a White House summit earlier this month, top AI executives agreed to institute guidelines to ensure the technology is developed safely.

Even as the technology evolves, cybercriminals are actively using AI in a variety of ways, from creating malicious code to crafting convincing phishing emails and carrying out insider trading of securities, officials said.

“The bulk of the caseload that we’re seeing now and the scope of activity has in large part been on criminal actor use and deployment of AI models in furtherance of their traditional criminal schemes,” the senior FBI official said.

The official also warned that violent extremists and traditional terrorist actors are experimenting with various AI tools to build explosives.

“Some have gone as far as to post information about their engagements with the AI models and the success which they’ve had defeating security measures in most cases or in a number of cases,” he said.

The FBI has observed a wave of fake AI-generated websites with millions of followers that carry malware to trick unsuspecting users, he said. The bureau is investigating the websites.

Wray cited a recent case in which a Dark Net user created malicious code using ChatGPT.

The user “then instructed other cybercriminals on how to use it to re-create malware strains and techniques based on common variants,” Wray said.

“And that’s really just the tip of the iceberg,” he said. “We assess that AI is going to enable threat actors to develop increasingly powerful, sophisticated, customizable and scalable capabilities — and it’s not going to take them long to do it.”

Prospect of AI Producing News Articles Concerns Digital Experts 

Google’s work developing an artificial intelligence tool that would produce news articles is concerning some digital experts, who say such tools risk inadvertently spreading propaganda or threatening source safety.

The New York Times reported last week that Google is testing a new product, known internally by the working title Genesis, that employs artificial intelligence, or AI, to produce news articles.

Genesis can take in information, like details about current events, and create news content, the Times reported. Google has already pitched the product to the Times and other organizations, including The Washington Post and News Corp, which owns The Wall Street Journal.

The launch of the generative AI chatbot ChatGPT last fall has sparked debate about how artificial intelligence can and should fit into the world — including in the news industry.

AI tools can help reporters with research by quickly analyzing data and extracting it from PDF files, a process known as scraping. AI can also help journalists fact-check sources.

But the apprehensions, including that such tools could spread propaganda or miss the nuance humans bring to reporting, appear to be weightier. These worries extend beyond Google’s Genesis tool to the use of AI in news gathering more broadly.

If AI-produced articles are not carefully checked, they could unwittingly include disinformation or misinformation, according to John Scott-Railton, who researches disinformation at the Citizen Lab in Toronto.  

“It’s sort of a shame that the places that are the most friction-free for AI to scrape and draw from — non-paywalled content — are the places where disinformation and propaganda get targeted,” Scott-Railton told VOA. “Getting people out of the loop does not make spotting disinformation easier.”

Paul M. Barrett, deputy director at New York University’s Stern Center for Business and Human Rights, agrees that artificial intelligence can turbocharge the dissemination of falsehoods. 

“It’s going to be easier to generate myths and disinformation,” he told VOA. “The supply of misleading content is, I think, going to go up.”

In an emailed statement to VOA, a Google spokesperson said, “In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help their journalists with their work.”

“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” the spokesperson said. “Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

The implications for a news outlet’s credibility are another important consideration regarding the use of artificial intelligence.

News outlets are presently struggling with a credibility crisis. Half of Americans believe that national news outlets try to mislead or misinform audiences through their reporting, according to a February report from Gallup and the Knight Foundation.

“I’m puzzled that anyone thinks that the solution to this problem is to introduce a much less credible tool, with a much shakier command of facts, into newsrooms,” said Scott-Railton, who was previously a Google Ideas fellow.

Reports show that AI chatbots regularly produce responses that are entirely wrong or made up, a tendency AI researchers refer to as “hallucination.”

Digital experts are also cautious about what security risks may be posed by using AI tools to produce news articles. Anonymous sources, for instance, might face retaliation if their identities are revealed.

“All users of AI-powered systems need to be very conscious of what information they are providing to the system,” Barrett said.

“The journalist would have to be cautious and wary of disclosing to these AI systems information such as the identity of a confidential source, or, I would say, even information that the journalist wants to make sure doesn’t become public,” he said. 

Scott-Railton said he thinks AI probably has a future in most industries, but it’s important not to rush the process, especially in news. 

“What scares me is that the lessons learned in this case will come at the cost of well-earned reputations, will come at the cost of factual accuracy when it actually counts,” he said.  

Vietnam Orders Social Media Firms to Cut ‘Toxic’ Content Using AI

HO CHI MINH CITY, VIETNAM – Vietnam’s demand that international social media firms use artificial intelligence to identify and remove “toxic” online content is part of an ever-expanding and alarming campaign to pressure overseas platforms to suppress freedom of speech in the country, rights groups, experts and activists say.

Vietnam is a lucrative market for overseas social media platforms. Of the country’s population of nearly 100 million, 75.6 million use Facebook, according to Singapore-based research firm Data Reportal. And since Vietnamese authorities rolled out tighter restrictions on online content and ordered social media firms to remove material the government deems anti-state, social media sites have largely complied with government demands to silence online critiques of the government, experts and rights groups told VOA.

“Toxic” is a term used broadly to refer to online content which the state deems to be false, violent, offensive, or anti-state, according to local media reports.

During a mid-year review conference on June 30, Vietnam’s Information Ministry ordered international tech firms to use artificial intelligence to find and remove so-called toxic content automatically, according to a report from state-run broadcaster Vietnam Television. Details have not been revealed on how or when companies must comply with the new order.

Le Quang Tu Do, the head of the Authority of Broadcasting and Electronic Information, had noted during an April 6 news conference that Vietnamese authorities have economic, technical and diplomatic tools to act against international platforms, according to a local media report. According to the report he said the government could cut off social platforms from advertisers, banks, and e-commerce, block domains and servers, and advise the public to cease using platforms with toxic content.

“The point of these measures is for international platforms without offices in Vietnam, like Facebook and YouTube, to abide by the law,” Do said.

Pat de Brun, Amnesty International’s deputy director of Amnesty Tech, told VOA the latest demand is consistent with Vietnam’s yearslong strategy to increase pressure on social media companies. De Brun said it is the government’s broad definition of what is toxic, rather than use of artificial intelligence, that is of most human rights concern because it silences speech that can include criticism of government and policies.

“Vietnamese authorities have used exceptionally broad categories to determine content that they find inappropriate and which they seek to censor. … Very, very often this content is protected speech under international human rights law,” de Brun said. “It’s really alarming to see that these companies have relented in the face of this pressure again and again.”

During the first half of this year, Facebook removed 2,549 posts, YouTube removed 6,101 videos, and TikTok took down 415 links, according to an Information Ministry statement.

Online suppression

Nguyen Khac Giang, a research fellow at Singapore’s ISEAS-Yusof Ishak Institute, told VOA that heightened online censorship has been led by the conservative faction within Vietnam’s Communist Party, which gained power in 2016.

Nguyen Phu Trong was elected as general secretary in 2016, putting a conservative in the top position within the one-party state. Along with Trong, other conservative-minded leaders rose within government the same year, pushing out reformists, Giang said. Efforts to control the online sphere led to 2018’s Law on Cybersecurity, which expands government control of online content and attempts to localize user data in Vietnam. The government also established Force 47 in 2017, a military unit with reportedly 10,000 members assigned to monitor online space.

On July 19, local media reported that the information ministry proposed taking away the internet access of people who commit violations online, especially via livestream on social media sites.

Activists often see their posts removed and lose access to their accounts, and the government arrests Vietnamese bloggers, journalists and critics living in the country for their online speech. They are often charged under Article 117 of Vietnam’s Criminal Code, which criminalizes “making, storing, distributing or disseminating information, documents and items against the Socialist Republic of Vietnam.”

According to The 88 Project, a U.S.-based human rights group, 191 activists are in jail in Vietnam, many of whom have been arrested for online advocacy and charged under Article 117.

“If you look at the way that social media is controlled in Vietnam, it is very starkly contrasted with what happened before 2016,” Giang said. “What we are seeing now is only a signal of what we’ve been seeing for a long time.”

Giang said the government order is a tool to pressure social media companies to use artificial intelligence to limit content, but he warned that online censorship and limits on public discussion could cause political instability by eliminating a channel for public feedback.

“The story here is that they want the social media platforms to take more responsibility for whatever happens on social media in Vietnam,” Giang said. “If they don’t allow people to report on wrongdoings … how can the [government] know about it?”

Vietnamese singer and dissident Do Nguyen Mai Khoi, now living in the United States, has been contacting Facebook since 2018 on behalf of activists who have lost accounts, had posts censored or been the victims of coordinated online attacks by pro-government Facebook users. Although she has received some help from the company in the past, responses to her requests have become more infrequent.

“[Facebook] should use their leverage,” she added. “If Vietnam closed Facebook, everyone would get angry and there’d be a big wave of revolution or protests.”

Representatives of Meta Platforms Inc., Facebook’s parent company, did not respond to VOA requests for comment.

Vietnam is also a top concern in the region for its harsh punishment of online speech, said Dhevy Sivaprakasam, Asia Pacific policy counsel at Access Now, a nonprofit defending digital rights.

“I think it’s one of the most egregious examples of persecution on the online space,” she said.

Ambassador: China Will Respond in Kind to US Chip Export Restrictions 

If the United States imposes more investment restrictions and export controls on China’s semiconductor industry, Beijing will respond in kind, according to China’s ambassador to the U.S., Xie Feng, whose tough talk analysts see as the latest response from a so-called wolf-warrior diplomat.

Xie likened the U.S. export controls to “restricting their opponents to only wearing old swimsuits in swimming competitions, while they themselves can wear advanced shark swimsuits.”

Xie’s remarks, made at the Aspen Security Forum last week, came as the U.S. finalized its mechanism for vetting possible investments in China’s cutting-edge technology. These include semiconductors, quantum computing and artificial intelligence, all of which have military as well as commercial applications.

The U.S. Department of Commerce is also considering imposing new restrictions on exports of artificial intelligence (AI) chips to China, despite the objections of U.S. chipmakers.

Wen-Chih Chao, of the Institute of Strategic and International Affairs Studies at Taiwan’s National Chung Cheng University, characterized Xie’s remarks as part of China’s “wolf-warrior” diplomacy, as China’s increasingly assertive style of foreign policy has come to be known. 

He said the threatened Chinese countermeasures would depend on whether Beijing just wants to show an “attitude” or has decided to confront Western countries head-on.

He pointed to China’s investigations of some U.S. companies operating in China. He sees these as China retaliating by “expressing an attitude.”

Getting tougher

But with tit-for-tat moves between the U.S. and China escalating, Chao said, China’s retaliation is getting tougher.

An example, he said, is the export controls Beijing slapped on exporters of gallium, germanium and other raw minerals used in high-end chip manufacturing. As of August 1, they must apply for permission from the Ministry of Commerce of China and report the details of overseas buyers.

Chao said China might go further by blocking or limiting the supply of batteries for electric vehicles, mechanical components needed for wind-power generation, gases needed for solar panels, and raw materials needed for pharmaceuticals and semiconductor manufacturing.

China wants to show Western countries that they must think twice when imposing sanctions on Chinese semiconductors or companies, he said.

But other analysts said Beijing does not want to escalate its retaliation to the point where further moves by the U.S. and its allies harm China’s economy, which is only slowly recovering from draconian pandemic lockdowns.

No cooperation

Chao also said China could retaliate by refusing to cooperate on efforts to limit climate change, or by saying “no” when asked to use its influence with Pyongyang to lessen tensions on the Korean Peninsula.

“These are the means China can use to retaliate,” Chao said. “I think there are a lot of them. These may be its current bargaining chips, and it will not use them all simultaneously. It will see how the West reacts. It may show its ability to counter the West step by step.”

Cheng Chen, a political science professor at the State University of New York at Albany, said China’s recent announcement about gallium, germanium and other chipmaking metals is a warning of its ability, and willingness, to retaliate against the U.S.

Even if the U.S. invests heavily in reshaping these industrial chains, it will take a long time to assemble the links, she said.

Chen said that if the U.S. further escalates sanctions on China’s high-tech products, China could retaliate in kind — using tariffs for tariffs, sanctions for sanctions, and regulations for regulations.

Most used strategy

Yang Yikui, an assistant researcher at Taiwan National Defense Security Research Institute, said economic coercion is China’s most commonly used retaliatory tactic.

He said China imposed trade sanctions on salmon imported from Norway when the late pro-democracy activist Liu Xiaobo was awarded the Nobel Peace Prize in 2010. Beijing tightened restrictions on imports of Philippine bananas, citing quarantine issues, during a 2012 maritime dispute with Manila over a shoal in the South China Sea.

Yang said studies show that since 2018, China’s sanctions have become more diverse and detailed, allowing it to retaliate directly and indirectly. It can also use its economic and trade relations to force companies from other countries to participate.

Yang said that after Lithuania agreed in 2021 to let Taiwan establish a representative office in Vilnius, China downgraded diplomatic relations from the ambassadorial level to the chargé d’affaires level and removed the country from its customs system database, making it impossible for Lithuanian goods to clear Chinese customs.

Beijing then reduced the credit lines of Lithuanian companies operating in the Chinese market and forced other multinational companies to sanction Lithuania. Companies in Germany, France, Sweden and other countries reportedly had cargos stopped at Chinese ports because they contained products made in Lithuania. 

When Australia called for an investigation into the origins of COVID-19, an upset China imposed tariffs or import bans on Australian beef, wine, cotton, timber, lobster, coal and barley. But Beijing did not sanction Australia’s iron ore, wool and natural gas because sanctions on those products stood to hurt key Chinese sectors.

Adrianna Zhang contributed to this report.

US Works With Artificial Intelligence Companies to Mitigate Risks

Can artificial intelligence wipe out humanity?

A senior U.S. official said the United States government is working with leading AI companies and at least 20 countries to set up guardrails to mitigate potential risks, while focusing on the innovative edge of AI technologies.

Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy, spoke Tuesday to VOA about the voluntary commitments from leading AI companies to ensure safety and transparency around AI development.

One of the most popular generative AI platforms is ChatGPT, created by U.S.-based OpenAI, but it is not accessible in China. A user who asks a Chinese chatbot politically sensitive questions in Mandarin, such as “What is the 1989 Tiananmen Square Massacre?”, would receive information heavily censored by the Beijing government.

China has finalized rules governing its own generative AI. The new regulation will be effective August 15. Chinese chatbots reportedly have built-in censorship to avoid sensitive keywords.

“I think that the development of these systems actually requires a foundation of openness, of interoperability, of reliability of data. And an authoritarian top-down approach that controls the flow of information over time will undermine a government’s ability, a company’s ability, to sustain an innovative edge in AI,” Fick told VOA.

The following excerpts from the interview have been edited for brevity and clarity.

VOA: Seven leading AI companies made eight promises about what they will do with their technology. What do these commitments actually mean?

Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy: As we think about governance of this new tech frontier of artificial intelligence, our North Star ought to be preserving our innovative edge and ensuring that we can continue to maintain a global leadership position in the development of robust AI tools, because the upside to solve shared challenges around the world is so immense. …

These commitments fall into three broad categories. First, the companies have a duty to ensure that their products are safe. … Second, the companies have a responsibility to ensure that their products are secure. … Third, the companies have a duty to ensure that their products gain the trust of people around the world. And so, we need a way for viewers, consumers, to ascertain whether audio content or visual content is AI-generated or not, whether it is authentic or not. And that’s what these commitments do.

VOA: Would the United States government fund some of these types of safety tests conducted by those companies?

Fick: The United States government has a huge interest in ensuring that these companies, these models, their products are safe, are secure, and are trustworthy. We look forward to partnering with these companies over time to do that. And of course, that could certainly include financial partnership.

VOA: The White House has listed cancer prevention and mitigating climate change as two of the areas where it would like AI companies to focus their efforts. Can you talk about U.S. competition with China on AI? Is that an administration priority?

Fick: We would expect the Chinese approach to artificial intelligence to look very much like the PRC’s [People’s Republic of China] approach to other areas of technology. Generally, top down. Generally, not focused on open expression, not focused on open access to information. And these AI systems, by their very definition, require that sort of openness and that sort of access to large data sets and information.

VOA: Some industry experts have warned that China is spending three times as much as the U.S. to become the world’s AI leader. Can you talk about China’s ambition on AI? Is the U.S. keeping up with the competition?

Fick: We certainly track things like R&D [research and development] and investment dollars, but I would make the point that those are inputs, not outputs. And I don’t think it’s any accident that the leading companies in AI research are American companies. Our innovation ecosystem, supported by foundational research and immigration policy that attracts the world’s best talent, tax and regulatory policies that encourage business creation and growth.

VOA: Any final thoughts about the risks? Can AI models be used to develop bioweapons? Can AI wipe out humanity?

Fick: My experience has been that risk and return really are correlated in life and in financial markets. There’s huge reward and promise in these technologies and of course, at the same time, they bring with them significant risks. We need to maintain our North Star, our focus on that innovative edge and all of the promise that these technologies bring in. At the same time, it’s our responsibility as governments and as responsible companies leading in this space to put the guardrails in place to mitigate those risks.

In the Shadow of Giants, Mongolian Girls Learn to Code

A class that teaches teenage girls how to code – or write instructions for computers – is drawing lots of interest in Mongolia. For VOA, Graham Kanwit and Elizabeth Lee have the story about a program that prepares them for jobs in technology. Camera: Sam Paakkonen

Elon Musk Reveals New Black and White X Logo To Replace Twitter’s Blue Bird

Elon Musk has unveiled a new black and white “X” logo to replace Twitter’s famous blue bird as he follows through with a major rebranding of the social media platform he bought for $44 billion last year.

Musk replaced his own Twitter icon with a white X on a black background and posted a picture on Monday of the design projected on Twitter’s San Francisco headquarters.

The X started appearing on the top of the desktop version of Twitter on Monday, but the bird was still dominant across the phone app.

Musk had asked fans for logo ideas and chose one, which he described as minimalist Art Deco, saying it “certainly will be refined.”

“And soon we shall bid adieu to the twitter brand and, gradually, all the birds,” Musk tweeted Sunday.

The X.com web domain now redirects users to Twitter.com, Musk said.

In response to questions about what tweets would be called when the rebranding is done, Musk said they would be called Xs.

Musk, CEO of Tesla, has long been fascinated with the letter. The billionaire is also CEO of rocket company Space Exploration Technologies Corp., commonly known as SpaceX. And in 1999, he founded a startup called X.com, an online financial services company now known as PayPal.

He calls his son with the singer Grimes, whose actual name is a collection of letters and symbols, “X.”

Musk’s Twitter purchase and rebranding are part of his strategy to create what he’s dubbed an “everything app” similar to China’s WeChat, which combines video chats, messaging, streaming and payments.

Linda Yaccarino, the longtime NBC Universal executive Musk tapped to be Twitter CEO in May, posted the new logo and weighed in on the change, writing on Twitter that X would be “the future state of unlimited interactivity — centered in audio, video, messaging, payments/banking — creating a global marketplace for ideas, goods, services, and opportunities.”

Experts, however, predicted the new name will confuse much of Twitter’s audience, which has already been souring on the social media platform following a raft of Musk’s other changes. The site also faces new competition from Threads, the new app by Facebook and Instagram parent Meta that directly targets Twitter users.

Musk Says Twitter to Change Logo to “X” From The Bird  

Elon Musk said Sunday that he plans to change the logo of Twitter to an “X” from the bird, marking what would be the latest big change since he bought the social media platform for $44 billion last year. 

In a series of posts on his Twitter account starting just after 12 a.m. ET, Twitter’s owner said that he’s looking to make the change worldwide as soon as Monday. 

“And soon we shall bid adieu to the twitter brand and, gradually, all the birds,” Musk wrote on his account. 

Earlier this month, Musk put new limits on how many tweets users can read on his digital town square, a move that came under sharp criticism that it could drive away advertisers and undermine the platform’s cultural influence as a trendsetter.

In May, Musk hired longtime NBC Universal executive Linda Yaccarino as Twitter’s CEO in a move to win back advertisers. 

Luring advertisers is essential for Musk and Twitter after many fled in the early months after his takeover of the social media platform, fearing damage to their brands in the ensuing chaos. Musk said in late April that advertisers had returned, but provided no specifics. 


AI Firms Strike Deal With White House on Safety Guidelines 

The White House on Friday announced that the Biden administration had reached a voluntary agreement with seven companies building artificial intelligence products to establish guidelines meant to ensure the technology is developed safely.

“These commitments are real, and they’re concrete,” President Joe Biden said in comments to reporters. “They’re going to help … the industry fulfill its fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

The companies that sent leaders to the White House were Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. The firms are all developing systems called large language models (LLMs), which are trained using vast amounts of text, usually taken from the publicly accessible internet, and use predictive analysis to respond to queries conversationally.

In a statement, OpenAI, which created the popular ChatGPT service, said, “This process, coordinated by the White House, is an important step in advancing meaningful and effective AI governance, both in the U.S. and around the world.”

Safety, security, trust

The agreement, released by the White House on Friday morning, outlines three broad areas of focus: assuring that AI products are safe for public use before they are made widely available; building products that are secure and cannot be misused for unintended purposes; and establishing public trust that the companies developing the technology are transparent about how they work and what information they gather.

As part of the agreement, the companies pledged to conduct internal and external security testing before AI systems are made public in order to ensure they are safe for public use, and to share information about safety and security with the public.

Further, the commitment obliges the companies to keep strong safeguards in place to prevent the inadvertent or malicious release of technology and tools not intended for the general public, and to support third-party efforts to detect and expose any such breaches.

Finally, the agreement sets out a series of obligations meant to build public trust. These include assurances that AI-created content will always be identified as such; that companies will offer clear information about their products’ capabilities and limitations; that companies will prioritize mitigating the risk of potential harms of AI, including bias, discrimination and privacy violations; and that companies will focus their research on using AI to “help address society’s greatest challenges.”

The administration said that it is at work on an executive order and will pursue bipartisan legislation to “help America lead the way in responsible innovation.”

Just a start

Experts contacted by VOA all said that the agreement marked a positive step on the road toward effective regulation of emerging AI technology, but they also warned that there is far more work to be done, both in understanding the potential harm these powerful models might cause and finding ways to mitigate it.

“No one knows how to regulate AI — it’s very complex and is constantly changing,” said Susan Ariel Aaronson, a professor at George Washington University and the founder and director of the research institute Digital Trade and Data Governance Hub.

“The White House is trying very hard to regulate in a pro-innovative way,” Aaronson told VOA. “When you regulate, you always want to balance risk — protecting people or businesses from harm — with encouraging innovation, and this industry is essential for U.S. economic growth.”

She added, “The United States is trying and so I want to laud the White House for these efforts. But I want to be honest. Is it sufficient? No.”

‘Conversational computing’

It’s important to get this right, because models like ChatGPT, Google’s Bard and Anthropic’s Claude will increasingly be built into the systems that people use to go about their everyday business, said Louis Rosenberg, the CEO and chief scientist of the firm Unanimous AI. 

“We’re going into an age of conversational computing, where we’re going to talk to our computers and our computers are going to talk back,” Rosenberg told VOA. “That’s how we’re going to engage search engines. That’s how we’re going to engage apps. That’s how we’re going to engage productivity tools.”

Rosenberg, who has worked in the AI field for 30 years and holds hundreds of related patents, said that when it comes to LLMs being so tightly integrated into our day-to-day life, we still don’t know everything we should be concerned about.

“Many of the risks are not fully understood yet,” he said. Conventional computer software is very deterministic, he said, meaning that programs are built to do precisely what programmers tell them to do. By contrast, the exact way in which large language models operate can be opaque even to their creators.

The models can display unintended bias, can parrot false or misleading information, and can say things that people find offensive or even dangerous. In addition, many people will interact with them through a third-party service, such as a website, that integrates the large language model into its offering, but can tailor its responses in ways that might be malicious or manipulative.

Many of these problems may become apparent only after the systems have been deployed at scale and are already in widespread public use.

“The problems have not yet surfaced at a level where policymakers can address them head-on,” Rosenberg said. “The thing that is, I think, positive, is that at least policymakers are expecting the problems.”

More stakeholders needed 

Benjamin Boudreaux, a policy analyst with the RAND Corporation, told VOA that it was unclear how much actual change in the companies’ behavior Friday’s agreement would generate.

“Many of the things that the companies are agreeing to here are things that the companies already do, so it’s not clear that this agreement really shifts much of their behavior,” Boudreaux said. “And so I think there is still going to be a need for perhaps a more regulatory approach or more action from Congress and the White House.”

Boudreaux also said that as the administration fleshes out its policy, it will have to broaden the range of participants in the conversation.

“This is just a group of private sector entities; this doesn’t include the full set of stakeholders that need to be involved in discussions about the risks of these systems,” he said. “The stakeholders left out of this include some of the independent evaluators, civil society organizations, nonprofit groups and the like, that would actually do some of the risk analysis and risk assessment.”

Japan Signs Chip Development Deal With India 

Japan and India have signed an agreement for the joint development of semiconductors, in what appears to be another indication of how global businesses are reconfiguring post-pandemic supply chains as China loses its allure for foreign companies.

India’s Ashwini Vaishnaw, minister for railways, communications, and electronics and information technology, and Japan’s minister of economy, trade and industry, Yasutoshi Nishimura, signed the deal Thursday in New Delhi.

The memorandum covers “semiconductor design, manufacturing, equipment research, talent development and [will] bring resilience in the semiconductor supply chain,” Vaishnaw said.

Nishimura said after his meeting with Vaishnaw that “India has excellent human resources” in fields such as semiconductor design.

“By capitalizing on each other’s strengths, we want to push forward with concrete projects as early as possible,” Nishimura told a news conference, Kyodo News reported.  

Andreas Kuehn, a senior fellow at the American office of the Observer Research Foundation, an Indian think tank, told VOA Mandarin: “Japan has extensive experience in this industry and understands the infrastructure in this field at a broad level. It can be an important partner in advancing India’s semiconductor ambitions.”

Shift from China

Foreign companies have been shifting their manufacturing away from China over the past decade, prompted by increasing labor costs.

More recently, Beijing’s push for foreign companies to share their technologies and data has heightened unease about China’s business climate, according to surveys of U.S. and European businesses there.

The discomfort stems in part from a counter-espionage law that Beijing updated in April and put into effect on July 1. Its broad language does not define what falls under China’s national security or interests.

After taking office in 2014, Indian Prime Minister Narendra Modi launched a “Make in India” initiative with the goal of turning India into a global manufacturing center with an expanded chip industry.

The initiative is not entirely about making India a self-sufficient economy, but more about welcoming investors from countries with similar ideas. Japan and India are part of the Quad security framework, along with the United States and Australia, which aims to strengthen cooperation as a group, as well as bilaterally between members, to maintain peace and stability in the region.

Jagannath Panda, director of the Stockholm Center for South Asian and Indo-Pacific Affairs of the Institute for Security and Development Policy, said that the international community “wants a safe region where the semiconductor industry can continue to supply the global market. This chain of linkages is critical, and India is at the heart of the Indo-Pacific region” — a location not lost on chip companies in the United States, Taiwan and Japan that are reevaluating supply chain security and reducing their dependence on China.

Looking ahead

Panda told VOA Mandarin: “The COVID pandemic has proved that we should not rely too much on China. [India’s development of the chip industry] is also to prepare India for the next half century. Unless countries with similar ideas such as the United States and Japan cooperate effectively, India cannot really develop its semiconductor industry.”

New Delhi and Washington signed a memorandum of understanding in March to advance cooperation in the semiconductor field.

During Modi’s visit to the United States in June, he and President Joe Biden announced a cooperation agreement to coordinate semiconductor incentive and subsidy plans between the two countries.

Micron, a major chip manufacturer, confirmed on June 22 that it will invest as much as $800 million in India to build a chip assembly and testing plant.

Applied Materials said in June that it plans to invest $400 million over four years to build an engineering center in Bengaluru, Reuters reported. The new center is expected to be located near the company’s existing facility in the city and is likely to support more than $2 billion of planned investments and create 500 new advanced engineering jobs, the company said.

Experts said that although the development of India’s chip industry will not pose a challenge to China in the short term, China’s increasingly unfriendly business environment will prompt international semiconductor companies to consider India as one of the destinations for transferring production capacity.

“China is still a big player in the semiconductor industry, especially traditional chips, and we shouldn’t underestimate that. I don’t think that’s going to go away anytime soon. The world depends on this capacity,” Kuehn said. 

He added: “For multinational companies, China has become a more difficult business environment to operate in. We are likely to see them make other investments outside China after a period of time, which may compete with China’s semiconductor industry, especially in Southeast Asia. India may also play a role in this regard.” 

Bo Gu contributed to this report.