Flashing ‘X’ Sign Removed From Former Twitter’s Headquarters

A brightly flashing “X” sign has been removed from the San Francisco headquarters of the company formerly known as Twitter just days after it was installed. 

The San Francisco Department of Building Inspection said Monday it received 24 complaints about the unpermitted structure over the weekend. Complaints included concerns about its structural safety and illumination. 

The Elon Musk-owned company, which has been rebranded as X, had removed the Twitter sign and iconic blue bird logo from the building last week. That work was temporarily paused because the company did not have the necessary permits. For a time, the “er” at the end of “Twitter” remained up due to the abrupt halt of the sign takedown. 

The city of San Francisco had opened a complaint and launched an investigation into the giant “X” sign, which was installed Friday on top of the downtown building as Musk continues his rebrand of the social media platform. 

The chaotic rebrand of Twitter’s building signage is similar to the haphazard way in which the Twitter platform is being turned into X. While the X logo has replaced Twitter on many parts of the site and app, remnants of Twitter remain. 

Representatives for X did not immediately respond to a message for comment Monday. 

China Curbs Drone Exports, Citing Ukraine, Concern About Military Use

China imposed restrictions Monday on exports of long-range civilian drones, citing Russia’s war in Ukraine and concern that drones might be converted to military use. 

Chinese leader Xi Jinping’s government is friendly with Moscow but says it is neutral in the 18-month-old war. It has been stung by reports that both sides might be using Chinese-made drones for reconnaissance and possibly attacks. 

Export controls will take effect Tuesday to prevent use of drones for “non-peaceful purposes,” the Ministry of Commerce said in a statement. It said exports still will be allowed but didn’t say what restrictions it would apply. 

China is a leading developer and exporter of drones. DJI Technology Co., one of the global industry’s top competitors, announced in April 2022 it was pulling out of Russia and Ukraine to prevent its drones from being used in combat. 

“The risk of some high specification and high-performance civilian unmanned aerial vehicles being converted to military use is constantly increasing,” the Ministry of Commerce said. 

Restrictions will apply to drones that can fly beyond the operator's natural sight distance or stay aloft more than 30 minutes, have attachments that can throw objects, and weigh more than seven kilograms (15½ pounds), according to the ministry. 
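
The reported thresholds can be illustrated with a short sketch. The field names and the any-of combination logic below are assumptions made for illustration only; the ministry's statement does not spell out how the criteria combine.

```python
from dataclasses import dataclass

@dataclass
class DroneSpec:
    beyond_visual_line_of_sight: bool  # can fly past the operator's natural sight distance
    max_flight_minutes: float          # maximum continuous flight time
    can_drop_objects: bool             # has attachments that can throw or drop objects
    weight_kg: float                   # weight in kilograms

def may_be_restricted(d: DroneSpec) -> bool:
    """Illustrative check against the thresholds reported by the ministry.

    This sketch flags a drone if it meets any single criterion; the actual
    rule may require a different combination.
    """
    return (
        d.beyond_visual_line_of_sight
        or d.max_flight_minutes > 30
        or d.can_drop_objects
        or d.weight_kg > 7
    )
```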

“Since the crisis in Ukraine, some Chinese civilian drone companies have voluntarily suspended their operations in conflict areas,” the Ministry of Commerce said. It accused the United States and Western media of spreading “false information” about Chinese drone exports. 

The government defended its dealings Friday with Russia as “normal economic and trade cooperation” after a U.S. intelligence report said Beijing possibly provided equipment used in Ukraine that might have military applications. 

The report cited Russian customs data that showed Chinese state-owned military contractors supplied drones, navigation equipment, fighter jet parts and other goods. 

The Biden administration has warned Beijing of unspecified consequences if it supports the Kremlin’s war effort. Last week’s report didn’t say whether any of the trade cited might trigger U.S. retaliation. 

Xi and Russian President Vladimir Putin declared before the February 2022 invasion that their governments had a “no-limits” friendship. Beijing has blocked efforts to censure Moscow in the United Nations and has repeated Russian justifications for the attack. 

China has “always opposed the use of civilian drones for military purposes,” the Ministry of Commerce said. “The moderate expansion of drone control by China this time is an important measure to demonstrate the responsibility of a responsible major country.” 

The Ukrainian government appealed to DJI in March 2022 to stop selling drones it said the Russian military was using to target missile attacks. DJI rejected claims it leaked data on Ukraine’s military positions to Russia. 

AM Radio Fights to Keep Its Spot on US Car Dashboards

There has been a steady decline in the number of AM radio stations in the United States. Over the decades, urban and mainstream broadcasters have moved to the FM band, which has better audio fidelity, although more limited range. Now, there is a new threat to the remaining AM stations. Some automakers want to kick AM off their dashboard radios, deeming it obsolete. VOA’s chief national correspondent, Steve Herman, in the state of Texas, has been tuning in to some traditional rural stations, as well as those broadcasting in languages other than English in the big cities. Camera – Steve Herman and Jonathan Zizzo.

FBI Warns About China Theft of US AI Technology

China is pilfering U.S.-developed artificial intelligence (AI) technology to enhance its own aspirations and to conduct foreign influence operations, senior FBI officials said Friday.

The officials said China and other U.S. adversaries are targeting American businesses, universities and government research facilities to get their hands on cutting-edge AI research and products.

“Nation-state adversaries, particularly China, pose a significant threat to American companies and national security by stealing our AI technology and data to advance their own AI programs and enable foreign influence campaigns,” a senior FBI official said during a background briefing call with reporters.

China has a national plan to surpass the U.S. as the world’s top AI power by 2030, but U.S. officials say much of its progress is based on stolen or otherwise acquired U.S. technology.

“What we’re seeing is efforts across multiple vectors, across multiple industries, across multiple avenues to try to solicit and acquire U.S. technology … to be able to re-create and develop and advance their AI programs,” the senior FBI official said.

The briefing was aimed at giving the FBI’s view of the threat landscape, not to react to any recent events, officials said.

FBI Director Christopher Wray sounded the alarm about China’s AI intentions at a cybersecurity summit in Atlanta on Wednesday. He warned that after “years stealing both our innovation and massive troves of data,” the Chinese are well-positioned “to use the fruits of their widespread hacking to power, with AI, even more powerful hacking efforts.”

China has denied the allegations.

The senior FBI official briefing reporters said that while the bureau remains focused on foreign acquisition of U.S. AI technology and talent, it is also concerned about future threats from foreign adversaries who could exploit that technology.

“However, if and when the technology is acquired, their ability to deploy it in an instance such as [the 2024 presidential election] is something that we are concerned about and do closely monitor.”

With the recent surge in AI use, the U.S. government is grappling with its benefits and risks. At a White House summit earlier this month, top AI executives agreed to institute guidelines to ensure the technology is developed safely.

Even as the technology evolves, cybercriminals are actively using AI in a variety of ways, from creating malicious code to crafting convincing phishing emails and carrying out insider trading of securities, officials said.

“The bulk of the caseload that we’re seeing now and the scope of activity has in large part been on criminal actor use and deployment of AI models in furtherance of their traditional criminal schemes,” the senior FBI official said.

The FBI warned that violent extremists and traditional terrorist actors are experimenting with the use of various AI tools to build explosives, he said.

“Some have gone as far as to post information about their engagements with the AI models and the success which they’ve had defeating security measures in most cases or in a number of cases,” he said.

The FBI has observed a wave of fake AI-generated websites with millions of followers that carry malware to trick unsuspecting users, he said. The bureau is investigating the websites.

Wray cited a recent case in which a Dark Net user created malicious code using ChatGPT.

The user “then instructed other cybercriminals on how to use it to re-create malware strains and techniques based on common variants,” Wray said.

“And that’s really just the tip of the iceberg,” he said. “We assess that AI is going to enable threat actors to develop increasingly powerful, sophisticated, customizable and scalable capabilities — and it’s not going to take them long to do it.”

Prospect of AI Producing News Articles Concerns Digital Experts 

Google’s work developing an artificial intelligence tool that would produce news articles is concerning some digital experts, who say such devices risk inadvertently spreading propaganda or threatening source safety. 

The New York Times reported last week that Google is testing a new product, known internally by the working title Genesis, that employs artificial intelligence, or AI, to produce news articles.

Genesis can take in information, like details about current events, and create news content, the Times reported. Google has already pitched the product to the Times and other organizations, including The Washington Post and News Corp, which owns The Wall Street Journal.

The launch of the generative AI chatbot ChatGPT last fall has sparked debate about how artificial intelligence can and should fit into the world — including in the news industry.

AI tools can help reporters research by quickly analyzing data and extracting it from PDF files, a process known as scraping. AI can also help journalists fact-check sources. 
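
As a toy illustration of the kind of automated extraction described above (not any specific newsroom tool), a few lines of Python with a regular expression can pull dated dollar figures out of already-extracted text; real pipelines would first extract the text from a PDF with a library such as pypdf. The pattern and sample text here are invented for this sketch.

```python
import re

def extract_figures(text: str) -> list[tuple[str, str]]:
    """Return (amount, year) pairs such as ('$4.5 million', '2022')."""
    pattern = r"(\$[\d.,]+\s*(?:million|billion))\s+(?:in|for)\s+(\d{4})"
    return re.findall(pattern, text)

sample = (
    "Revenue grew to $4.5 million in 2022. "
    "The company projects $6.1 million for 2023."
)
print(extract_figures(sample))
```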

But the apprehensions, including the potential to spread propaganda or to lose the nuance humans bring to reporting, appear to be weightier. These worries extend beyond Google’s Genesis tool to the use of AI in newsgathering more broadly.

If AI-produced articles are not carefully checked, they could unwittingly include disinformation or misinformation, according to John Scott-Railton, who researches disinformation at the Citizen Lab in Toronto.  

“It’s sort of a shame that the places that are the most friction-free for AI to scrape and draw from — non-paywalled content — are the places where disinformation and propaganda get targeted,” Scott-Railton told VOA. “Getting people out of the loop does not make spotting disinformation easier.”

Paul M. Barrett, deputy director at New York University’s Stern Center for Business and Human Rights, agrees that artificial intelligence can turbocharge the dissemination of falsehoods. 

“It’s going to be easier to generate myths and disinformation,” he told VOA. “The supply of misleading content is, I think, going to go up.”

In an emailed statement to VOA, a Google spokesperson said, “In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help their journalists with their work.”

“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” the spokesperson said. “Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

The implications for a news outlet’s credibility are another important consideration regarding the use of artificial intelligence.

News outlets are presently struggling with a credibility crisis. Half of Americans believe that national news outlets try to mislead or misinform audiences through their reporting, according to a February report from Gallup and the Knight Foundation.

“I’m puzzled that anyone thinks that the solution to this problem is to introduce a much less credible tool, with a much shakier command of facts, into newsrooms,” said Scott-Railton, who was previously a Google Ideas fellow.

Reports show that AI chatbots regularly produce responses that are entirely wrong or made up. AI researchers refer to this tendency as “hallucination.”

Digital experts are also cautious about what security risks may be posed by using AI tools to produce news articles. Anonymous sources, for instance, might face retaliation if their identities are revealed.

“All users of AI-powered systems need to be very conscious of what information they are providing to the system,” Barrett said.

“The journalist would have to be cautious and wary of disclosing to these AI systems information such as the identity of a confidential source, or, I would say, even information that the journalist wants to make sure doesn’t become public,” he said. 

Scott-Railton said he thinks AI probably has a future in most industries, but it’s important not to rush the process, especially in news. 

“What scares me is that the lessons learned in this case will come at the cost of well-earned reputations, will come at the cost of factual accuracy when it actually counts,” he said.  

Vietnam Orders Social Media Firms to Cut ‘Toxic’ Content Using AI

HO CHI MINH CITY, VIETNAM – Vietnam’s demand that international social media firms use artificial intelligence to identify and remove “toxic” online content is part of an ever-expanding and alarming campaign to pressure overseas platforms to suppress freedom of speech in the country, rights groups, experts and activists say.

Vietnam is a lucrative market for overseas social media platforms. Of the country’s population of nearly 100 million, 75.6 million are Facebook users, according to Singapore-based research firm Data Reportal. And since Vietnamese authorities rolled out tighter restrictions on online content and ordered social media firms to remove content the government deems anti-state, social media sites have largely complied with government demands to silence online critiques of the government, experts and rights groups told VOA.

“Toxic” is a term used broadly to refer to online content which the state deems to be false, violent, offensive, or anti-state, according to local media reports.

During a mid-year review conference on June 30, Vietnam’s Information Ministry ordered international tech firms to use artificial intelligence to find and remove so-called toxic content automatically, according to a report from state-run broadcaster Vietnam Television. Details have not been revealed on how or when companies must comply with the new order.

Le Quang Tu Do, the head of the Authority of Broadcasting and Electronic Information, had noted during an April 6 news conference that Vietnamese authorities have economic, technical and diplomatic tools to act against international platforms, according to a local media report. According to the report he said the government could cut off social platforms from advertisers, banks, and e-commerce, block domains and servers, and advise the public to cease using platforms with toxic content.

“The point of these measures is for international platforms without offices in Vietnam, like Facebook and YouTube, to abide by the law,” Do said.

Pat de Brun, Amnesty International’s deputy director of Amnesty Tech, told VOA the latest demand is consistent with Vietnam’s yearslong strategy to increase pressure on social media companies. De Brun said it is the government’s broad definition of what is toxic, rather than use of artificial intelligence, that is of most human rights concern because it silences speech that can include criticism of government and policies.

“Vietnamese authorities have used exceptionally broad categories to determine content that they find inappropriate and which they seek to censor. … Very, very often this content is protected speech under international human rights law,” de Brun said. “It’s really alarming to see that these companies have relented in the face of this pressure again and again.”

During the first half of this year, Facebook removed 2,549 posts, YouTube removed 6,101 videos, and TikTok took down 415 links, according to an Information Ministry statement.

Online suppression

Nguyen Khac Giang, a research fellow at Singapore’s ISEAS-Yusof Ishak Institute, told VOA that heightened online censorship has been led by the conservative faction within Vietnam’s Communist Party, which gained power in 2016.

Nguyen Phu Trong was elected as general secretary in 2016, putting a conservative in the top position within the one-party state. Along with Trong, other conservative-minded leaders rose within government the same year, pushing out reformists, Giang said. Efforts to control the online sphere led to 2018’s Law on Cybersecurity, which expands government control of online content and attempts to localize user data in Vietnam. The government also established Force 47 in 2017, a military unit with reportedly 10,000 members assigned to monitor online space.

On July 19, local media reported that the information ministry proposed taking away the internet access of people who commit violations online, especially via livestreams on social media sites.

Activists often see their posts removed or lose access to their accounts, and the government arrests Vietnamese bloggers, journalists and critics living in the country for their online speech. They are often charged under Article 117 of Vietnam’s Criminal Code, which criminalizes “making, storing, distributing or disseminating information, documents and items against the Socialist Republic of Vietnam.”

According to The 88 Project, a U.S.-based human rights group, 191 activists are in jail in Vietnam, many of whom have been arrested for online advocacy and charged under Article 117.

“If you look at the way that social media is controlled in Vietnam, it is very starkly contrasted with what happened before 2016,” Giang said. “What we are seeing now is only a signal of what we’ve been seeing for a long time.”

Giang said the government order is a tool to pressure social media companies to use artificial intelligence to limit content, but he warned that online censorship and limits on public discussion could cause political instability by eliminating a channel for public feedback.

“The story here is that they want the social media platforms to take more responsibility for whatever happens on social media in Vietnam,” Giang said. “If they don’t allow people to report on wrongdoings … how can the [government] know about it?”

Vietnamese singer and dissident Do Nguyen Mai Khoi, now living in the United States, has been contacting Facebook since 2018 on behalf of activists who have lost accounts, had posts censored or been the victims of coordinated online attacks by pro-government Facebook users. Although she has received some help from the company in the past, responses to her requests have become more infrequent.

“[Facebook] should use their leverage,” she added. “If Vietnam closed Facebook, everyone would get angry and there’d be a big wave of revolution or protests.”

Representatives of Meta Platforms Inc., Facebook’s parent company, did not respond to VOA requests for comment.

Vietnam is also a top concern in the region for its harsh punishment of online speech, said Dhevy Sivaprakasam, Asia Pacific policy counsel at Access Now, a nonprofit defending digital rights.

“I think it’s one of the most egregious examples of persecution on the online space,” she said.

Ambassador: China Will Respond in Kind to US Chip Export Restrictions 

If the United States imposes more investment restrictions and export controls on China’s semiconductor industry, Beijing will respond in kind, according to China’s ambassador to the U.S., Xie Feng, whose tough talk analysts see as the latest response from a so-called wolf-warrior diplomat.

Xie likened the U.S. export controls to “restricting their opponents to only wearing old swimsuits in swimming competitions, while they themselves can wear advanced shark swimsuits.”

Xie’s remarks, made at the Aspen Security Forum last week, came as the U.S. finalized its mechanism for vetting possible investments in China’s cutting-edge technology. These include semiconductors, quantum computing and artificial intelligence, all of which have military as well as commercial applications.

The U.S. Department of Commerce is also considering imposing new restrictions on exports of artificial intelligence (AI) chips to China, despite the objections of U.S. chipmakers.

Wen-Chih Chao, of the Institute of Strategic and International Affairs Studies at Taiwan’s National Chung Cheng University, characterized Xie’s remarks as part of China’s “wolf-warrior” diplomacy, as China’s increasingly assertive style of foreign policy has come to be known. 

He said the threatened Chinese countermeasures would depend on whether Beijing just wants to show an “attitude” or has decided to confront Western countries head-on.

He pointed to China’s investigations of some U.S. companies operating in China. He sees these as China retaliating by “expressing an attitude.”

Getting tougher

But with the tit-for-tat moves between the U.S. and China escalating, Chao said, China’s retaliation is getting tougher.

An example, he said, is the export controls Beijing slapped on gallium, germanium and other raw minerals used in high-end chip manufacturing. As of August 1, exporters must apply for permission from China’s Ministry of Commerce and report the details of overseas buyers.

Chao said China might go further by blocking or limiting the supply of batteries for electric vehicles, mechanical components needed for wind-power generation, gases needed for solar panels, and raw materials needed for pharmaceuticals and semiconductor manufacturing.

China wants to show Western countries that they must think twice when imposing sanctions on Chinese semiconductors or companies, he said.

But other analysts said Beijing does not want to escalate its retaliation to the point where further moves by the U.S. and its allies harm China’s economy, which is only slowly recovering from draconian pandemic lockdowns.

No cooperation

Chao also said China could retaliate by refusing to cooperate on efforts to limit climate change, or by saying “no” when asked to use its influence with Pyongyang to lessen tensions on the Korean Peninsula.

“These are the means China can use to retaliate,” Chao said. “I think there are a lot of them. These may be its current bargaining chips, and it will not use them all simultaneously. It will see how the West reacts. It may show its ability to counter the West step by step.”

Cheng Chen, a political science professor at the State University of New York at Albany, said China’s recent announcement about gallium, germanium and other chipmaking metals is a warning of its ability, and willingness, to retaliate against the U.S.

Even if the U.S. invests heavily in reshaping these industrial chains, it will take a long time to assemble the links, she said.

Chen said that if the U.S. further escalates sanctions on China’s high-tech products, China could retaliate in kind — using tariffs for tariffs, sanctions for sanctions, and regulations for regulations.

Most used strategy

Yang Yikui, an assistant researcher at Taiwan National Defense Security Research Institute, said economic coercion is China’s most commonly used retaliatory tactic.

He said China imposed trade sanctions on salmon imported from Norway when the late pro-democracy activist Liu Xiaobo was awarded the Nobel Peace Prize in 2010. Beijing tightened restrictions on imports of Philippine bananas, citing quarantine issues, during a 2012 maritime dispute with Manila over a shoal in the South China Sea.

Yang said studies show that since 2018, China’s sanctions have become more diverse and detailed, allowing it to retaliate directly and indirectly. It can also use its economic and trade relations to force companies from other countries to participate.

Yang said that after Lithuania agreed in 2021 to let Taiwan establish a representative office in Vilnius, China downgraded its diplomatic relations with the country from the ambassadorial to the chargé d’affaires level and removed it from its customs system database, making it impossible for Lithuanian goods to clear customs.

Beijing then reduced the credit lines of Lithuanian companies operating in the Chinese market and forced other multinational companies to sanction Lithuania. Companies in Germany, France, Sweden and other countries reportedly had cargoes stopped at Chinese ports because they contained products made in Lithuania. 

When Australia called for an investigation into the origins of COVID-19, an upset China imposed tariffs or import bans on Australian beef, wine, cotton, timber, lobster, coal and barley. But Beijing did not sanction Australia’s iron ore, wool and natural gas, because sanctions on those products stood to hurt key Chinese sectors. 

Adrianna Zhang contributed to this report.

US Works With Artificial Intelligence Companies to Mitigate Risks

Can artificial intelligence wipe out humanity?

A senior U.S. official said the United States government is working with leading AI companies and at least 20 countries to set up guardrails to mitigate potential risks, while focusing on the innovative edge of AI technologies.

Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy, spoke Tuesday to VOA about the voluntary commitments from leading AI companies to ensure safety and transparency around AI development.

One of the most popular generative AI platforms is ChatGPT, created by U.S.-based OpenAI. It is not available in China.

China has finalized rules governing its own generative AI, which take effect August 15. Chinese chatbots reportedly have built-in censorship to avoid sensitive keywords: a user asking politically sensitive questions in Mandarin Chinese, such as “What is the 1989 Tiananmen Square Massacre?”, would reportedly receive information heavily censored in line with the Beijing government’s restrictions.
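
A minimal sketch of keyword-based filtering of the sort such chatbots reportedly apply; the blocklist and refusal message below are invented for this illustration and are not taken from any actual system.

```python
# Hypothetical blocklist; real systems reportedly maintain far larger,
# undisclosed lists of sensitive keywords.
BLOCKED_KEYWORDS = {"tiananmen", "june 4"}

def filter_reply(user_query: str, model_reply: str) -> str:
    """Return the model's reply unless the query hits a blocked keyword."""
    query = user_query.lower()
    if any(keyword in query for keyword in BLOCKED_KEYWORDS):
        return "[reply withheld by content filter]"
    return model_reply
```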

“I think that the development of these systems actually requires a foundation of openness, of interoperability, of reliability of data. And an authoritarian top-down approach that controls the flow of information over time will undermine a government’s ability, a company’s ability, to sustain an innovative edge in AI,” Fick told VOA.

The following excerpts from the interview have been edited for brevity and clarity.

VOA: Seven leading AI companies made eight promises about what they will do with their technology. What do these commitments actually mean?

Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy: As we think about governance of this new tech frontier of artificial intelligence, our North Star ought to be preserving our innovative edge and ensuring that we can continue to maintain a global leadership position in the development of robust AI tools, because the upside to solve shared challenges around the world is so immense. …

These commitments fall into three broad categories. First, the companies have a duty to ensure that their products are safe. … Second, the companies have a responsibility to ensure that their products are secure. … Third, the companies have a duty to ensure that their products gain the trust of people around the world. And so, we need a way for viewers, consumers, to ascertain whether audio content or visual content is AI-generated or not, whether it is authentic or not. And that’s what these commitments do.

VOA: Would the United States government fund some of these types of safety tests conducted by those companies?

Fick: The United States government has a huge interest in ensuring that these companies, these models, their products are safe, are secure, and are trustworthy. We look forward to partnering with these companies over time to do that. And of course, that could certainly include financial partnership.

VOA: The White House has listed cancer prevention and mitigating climate change as two of the areas where it would like AI companies to focus their efforts. Can you talk about U.S. competition with China on AI? Is that an administration priority?

Fick: We would expect the Chinese approach to artificial intelligence to look very much like the PRC’s [People’s Republic of China] approach to other areas of technology. Generally, top down. Generally, not focused on open expression, not focused on open access to information. And these AI systems, by their very definition, require that sort of openness and that sort of access to large data sets and information.

VOA: Some industry experts have warned that China is spending three times as much as the U.S. to become the world’s AI leader. Can you talk about China’s ambition on AI? Is the U.S. keeping up with the competition?

Fick: We certainly track things like R&D [research and development] and investment dollars, but I would make the point that those are inputs, not outputs. And I don’t think it’s any accident that the leading companies in AI research are American companies. Our innovation ecosystem, supported by foundational research and immigration policy that attracts the world’s best talent, tax and regulatory policies that encourage business creation and growth.

VOA: Any final thoughts about the risks? Can AI models be used to develop bioweapons? Can AI wipe out humanity?

Fick: My experience has been that risk and return really are correlated in life and in financial markets. There’s huge reward and promise in these technologies and of course, at the same time, they bring with them significant risks. We need to maintain our North Star, our focus on that innovative edge and all of the promise that these technologies bring in. At the same time, it’s our responsibility as governments and as responsible companies leading in this space to put the guardrails in place to mitigate those risks.

In the Shadow of Giants, Mongolian Girls Learn to Code

A class that teaches teenage girls how to code – or write instructions for computers – is drawing lots of interest in Mongolia. For VOA, Graham Kanwit and Elizabeth Lee have the story about a program that prepares them for jobs in technology. Camera: Sam Paakkonen