Biden Signs Sweeping Executive Order on AI Oversight

President Joe Biden on Monday signed a wide-ranging executive order on artificial intelligence, covering topics as varied as national security, consumer privacy, civil rights and commercial competition. The administration heralded the order as taking “vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI.”

The order directs departments and agencies across the U.S. federal government to develop policies aimed at placing guardrails around an industry that is developing ever more powerful systems at a pace that has many concerned it will outstrip effective regulation.

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said during a signing ceremony at the White House. The order, he added, is “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.” 

‘Red teaming’ for security 

A marquee requirement of the new order is that companies developing advanced artificial intelligence systems must conduct rigorous testing of their products to ensure that bad actors cannot use them for nefarious purposes. The process, known as red teaming, will assess, among other things, “AI systems threats to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.”

The National Institute of Standards and Technology will set the standards for such testing, and AI companies will be required to report their results to the federal government prior to releasing new products to the public. The Departments of Homeland Security and Energy will be closely involved in the assessment of threats to vital infrastructure. 

To counter the threat that AI will enable the creation and dissemination of false and misleading information, including computer-generated images and “deep fake” videos, the Commerce Department will develop guidance for the creation of standards that will allow computer-generated content to be easily identified, a process commonly called “watermarking.” 

The order directs the White House chief of staff and the National Security Council to develop a set of guidelines for the responsible and ethical use of AI systems by the U.S. national defense and intelligence agencies.

Privacy and civil rights

The order proposes a number of steps meant to increase Americans’ privacy protections when AI systems access information about them. That includes supporting the development of privacy-protecting technologies such as cryptography and creating rules for how government agencies handle data containing citizens’ personally identifiable information.

However, the order also notes that the United States still lacks legislation codifying the kinds of data privacy protections Americans are entitled to. The U.S. lags far behind Europe in developing such rules, and the order calls on Congress to “pass bipartisan data privacy legislation to protect all Americans, especially kids.”

The order recognizes that the algorithms that enable AI to process information and answer users’ questions can themselves be biased in ways that disadvantage members of minority groups and others often subject to discrimination. It therefore calls for the creation of rules and best practices addressing the use of AI in a variety of areas, including the criminal justice system, health care system and housing market.

The order covers several other areas, promising action on protecting Americans whose jobs may be affected by the adoption of AI technology; maintaining the United States’ market leadership in the creation of AI systems; and ensuring that the federal government develops and follows rules for its own adoption of AI systems.

Open questions

Experts say that despite the broad sweep of the executive order, much remains unclear about how the Biden administration will approach the regulation of AI in practice.

Benjamin Boudreaux, a policy researcher at the RAND Corporation, told VOA that while it is clear the administration is “trying to really wrap their arms around the full suite of AI challenges and risks,” much work remains to be done.

“The devil is in the details here about what funding and resources go to executive branch agencies to actually enact many of these recommendations, and just what models a lot of the norms and recommendations suggested here will apply to,” Boudreaux said.

International leadership

Looking internationally, the order says the administration will work to lead “an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety.”

James A. Lewis, senior vice president and director of the strategic technologies program at the Center for Strategic and International Studies, told VOA that the executive order does a good job of laying out where the U.S. stands on many important issues related to the global development of AI.

“It hits all the right issues,” Lewis said. “It’s not groundbreaking in a lot of places, but it puts down the marker for companies and other countries as to how the U.S. is going to approach AI.”

That’s important, Lewis said, because the U.S. is likely to play a leading role in the development of the international rules and norms that grow up around the technology.

“Like it or not — and certainly some countries don’t like it — we are the leaders in AI,” Lewis said. “There’s a benefit to being the place where the technology is made when it comes to making the rules, and the U.S. can take advantage of that.”

‘Fighting the last war’ 

Not all experts are certain the Biden administration’s focus is on the real threats that AI might present to consumers and citizens. 

Louis Rosenberg, a 30-year veteran of AI development and the CEO of American tech firm Unanimous AI, told VOA he is concerned the administration may be “fighting the last war.”

“I think it’s great that they’re making a bold statement that this is a very important issue,” Rosenberg said. “It definitely shows that the administration is taking it seriously and that they want to protect the public from AI.”

However, he said, when it comes to consumer protection, the administration seems focused on how AI might be used to advance existing threats to consumers, like fake images and videos and convincing misinformation — things that already exist today.

“When it comes to regulating technology, the government has a track record of underestimating what’s new about the technology,” he said.

Rosenberg said he is more concerned about the new ways in which AI might be used to influence people. For example, he noted that AI systems are being built to interact with people conversationally.

“Very soon, we’re not going to be typing in requests into Google. We’re going to be talking to an interactive AI bot,” Rosenberg said. “AI systems are going to be really effective at persuading, manipulating, potentially even coercing people conversationally on behalf of whomever is directing that AI. This is the new and different threat that did not exist before AI.” 

Musk Pulls Plug on Paying for X Factchecks

Elon Musk has said that posts on X that receive corrections will no longer be eligible for payment, as the social network comes under mounting criticism for becoming a conduit for misinformation.

In the year since taking over Twitter, now rebranded as X, Musk has gutted content moderation, restored accounts of previously banned extremists, and allowed users to purchase account verification, helping them profit from viral — but often inaccurate — posts.

Musk has instead promoted Community Notes, in which X users police the platform, as a tool to combat misinformation. 

But on Sunday, Musk tweeted a change to how Community Notes affects creator payouts.

“Making a slight change to creator monetization: Any posts that are corrected by @CommunityNotes become ineligible for revenue share,” he wrote.  

“The idea is to maximize the incentive for accuracy over sensationalism,” he added. 

X pays content creators whose work generates lots of views a share of advertising revenue. 
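
Taken together, Musk’s posts describe a simple eligibility rule: a post earns a share of ad revenue only if it draws enough views and carries no correction. The sketch below expresses that logic in Python; the field names and view threshold are hypothetical, since X has not published its actual payout code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    views: int                # how many times the post has been seen
    has_community_note: bool  # True if the post was corrected by Community Notes

# Hypothetical cutoff; X has not disclosed its actual eligibility threshold.
MIN_VIEWS_FOR_PAYOUT = 5_000_000

def eligible_for_revenue_share(post: Post) -> bool:
    """A post shares in ad revenue only if it is widely viewed
    and has not been corrected by Community Notes."""
    return post.views >= MIN_VIEWS_FOR_PAYOUT and not post.has_community_note
```

In this reading, a Community Notes correction acts as a switch that turns off monetization for that one post, which is how the change is meant to reward accuracy over sensationalism.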

Musk also warned against weaponizing corrections to cut off other users’ payouts.

“Worth ‘noting’ that any attempts to weaponize @CommunityNotes to demonetize people will be immediately obvious, because all code and data is open source,” he posted.

Musk’s announcement follows the unveiling Friday of a $16-a-month subscription plan in which users who pay more get the biggest boost for their replies. Earlier this year, X unveiled an $8-a-month plan for a “verified” account.

A recent study by the disinformation monitoring group NewsGuard found that verified, paying subscribers were the big spreaders of misinformation about the Israel-Hamas war. 

“Nearly three-fourths of the most viral posts on X advancing misinformation about the Israel-Hamas War are being pushed by ‘verified’ X accounts,” the group said.

It said the 250 most-engaged posts that promoted one of 10 prominent false or unsubstantiated narratives related to the war were viewed more than 100 million times globally in just one week. 

NewsGuard said 186 of those posts were made from verified accounts and only 79 had been fact-checked by Community Notes. 
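
Those counts square with the “nearly three-fourths” figure quoted above; a quick check in Python, using only the numbers NewsGuard reported:

```python
# Figures as reported by NewsGuard, covering one week
viral_posts = 250     # most-engaged posts pushing false or unsubstantiated war narratives
from_verified = 186   # of those, posted by paying "verified" accounts
fact_checked = 79     # of those, corrected by Community Notes

print(f"verified share:     {from_verified / viral_posts:.1%}")  # 74.4% -- nearly three-fourths
print(f"fact-checked share: {fact_checked / viral_posts:.1%}")   # 31.6% -- fewer than a third
```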

Verified accounts “turned out to be a boon for bad actors sharing misinformation,” said NewsGuard.

“For less than the cost of a movie ticket, they have gained the added credibility associated with the once-prestigious blue checkmark and enabling them to reach a larger audience on the platform,” it said.

While the organization said it found misinformation spreading widely on other social media platforms such as Facebook, Instagram, TikTok and Telegram, it added that it found false narratives about the Israel-Hamas war tend to go viral on X before spreading elsewhere. 

Musk Says Starlink to Provide Connectivity in Gaza

Elon Musk said on Saturday that SpaceX’s Starlink will support communication links in Gaza with “internationally recognized aid organizations.”

A telephone and internet blackout isolated people in the Gaza Strip from the world and from each other on Saturday, with calls to loved ones, ambulances or colleagues elsewhere all but impossible as Israel widened its air and ground assault.

International humanitarian organizations said the blackout, which began on Friday evening, was worsening an already desperate situation by impeding lifesaving operations and preventing them from contacting their staff on the ground.

Following Russia’s February 2022 invasion of Ukraine, Starlink satellites were reported to have been critical to maintaining internet connectivity in some areas despite attempted Russian jamming.

Since then, Musk has said he declined to extend coverage over Russian-occupied Crimea, refusing to allow his satellites to be used for Ukrainian attacks on Russian forces there.

UN Announces Advisory Body on Artificial Intelligence 

The United Nations has begun an effort to help the world manage the risks and benefits of artificial intelligence.

U.N. Secretary-General Antonio Guterres on Thursday launched a 39-member advisory body of tech company executives, government officials and academics from countries spanning six continents.

The panel aims to issue preliminary recommendations on AI governance by the end of the year and finalize them before the U.N. Summit of the Future next September.

“The transformative potential of AI for good is difficult even to grasp,” Guterres said. He pointed to possible uses including predicting crises, improving public health and education, and tackling the climate crisis.

However, he cautioned, “it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion and threaten democracy itself.”

Widespread concern about the risks associated with AI has grown since tech company OpenAI launched ChatGPT last year. The tool’s ease of use has raised concern that it could take over writing tasks that previously only humans could perform.

With many calling for regulation of AI, researchers and lawmakers have stressed the need for global cooperation on the matter.

The U.N.’s new body on AI will hold its first meeting Friday.

Some information for this report came from Reuters. 

Zara Owner Inditex to Buy Recycled Polyester From US Start-Up

Zara-owner Inditex, the world’s biggest clothing retailer, has agreed to buy recycled polyester from a U.S. start-up as it aims for 25% of its fibers to come from “next-generation” materials by 2030.

As fast-fashion retailers face pressure to reduce waste and use recycled fabrics, Inditex is spending more than $74 million to secure supply from Los Angeles-based Ambercycle of its recycled polyester made from textile waste.

Polyester, a product of the petroleum industry, is widely used in sportswear as it is quick-drying and durable.

Under the offtake deal, Inditex will buy 70% of Ambercycle’s production of recycled polyester, which is sold under the brand cycora, over three years, Inditex CEO Oscar Garcia Maceiras said at a business event in Zaragoza, Spain.

Garcia Maceiras said Inditex is also working with other companies and start-ups in its innovation hub, a unit looking for ways to curb the environmental impact of its products.

“The sustainable transformation of Inditex … is not possible without the collaboration of the different stakeholders,” he said.

The Inditex investment will help Ambercycle fund its first commercial-scale textile recycling factory. Production of cycora at the plant is expected to begin around 2025, and the material will be used in Inditex products over the following three years.

Zara Athleticz, a sub-brand of sportswear for men, launched a collection on Wednesday of “technical pieces” containing up to 50% cycora. Inditex said the collection would be available from Zara.com.

Some apparel brands seeking to reduce their reliance on virgin polyester have switched to recycled polyester derived from plastic bottles, but that practice has come under criticism as it has created more demand for used plastic bottles, pushing up prices.

Textile-to-textile polyester recycling is in its infancy, though, and will take time to reach the scale required by global fashion brands.

“We want to drive innovation to scale-up new solutions, processes and materials to achieve textile-to-textile recycling,” Inditex’s chief sustainability officer Javier Losada said in a statement.

The Ambercycle deal marks the latest in a series of investments made by Inditex into textile recycling start-ups.

Last year it signed a $104 million, three-year deal to buy 30% of the recycled fiber produced by Finland’s Infinited Fiber Co., and also invested in Circ, another U.S. firm focused on textile-to-textile recycling.

In Spain, Inditex has joined forces with rivals, including H&M and Mango, in an association to manage clothing waste, as the industry prepares for EU legislation requiring member states to separately collect textile waste beginning January 2025.

33 US States Sue Meta, Accusing Platform of Harming Children

Thirty-three U.S. states are suing Meta Platforms Inc., accusing it of damaging young people’s mental health through the addictive nature of its social media platforms.

The suit, filed Tuesday in federal court in Oakland, California, alleges Meta knowingly built addictive features into its social media platforms, Instagram and Facebook, and collected data on children younger than 13 without their parents’ consent, violating federal law.

“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint says.

The filing comes after Meta’s own research in 2021 showed the company was aware of the damage Instagram can do to teenagers, especially girls.

In Meta’s 2021 study, 13.5% of teen girls said Instagram makes thoughts of suicide worse and 17% of teen girls said it makes eating disorders worse.

Meta responded to the lawsuit by saying it has “already introduced over 30 tools to support teens and their families.”

“We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path,” the company added.

Meta is one of many social media companies facing criticism and legal action, with lawsuits also filed against ByteDance’s TikTok and Google’s YouTube.

Measures to protect children on social media, such as a federal law that bans kids under 13 from setting up accounts, exist but are easily circumvented.

The dangers of social media for children have been highlighted by U.S. Surgeon General Dr. Vivek Murthy, who said the effects of social media require “immediate action to protect kids now.”

In addition to the 33 states suing, nine more state attorneys general are expected to join and file similar lawsuits.

Some information in this report came from The Associated Press and Reuters. 

Governments, Firms Should Spend More on AI Safety, Top Researchers Say

Artificial intelligence companies and governments should allocate at least one third of their AI research and development funding to ensuring the safety and ethical use of the systems, top AI researchers said in a paper on Tuesday. 

The paper, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. 

“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” according to the paper written by three Turing Award winners, a Nobel laureate, and more than a dozen top AI academics. 

Currently there are no broad-based regulations focused on AI safety, and the European Union’s first set of AI legislation has yet to become law, with lawmakers still at odds over several issues.

“Recent state of the art AI models are too powerful, and too significant, to let them develop without democratic oversight,” said Yoshua Bengio, one of three researchers known as the godfathers of AI.

“It [investments in AI safety] needs to happen fast, because AI is progressing much faster than the precautions taken,” he said.

Authors include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.

Since the launch of OpenAI’s generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, with some calling for a six-month pause in developing powerful AI systems.

Some companies have pushed back, saying they would face high compliance costs and disproportionate liability risks.

“Companies will complain that it’s too hard to satisfy regulations — that ‘regulation stifles innovation’ — that’s ridiculous,” said British computer scientist Stuart Russell.

“There are more regulations on sandwich shops than there are on AI companies.” 

Taiwan Computer Chip Workers Adjust to Life in American Desert

Phoenix, Arizona, in America’s Southwest, is the site of a Taiwanese semiconductor chip-making facility. One part of President Joe Biden’s cornerstone agenda is to rely less on overseas manufacturing and boost domestic production of the chips that run everything from phones to cars. Many Taiwanese workers who moved to the U.S. to work at the facility face the challenges of living in a new land. VOA’s Stella Hsu, Enming Liu and Elizabeth Lee have the story.

India Conducts Space Flight Test Ahead of 2025 Crewed Mission

India on Saturday successfully carried out the first of a series of key test flights for its planned mission to take astronauts into space by 2025, after overcoming a technical glitch, the space agency said.

The test involved launching a module to outer space and bringing it back to Earth to check the spacecraft’s crew escape system, Indian Space Research Organization chief S. Somanath said. The module was being recovered after its touchdown in the Bay of Bengal.

The launch was delayed by 45 minutes in the morning because of weather conditions. The attempt was then deferred by more than an hour because of an engine issue, with the ground computer putting the module’s liftoff on hold, Somanath said.

The glitch, caused by a monitoring anomaly in the system, was rectified, and the test was carried out successfully 75 minutes later from the Sriharikota satellite launching station in southern India, Somanath told reporters.

The successful test paves the way for other uncrewed missions, including sending a robot into space next year.

In September, India successfully launched its first space mission to study the sun, less than two weeks after a successful uncrewed landing near the south pole region of the moon.

After a failed attempt in 2019, India in September joined the United States, the Soviet Union and China as only the fourth country to land a craft on the moon.

The successful mission showcased India’s rising standing as a technology and space powerhouse and dovetails with Prime Minister Narendra Modi’s desire to project an image of an ascendant country asserting its place among the global elite.

Signaling a roadmap for India’s future space ambitions, Modi earlier this week announced that India’s space agency will set up an Indian-crafted space station by 2035 and land an Indian astronaut on the moon by 2040.

Active since the 1960s, India has launched satellites for itself and other countries, and successfully put one in orbit around Mars in 2014. India is planning its first mission to the International Space Station next year in collaboration with the United States.

US Sounds Alarm on Russian Election Efforts

Russia’s efforts to discredit and undermine democratic elections appear to be expanding rapidly, according to newly declassified intelligence, spurred on by what the Kremlin sees as its success in disrupting the past two U.S. presidential elections.

The U.S. intelligence findings, shared in a diplomatic cable sent to more than 100 countries and obtained by VOA, are based on a review of Russian information operations between January 2020 and December 2022 that found Moscow “engaged in a concerted effort … to undermine public confidence in at least 11 elections across nine democracies.”

The review also found what the cable describes as “a less pronounced level of Russian messaging and social media activity” that targeted another 17 democracies.

“These figures represent a snapshot of Russian activities,” the cable warned. “Russia likely has sought to undermine confidence in democratic elections in additional cases that have gone undetected.

“Our information indicates that senior Russian government officials, including in the Kremlin, see value in this type of influence operation and perceive it to be effective,” the cable added.

VOA reached out to the Russian Embassy for comment on the cable warnings but so far has not received a response.

Russia has routinely denied allegations it interferes in foreign elections. However, last November, Wagner chief Yevgeny Prigozhin appeared to admit culpability for interfering in U.S. elections in a social media post.

“Gentlemen, we interfered, we interfere and we will interfere,” Prigozhin said.

U.S. officials assess that, in addition to Russia’s efforts to sow doubt surrounding the 2016 and 2020 elections in the United States, Russian campaigns have targeted countries in Asia, Europe, the Middle East and South America.

The goal, they say, is specifically to erode public confidence in election results and to paint the newly elected governments as illegitimate — using internet trolls, social media influencers, proxy websites linked to Russian intelligence and even Russian state-run media channels like RT and Sputnik.

And even though Russia’s resources have been strained by its invasion of Ukraine, Moscow’s election interference efforts do not seem to be slowing down.

It is “a fairly low cost, low barrier to entry operation,” said a senior U.S. intelligence official, who spoke on the condition of anonymity in order to discuss the intelligence assessment.

“In many cases they’re amplifying existing domestic narratives that kind of question the integrity of elections,” the official said. “This is a very efficient use of resources. All they’re doing is magnifying claims that it’s unfair or it didn’t work or it’s chaotic.”

U.S. officials said they have started giving more detailed, confidential briefings to select countries that are being targeted by Russia. Some of the countries, they said, have likewise promised to share intelligence gathered from their own investigations.

Additionally, the cable makes a series of recommendations to counter the threat from the Russian disinformation campaigns, including for countries to expose, sanction and even expel any Russian officials involved in spreading misinformation or disinformation.

The cable also encourages democratic countries to engage in information campaigns to share factual information about their elections and to turn to independent election observers to assess and affirm the integrity of any elections.