TikTok Fined $15.9M by UK Watchdog for Misuse of Kids’ Data

Britain’s privacy watchdog hit TikTok with a multimillion-dollar penalty Tuesday for misusing children’s data and violating other protections for users’ personal information.

The Information Commissioner’s Office said it issued a fine of 12.7 million pounds (about $15.9 million) to the short-video sharing app, which is wildly popular with young people.

It’s the latest example of tighter scrutiny that TikTok and its parent, Chinese technology company ByteDance, are facing in the West, where governments are increasingly concerned about risks that the app poses to data privacy and cybersecurity.

The British watchdog, whose investigation covered breaches of data protection law between May 2018 and July 2020, said TikTok allowed as many as 1.4 million children in the U.K. under 13 to use the app in 2020, despite the platform’s own rules prohibiting children that young from setting up accounts.

TikTok didn’t adequately identify and remove children under 13 from the platform, the watchdog said. And even though it knew younger children were using the app, TikTok failed to get consent from their parents to process their data, as required by Britain’s data protection laws, the agency said.

“There are laws in place to make sure our children are as safe in the digital world as they are in the physical world. TikTok did not abide by those laws,” Information Commissioner John Edwards said in a press release.

TikTok collected and used personal data of children who were inappropriately given access to the app, he said.

“That means that their data may have been used to track them and profile them, potentially delivering harmful, inappropriate content at their very next scroll,” Edwards said.

The company said it disagreed with the watchdog’s decision.

“We invest heavily to help keep under 13s off the platform and our 40,000-strong safety team works around the clock to help keep the platform safe for our community,” TikTok said in a statement. “We will continue to review the decision and are considering next steps.”

TikTok says it has improved its sign-up system since the breaches: it no longer allows users to simply declare they are old enough, and it looks for other signs that an account is used by someone under 13.

The penalty also covered other breaches of U.K. data privacy law.

The watchdog said TikTok failed to properly inform people about how their data is collected, used and shared in an easily understandable way. Without this information, it’s unlikely that young users would be able “to make informed choices” about whether and how to use TikTok, it said.

TikTok also failed to ensure personal data of British users was processed lawfully, fairly and transparently, the regulator said.

TikTok initially faced a 27-million-pound fine, which was reduced after the company persuaded regulators to drop other charges.

U.S. regulators in 2019 fined TikTok, previously known as Musical.ly, $5.7 million in a case that involved similar allegations of unlawful collection of children’s personal information.

Also Tuesday, Australia became the latest country to ban TikTok from its government devices, with authorities from the European Union to the United States concerned that the app could share data with the Chinese government or push pro-Beijing narratives.

U.S. lawmakers are also considering forcing a sale or even banning it outright as tensions with China grow.

Australia Bans TikTok on Government Devices

Australia said Tuesday it will ban TikTok on government devices, joining a growing list of Western nations cracking down on the Chinese-owned app due to national security fears.   

Attorney-General Mark Dreyfus said the decision followed advice from the country’s intelligence agencies and would begin “as soon as practicable”.   

Australia is the last member of the secretive Five Eyes security alliance to pursue a government TikTok ban, joining its allies the United States, Britain, Canada and New Zealand.   

France, the Netherlands and the European Commission have made similar moves.   

Dreyfus said the government would approve some exemptions on a “case-by-case basis” with “appropriate security mitigations in place”.   

Cybersecurity experts have warned that the app — which boasts more than one billion global users — could be used to hoover up data that is then shared with the Chinese government.   

Surveys have estimated that as many as seven million Australians use the app — or about a quarter of the population.   

In a security notice outlining the ban, the Attorney-General’s Department said TikTok posed “significant security and privacy risks” stemming from the “extensive collection of user data”.   

China condemned the ban, saying it had “lodged stern representations” with Canberra over the move and urging Australia to “provide Chinese companies with a fair, transparent and non-discriminatory business environment”.   

“China has always maintained that the issue of data security should not be used as a tool to generalize the concept of national security, abuse state power and unreasonably suppress companies from other countries,” foreign ministry spokesperson Mao Ning said.   

‘No-brainer’    

But Fergus Ryan, an analyst with the Australian Strategic Policy Institute, said stripping TikTok from government devices was a “no-brainer”.   

“It’s been clear for years that TikTok user data is accessible in China,” Ryan told AFP.    

“Banning the use of the app on government phones is a prudent decision given this fact.”   

The security concerns are underpinned by a 2017 Chinese law that requires local firms to hand over personal data to the state if it is relevant to national security.   

Beijing has denied these reforms pose a threat to ordinary users.   

China “has never and will not require companies or individuals to collect or provide data located in a foreign country, in a way that violates local law”, the foreign ministry’s Mao said in March.   

‘Rooted in xenophobia’   

TikTok has said such bans are “rooted in xenophobia”, while insisting that it is not owned or operated by the Chinese government.    

The company’s Australian spokesman Lee Hunter said it would “never” give data to the Chinese government.   

“No one is working harder to make sure this would never be a possibility,” he told Australia’s Channel Seven.   

But the firm acknowledged in November that some employees in China could access European user data, and in December it said employees had used the data to spy on journalists.   

The app is typically used to share short, lighthearted videos and has exploded in popularity in recent years.   

Many government departments were initially eager to use TikTok as a way to connect with a younger demographic that is harder to reach through traditional media channels.   

New Zealand banned TikTok from government devices in March, saying the risks were “not acceptable in the current New Zealand Parliamentary environment”.    

Earlier this year, the Australian government announced it would be stripping Chinese-made CCTV cameras from politicians’ offices due to security concerns. 

Virgin Orbit Files for Bankruptcy, Seeks Buyer

Virgin Orbit, the satellite launch company founded by Richard Branson, has filed for Chapter 11 bankruptcy and will sell the business, the firm said in a statement Tuesday.   

The California-based company said last week it was laying off 85% of its employees — around 675 people — to reduce expenses due to its inability to secure sufficient funding.   

Virgin Orbit suffered a major setback earlier this year when its attempt at the first-ever orbital rocket launch from British soil ended in failure.

The company had organized the mission with the UK Space Agency and Cornwall Spaceport to launch nine satellites into space.   

On Tuesday, the firm said “it commenced a voluntary proceeding under Chapter 11 of the U.S. Bankruptcy Code… in order to effectuate a sale of the business” and intended to use the process “to maximize value for its business and assets.”   

Last month, Virgin Orbit suspended operations for several days while it held funding negotiations and explored strategic opportunities.   

But at an all-hands meeting on Thursday, CEO Dan Hart told employees that operations would cease “for the foreseeable future,” US media reported at the time.   

“While we have taken great efforts to address our financial position and secure additional financing, we ultimately must do what is best for the business,” Hart said in the company statement on Tuesday.   

“We believe that the cutting-edge launch technology that this team has created will have wide appeal to buyers as we continue in the process to sell the Company.”   

Founded by Branson in 2017, the firm developed “a new and innovative method of launching satellites into orbit,” while “successfully launching 33 satellites into their precise orbit,” Hart added.   

Virgin Orbit’s shares on the New York Stock Exchange were down 3% at 19 cents on Monday evening. 

Germany Could Block ChatGPT if Needed, Says Data Protection Chief

Germany could follow in Italy’s footsteps by blocking ChatGPT over data security concerns, the German commissioner for data protection told the Handelsblatt newspaper in comments published on Monday.

Microsoft-backed OpenAI took ChatGPT offline in Italy on Friday after the national data agency banned the chatbot temporarily and launched an investigation into a suspected breach of privacy rules by the artificial intelligence application. 

“In principle, such action is also possible in Germany,” Ulrich Kelber said, adding that this would fall under state jurisdiction. He did not, however, outline any such plans. 

Kelber said that Germany has requested further information from Italy on its ban. Privacy watchdogs in France and Ireland said they had also contacted the Italian data regulator to discuss its findings. 

“We are following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU data protection authorities in relation to this matter,” said a spokesperson for Ireland’s Data Protection Commissioner (DPC). 

OpenAI had said on Friday that it actively works to reduce personal data in training its AI systems. 

While the Irish DPC is the lead EU regulator for many global technology giants under the bloc’s “one stop shop” data regime, it is not the lead regulator for OpenAI, which has no offices in the EU.

The privacy regulator in Sweden said it has no plans to ban ChatGPT nor is it in contact with the Italian watchdog.

The Italian investigation into OpenAI was launched after a cybersecurity breach last week led to people being shown excerpts of other users’ ChatGPT conversations and their financial information. 

It accused OpenAI of failing to check the age of ChatGPT’s users, who are supposed to be aged 13 or above. Italy is the first Western country to take action against a chatbot powered by artificial intelligence. 

For a nine-hour period, the exposed data included first and last names, billing addresses, credit card types, credit card expiration dates and the last four digits of credit card numbers, according to an email sent by OpenAI to one affected customer and seen by the Financial Times.

NASA to Reveal Crew for 2024 Flight Around the Moon

NASA will reveal on Monday the names of the astronauts — three Americans and a Canadian — who will fly around the Moon next year, a prelude to returning humans to the lunar surface for the first time in half a century.

The mission, Artemis II, is scheduled to take place in November 2024 with the four-person crew circling the Moon but not landing on it.   

As part of the Artemis program, NASA aims to send astronauts to the Moon in 2025 — more than five decades after the historic Apollo missions ended in 1972.   

Besides putting the first woman and first person of color on the Moon, the US space agency hopes to establish a lasting human presence on the lunar surface and eventually launch a voyage to Mars.   

NASA administrator Bill Nelson said this week at a “What’s Next Summit” hosted by Axios that he expected a crewed mission to Mars by the year 2040.  

The four members of the Artemis II crew will be announced at an event at 10:00 am (1500 GMT) at the Johnson Space Center in Houston.   

The 10-day Artemis II mission will test NASA’s powerful Space Launch System rocket as well as the life-support systems aboard the Orion spacecraft.   

The first Artemis mission wrapped up in December with an uncrewed Orion capsule returning safely to Earth after a 25-day journey around the Moon.   

During the trip around Earth’s orbiting satellite and back, Orion logged well over 1.6 million kilometers and went farther from Earth than any previous habitable spacecraft.   

Nelson was also asked at the Axios summit whether NASA could stick to its timetable of landing astronauts on the south pole of the Moon in late 2025.   

“Space is hard,” Nelson said. “You have to wait until you know that it’s as safe as possible, because you’re living right on the edge.   

“So I’m not so concerned with the time,” he said. “We’re not going to launch until it’s right.”   

Only 12 people — all of them white men — have set foot on the Moon. 

Twitter Pulls ‘Verified’ Check Mark From Main New York Times Account

Twitter has removed the verification check mark on the main account of The New York Times, one of CEO Elon Musk’s most despised news organizations.

The removal comes as many of Twitter’s high-profile users are bracing for the loss of the blue check marks that helped verify their identity and distinguish them from impostors on the social media platform.

Musk, who owns Twitter, set a deadline of Saturday for verified users to buy a premium Twitter subscription or lose the checks on their profiles. The Times said in a story Thursday that it would not pay Twitter for verification of its institutional accounts.

Early Sunday, Musk tweeted that the Times’ check mark would be removed. Later he posted disparaging remarks about the newspaper, which has aggressively reported on Twitter and on flaws with partially automated driving systems at Tesla, the electric car company, which he also runs.

Other Times accounts such as its business news and opinion pages still had either blue or gold check marks Sunday, as did multiple reporters for the news organization.

“We aren’t planning to pay the monthly fee for check mark status for our institutional Twitter accounts,” the Times said in a statement Sunday. “We also will not reimburse reporters for Twitter Blue for personal accounts, except in rare instances where this status would be essential for reporting purposes.”

The Associated Press, which has said it also will not pay for the check marks, still had them on its accounts at midday Sunday.

Twitter did not answer emailed questions Sunday about the removal of The New York Times check mark.

The cost of keeping the check marks ranges from $8 a month for individual web users to a starting price of $1,000 monthly to verify an organization, plus $50 monthly for each affiliate or employee account. Unlike the blue checks doled out free to public figures and others under the platform’s pre-Musk administration, Twitter does not verify that paying accounts are who they say they are.
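For readers tallying what that tiered pricing means in practice, here is a minimal sketch; the helper name and the example affiliate count are illustrative only, and only the dollar figures come from the article:

```python
def org_verification_cost(affiliates: int) -> int:
    """Monthly cost in USD for an organization's verification under the
    pricing described: a $1,000 base fee plus $50 for each affiliate or
    employee account. (Function name and example are hypothetical.)"""
    return 1000 + 50 * affiliates

# A newsroom verifying 20 reporter accounts would pay $2,000 a month.
print(org_verification_cost(20))  # → 2000
```

At that rate, a large organization with hundreds of affiliated accounts would face a five-figure monthly bill, which helps explain why institutions like the Times declined to pay.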

While the cost of Twitter Blue subscriptions might seem like nothing for Twitter’s most famous commentators, celebrity users from basketball star LeBron James to Star Trek’s William Shatner have balked at joining. Seinfeld actor Jason Alexander pledged to leave the platform if Musk takes his blue check away.

The White House is also passing on enrolling in premium accounts, according to a memo sent to staff. While Twitter has granted a free gray mark to President Joe Biden and members of his Cabinet, lower-level staff won’t get Twitter Blue benefits unless they pay for the subscription themselves.

“If you see impersonations that you believe violate Twitter’s stated impersonation policies, alert Twitter using Twitter’s public impersonation portal,” said the staff memo from White House official Rob Flaherty.

Alexander, the actor, said there are bigger issues in the world but without the blue mark, “anyone can allege to be me” so if he loses it, he’s gone.

“Anyone appearing with it=an imposter. I tell you this while I’m still official,” he tweeted.

After buying Twitter for $44 billion in October, Musk has been trying to boost the struggling platform’s revenue by pushing more people to pay for a premium subscription. But his move also reflects his assertion that the blue verification marks have become an undeserved or “corrupt” status symbol for elite personalities, news reporters and others granted verification for free by Twitter’s previous leadership.

Along with shielding celebrities from impersonators, one of Twitter’s main reasons for introducing the blue check mark about 14 years ago was to verify politicians, activists, people who suddenly find themselves in the news, and little-known journalists at small publications around the globe, as an extra tool to curb misinformation coming from impersonator accounts. Most “legacy blue checks” are not household names and weren’t meant to be.

One of Musk’s first product moves after taking over Twitter was to launch a service granting blue checks to anyone willing to pay $8 a month. But it was quickly inundated by impostor accounts, including those impersonating Nintendo, pharmaceutical company Eli Lilly and Musk’s businesses Tesla and SpaceX, so Twitter had to temporarily suspend the service days after its launch.

The relaunched service costs $8 a month for web users and $11 a month for users of its iPhone or Android apps. Subscribers are supposed to see fewer ads, be able to post longer videos and have their tweets featured more prominently. 

Dutch Refinery to Feed Airlines’ Thirst for Clean Fuel 

Scaffolding and green pipes envelop a refinery in the port of Rotterdam where Finnish giant Neste is preparing to significantly boost production of sustainable aviation fuel. 

Switching to non-fossil aviation fuels that produce less net greenhouse gas emissions is key to plans to decarbonize air transport, a significant contributor to global warming. 

Neste, the largest global producer of sustainable aviation fuel (SAF), uses cooking oil and animal fat at this Dutch refinery. 

SAF can be made from a range of sources, including municipal waste, leftovers from the agricultural and forestry industries, crops and plants, and even hydrogen. 

These technologies are still developing, and the product is more expensive. 

But these fuels will help airlines reduce CO2 emissions by up to 80%, according to the International Air Transport Association. 

Global output of SAF was 250,000 tons last year, less than 0.1% of the more than 300 million tons of aviation fuel used during that period. 

“It’s a drop in the ocean but a significant drop,” said Matti Lehmus, CEO of Neste. 

“We’ll be growing drastically our production from 100,000 tons to 1.5 million tons next year,” he added. 

There clearly is demand. 

The European Union plans to impose the use of a minimum amount of sustainable aviation fuel by airlines, rising from 2% in 2025 to 6% in 2030 and at least 63% in 2050. 

Neste has another SAF site in Singapore, which will start production in April. 

“With the production facilities of Neste in Rotterdam and Singapore, we can meet the mandate for [the] EU in 2025,” said Jonathan Wood, the company’s vice president for renewable aviation. 

Vincent Etchebehere, director for sustainable development at Air France, said that “between now and 2030, there will be more demand than supply of SAF.” 

Need to mature technologies 

Air France-KLM has reached a deal with Neste for a supply of 1 million tons of sustainable aviation fuel between 2023 and 2030. 

It has also lined up 10-year agreements with U.S. firm DG Fuels for 600,000 tons and with TotalEnergies for 800,000 tons. 

At the Rotterdam site, two giant storage tanks of 15,000 cubic meters are yet to be painted. 

They’re near a quay where the fuel will be transported by boat to feed Amsterdam’s Schiphol airport and airports in Paris. 

The Franco-Dutch group has already taken steps to cut its carbon footprint, using 15% of the global SAF output last year — or 0.6% of its fuel needs. 

Neste’s Lehmus said there was a great need to “mature the technologies” for making sustainable aviation fuel from diverse sources such as algae, lignocellulose and synthetic fuels. 

Air France CEO Anne Rigail said the price of sustainable aviation fuel was as important as its production. 

Sustainable fuel costs 3,500 euros ($3,800) a ton globally but only $2,000 in the United States thanks to government subsidies. In France, it costs 5,000 euros a ton. 

“We need backing and we really think the EU can do more,” said Rigail. 

Italy Temporarily Blocks ChatGPT Over Privacy Concerns

Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government’s privacy watchdog said Friday.

The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company from processing Italian users’ data.

U.S.-based OpenAI, which developed the chatbot, said late Friday night it has disabled ChatGPT for Italian users at the government’s request. The company said it believes its practices comply with European privacy laws and hopes to make ChatGPT available again soon.

While some public schools and universities around the world have blocked ChatGPT from their local networks over student plagiarism concerns, Italy’s action is “the first nation-scale restriction of a mainstream AI platform by a democracy,” said Alp Toker, director of the advocacy group NetBlocks, which monitors internet access worldwide.

The restriction affects the web version of ChatGPT, popularly used as a writing assistant, but is unlikely to affect software applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.

The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

The Italian watchdog said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users’ data or face a fine of up to either 20 million euros (nearly $22 million) or 4% of annual global revenue.

The agency’s statement cites the EU’s General Data Protection Regulation and pointed to a recent data breach involving ChatGPT “users’ conversations” and information about subscriber payments.

OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat history.

“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company had said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”

Italy’s privacy watchdog, known as the Garante, also questioned whether OpenAI had legal justification for its “massive collection and processing of personal data” used to train the platform’s algorithms. And it said ChatGPT can sometimes generate — and store — false information about individuals.

Finally, it noted there’s no system to verify users’ ages, exposing children to responses “absolutely inappropriate to their age and awareness.”

OpenAI said in response that it works “to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals.”

“We also believe that AI regulation is necessary — so we look forward to working closely with the Garante and educating them on how our systems are built and used,” the company said.

The Italian watchdog’s move comes as concerns grow about the artificial intelligence boom. A group of scientists and tech industry leaders published a letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall to give time for society to weigh the risks.

The president of Italy’s privacy watchdog agency told Italian state TV Friday evening he was one of those who signed the appeal. Pasquale Stanzione said he did so because “it’s not clear what aims are being pursued” ultimately by those developing AI.

Namibia Looks East for Green Hydrogen Partnerships

Zhang Jinhua, administrator of China’s National Energy Administration, on Friday paid a visit to Namibian President Hage Geingob. The visit was aimed at establishing cooperation in green hydrogen production.

Namibia is positioning itself as a future green hydrogen producer to attract investment from the globe’s leading and fastest-growing producer of renewable energy — China.

James Mnyupe, Namibia’s green hydrogen commissioner and economic adviser to the president, told VOA that although Namibia has not signed a partnership with China on green hydrogen, officials are looking to the Asian country as a critical partner. But it isn’t talking to China alone.

“We have an MOU [Memorandum of Understanding] with Europe; we are also discussing possibilities of collaboration with the United States,” he said. “If you look at any of these green hydrogen projects as I mentioned, simply they will use components from all over the world.”

He said that in the face of rising energy demands around the globe and increased tensions between the East and West, Namibia will not be drawn into picking sides. He was referring to the conflict in Ukraine and its effect on international relations.

“So today Europe’s biggest trading partner is China, China’s biggest markets are the U.S. and Europe so if Namibia trades with Europe, China or the U.S. for that matter, that is not a reason for involving Namibia in any political or conflict-related discussions between those countries,” he said.

Presidential spokesperson Alfredo Hengari said the visit by U.S. Ambassador to Namibia Randy Berry on Tuesday was aimed at cementing relations in major areas of interest, among them green hydrogen and oil exploration.

“Namibia is making tremendous advances in the areas of green energy but also in hydrocarbons,” he said. “American companies are drilling off the coast of the Republic of Namibia and so it was a courtesy visit just to emphasize increasing cooperation in these areas.”

Speaking through an interpreter, China’s administrator for its National Energy Administration on Friday said China is ready to partner with Namibia in all areas of green hydrogen.

Hydrogen is an alternative fuel that industrialized nations hope can help them reach their ambitious goal of net-zero carbon emission by 2050.

Mnyupe said Namibia is looking to learn from China about how best to draw on its experience in producing renewable energy and renewable energy components. Friday’s visit is an indication of China’s interest in partnering with Namibia and participating in the country’s green hydrogen value chain.

Call for Pause in AI Development May Fall on Deaf Ears

A group of influential figures from Silicon Valley and the larger tech community released an open letter this week calling for a pause in the development of powerful artificial intelligence programs, arguing that they present unpredictable dangers to society.

The organization that created the open letter, the Future of Life Institute, said the recent rollout of increasingly powerful AI tools by companies like OpenAI, IBM and Google demonstrates that the industry is “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The signatories of the letter, including Elon Musk, founder of Tesla and SpaceX, and Steve Wozniak, co-founder of Apple, called for a six-month halt to all development work on large language model AI projects.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

The letter does not call for a halt to all AI-related research but focuses on extremely large systems that assimilate vast amounts of data and use it to solve complex tasks and answer difficult questions.

However, experts told VOA that commercial competition between different AI labs, and a broader concern about allowing Western companies to fall behind China in the race to develop more advanced applications of the technology, make any significant pause in development unlikely.

Chatbots offer window

While artificial intelligence is present in day-to-day life in myriad ways, including algorithms that curate social media feeds, systems used to make credit decisions in many financial institutions and facial recognition increasingly used in security systems, large language models have increasingly taken center stage in the discussion of AI.

In its simplest form, a large language model is an AI system that analyzes large amounts of textual data and uses a set of parameters to predict the next word in a sentence. However, models of sufficient complexity, operating with billions of parameters, are able to model human language, sometimes with uncanny accuracy.
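The next-word-prediction idea can be sketched with a toy bigram model; this is an invented illustration (the training snippet and function names are made up), and it is nothing like the billions-of-parameters neural networks the article describes, which learn parameters rather than count frequencies:

```python
from collections import Counter, defaultdict

# Toy "language model": predict the next word as the one that most often
# followed the current word in a small training text. Real LLMs replace
# these raw counts with billions of learned parameters over long contexts.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` in the training text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

print(predict_next("the"))  # → "cat", the most common successor of "the"
```

Scaling this idea from word-pair counts to parameters tuned over vast text corpora is, loosely, what separates this toy from the models discussed below.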

In November of last year, OpenAI released a program called ChatGPT (Chat Generative Pre-trained Transformer) to the general public. Based on the underlying GPT-3.5 model, the service lets users enter text through a web browser and returns responses generated almost instantly.

ChatGPT was an immediate sensation, as users used it to generate everything from complex computer code to poetry. Though it was quickly apparent that the program frequently returned false or misleading information, the potential for it to disrupt any number of sectors of life, from academia to customer service systems to national defense, was clear.

Microsoft has since integrated ChatGPT into its search engine, Bing. More recently, Google has rolled out its own AI-supported search capability, known as Bard.

GPT-4 as benchmark

In the letter calling for a pause in development, the signatories use GPT-4 as a benchmark. GPT-4 is an AI tool developed by OpenAI that is more powerful than the version that powers the original ChatGPT. It is currently in limited release. The moratorium called for in the letter is on systems “more powerful than GPT-4.”

One problem, though, is that it is not precisely clear what “more powerful” means in this context.

“There are other models that, in computational terms, are much less large or powerful, but which have very powerful potential impacts,” Bill Drexel, an associate fellow with the AI Safety and Stability program at the Center for a New American Security (CNAS), told VOA. “So there are much smaller models that can potentially help develop dangerous pathogens or help with chemical engineering — really consequential models that are much smaller.”

Limited capabilities

Edward Geist, a policy researcher at the RAND Corporation and author of the forthcoming book Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare, told VOA that it is important to understand both what programs like GPT-4 are capable of and what they are not.

For example, he said, OpenAI has made it clear in technical data provided to potential commercial customers that once the model is trained on a set of data, there is no clear way to teach it new facts or otherwise update it without completely retraining the system. Additionally, it does not appear able to perform tasks that require “evolving” memory, such as reading a book.

“There are, sort of, glimmerings of an artificial general intelligence,” he said. “But then you read the report, and it seems like it’s missing some features of what I would consider even a basic form of general intelligence.”

Geist said that he believes many of those warning about the dangers of AI are “absolutely earnest” in their concerns, but he is not persuaded that those dangers are as severe as they believe.

“The gap between that super-intelligent self-improving AI that has been postulated in those conjectures, and what GPT-4 and its ilk can actually do seems to be very broad, based on my reading of OpenAI’s technical report about it.”

Commercial and security concerns

James A. Lewis, senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS), told VOA he is skeptical that the open letter will have much effect, for reasons as varied as commercial competition and concerns about national security.

Asked what he thinks the chances are of the industry agreeing to a pause in research, he said, “Zero.”

“You’re asking Microsoft to not compete with Google?” Lewis said. “They’ve been trying for decades to beat Google on search engines, and they’re on the verge of being able to do it. And you’re saying, let’s take a pause? Yeah, unlikely.”

Competition with China

More broadly, Lewis said, improvements in AI will be central to progress in technology related to national defense.

“The Chinese aren’t going to stop because Elon Musk is getting nervous,” Lewis said. “That will affect [Department of Defense] thinking. If we’re the only ones who put the brakes on, we lose the race.”

Drexel, of CNAS, agreed that China is unlikely to feel bound by any such moratorium.

“Chinese companies and the Chinese government would be unlikely to agree to this pause,” he said. “If they agreed, they’d be unlikely to follow through. And in any case, it’d be very difficult to verify whether or not they were following through.”

He added, “The reason why they’d be particularly unlikely to agree is because — particularly on models like GPT-4 — they feel and recognize that they are behind. [Chinese President] Xi Jinping has said numerous times that AI is a really important priority for them. And so catching up and surpassing [Western companies] is a high priority.”

Li Ang Zhang, an information scientist with the RAND Corporation, told VOA he believes a blanket moratorium is a mistake.

“Instead of taking a fear-based approach, I’d like to see a better thought-out strategy towards AI governance,” he said in an email exchange. “I don’t see a broad pause in AI research as a tenable strategy, but I think this is a good way to open a conversation on what AI safety and ethics should look like.”

He also said that a moratorium might disadvantage the U.S. in future research.

“By many metrics, the U.S. is a world leader in AI,” he said. “For AI safety standards to be established and succeed, two things must be true. The U.S. must maintain its world lead in both AI and safety protocols. What happens after six months? Research continues, but now the U.S. is six months behind.”

Is Banning TikTok Constitutional?

U.S. lawmakers and officials are ratcheting up threats to ban TikTok, saying the Chinese-owned video-sharing app used by millions of Americans poses a threat to privacy and U.S. national security.

But free speech advocates and legal experts say an outright ban would likely face a constitutional hurdle: the First Amendment right to free speech.

“If passed by Congress and enacted into law, a nationwide ban on TikTok would have serious ramifications for free expression in the digital sphere, infringing on Americans’ First Amendment rights and setting a potent and worrying precedent in a time of increased censorship of internet users around the world,” a coalition of free speech advocacy organizations wrote in a letter to Congress last week, urging a solution short of an outright ban.

The plea came as U.S. lawmakers grilled TikTok CEO Shou Chew over concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.

TikTok, which bills itself as a “platform for free expression” and a “modern-day version of the town square,” says it has more than 150 million users in the United States.

But the platform is owned by ByteDance, a Beijing-based company, and U.S. officials have raised concerns that the Chinese government could utilize the app’s user data to influence and spy on Americans.

Aaron Terr, director of public advocacy at the Foundation for Individual Rights and Expression, said while there are legitimate privacy and national security concerns about TikTok, the First Amendment implications of a ban so far have received little public attention.

“If nothing else, it’s important for that to be a significant part of the conversation,” Terr said in an interview. “It’s important for people to consider alongside national security concerns.”

To be sure, the First Amendment is not absolute. There are types of speech that are not protected by the amendment. Among them: obscenity, defamation and incitement.

But the Supreme Court has also made it clear there are limits on how far the government can go to regulate speech, even when it involves a foreign adversary or when the government argues that national security is at stake.

In a landmark 1965 case, the Supreme Court invalidated a law that prevented Americans from receiving foreign mail that the government deemed was “communist political propaganda.”

In another consequential case involving a defamation lawsuit brought against The New York Times, the court ruled that even an “erroneous statement” enjoyed some constitutional protection.

“And that’s relevant because here, one of the reasons that Congress is concerned about TikTok is the potential that the Chinese government could use it to spread disinformation,” said Caitlin Vogus, deputy director of the Free Expression Project at the Center for Democracy and Technology, one of the signatories of the letter to Congress.

Proponents of a ban deny a prohibition would run afoul of the First Amendment.

“This is not a First Amendment issue, because we’re not trying to ban booty videos,” Republican Senator Marco Rubio, a longtime critic of TikTok, said on the Senate floor on Monday.

ByteDance, TikTok’s parent company, is beholden to the Chinese Communist Party, Rubio said.

“So, if the Communist Party goes to ByteDance and says, ‘We want you to use that algorithm to push these videos on Americans to convince them of whatever,’ they have to do it. They don’t have an option,” Rubio said.

The Biden administration has reportedly demanded that ByteDance divest itself from TikTok or face a possible ban.

TikTok denies the allegations and says it has taken measures to protect the privacy and security of its U.S. user data.

Rubio is sponsoring one of several competing bills that envision different pathways to a TikTok ban.

A House bill called the Deterring America’s Technological Adversaries Act would empower the president to shut down TikTok.

A Senate bill called the RESTRICT Act would authorize the Commerce Department to investigate information and communications technologies to determine whether they pose national security risks.

This would not be the first time the U.S. government has attempted to ban TikTok.

In 2020, then-President Donald Trump issued an executive order declaring a national emergency that would have effectively shut down the app.

In response, TikTok sued the Trump administration, arguing that the executive order violated its due process and First Amendment rights.

While courts did not weigh in on the question of free speech, they blocked the ban on the grounds that Trump’s order exceeded statutory authority by targeting “informational materials” and “personal communication.”

Allowing the ban would “have the effect of shutting down, within the United States, a platform for expressive activity used by about 700 million individuals globally,” including more than 100 million Americans, federal judge Wendy Beetlestone wrote in response to a lawsuit brought by a group of TikTok users.

A fresh attempt to ban TikTok, whether through legislation or executive action, would likely trigger a First Amendment challenge from the platform, as well as its content creators and users, according to free speech advocates. And the case could end up before the Supreme Court.

In determining the constitutionality of a ban, courts would likely apply a judicial review test known as an “intermediate scrutiny standard,” Vogus said.

“It would still mean that any ban would have to be justified by an important governmental interest and that a ban would have to be narrowly tailored to address that interest,” Vogus said. “And I think that those are two significant barriers to a TikTok ban.”

But others say a “content-neutral” ban would pass Supreme Court muster.

“To pass content-neutral laws, the government would need to show that the restraint on speech, if any, is narrowly tailored to serve a ‘significant government interest’ and leaves open reasonable alternative avenues for expression,” Joel Thayer, president of the Digital Progress Institute, wrote in a recent column in The Hill online newspaper.

In Congress, even as the push to ban TikTok gathers steam, there are lone voices of dissent.

One is progressive Democrat Alexandria Ocasio-Cortez. Another is Democratic Representative Jamaal Bowman, himself a prolific TikTok user.

Opposition to TikTok, Bowman said, stems from “hysteria” whipped up by a “Red scare around China.”

“Our First Amendment gives us the right to speak freely and to communicate freely, and TikTok as a platform has created a community and a space for free speech for 150 million Americans and counting,” Bowman, who has more than 180,000 TikTok followers, said recently at a rally held by TikTok content creators.

Instead of singling out TikTok, Bowman said, Congress should enact new legislation to ensure social media users are safe and their data secure.