Study Details Differences Between Deep Interiors of Mars and Earth

Mars is Earth’s next-door neighbor in the solar system — two rocky worlds with differences down to their very core, literally.

A new study based on seismic data obtained by NASA’s robotic InSight lander is offering a fuller understanding of the Martian deep interior and fresh details about dissimilarities between Earth, the third planet from the sun, and Mars, the fourth.

The research, informed by the first detection of seismic waves traveling through the core of a planet other than Earth, showed that the innermost layer of Mars is slightly smaller and denser than previously known. It also provided the best assessment to date of the composition of the Martian core.

Both planets possess cores composed primarily of liquid iron. But about 20% of the Martian core is made up of elements lighter than iron — mostly sulfur, but also oxygen, carbon and a dash of hydrogen, the study found. That is about double the percentage of such elements in Earth’s core, meaning the Martian core is considerably less dense than our planet’s core — though denser than a 2021 estimate based on a different type of data from the now-retired InSight.

“The deepest regions of Earth and Mars have different compositions — likely a product both of the conditions and processes at work when the planets formed and of the material they are made from,” said seismologist Jessica Irving of the University of Bristol in England, lead author of the study published this week in the journal Proceedings of the National Academy of Sciences.

The study also refined the size of the Martian core, finding it has a diameter of about 2,212-2,249 miles (3,560-3,620 km), approximately 12-31 miles (20-50 km) smaller than previously estimated. The Martian core makes up a slightly smaller percentage of the planet’s diameter than does Earth’s core.

The nature of the core can play a role in governing whether a rocky planet or moon could harbor life. The core, for instance, is instrumental in generating Earth’s magnetic field that shields the planet from harmful solar and cosmic particle radiation.

“On planets and moons like Earth, there are silicate — rocky — outer layers and an iron-dominated metallic core. One of the most important ways a core can impact habitability is to generate a planetary dynamo,” Irving said.

“Earth’s core does this but Mars’ core does not — though it used to, billions of years ago. Mars’ core likely no longer has the energetic, turbulent motion which is needed to generate such a field,” Irving added.

Mars has a diameter of about 4,212 miles (6,779 km), compared with Earth’s diameter of about 7,918 miles (12,742 km), and Earth is almost seven times as large in total volume.
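For readers who want to check the volume comparison: a sphere’s volume scales with the cube of its diameter, so the ratio follows directly from the diameters quoted above. A minimal sketch (using only the article’s figures, no new data):

```python
import math

def sphere_volume(diameter_km: float) -> float:
    """Volume of a sphere, computed from its diameter: (pi/6) * d^3."""
    return (math.pi / 6) * diameter_km ** 3

mars_volume = sphere_volume(6779)    # Mars diameter in km, per the article
earth_volume = sphere_volume(12742)  # Earth diameter in km, per the article

ratio = earth_volume / mars_volume
print(f"Earth/Mars volume ratio: {ratio:.2f}")  # roughly 6.6, i.e. "almost seven times"
```

The constant (pi/6) factor cancels in the ratio, so the result is simply (12742 / 6779) cubed.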

The behavior of seismic waves traveling through a planet can reveal details about its interior structure. The new findings stem from two seismic events that occurred on the opposite side of Mars from where the InSight lander — and specifically its seismometer device — sat on the planet’s surface.

The first was an August 2021 marsquake centered close to Valles Marineris, the solar system’s largest canyon. The second was a September 2021 meteorite impact that left a crater about 425 feet (130 meters) wide.

The U.S. space agency formally retired InSight in December after four years of operations, with an accumulation of dust on its solar panels preventing its batteries from recharging.

“The InSight mission has been fantastically successful in helping us decipher the structure and conditions of the planet’s interior,” University of Maryland geophysicist and study co-author Vedran Lekic said. “Deploying a network of seismometers on Mars would lead to even more discoveries and help us understand the planet as a system, which we cannot do by just looking at its surface from orbit.”

Moon Shot: Japan Firm to Attempt Historic Lunar Landing

A Japanese space start-up will attempt Tuesday to become the first private company to put a lander on the Moon.   

If all goes to plan, ispace’s Hakuto-R Mission 1 lander will start its descent towards the lunar surface at around 15:40 GMT.   

It will begin braking from orbit some 100 kilometers above the Moon, then adjust its speed and altitude to make a “soft landing” around an hour later.

Success is far from guaranteed. In April 2019, Israeli organization SpaceIL watched its lander crash into the Moon’s surface.

ispace has announced three alternative landing sites and could shift the lunar descent date to April 26, May 1 or May 3, depending on conditions.   

“What we have accomplished so far is already a great achievement, and we are already applying lessons learned from this flight to our future missions,” ispace founder and CEO Takeshi Hakamada said earlier this month.   

“The stage is set. I am looking forward to witnessing this historic day, marking the beginning of a new era of commercial lunar missions.”   

The lander, standing just over two meters tall and weighing 340 kilograms, has been in lunar orbit since last month.   

It was launched from Earth in December on one of SpaceX’s Falcon 9 rockets after several delays.   

So far only the United States, Russia and China have managed to put a robot on the lunar surface, all through government-sponsored programs.   

However, Japan and the United States announced last year that they would cooperate on a plan to put a Japanese astronaut on the Moon by the end of the decade.   

The lander is carrying several lunar rovers, including a miniature Japanese model just eight centimeters across that was jointly developed by Japan’s space agency and toy manufacturer Takara Tomy.

The mission is also being closely watched by the United Arab Emirates, whose Rashid rover is aboard the lander as part of the nation’s expanding space program.   

The Gulf country is a newcomer to the space race but sent a probe into Mars’ orbit in 2021. If its rover successfully lands, it will be the Arab world’s first Moon mission.   

Hakuto means “white rabbit” in Japanese and refers to Japanese folklore in which a white rabbit lives on the Moon.

The project was one of five finalists in Google’s Lunar X Prize competition to land a rover on the Moon before a 2018 deadline, which passed without a winner.   

With just 200 employees, ispace has said it “aims to extend the sphere of human life into space and create a sustainable world by providing high-frequency, low-cost transportation services to the Moon.”   

Hakamada has touted the mission as laying “the groundwork for unleashing the Moon’s potential and transforming it into a robust and vibrant economic system.”   

The firm believes the Moon will support a population of 1,000 people by 2040, with 10,000 more visiting each year.   

It plans a second mission, tentatively scheduled for next year, involving both a lunar landing and the deployment of its own rover. 

SpaceX Wins Approval to Add Fifth U.S. Rocket Launch Site

The U.S. Space Force said on Monday that Elon Musk’s SpaceX was granted approval to lease a second rocket launch complex at a military base in California, setting the space company up for its fifth launch site in the United States. 

Under the lease, SpaceX will launch its workhorse Falcon rockets from Space Launch Complex-6 at Vandenberg Space Force Base, a military launch site north of Los Angeles where the space company already operates another launchpad. It has two other launchpads in Florida and its private Starbase site in South Texas.

A Monday night Space Force statement said a letter of support for the decision was signed on Friday by Space Launch Delta 30 commander Col. Rob Long. The statement did not mention a duration for SpaceX’s lease. 

The new launch site, vacated last year by the Boeing-Lockheed joint venture United Launch Alliance, gives SpaceX more room to handle an increasingly busy launch schedule for commercial, government and internal satellite launches. 

Vandenberg Space Force Base allows for launches in a southern trajectory over the Pacific Ocean, which is often used for weather-monitoring, military or spy satellites that commonly rely on polar Earth orbits. 

SpaceX’s grant of Space Launch Complex-6 comes as rocket companies prepare to compete for the Pentagon’s Phase 3 National Security Space Launch program, a watershed military launch procurement effort expected to begin in the next year or so. 

Twitter Changes Stoke Russian, Chinese Propaganda Surge

Twitter accounts operated by authoritarian governments in Russia, China and Iran are benefiting from recent changes at the social media company, researchers said Monday. The changes make it easier for those accounts to attract new followers and broadcast propaganda and disinformation to a larger audience.

The platform is no longer labeling state-controlled media and propaganda agencies, and will no longer prohibit their content from being automatically promoted or recommended to users. Together, the two changes, both made in recent weeks, have supercharged the Kremlin’s ability to use the U.S.-based platform to spread lies and misleading claims about its invasion of Ukraine, U.S. politics and other topics. 

Russian state media accounts are now earning 33% more views than they were just weeks ago, before the change was made, according to findings released Monday by Reset, a London-based non-profit that tracks authoritarian governments’ use of social media to spread propaganda. Reset’s findings were first reported by The Associated Press. 

The increase works out to more than 125,000 additional views per post. Those posts included ones suggesting that the CIA had something to do with the September 11, 2001, attacks on the U.S., that Ukraine’s leaders are embezzling foreign aid to their country, and that Russia’s invasion of Ukraine was justified because the U.S. was running clandestine biowarfare labs in the country.

State media agencies operated by Iran and China have seen similar increases in engagement since Twitter quietly made the changes. 

The about-face from the platform is the latest development since billionaire Elon Musk purchased Twitter last year. Since then, he has ushered in a confusing new verification system, laid off much of the company’s staff (including those dedicated to fighting misinformation), allowed back neo-Nazis and others formerly suspended from the site, and ended the site’s policy prohibiting dangerous COVID-19 misinformation. Hate speech and disinformation have thrived.

Before the most recent change, Twitter affixed labels reading “Russia state-affiliated media” to let users know the origin of the content. It also throttled back the Kremlin’s online engagement by making the accounts ineligible for the automatic promotion and recommendation that Twitter regularly applies to ordinary accounts to help them reach bigger audiences.

The labels quietly disappeared after National Public Radio and other outlets protested Musk’s plans to label their outlets as state-affiliated media, too. NPR then announced it would no longer use Twitter, saying the label was misleading, given NPR’s editorial independence, and would damage its credibility. 

Reset’s conclusions were confirmed by the Atlantic Council’s Digital Forensic Research Lab (DFRL), where researchers determined the changes were likely made by Twitter late last month. Many of the dozens of previously labeled accounts had been steadily losing followers since Twitter began using the labels. But after the change, many of those accounts saw big jumps in followers.

RT Arabic, one of Russia’s most popular propaganda accounts on Twitter, had fallen to fewer than 5,230,000 followers on January 1 but rebounded after the change was implemented, the DFRL found. It now has more than 5,240,000 followers.

Before the change, users interested in seeking out Kremlin propaganda had to search specifically for the account or its content. Now, it can be recommended or promoted like any other content. 

“Twitter users no longer must actively seek out state-sponsored content in order to see it on the platform; it can just be served to them,” the DFRL concluded. 

Twitter did not respond to questions about the change or the reasons behind it. Musk has made past comments suggesting he sees little difference between state-funded propaganda agencies operated by authoritarian strongmen and independent news outlets in the West.

“All news sources are partially propaganda,” he tweeted last year, “some more than others.”

Writer, Adviser, Poet, Bot: How ChatGPT Could Transform Politics

The AI bot ChatGPT has passed exams, written poetry and been deployed in newsrooms, and now politicians are seeking it out — but experts are warning against rapid uptake of a tool also famous for fabricating “facts.”

The chatbot, released last November by U.S. firm OpenAI, has quickly moved center stage in politics — particularly as a way of scoring points.

Japanese Prime Minister Fumio Kishida recently took a direct hit from the bot when he answered some innocuous questions about health care reform from an opposition MP.

Unbeknownst to the PM, his adversary had generated the questions with ChatGPT. The opposition MP also used the bot to generate answers that he claimed were “more sincere” than Kishida’s.

The PM hit back that his own answers had been “more specific.”

French trade union boss Sophie Binet was on-trend when she drily assessed a recent speech by President Emmanuel Macron as one that “could have been done by ChatGPT.”

But the bot has also been used to write speeches and even help draft laws. 

“It’s useful to think of ChatGPT and generative AI in general as a cliche generator,” David Karpf of George Washington University in the U.S. said during a recent online panel. 

“Most of what we do in politics is also cliche generation.”

‘Limited added value’

Nowhere has the enthusiasm for grandstanding with ChatGPT been keener than in the United States.

Last month, Congresswoman Nancy Mace gave a five-minute speech at a congressional committee hearing enumerating potential uses and harms of AI — before delivering the punchline that “every single word” had been generated by ChatGPT.

Massachusetts state Senator Barry Finegold had already gone further, though, announcing in January that his team had used ChatGPT to draft a bill for the state Senate.

The bot reportedly introduced original ideas to the bill, which is intended to rein in the power of chatbots and AI.

Anne Meuwese from Leiden University in the Netherlands wrote in a column for Dutch law journal RegelMaat last week that she had carried out a similar experiment with ChatGPT and also found that the bot introduced original ideas.

But while ChatGPT was to some extent capable of generating legal texts, she wrote that lawmakers should not fall over each other to use the tool.

“Not only is much still unclear about important issues such as environmental impact, bias and the ethics at OpenAI … the added value also seems limited for now,” she wrote.

Agitprop bots

The added value might be more obvious lower down the political food chain, though, where staffers on the campaign trail face a treadmill of repetitive tasks.

Karpf suggested AI could be useful for generating emails asking for donations — necessary messages that were not intended to be masterpieces.

This raises the issue of whether bots can be trained to represent a political point of view.

ChatGPT has already provoked a storm of controversy over its apparent liberal bias — the bot initially refused to write a poem praising Donald Trump but happily churned out couplets for his successor, U.S. President Joe Biden.

Billionaire magnate Elon Musk has spied an opportunity. Despite warning that AI systems could destroy civilization, he recently promised to develop TruthGPT, an AI text tool stripped of the perceived liberal bias.

Perhaps he needn’t have bothered. New Zealand researcher David Rozado already ran an experiment retooling ChatGPT as RightWingGPT — a bot on board with family values, liberal economics and other right-wing rallying cries.

“Critically, the computational cost of trialling, training and testing the system was less than $300,” he wrote on his Substack blog in February.

Not to be outdone, the left has its own “Marxist AI.”

The bot was created by the founder of Belgian satirical website Nordpresse, who goes by the pseudonym Vincent Flibustier.

He told AFP his bot just sends queries to ChatGPT with the command to answer as if it were an “angry trade unionist.”

The malleability of chatbots is central to their appeal but it goes hand-in-hand with the tendency to generate untruths, making AI text generators potentially hazardous allies for the political class.

“You don’t want to become famous as the political consultant or the political campaign that blew it because you decided that you could have a generative AI do [something] for you,” said Karpf. 

US Invests in Alternative Solar Tech, More Solar for Renters

The Biden administration announced more than $80 million in funding Thursday in a push to produce more solar panels in the U.S., make solar energy available to more people, and pursue superior alternatives to the ubiquitous sparkly panels made with silicon.

The initiative, spearheaded by the U.S. Department of Energy (DOE), centers on community solar: an umbrella term for arrangements in which renters and people who don’t control their rooftops can still get their electricity from solar power. Two weeks ago, Vice President Kamala Harris announced what the administration said was the largest community solar effort ever in the United States.

Now it is set to spend $52 million on 19 solar projects across a dozen states, including $10 million from the infrastructure law, as well as $30 million on technologies that will help integrate solar electricity into the grid.

The DOE also selected 25 teams to participate in a $10 million competition designed to fast-track the efforts of solar developers working on community solar projects.

The Inflation Reduction Act already offers incentives to build large solar generation projects, such as renewable energy tax credits. But Ali Zaidi, White House national climate adviser, said the new money focuses on meeting the nation’s climate goals in a way that benefits more communities.

“It’s lifting up our workers and our communities. And that’s, I think, what really excites us about this work,” Zaidi said. “It’s a chance not just to tackle the climate crisis, but to bring economic opportunity to every zip code of America.”

The investments will help people save on their electricity bills and make the electricity grid more reliable, secure, and resilient in the face of a changing climate, said Becca Jones-Albertus, director of the energy department’s Solar Energy Technologies Office.

Jones-Albertus said she’s particularly excited about the support for community solar projects, since half of Americans don’t live in a situation where they can buy their own solar panels and put them on the roof.

Michael Jung, executive director of the ICF Climate Center, agreed. “Community solar can help address equity concerns, as most current rooftop solar panels benefit owners of single-family homes,” he said.

In typical community solar projects, households can invest in or subscribe to part of a larger solar array offsite. “What we’re doing here is trying to unlock the community solar market,” Jones-Albertus said.

The U.S. has 5.3 gigawatts of installed community solar capacity currently, according to the latest estimates. The goal is that by 2025, five million households will have access to it — about three times as many as today — saving $1 billion on their electricity bills, according to Jones-Albertus.

The new funding also highlights investment in a next generation of solar technologies, intended to wring more electricity out of the same amount of solar panels. Currently only about 20% of the sun’s energy is converted to electricity in crystalline silicon solar cells, which is what most solar panels are made of. There has long been hope for higher efficiency, and today’s announcement puts some money towards developing two alternatives: perovskite and cadmium telluride (CdTe) solar cells. Zaidi said this will allow the U.S. to be “the innovation engine that tackles the climate crisis.”

Joshua Rhodes, a scientist at the University of Texas at Austin, said the investment in perovskites is good news. They can be produced more cheaply than silicon and are far more tolerant of defects, he said. They can also be built into textured and curved surfaces, which opens up more applications for their use than traditional rigid panels. Most silicon is produced in China and Russia, Rhodes pointed out.

Cadmium telluride solar can be made quickly and at a low cost, but further research is needed to improve how efficiently the material converts sunlight to electricity.

Cadmium is also toxic and people shouldn’t be exposed to it. Jones-Albertus said that in cadmium telluride solar technology, the compound is encapsulated in glass and additional protective layers.

The new funds will also help recycle solar panels and reuse rare earth elements and materials. “One of the most important ways we can make sure CdTe remains in a safe compound form is ensuring that all solar panels made in the U.S. can be reused or recycled at the end of their life cycle,” Jones-Albertus explained.

Recycling solar panels also reduces the need for mining, which damages landscapes and uses a lot of energy, in part to operate the heavy machinery. Eight of the projects in Thursday’s announcement focus on improving solar panel recycling, for a total of about $10 million.

Clean energy is a fit for every state in the country, the administration said. One solar project in Shungnak, Alaska, eliminated the need to generate electricity by burning diesel fuel, a method sometimes used in remote communities that is unhealthy for people and contributes to climate change.

“Alaska is not a place that folks often think of when they think about solar, but this energy can be an economic and affordable resource in all parts of the country,” said Jones-Albertus.

Did the AI-Generated Drake Song Breach Copyright?

A viral AI-generated song imitating Drake and The Weeknd was pulled from streaming services this week, but did it breach copyright as claimed by record label Universal?

Created by someone called @ghostwriter, Heart On My Sleeve racked up millions of listens before Universal Music Group asked for its removal from Spotify, Apple Music and other platforms.

However, Andres Guadamuz, who teaches intellectual property law at Britain’s University of Sussex, is not convinced that the song breached copyright.

As similar cases look set to multiply — with an uncanny AI replication of Liam Gallagher from Oasis causing buzz — he spoke to AFP about some of the issues being raised.

Did the song breach copyright?

The underlying music on Heart On My Sleeve was new, only the sound of the voice was familiar, “and you can’t copyright the sound of someone’s voice,” Guadamuz said.

The furor around AI impersonators may lead to copyright being expanded to include voice, rather than just melody, lyrics and other created elements, “but that would be problematic,” Guadamuz added.

“What you’re protecting with copyright is the expression of an idea, and voice isn’t really that,” he said. 

He said Universal probably claimed copyright infringement because it is the simplest route to removing content, with established procedures in place with streaming platforms.

Were other rights breached?

An AI-generated impersonator may be breaching other laws.

If an artist has a distinctive voice or image, this is potentially protected under “publicity rights” in the United States or similar image rights in other countries.

Bette Midler won a case against Ford in 1988 for using an impersonator of her in an ad. Tom Waits won a similar case in 1993 against the potato chip company Frito-Lay.

The problem, said Guadamuz, is that enforcement of these rights is “very hit and miss” and taken much more seriously in some countries than others.

And streaming platforms currently lack straightforward mechanisms for removing content seen as breaching image rights.

What comes next?

The big upcoming legal fight is over how AI programs are trained.

It may be argued that inputting existing Drake and The Weeknd songs to train an AI program is a breach of copyright, but Guadamuz said this issue was far from settled.

“You need to copy the music in order to train the AI and so that unauthorized copying could potentially be copyright infringement,” he said.

“But defendants will say it’s fair use. They are using it to train a machine, teaching it to listen to music, and then removing the copies,” he said. “Ultimately, we will have to wait and see for the case law to be decided.”

But it is almost certainly too late to stem the flood.

“Bands are going to have to decide whether they want to pursue this in court, and copyright cases are expensive,” said Guadamuz.

“Some artists may lean into the technology and start using it themselves, especially if they start losing their voice.” 

US-China Competition in Tech Expands to AI Regulations

Competition between the U.S. and China in artificial intelligence has expanded into a race to design and implement comprehensive AI regulations.

The efforts to come up with rules to ensure AI’s trustworthiness, safety and transparency come at a time when governments around the world are exploring the impact of the technology on national security and education.

ChatGPT, a chatbot that mimics human conversation, has received massive attention since its debut in November. Its ability to give sophisticated answers to complex questions with a language fluency comparable to that of humans has caught the world by surprise. Yet its many flaws, including its ostensibly coherent responses laden with misleading information and apparent bias, have prompted tech leaders in the U.S. to sound the alarm.

“What happens when something vastly smarter than the smartest person comes along in silicon form? It’s very difficult to predict what will happen in that circumstance,” said Tesla Chief Executive Officer Elon Musk in an interview with Fox News. He warned that artificial intelligence could lead to “civilization destruction” without regulations in place.

Google CEO Sundar Pichai echoed that sentiment. “Over time there has to be regulation. There have to be consequences for creating deep fake videos which cause harm to society,” Pichai said in an interview with CBS’s “60 Minutes” program.

Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, told VOA Mandarin, “Business leaders understand that regulators will be watching this space closely, and they have an interest in shaping the approaches regulators will take.”

US grapples with regulations

AI regulation is still nascent in the U.S. Last year, the White House released voluntary guidance through a Blueprint for an AI Bill of Rights to help ensure users’ rights are protected as technology companies design and develop AI systems.

At a meeting of the President’s Council of Advisors on Science and Technology this month, President Joe Biden expressed concern about the potential dangers associated with AI and underscored that companies had a responsibility to ensure their products were safe before making them public.

On April 11, the National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, began to seek comment and public input with the aim of crafting a report on AI accountability.

The U.S. government is trying to find the right balance to regulate the industry without stifling innovation “in part because the U.S. having innovative leadership globally is a selling point for the United States’ hard and soft power,” said Johanna Costigan, a junior fellow at the Asia Society Policy Institute’s Center for China Analysis.

Brandt, with Brookings, said, “The challenge for liberal democracies is to ensure that AI is developed and deployed responsibly, while also supporting a vibrant innovation ecosystem that can attract talent and investment.”

Meanwhile, other Western countries have also started to work on regulating the emerging technology.

The U.K. government published its AI regulatory framework in March. Also last month, Italy temporarily blocked ChatGPT in the wake of a data breach, and the German commissioner for data protection said his country could follow suit.

The European Union stated it’s pushing for an AI strategy aimed at making Europe a world-class hub for AI that ensures AI is human-centric and trustworthy, and it hopes to lead the world in AI standards.

Cyber regulations in China

In contrast to the U.S., the Chinese government has already implemented regulations aimed at tech sectors related to AI. In the past few years, Beijing has introduced several major data protection laws to limit the power of tech companies and to protect consumers.

The Cybersecurity Law enacted in 2017 requires that data must be stored within China and operators must submit to government-conducted security checks. The Data Security Law enacted in 2021 sets a comprehensive legal framework for data-processing activities when doing business in China. The Personal Information Protection Law, established in the same year, gives Chinese consumers the right to access, correct and delete their personal data gathered by businesses. Costigan, with the Asia Society, said these laws have laid the groundwork for future tech regulations.

In March 2022, China began to implement a regulation that governs the way technology companies can use recommendation algorithms. The Cyberspace Administration of China (CAC) now supervises the process of using big data to analyze user preferences and companies’ ability to push information to users.

On April 11, the CAC unveiled a draft for managing generative artificial intelligence services similar to ChatGPT, in an effort to mitigate the dangers of the new technology.

Costigan said the goal of the proposed generative AI regulation could be seen in Article 4 of the draft, which states that content generated by future AI products must reflect the country’s “core socialist values” and not encourage subversion of state power.

“Maintaining social stability is a key consideration,” she said. “The new draft regulation does some good and is unambiguously in line with [President] Xi Jinping’s desire to ensure that individuals, companies or organizations cannot use emerging AI applications to challenge his rule.”

Michael Caster, the Asia digital program manager at Article 19, a London-based rights organization, told VOA, “The language, especially at Article 4, is clearly about maintaining the state’s power of censorship and surveillance.

“All global policymakers should be clearly aware that while China may be attempting to set standards on emerging technology, their approach to legislation and regulation has always been to preserve the power of the party.”

The future of cyber regulations

As strategies for cyber and AI regulations evolve, how they develop may largely depend on each country’s way of governance and reasons for creating standards. Analysts say there will also be intrinsic hurdles to reaching consensus.

“Ethical principles can be hard to implement consistently, since context matters and there are countless potential scenarios at play,” Brandt told VOA. “They can be hard to enforce, too. Who would take on that role? How? And of course, before you can implement or enforce a set of principles, you need broad agreement on what they are.”

Observers said the international community would face challenges as it creates standards aimed at making AI technology ethical and safe.

US Targeting China, Artificial Intelligence Threats 

U.S. homeland security officials are launching what they describe as two urgent initiatives to combat growing threats from China and expanding dangers from ever more capable, and potentially malicious, artificial intelligence.

Homeland Security Secretary Alejandro Mayorkas announced Friday that his department was starting a “90-day sprint” to confront more frequent and intense efforts by China to hurt the United States, while separately establishing an artificial intelligence task force.

“Beijing has the capability and the intent to undermine our interests at home and abroad and is leveraging every instrument of its national power to do so,” Mayorkas warned, addressing the threat from China during a speech at the Council on Foreign Relations in Washington.

The 90-day sprint will “assess how the threats posed by the PRC [People’s Republic of China] will evolve and how we can be best positioned to guard against future manifestations of this threat,” he said.

“One critical area we will assess, for example, involves the defense of our critical infrastructure against PRC or PRC-sponsored attacks designed to disrupt or degrade provision of national critical functions, sow discord and panic, and prevent mobilization of U.S. military capabilities,” Mayorkas added.

Other areas of focus for the sprint will include addressing ways to stop Chinese government exploitation of U.S. immigration and travel systems to spy on the U.S. government and private entities and to silence critics, and looking at ways to disrupt the global fentanyl supply chain.

AI dangers

Mayorkas also said the magnitude of the threat from artificial intelligence, appearing in a growing number of tools from major tech companies, was no less critical.

“We must address the many ways in which artificial intelligence will drastically alter the threat landscape and augment the arsenal of tools we possess to succeed in the face of these threats,” he said.

Mayorkas promised that the Department of Homeland Security “will lead in the responsible use of AI to secure the homeland and in defending against the malicious use of this transformational technology.”

The new task force is set to seek ways to use AI to protect U.S. supply chains and critical infrastructure, counter the flow of fentanyl, and help find and rescue victims of online child sexual exploitation.

The unveiling of the two initiatives came days after lawmakers grilled Mayorkas about what some described as a lackluster and derelict effort under his leadership to secure the U.S. border with Mexico.

“You have not secured our borders, Mr. Secretary, and I believe you’ve done so intentionally,” the chair of the House Homeland Security Committee, Republican Mark Green, told Mayorkas on Wednesday.

Another lawmaker, Republican Marjorie Taylor Greene, went as far as to accuse Mayorkas of lying, though her words were quickly removed from the record.

Mayorkas on Friday said it might be possible to use AI to help with border security, though how exactly it could be deployed for the task was not yet clear.

“We’re at a nascent stage of really deploying AI,” he said. “I think we’re now at the dawn of a new age.”

But Mayorkas cautioned that technologies like AI would do little to slow the number of migrants willing to embark on dangerous journeys to reach U.S. soil.

“Desperation is the greatest catalyst for the migration we are seeing,” he said.

FBI warning

The announcement of Homeland Security’s 90-day sprint to confront growing threats from Beijing followed a warning earlier this week from the FBI about the willingness of China to target dissidents and critics in the U.S., and the arrests of two New York City residents for their involvement in a secret Chinese police station.

China has denied any wrongdoing.

“The Chinese government strictly abides by international law, and fully respects the law enforcement sovereignty of other countries,” Liu Pengyu, the spokesman for the Chinese Embassy in Washington, told VOA in an email earlier this week, accusing the U.S. of seeking “to smear China’s image.”

Top U.S. officials have said they are opening two investigations daily into Chinese economic espionage in the U.S.

“The Chinese government has stolen more of Americans’ personal and corporate data than that of every other nation, big or small, combined,” FBI Director Christopher Wray told an audience late last year.

More recently, Wray warned of China’s advances in AI, saying he was “deeply concerned.”

Mayorkas voiced a similar sentiment, pointing to China’s use of investments and technology to establish footholds around the world.

“We are deeply concerned about PRC-owned and -operated infrastructure, elements of infrastructure, and what that control can mean, given that the operator and owner has adverse interests,” Mayorkas said Friday.

“Whether it’s investment in our ports, whether it is investment in partner nations, telecommunications channels and the like, it’s a myriad of threats,” he said.

Twitter Drops Government-Funded Media Labels

Twitter has removed labels describing global media organizations as government-funded or state-affiliated, a move that comes after the Elon Musk-owned platform started stripping blue verification checkmarks from accounts that don’t pay a monthly fee.

Among those no longer labeled was National Public Radio in the U.S., which announced last week that it would stop using Twitter after its main account was designated state-affiliated media, a term also used to identify media outlets controlled or heavily influenced by authoritarian governments, such as Russia and China.

Twitter later changed the label to “government-funded media,” but NPR — which relies on the government for a tiny fraction of its funding — said it was still misleading.

Canadian Broadcasting Corp. and Swedish public radio made similar decisions to quit tweeting. CBC’s government-funded label vanished Friday, along with the state-affiliated tags on media accounts including Sputnik and RT in Russia and Xinhua in China.

Many of Twitter’s high-profile users on Thursday lost the blue checks that helped verify their identity and distinguish them from impostors.

Twitter had about 300,000 verified users under the original blue-check system — many of them journalists, athletes and public figures. The checks used to mean the account was verified by Twitter to be who it says it is.

High-profile users who lost their blue checks Thursday included Beyoncé, Pope Francis, Oprah Winfrey and former President Donald Trump.

The costs of keeping the marks range from $8 a month for individual web users to a starting price of $1,000 monthly to verify an organization, plus $50 monthly for each affiliate or employee account. Unlike the blue checks doled out under the platform’s pre-Musk administration, Twitter does not verify the identity behind individual paying accounts.

Celebrity users, from basketball star LeBron James to author Stephen King and Star Trek’s William Shatner, have balked at joining — although on Thursday, all three had blue checks indicating that the account paid for verification.

King, for one, said he hadn’t paid.

“My Twitter account says I’ve subscribed to Twitter Blue. I haven’t. My Twitter account says I’ve given a phone number. I haven’t,” King tweeted Thursday. “Just so you know.”

In a reply to King’s tweet, Musk said “You’re welcome namaste” and in another tweet he said he’s “paying for a few personally.” He later tweeted he was just paying for King, Shatner and James.

Singer Dionne Warwick tweeted earlier in the week that the site’s verification system “is an absolute mess.”

“The way Twitter is going anyone could be me now,” Warwick said. She had earlier vowed not to pay for Twitter Blue, saying the monthly fee “could (and will) be going toward my extra hot lattes.”

On Thursday, Warwick lost her blue check (which is actually a white check mark in a blue background).

For users who still had a blue check Thursday, a popup message indicated that the account “is verified because they are subscribed to Twitter Blue and verified their phone number.” Verifying a phone number simply means that the person has a phone number and they verified that they have access to it — it does not confirm the person’s identity.

It wasn’t just celebrities and journalists who lost their blue checks Thursday. Many government agencies, nonprofits and public-service accounts around the world found themselves no longer verified, raising concerns that Twitter could lose its status as a platform for getting accurate, up-to-date information from authentic sources, including in emergencies.

While Twitter offers gold checks for “verified organizations” and gray checks for government organizations and their affiliates, it’s not clear how the platform doles these out.

The official Twitter account of the New York City government, which earlier had a blue check, tweeted on Thursday that “This is an authentic Twitter account representing the New York City Government This is the only account for @NYCGov run by New York City government” in an attempt to clear up confusion.

A newly created spoof account with 36 followers (also without a blue check), disagreed: “No, you’re not. THIS account is the only authentic Twitter account representing and run by the New York City Government.”

Soon, another spoof account — purporting to be Pope Francis — weighed in too: “By the authority vested in me, Pope Francis, I declare @NYC_GOVERNMENT the official New York City Government. Peace be with you.”

Fewer than 5% of legacy verified accounts appear to have paid to join Twitter Blue as of Thursday, according to an analysis by Travis Brown, a Berlin-based developer of software for tracking social media.

Musk’s move has riled up some high-profile users and pleased some right-wing figures and Musk fans who thought the marks were unfair. But it is not an obvious money-maker for the social media platform that has long relied on advertising for most of its revenue.

Digital intelligence platform Similarweb analyzed how many people signed up for Twitter Blue on their desktop computers and only detected 116,000 confirmed sign-ups last month, which at $8 or $11 per month does not represent a major revenue stream. The analysis did not count accounts bought via mobile apps.

After buying San Francisco-based Twitter for $44 billion in October, Musk has been trying to boost the struggling platform’s revenue by pushing more people to pay for a premium subscription. But his move also reflects his assertion that the blue verification marks have become an undeserved or “corrupt” status symbol for elite personalities, news reporters and others granted verification for free by Twitter’s previous leadership.

Twitter began tagging profiles with a blue check mark about 14 years ago, mainly to shield celebrities from impersonators and to give the platform an extra tool to curb misinformation coming from fake accounts. Most legacy blue checks, including the accounts of politicians, activists, people who suddenly find themselves in the news, and little-known journalists at small publications around the globe, belong to people who are not household names.

One of Musk’s first product moves after taking over Twitter was to launch a service granting blue checks to anyone willing to pay $8 a month. But it was quickly inundated by impostor accounts, including those impersonating Nintendo, pharmaceutical company Eli Lilly and Musk’s businesses Tesla and SpaceX, so Twitter had to temporarily suspend the service days after its launch.

The relaunched service costs $8 a month for web users and $11 a month for users of its iPhone or Android apps. Subscribers are supposed to see fewer ads, be able to post longer videos and have their tweets featured more prominently.

TikTok CEO Tries to Ease Critics’ Security Concerns

The CEO of TikTok tried to calm critics’ fears about the security of his company’s app during an appearance Thursday.

Shou Chew was asked at a TED2023 Possibility conference if he could guarantee Beijing would not use the TikTok app, owned by the Chinese tech company ByteDance, to interfere in future U.S. elections.

“I can say that we are building all the tools to prevent any of these actions from happening,” Chew said. “And I’m very confident that with an unprecedented amount of transparency that we’re giving on the platform, we can, how we can reduce this risk to as low as zero as possible.”

Chew made the comments in Vancouver at the TED organization’s annual convention, where artificial intelligence and safeguards were discussed.

U.S. lawmakers and officials are ratcheting up threats to ban TikTok, saying the Chinese-owned video-sharing app used by millions of Americans poses a threat to privacy and U.S. national security.

U.S. lawmakers have grilled Chew over concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.

U.S. House Speaker Kevin McCarthy tweeted in March, “It’s very concerning that the CEO of TikTok can’t be honest and admit what we already know to be true — China has access to TikTok user data.”

U.S. Representative Michael McCaul, chairman of the U.S. House of Representatives Foreign Affairs Committee, was even more blunt in February, telling the committee, “Make no mistake, TikTok is a national security threat. … It’s a spy balloon in your phone.”

He was referencing a Chinese surveillance balloon that drifted across the United States in early February before being shot down off the southeastern U.S. coast.

Several governments, including Canada and the U.S., have banned the TikTok app from government-issued smartphones, citing concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.

Chew says TikTok has never stored data from Americans on servers in China.

“All new U.S. data is already stored in the Oracle Cloud infrastructure. So it’s in this protected U.S. environment that we talked about in the United States,” he said. “We still have some legacy data to delete in our own servers in Virginia and in Singapore. Our data has never been stored in China.”

“It’s going to take us a while to delete them, but I expect it to be done this year,” he said.

Chew also emphasized TikTok’s efforts to moderate content. When asked how many people are reviewing content posted to the platform, Chew said the numbers and cost are huge.

“The group is based in Ireland and it’s a lot of people. It’s tens of thousands of people,” Chew said. “It’s one of the most important cost items. And I think it’s completely worth it.”

Speaking to a TED conference dominated by discussions of artificial intelligence, Chew said a lot of moderation on TikTok is done by machines.

“The machines are good, they’re quite good, but they’re not as good as you know, they’re not perfect at this point. So you have to complement it with a lot of human beings today,” he said.

VOA’s Masood Farivar contributed to this report. 

Good, Bad of Artificial Intelligence Discussed at TED Conference  

While artificial intelligence, or AI, is not new, the speed at which the technology is developing and its implications for societies are, for many, a cause for wonder and alarm.

ChatGPT recently garnered headlines for doing things like writing term papers for university students.

Tom Graham and his company, Metaphysic.ai, have received attention for creating fake videos of actor Tom Cruise and re-creating Elvis Presley singing on an American talent show. Metaphysic was started to utilize artificial intelligence and create high-quality avatars of stars like Cruise or people from one’s own family or social circle.

Graham, who appeared at this year’s TED Conference in Vancouver, which began Monday and runs through Friday, said talking with an artificially created younger self or departed loved one can have tremendous benefits for therapy.

He added that the technology would allow actors to appear in movies without having to show up on set, or in ads with AI-generated sports stars.

“So, the idea of them being able to create ads without having to turn up is – it’s a match made in heaven,” Graham said. “The advertisers get more content. The sports people never have to turn up because they don’t want to turn up. And everyone just gets paid the same.”

Sal Khan, founder of Khan Academy, a nonprofit organization that provides free teaching materials, sees AI as beneficial to education and a kind of one-on-one instruction: student and AI.

His organization is using artificial intelligence to supplement traditional instruction and make it more interactive.

“But now, they can talk to literary characters,” he said. “They can talk to fictional cats. They can talk to historic characters, potentially even talk to inanimate objects, like, we were talking about the Mississippi River. Or talk to the Empire State Building. Or talk to … you know, talk to Mount Everest. This is all possible.”

For Chris Anderson, who is in charge of TED – a nonpartisan, nonprofit organization whose mission is to spread ideas, usually in the form of short speeches – conversations about artificial intelligence are the most important ones we can have at the moment. He said the organization’s role this year is to bring different parts of this rapidly emerging technology together.

“And the conversation can’t just be had by technologists,” he said. “And it can’t just be heard by politicians. And it can’t just be held by creatives. Everyone’s future is being affected. And so, we need to bring people together.”

For all of AI’s promise, there are growing calls for safeguards against misuse of the technology.

Computer scientist Yejin Choi at the University of Washington said policies and regulations are lagging because AI is moving so fast.

“And then there’s this question of whose guardrails are you going to install into AI,” she said. “So there’s a lot of these open questions right now. And ideally, we should be able to customize the guardrails for different cultures or different use cases.”

Another TED speaker this year, Eliezer Yudkowsky, has been studying AI for 20 years and is currently a senior research fellow at the Machine Intelligence Research Institute in California. He has a more pessimistic view of artificial intelligence and any type of safeguards.

“This eventually gets to the point where there is stuff smarter than us,” he said. “I think we are presently not on track to be able to handle that remotely gracefully. I think we all end up dead.”

Ready or not, societies are confronting the need to adapt to AI’s emergence.