China Tells Tech Manufacturers: Stop Using US-Made Micron Chips

Stepping up a feud with Washington over technology and security, China’s government Sunday told users of computer equipment deemed sensitive to stop buying products from the biggest U.S. memory chipmaker, Micron Technology Inc. 

Micron products have unspecified “serious network security risks” that pose hazards to China’s information infrastructure and affect national security, the Cyberspace Administration of China said on its website. Its six-sentence statement gave no details. 

“Operators of critical information infrastructure in China should stop purchasing products from Micron Co.,” the agency said. 

The United States, Europe and Japan are reducing Chinese access to advanced chipmaking and other technology they say might be used in weapons at a time when President Xi Jinping’s government has threatened to attack Taiwan and is increasingly assertive toward Japan and other neighbors. 

Chinese officials have warned of unspecified consequences but appear to be struggling to find ways to retaliate without hurting China’s smartphone producers and other industries, or undercutting its efforts to develop its own processor chip suppliers. 

An official review of Micron under China’s increasingly stringent information security laws was announced April 4, hours after Japan joined Washington in imposing restrictions on Chinese access to technology to make processor chips on security grounds. 

Foreign companies have been rattled by police raids on two consulting firms, Bain & Co. and Capvision, and a due diligence firm, Mintz Group. Chinese authorities have declined to explain the raids but said foreign companies are obliged to obey the law. 

Business groups and the U.S. government have appealed to authorities to explain newly expanded legal restrictions on information and how they will be enforced. 

Sunday’s announcement appeared to try to reassure foreign companies. 

“China firmly promotes high-level opening up to the outside world and, as long as it complies with Chinese laws and regulations, welcomes enterprises and various platform products and services from various countries to enter the Chinese market,” the cyberspace agency said. 

Xi accused Washington in March of trying to block China’s development. He called on the public to “dare to fight.” 

Despite that, Beijing has been slow to retaliate, possibly to avoid disrupting Chinese industries that assemble most of the world’s smartphones, tablet computers and other consumer electronics. They import more than $300 billion worth of foreign chips every year. 

Beijing is pouring billions of dollars into trying to accelerate chip development and reduce the need for foreign technology. Chinese foundries can supply low-end chips used in autos and home appliances but can’t support smartphones, artificial intelligence and other advanced applications. 

The conflict has prompted warnings the world might decouple, or split into separate spheres with incompatible technology standards, meaning computers, smartphones and other products from one region wouldn’t work in others. That would raise costs and might slow innovation. 

U.S.-Chinese relations are at their lowest level in decades due to disputes over security, Beijing’s treatment of Hong Kong and Muslim ethnic minorities, territorial disputes and China’s multibillion-dollar trade surpluses. 

G7 Calls for ‘Responsible’ Use of Generative AI

The world must urgently assess the impact of generative artificial intelligence, G7 leaders said Saturday, announcing they will launch discussions this year on “responsible” use of the technology.

A working group will be set up to tackle issues from copyright to disinformation, the seven leading economies said in a final communique released during a summit in Hiroshima, Japan.

Text generation tools such as ChatGPT, image creators and music composed using AI have sparked delight, alarm and legal battles as creators accuse them of scraping material without permission.

Governments worldwide are under pressure to move quickly to mitigate the risks, with the chief executive of ChatGPT’s OpenAI telling U.S. lawmakers this week that regulating AI was essential.

“We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors,” the G7 statement said.

“We task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner … for discussions on generative AI by the end of this year,” it said.

“These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”

The new working group will be organized in cooperation with the OECD group of developed countries and the Global Partnership on Artificial Intelligence (GPAI), the statement added.

On Tuesday, OpenAI CEO Sam Altman testified before a U.S. Senate panel and urged Congress to impose new rules on big tech.

He insisted that generative AI developed by his company would one day “address some of humanity’s biggest challenges, like climate change and curing cancer.”

However, “we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.

European Parliament lawmakers this month also took a first step towards EU-wide regulation of ChatGPT and other AI systems.

The text is to be put to the full parliament next month for adoption before negotiations with EU member states on a final law.

“While rapid technological change has been strengthening societies and economies, the international governance of new digital technologies has not necessarily kept pace,” the G7 said.

For AI and other emerging technologies including immersive metaverses, “the governance of the digital economy should continue to be updated in line with our shared democratic values,” the group said.

These values include fairness, respect for privacy and “protection from online harassment, hate and abuse,” among others, it added.

US Supreme Court Lets Twitter Off Hook in Terror Lawsuit Over Istanbul Massacre

The U.S. Supreme Court on Thursday refused to clear a path for victims of attacks by militant organizations to hold social media companies liable under a federal anti-terrorism law for failing to prevent the groups from using their platforms, handing a victory to Twitter.

The justices, in a unanimous decision, reversed a lower court’s ruling that had revived a lawsuit against Twitter by the American relatives of Nawras Alassaf, a Jordanian man killed in a 2017 attack on an Istanbul nightclub during New Year’s celebrations that was claimed by the Islamic State militant group. 

The case was one of two that the Supreme Court weighed in its current term aimed at holding internet companies accountable for contentious content posted by users – an issue of growing concern for the public and U.S. lawmakers. 

The justices on Thursday, in a similar case against Google-owned YouTube, part of Alphabet Inc., sidestepped ruling on a bid to narrow Section 230 of the Communications Decency Act, the federal law that protects internet companies from lawsuits over content posted by their users. 

That case involved an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was fatally shot in an Islamic State attack in Paris in 2015, of a lower court’s decision to throw out their lawsuit. 

The Istanbul massacre on Jan. 1, 2017, killed Alassaf and 38 others. His relatives accused Twitter of aiding and abetting the Islamic State, which claimed responsibility for the attack, by failing to police the platform for the group’s accounts or posts in violation of a federal law called the Anti-Terrorism Act that enables Americans to recover damages related to “an act of international terrorism.” 

Twitter and its backers had said that allowing lawsuits like this would threaten internet companies with liability for providing widely available services to billions of users because some of them may be members of militant groups, even as the platforms regularly enforce policies against terrorism-related content. 

The case hinged on whether the family’s claims sufficiently alleged that the company knowingly provided “substantial assistance” to an “act of international terrorism” that would allow the relatives to maintain their suit and seek damages under the anti-terrorism law.

After a judge dismissed the lawsuit, the San Francisco-based 9th U.S. Circuit Court of Appeals in 2021 allowed it to proceed, concluding that Twitter had refused to take “meaningful steps” to prevent Islamic State’s use of the platform. 

President Joe Biden’s administration supported Twitter, saying the Anti-Terrorism Act imposes liability for assisting a terrorist act and not for “providing generalized aid to a foreign terrorist organization” with no causal link to the act at issue. 

In the Twitter case, the 9th Circuit did not consider whether Section 230 barred the family’s lawsuit. Google and Meta’s Facebook, also defendants, did not formally join Twitter’s appeal.

Islamic State called the Istanbul attack revenge for Turkish military involvement in Syria. The main suspect, Abdulkadir Masharipov, an Uzbek national, was later captured by police.

Twitter in court papers has said that it has terminated more than 1.7 million accounts for violating rules against “threatening or promoting terrorism.” 

‘Godfather of AI’ Quits Google to Warn of the Technology’s Dangers

A computer scientist often dubbed “the godfather of artificial intelligence” has quit his job at Google to speak out about the dangers of the technology, U.S. media reported Monday.

Geoffrey Hinton, who created a foundation technology for AI systems, told The New York Times that advancements made in the field posed “profound risks to society and humanity.”

“Look at how it was five years ago and how it is now,” he was quoted as saying in the piece, which was published on Monday. “Take the difference and propagate it forwards. That’s scary.”

Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation.

“It is hard to see how you can prevent the bad actors from using it for bad things,” he told The Times.

Jobs could be at risk

In 2022, Google and OpenAI — the startup behind the popular AI chatbot ChatGPT — started building systems using much larger amounts of data than before.

Hinton told The Times he believed these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.

“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he told the paper.

While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.

AI “takes away the drudge work” but “might take away more than that,” he told The Times.

Concern about misinformation

The scientist also warned about the potential spread of misinformation created by AI, telling The Times that the average person will “not be able to know what is true anymore.”

Hinton notified Google of his resignation last month, The Times reported.

Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to U.S. media.

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI,” the statement added.

“We’re continually learning to understand emerging risks while also innovating boldly.”

In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.

An open letter, signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.

Hinton did not sign that letter at the time, but told The New York Times that scientists should not “scale this up more until they have understood whether they can control it.”

EU Tech Tsar Vestager Sees Political Agreement on AI Law This Year 

The European Union is likely to reach a political agreement this year that will pave the way for the world’s first major artificial intelligence (AI) law, the bloc’s tech regulation chief, Margrethe Vestager, said on Sunday.

This follows a preliminary deal reached on Thursday by members of the European Parliament to push through the draft of the EU’s Artificial Intelligence Act to a vote on May 11. Parliament will then thrash out the bill’s final details with EU member states and the European Commission before it becomes law.

At a press conference after a Group of Seven digital ministers’ meeting in Takasaki, Japan, Vestager said the EU AI Act was “pro-innovation” since it seeks to mitigate the risks of societal damage from emerging technologies.

Regulators around the world have been trying to find a balance where governments could develop “guardrails” on emerging artificial intelligence technology without stifling innovation.

“The reason why we have these guardrails for high-risk use cases is that cleaning up … after a misuse by AI would be so much more expensive and damaging than the use case of AI in itself,” Vestager said.

While the EU AI Act is expected to be passed this year, lawyers have said it will take a few years for it to be enforced. But Vestager said businesses could already start considering the implications of the new legislation.

“There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence,” she told Reuters in an interview.

While research on AI has been going on for years, the sudden popularity of generative AI applications such as OpenAI’s ChatGPT and Midjourney has led to a scramble by lawmakers to find ways to regulate any uncontrolled growth.

An organization backed by Elon Musk and European lawmakers involved in drafting the EU AI Act are among those to have called for world leaders to collaborate to find ways to stop advanced AI from creating disruptions.

Digital ministers of the G-7 advanced nations on Sunday also agreed to adopt “risk-based” regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.

“It is important that our democracy paved the way and put in place the rules to protect us from its abusive manipulation – AI should be useful but it shouldn’t be manipulating us,” said German Transport Minister Volker Wissing.

This year’s G-7 meeting was also attended by representatives from Indonesia, India and Ukraine.

UK Blocks Microsoft-Activision Gaming Deal, Biggest in Tech

British antitrust regulators on Wednesday blocked Microsoft’s $69 billion purchase of video game maker Activision Blizzard, thwarting the biggest tech deal in history over worries that it would stifle competition for popular titles like Call of Duty in the fast-growing cloud gaming market.

The Competition and Markets Authority said in its final report that “the only effective remedy” to the substantial loss of competition “is to prohibit the Merger.” The companies have vowed to appeal.

The all-cash deal faced stiff opposition from rival Sony, which makes the PlayStation gaming system, and also was being scrutinized by regulators in the U.S. and Europe over fears that it would give Microsoft and its Xbox console control of hit franchises like Call of Duty and World of Warcraft.

The U.K. watchdog’s concerns centered on how the deal would affect cloud gaming, which streams to tablets, phones and other devices and frees players from buying expensive consoles and gaming computers. Gamers can keep playing major Activision titles, including mobile games like Candy Crush, on the platforms they typically use.

Cloud gaming has the potential to change the industry by giving people more choice over how and where they play, said Martin Colman, chair of the Competition and Markets Authority’s independent expert panel investigating the deal.

“This means that it is vital that we protect competition in this emerging and exciting market,” he said.

The decision underscores Europe’s reputation as the global leader in efforts to rein in the power of Big Tech companies. A day earlier, the U.K. government unveiled draft legislation that would give regulators more power to protect consumers from online scams and fake reviews and boost digital competition.

The U.K. decision further dashes Microsoft’s hopes that a favorable outcome could help it resolve a lawsuit brought by the U.S. Federal Trade Commission. A trial before the FTC’s in-house judge is set to begin Aug. 2. The European Union’s decision, meanwhile, is due May 22. 

Activision lashed out, portraying the watchdog’s decision as a bad signal to international investors in the United Kingdom at a time when the British economy faces severe challenges.

The game maker said it would “work aggressively” with Microsoft to appeal, asserting that the move “contradicts the ambitions of the U.K.” to be an attractive place for tech companies.

“We will reassess our growth plans for the U.K. Global innovators large and small will take note that — despite all its rhetoric — the U.K. is clearly closed for business,” Activision said.

Redmond, Washington-based Microsoft also signaled it wasn’t ready to give up.

“We remain fully committed to this acquisition and will appeal,” President Brad Smith said in a statement. The decision “rejects a pragmatic path to address competition concerns” and discourages tech innovation and investment in Britain, he said.

“We’re especially disappointed that after lengthy deliberations, this decision appears to reflect a flawed understanding of this market and the way the relevant cloud technology actually works,” Smith said.

It’s not the first time British regulators have flexed their antitrust muscles on a Big Tech deal. They previously blocked Facebook parent Meta’s purchase of Giphy over fears it would limit innovation and competition. The social media giant appealed the decision to a tribunal but lost and was forced to sell off the GIF sharing platform.

When it comes to gaming, Microsoft already has a strong position in the cloud computing market, and regulators concluded that if the deal went through, it would reinforce the company’s advantage by giving it control of key game titles.

In an attempt to ease concerns, Microsoft struck deals with Nintendo and some cloud gaming providers to license Activision titles like Call of Duty for 10 years — offering the same to Sony.

The watchdog said it reviewed Microsoft’s remedies “in considerable depth” but found they would require its oversight, whereas preventing the merger would allow cloud gaming to develop without intervention.

Study Details Differences Between Deep Interiors of Mars and Earth

Mars is Earth’s next-door neighbor in the solar system — two rocky worlds with differences down to their very core, literally.

A new study based on seismic data obtained by NASA’s robotic InSight lander is offering a fuller understanding of the Martian deep interior and fresh details about dissimilarities between Earth, the third planet from the sun, and Mars, the fourth.

The research, informed by the first detection of seismic waves traveling through the core of a planet other than Earth, showed that the innermost layer of Mars is slightly smaller and denser than previously known. It also provided the best assessment to date of the composition of the Martian core.

Both planets possess cores composed primarily of liquid iron. But about 20% of the Martian core is made up of elements lighter than iron — mostly sulfur, but also oxygen, carbon and a dash of hydrogen, the study found. That is about double the percentage of such elements in Earth’s core, meaning the Martian core is considerably less dense than our planet’s core — though denser than a 2021 estimate based on a different type of data from the now-retired InSight. 

“The deepest regions of Earth and Mars have different compositions — likely a product both of the conditions and processes at work when the planets formed and of the material they are made from,” said seismologist Jessica Irving of the University of Bristol in England, lead author of the study published this week in the journal Proceedings of the National Academy of Sciences. 

The study also refined the size of the Martian core, finding it has a diameter of about 2,212-2,249 miles (3,560-3,620 km), approximately 12-31 miles (20-50 km) smaller than previously estimated. The Martian core makes up a slightly smaller percentage of the planet’s diameter than does Earth’s core.

The nature of the core can play a role in governing whether a rocky planet or moon could harbor life. The core, for instance, is instrumental in generating Earth’s magnetic field that shields the planet from harmful solar and cosmic particle radiation.

“On planets and moons like Earth, there are silicate — rocky — outer layers and an iron-dominated metallic core. One of the most important ways a core can impact habitability is to generate a planetary dynamo,” Irving said.

“Earth’s core does this but Mars’ core does not — though it used to, billions of years ago. Mars’ core likely no longer has the energetic, turbulent motion which is needed to generate such a field,” Irving added.

Mars has a diameter of about 4,212 miles (6,779 km), compared to Earth’s diameter of about 7,918 miles (12,742 km), and Earth is almost seven times larger in total volume.
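The “almost seven times” figure follows from the diameters quoted above, since a sphere’s volume scales with the cube of its diameter. A quick, purely illustrative check using the rounded kilometer figures from this article:

```python
# Volume comparison of Earth and Mars, using the rounded diameters
# quoted in the article above (illustrative check only).

earth_diameter_km = 12742
mars_diameter_km = 6779

# For spheres, volume ratio = (diameter ratio) cubed.
volume_ratio = (earth_diameter_km / mars_diameter_km) ** 3

print(round(volume_ratio, 1))  # → 6.6, i.e. "almost seven times" larger
```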

The behavior of seismic waves traveling through a planet can reveal details about its interior structure. The new findings stem from two seismic events that occurred on the opposite side of Mars from where the InSight lander — and specifically its seismometer device — sat on the planet’s surface.

The first was an August 2021 marsquake centered close to Valles Marineris, the solar system’s largest canyon. The second was a September 2021 meteorite impact that left a crater about 425 feet (130 meters) wide. 

The U.S. space agency formally retired InSight in December after four years of operations, with an accumulation of dust preventing its solar-powered batteries from recharging.

“The InSight mission has been fantastically successful in helping us decipher the structure and conditions of the planet’s interior,” University of Maryland geophysicist and study co-author Vedran Lekic said. “Deploying a network of seismometers on Mars would lead to even more discoveries and help us understand the planet as a system, which we cannot do by just looking at its surface from orbit.”

Moon Shot: Japan Firm to Attempt Historic Lunar Landing

A Japanese space start-up will attempt Tuesday to become the first private company to put a lander on the Moon.   

If all goes to plan, ispace’s Hakuto-R Mission 1 lander will start its descent towards the lunar surface at around 15:40 GMT.   

It will slow its orbit some 100 kilometers above the Moon, then adjust its speed and altitude to make a “soft landing” around an hour later.   

Success is far from guaranteed. In April 2019, Israeli organization SpaceIL watched its lander crash into the Moon’s surface. 

ispace has announced three alternative landing sites and could shift the lunar descent date to April 26, May 1 or May 3, depending on conditions.   

“What we have accomplished so far is already a great achievement, and we are already applying lessons learned from this flight to our future missions,” ispace founder and CEO Takeshi Hakamada said earlier this month.   

“The stage is set. I am looking forward to witnessing this historic day, marking the beginning of a new era of commercial lunar missions.”   

The lander, standing just over two meters tall and weighing 340 kilograms, has been in lunar orbit since last month.   

It was launched from Earth in December on one of SpaceX’s Falcon 9 rockets after several delays.   

So far only the United States, Russia and China have managed to put a robot on the lunar surface, all through government-sponsored programs.   

However, Japan and the United States announced last year that they would cooperate on a plan to put a Japanese astronaut on the Moon by the end of the decade.   

The lander is carrying several lunar rovers, including a miniature Japanese model just eight centimeters across that was jointly developed by Japan’s space agency and toy manufacturer Takara Tomy. 

The mission is also being closely watched by the United Arab Emirates, whose Rashid rover is aboard the lander as part of the nation’s expanding space program.   

The Gulf country is a newcomer to the space race but sent a probe into Mars’ orbit in 2021. If its rover successfully lands, it will be the Arab world’s first Moon mission.   

Hakuto means “white rabbit” in Japanese, a reference to Japanese folklore in which a white rabbit lives on the Moon. 

The project was one of five finalists in Google’s Lunar X Prize competition to land a rover on the Moon before a 2018 deadline, which passed without a winner.   

With just 200 employees, ispace has said it “aims to extend the sphere of human life into space and create a sustainable world by providing high-frequency, low-cost transportation services to the Moon.”   

Hakamada has touted the mission as laying “the groundwork for unleashing the Moon’s potential and transforming it into a robust and vibrant economic system.”   

The firm believes the Moon will support a population of 1,000 people by 2040, with 10,000 more visiting each year.   

It plans a second mission, tentatively scheduled for next year, involving both a lunar landing and the deployment of its own rover. 

SpaceX Wins Approval to Add Fifth U.S. Rocket Launch Site

The U.S. Space Force said on Monday that Elon Musk’s SpaceX was granted approval to lease a second rocket launch complex at a military base in California, setting the space company up for its fifth launch site in the United States. 

Under the lease, SpaceX will launch its workhorse Falcon rockets from Space Launch Complex-6 at Vandenberg Space Force Base, a military launch site north of Los Angeles where the space company operates another launchpad. It has two others in Florida and its private Starbase site in south Texas. 

A Monday night Space Force statement said a letter of support for the decision was signed on Friday by Space Launch Delta 30 commander Col. Rob Long. The statement did not mention a duration for SpaceX’s lease. 

The new launch site, vacated last year by the Boeing-Lockheed joint venture United Launch Alliance, gives SpaceX more room to handle an increasingly busy launch schedule for commercial, government and internal satellite launches. 

Vandenberg Space Force Base allows for launches in a southern trajectory over the Pacific Ocean, which is often used for weather-monitoring, military or spy satellites that commonly rely on polar Earth orbits. 

SpaceX’s grant of Space Launch Complex-6 comes as rocket companies prepare to compete for the Pentagon’s Phase 3 National Security Space Launch program, a watershed military launch procurement effort expected to begin in the next year or so. 

Twitter Changes Stoke Russian, Chinese Propaganda Surge

Twitter accounts operated by authoritarian governments in Russia, China and Iran are benefiting from recent changes at the social media company, researchers said Monday, making it easier for them to attract new followers and broadcast propaganda and disinformation to a larger audience. 

The platform is no longer labeling state-controlled media and propaganda agencies, and will no longer prohibit their content from being automatically promoted or recommended to users. Together, the two changes, both made in recent weeks, have supercharged the Kremlin’s ability to use the U.S.-based platform to spread lies and misleading claims about its invasion of Ukraine, U.S. politics and other topics. 

Russian state media accounts are now earning 33% more views than they were just weeks ago, before the change was made, according to findings released Monday by Reset, a London-based non-profit that tracks authoritarian governments’ use of social media to spread propaganda. Reset’s findings were first reported by The Associated Press. 

The increase works out to more than 125,000 additional views per post. Those posts included ones suggesting the CIA had something to do with the September 11, 2001, attacks on the U.S., that Ukraine’s leaders are embezzling foreign aid to their country, and that Russia’s invasion of Ukraine was justified because the U.S. was running clandestine biowarfare labs in the country. 

State media agencies operated by Iran and China have seen similar increases in engagement since Twitter quietly made the changes. 

The about-face from the platform is the latest development since billionaire Elon Musk purchased Twitter last year. Since then, he has ushered in a confusing new verification system; laid off much of the company’s staff, including those dedicated to fighting misinformation; allowed back neo-Nazis and others formerly suspended from the site; and ended the site’s policy prohibiting dangerous COVID-19 misinformation. Hate speech and disinformation have thrived. 

Before the most recent change, Twitter affixed labels reading “Russia state-affiliated media” to let users know the origin of the content. It also throttled back the Kremlin’s online engagement by making the accounts ineligible for automatic promotion or recommendation—something it regularly does for ordinary accounts as a way to help them reach bigger audiences. 

The labels quietly disappeared after National Public Radio and other outlets protested Musk’s plans to label their outlets as state-affiliated media, too. NPR then announced it would no longer use Twitter, saying the label was misleading, given NPR’s editorial independence, and would damage its credibility. 

Reset’s conclusions were confirmed by the Atlantic Council’s Digital Forensic Research Lab (DFRL), where researchers determined the changes were likely made by Twitter late last month. Many of the dozens of previously labeled accounts were steadily losing followers since Twitter began using the labels. But after the change, many accounts saw big jumps in followers. 

RT Arabic, one of Russia’s most popular propaganda accounts on Twitter, had fallen to fewer than 5,230,000 followers on January 1, but rebounded after the change was implemented, the DFRL found. It now has more than 5,240,000 followers. 

Before the change, users interested in seeking out Kremlin propaganda had to search specifically for the account or its content. Now, it can be recommended or promoted like any other content. 

“Twitter users no longer must actively seek out state-sponsored content in order to see it on the platform; it can just be served to them,” the DFRL concluded. 

Twitter did not respond to questions about the change or the reasons behind it. Musk has made past comments suggesting he sees little difference between state-funded propaganda agencies operated by authoritarian strongmen and independent news outlets in the West.

“All news sources are partially propaganda,” he tweeted last year, “some more than others.”

Writer, Adviser, Poet, Bot: How ChatGPT Could Transform Politics

The AI bot ChatGPT has passed exams, written poetry and been deployed in newsrooms, and now politicians are seeking it out — but experts are warning against rapid uptake of a tool also famous for fabricating “facts.”

The chatbot, released last November by U.S. firm OpenAI, has quickly moved center stage in politics — particularly as a way of scoring points.

Japanese Prime Minister Fumio Kishida recently took a direct hit from the bot when he answered some innocuous questions about health care reform from an opposition MP.

Unbeknownst to the PM, his adversary had generated the questions with ChatGPT. The MP had also used the bot to generate answers, which he claimed were “more sincere” than Kishida’s.

The PM hit back that his own answers had been “more specific.”

French trade union boss Sophie Binet was on-trend when she drily assessed a recent speech by President Emmanuel Macron as one that “could have been done by ChatGPT.”

But the bot has also been used to write speeches and even help draft laws. 

“It’s useful to think of ChatGPT and generative AI in general as a cliche generator,” David Karpf of George Washington University in the U.S. said during a recent online panel. 

“Most of what we do in politics is also cliche generation.”

‘Limited added value’

Nowhere has the enthusiasm for grandstanding with ChatGPT been keener than in the United States.

Last month, Congresswoman Nancy Mace gave a five-minute speech to a congressional committee enumerating potential uses and harms of AI — before delivering the punchline that “every single word” had been generated by ChatGPT.

State lawmaker Barry Finegold had gone further still, announcing in January that his team had used ChatGPT to draft a bill for the Massachusetts Senate.

The bot reportedly introduced original ideas to the bill, which is intended to rein in the power of chatbots and AI.

Anne Meuwese from Leiden University in the Netherlands wrote in a column for Dutch law journal RegelMaat last week that she had carried out a similar experiment with ChatGPT and also found that the bot introduced original ideas.

But while ChatGPT was to some extent capable of generating legal texts, she wrote that lawmakers should not fall over each other to use the tool.

“Not only is much still unclear about important issues such as environmental impact, bias and the ethics at OpenAI … the added value also seems limited for now,” she wrote.

Agitprop bots

The added value might be more obvious lower down the political food chain, though, where staffers on the campaign trail face a treadmill of repetitive tasks.

Karpf suggested AI could be useful for generating emails asking for donations — necessary messages that were not intended to be masterpieces.

This raises an issue of whether the bots can be trained to represent a political point of view.

ChatGPT has already provoked a storm of controversy over its apparent liberal bias — the bot initially refused to write a poem praising Donald Trump but happily churned out couplets for his successor as U.S. president, Joe Biden.

Billionaire magnate Elon Musk has spied an opportunity. Despite warning that AI systems could destroy civilization, he recently promised to develop TruthGPT, an AI text tool stripped of what he sees as liberal bias.

Perhaps he needn’t have bothered. New Zealand researcher David Rozado already ran an experiment retooling ChatGPT as RightWingGPT — a bot on board with family values, liberal economics and other right-wing rallying cries.

“Critically, the computational cost of trialling, training and testing the system was less than $300,” he wrote on his Substack blog in February.

Not to be outdone, the left has its own “Marxist AI.”

The bot was created by the founder of Belgian satirical website Nordpresse, who goes by the pseudonym Vincent Flibustier.

He told AFP his bot just sends queries to ChatGPT with the command to answer as if it were an “angry trade unionist.”
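The wrapper approach Flibustier describes needs no retraining at all: each user query is simply forwarded to ChatGPT with an added instruction to answer in character. A minimal sketch of that pattern is below; the function name and persona text are hypothetical illustrations, not Nordpresse’s actual code.

```python
# Sketch of a persona wrapper: no fine-tuning, just a system instruction
# prepended to every query before it is sent to a chat-completion API.

PERSONA = "Answer every question as if you were an angry trade unionist."

def build_persona_request(user_query: str) -> list[dict]:
    """Wrap a raw user query in the persona instruction."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": user_query},
    ]

# The resulting message list would then be passed to a chat-completion
# endpoint; the system line steers the tone of every reply.
messages = build_persona_request("What do you think of the pension reform?")
print(messages[0]["content"])
```

Because the persona lives entirely in the prompt, swapping in a different political voice is a one-line change — which is exactly the malleability, and the risk, that critics point to.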

The malleability of chatbots is central to their appeal but it goes hand-in-hand with the tendency to generate untruths, making AI text generators potentially hazardous allies for the political class.

“You don’t want to become famous as the political consultant or the political campaign that blew it because you decided that you could have a generative AI do [something] for you,” said Karpf.