Green Energy Expected to Cover Growth in Demand for Electricity

Paris — Power generated from low-emissions sources, such as wind, solar and nuclear, will be adequate to meet growth in global demand for the next three years, the International Energy Agency said, adding that emissions from the power sector are on the decline.

Following record growth, electricity generation from low-emissions sources will account for almost half of the world’s power by 2026, up from less than 40% in 2023, the IEA said in a report on Wednesday.

Renewables are expected to overtake coal by early 2025, accounting for more than a third of total electricity generation, the report said.

Nuclear power is also forecast to reach a record globally as French output continues to recover from lows in 2022, several plants in Japan come back online and new reactors begin operations in markets including China, India, Korea and Europe.

Electricity demand is expected to rise on average by 3.4% from 2024 through 2026, with about 85% of demand growth seen coming from China, India and Southeast Asia, after growth eased slightly to 2.2% in 2023, IEA data showed.

Over this period, China is expected to account for the largest share of the global increase in electricity demand in terms of volume, despite a forecast for slower economic growth and a lower reliance on heavy industry, the report said.

Meanwhile, global emissions are expected to decrease by 2.4% in 2024, followed by smaller declines in 2025 and 2026, the report said.

“The decoupling of global electricity demand and emissions would be significant given the energy sector’s increasing electrification, with more consumers using technologies such as electric vehicles and heat pumps,” the report said.

Electricity’s share of final energy consumption in 2023 was 2 percentage points higher than in 2015, though reaching climate goals would require electrification to advance significantly faster in coming years, the IEA said.

AI Audience Row at Sundance Sparks Walkout, Highlights Division

Park City, Utah — An audience member was ejected from a Sundance festival event Tuesday in a spat over artificial intelligence, triggering a walkout that illustrates the divisions the technology has rapidly wrought in the film industry.

AI — a key driver of the recent and devastating Hollywood strikes — has been debated extensively at this year’s indie movie festival in Utah.

Filmmakers have experimented with using the technology as a creative tool, while also cautioning about its potential to erase jobs and stifle human expression and connection.

At a Tuesday screening of “Being (The Digital Griot),” in which audience members were encouraged to approach the screen and discuss issues like racism and the patriarchy with an AI bot, an audience member appeared to shout profanity about AI.

“I’m not here to be cursed out and I’m not going to have my AI child be cursed out either,” responded the film’s creator, artist Rashaad Newsome, refusing to participate in a post-screening Q&A until action was taken.

Festival staff forced the woman who had apparently yelled to leave the auditorium, prompting jeers.

Roughly a quarter of the auditorium walked out in solidarity, with some complaining that debate was being shut down and others insisting the woman expelled had not been the actual culprit.

Sundance organizers told AFP they were “looking into” the incident and “reviewing all available material to determine what happened so that corrective actions can be taken.”

But the incident highlighted long-brewing and sharply escalating tensions triggered by the issue of AI in the film world — something that this year’s Sundance lineup was specifically programmed to address.

‘Scary’

In addition to “Being,” the Sundance indie festival has hosted “Eternal You” and “Love Machina,” two documentaries about people using AI to communicate with loved ones after death.

Another film, “Eno,” explored musician Brian Eno’s career and creative process, using a “generative engine” to mesh together near-infinite different versions of a film from hundreds of possible scenes.

AI was also addressed on the fiction side by films like “Love Me,” starring Kristen Stewart, which imagined a romance between an AI-powered buoy and a satellite in a post-human world.

“Love Machina” director Peter Sillen told AFP that AI could soon mean that making a film will be a similar process to writing a novel.

“You’re going to be able to have somebody who’s sitting in their room create a masterpiece of filmmaking, probably,” he said.

The idea was “hard and scary” but “interesting,” Sillen said, concluding: “I think you have to be open to it.”

“Eternal You” director Hans Block pointed out that AI is already widely used in movies — indeed, the Adobe software he used to edit the film is “full of AI” and “helped us as a tool a lot.”

“It’s so much more easy to make a film nowadays,” he said.

But Block said that while AI can help as a tool, it is important to debate what harm could be caused if the technology is not regulated.

“That’s why we are so happy to present the film right now, because it’s a perfect time to open the debate about these discussions,” he said. 

‘Human touch’

The danger that AI could replace screenwriters, actors and other professions was a key sticking point in last year’s Hollywood strikes, with unions holding out for guarantees from studios that they would not be replaced.

The encroachment of AI has sparked resolutely negative reactions from many filmmakers at Sundance.

Anirban Dutta, co-director of “Nocturnes,” an experiential documentary about scientists studying moths in the eastern Himalayas, said his movie is “a response to what’s happening to this world where all our human instincts are being mechanized.”

“Our film is a love letter to invite people to come back to what we are losing… human touch,” he said.

The woman who was thrown out of the “Being” screening, who has not been identified, was making a similar point before chaos erupted.

“As interesting as this (film) is… all of the knowledge it has comes from people,” she said.

Nigerian Startups See Rough Financing Road Ahead

Abuja, Nigeria — Nigeria’s tech startups are facing reluctance from investors, stemming from the shutdown of some prominent young companies last year.

Kingsley Eze co-runs Nairaxi, an e-commerce and on-demand logistics startup in Abuja, Nigeria’s capital. Despite its record of handling tens of thousands of successful requests, the firm has been funded largely by Eze, along with family and friends.

Eze told VOA that even though he is ready for expansion, it has been difficult to secure financing, amid the tales of failing startups in the country. 

“It’s been very difficult to raise funds, investors are cautious, the interest rate hikes in the Western economy is also a contributing factor to that, coupled with a lot of disappointing or not so good outings for a few startups that were like a beacon of hope for the Nigerian startup ecosystem,” said Eze.

Nigeria has been leading growth in African startups. Nevertheless, the sector faced a significant blow in 2023. Prominent startups such as 54Gene, Lazerpay, Vibra, Payday, and Hytch went out of business — largely over their inability to raise more capital to keep the companies running — losing more than $70 million of foreign investors’ funds. 

Abuja-based economist and investment expert Paul Alaje told VOA he blames the collapses on neglect of business principles. 

“Assumption is the major bane to startup development in Africa, especially Nigeria,” said Alaje. “That the idea worked at first and is technology-driven does not mean the fundamentals of traditional business or a growing business, economic principles behind traditional business, should be neglected when it comes to startups.” 

A recent report by Briter Bridges, a London-based business intelligence and research firm, showed a 54% drop in funding for startups between January and October of last year in Africa compared to the same period in 2022. 

Eze said he believes this will make it even harder to navigate the funding terrain.   

“The last statistics we had projected a 60% failure rate for Nigerian startup companies which is not a good bet for most investors,” said Eze. “When everyone is succeeding in the market, it encourages more investors.” 

Alaje said Nigeria’s business ecosystem needs an overhaul. 

“Change policy, bring new policies that make it difficult for people who don’t have an idea regarding how business should be properly run,” said Alaje. “Two, show examples of people who got it correctly, including Paystack. We need to become more deliberate at all levels.” 

Paystack, a successful Nigerian payment processing company, was acquired by an Irish-American company for $200 million in 2020. 

According to venture capitalists in Nigeria, poor infrastructure, lack of accountability by business owners, and the foreign exchange crisis aided the collapse of many startups. 

For his part, Eze said he will continue to build his business from the revenues it generates. 

US Lawmakers Push for Limits on American Investment in China Tech

Capitol Hill — U.S. lawmakers renewed calls Wednesday to pass bipartisan legislation that would restrict American investment in Chinese technology.

“It should come as no surprise that China’s military and surveillance state are exploiting loopholes in U.S. policy to access billions of U.S. investment dollars and expertise. We know that U.S. investment has not democratized China and countries which are controlled by the CCP [Chinese Communist Party] have no power over the applications of their technology. The CCP can direct it to us for military or surveillance purposes,” House Foreign Affairs Committee Chairman Michael McCaul said at a hearing on the legislation Wednesday. 

The bill – which has support from both conservative organizations and the Biden administration – was not included in the National Defense Authorization Act, or NDAA, passed late last year. Republican Senator John Cornyn has sponsored companion legislation in the U.S. Senate that passed with more than 90 votes.

Lawmakers hope it can still be passed individually and signed into law.  

If passed, McCaul said the measure, H.R. 6349, would target “specific technology sectors, like AI [artificial intelligence] and quantum computing, that are empowering China’s military development and surveillance.” 

Rep. Gregory Meeks, the top Democrat on the House Foreign Affairs Committee, said an executive order issued by the Biden administration last August “that calls for provisions and notification requirements of specific types of American investments in China, or in certain companies that develop or produce semiconductors, quantum computers, and artificial intelligence applications” is an important first step. 

But experts in U.S.-China relations told a House panel more could be done. 

“Congress has an opportunity to build on the initial steps taken by the Trump and Biden administrations to prevent U.S. capital from fueling China’s military and intelligence capabilities. First, Washington should take a sectoral rather than merely an entity-based approach. The Treasury Department has demonstrated since at least 2021 that it is disinterested in using even its existing narrow authorities to limit investment in Chinese military-linked companies. And in fairness to the Treasury Department tackling the problem on a company-by-company basis would be a resource-intensive and gargantuan task,” Matthew Pottinger, the deputy national security adviser during the Trump administration, said Wednesday. 

“We still haven’t learned that they will do everything they can to take anything we sell, particularly in the area of electronics and really high tech, and use it for the military. They’ve been doing that for decades. We don’t learn. We think somehow if you trade more, they’ll matriculate from dictatorship to democracy,” Republican Rep. Chris Smith said Wednesday.

The bipartisan push in the U.S. House comes as Senate negotiators continue work on the White House’s $106 billion national security supplemental request that includes funding to combat Chinese influence in the Indo-Pacific. Citing a border security crisis, Senate Republicans have sought changes to U.S. immigration law in return for their votes to pass more than $50 billion in assistance to Ukraine that is also part of the Biden administration’s request. 

Senate Minority Leader Mitch McConnell urged lawmakers Wednesday to reach an agreement soon. 

“It’s become quite fashionable in Washington to talk about how we’re not taking competition with China seriously enough,” McConnell said. “Winning this competition means credibly deterring Beijing’s worst impulses, which, for us, means investing in American strength. Outcompeting the PRC [People’s Republic of China] will require greater investments in our military capabilities and in our industrial capacity to produce them. The West cannot be caught unprepared for this challenge. We cannot afford to neglect the lessons of history.” 

Australia Outlines Plan to Manage the Rise of Artificial Intelligence

Sydney — The Australian government is considering new laws to regulate the use of artificial intelligence in “high-risk” areas such as law enforcement and self-driving vehicles.

Voluntary measures also are being explored, such as asking companies to label AI-generated content.

The country has outlined its plan to respond to the rapid rise of artificial intelligence, or AI.

Under the Canberra government’s plan announced Wednesday, safeguards would be applied to technologies that predict the chances of someone again committing a crime, or that analyze job applications to find a well-matched candidate.

Australian officials have said that new laws could also mandate that organizations using high-risk AI must ensure a person is responsible for the safe use of the technology.

The Canberra government also wants to minimize restrictions on low-risk areas of AI to allow their growth to continue.

An expert advisory committee will be set up to help the government to prepare legislation.

Ed Husic is Australia’s federal minister for industry and science. He told the Australian Broadcasting Corp. on Wednesday that he wants AI-generated content to be labeled so it can’t be mistaken for genuine material.

“We need to have confidence that what we are seeing we know exactly if it is organic or real content, or if it has been created by an AI system.  And, so, industry is just as keen to work with government on how to create that type of labeling,” he said. “More than anything else, I am not worried about the robots taking over, I’m worried about disinformation doing that. We need to ensure that when people are creating content that it is clear that AI has had a role or a hand to play in that.”

Kate Pounder, the head of the Tech Council of Australia, which represents the technology sector, told local media that the government’s AI proposals strike a sensible balance between fostering innovation and ensuring systems are developed safely.

The Australian Parliament defines artificial intelligence as “an engineered system that generates predictive outputs such as content, forecasts, recommendations…without explicit programming.”

Recent research shows that most Australians still distrust the technology, which they see as unsafe and prone to errors.

Robotic Restaurant Opening in California

An automated restaurant is opening this month in Pasadena, California. CaliExpress will be staffed by robots that make food in the kitchen and AI that takes customers’ orders. The only job humans will still need to do is assemble and pack the food. Angelina Bagdasaryan has the story, narrated by Anna Rice. Camera: Vazgen Varzhabetian

AI-Powered Misinformation Is World’s Biggest Short-Term Threat, Davos Report Says 

London — False and misleading information supercharged with cutting-edge artificial intelligence that threatens to erode democracy and polarize society is the top immediate risk to the global economy, the World Economic Forum said in a report Wednesday.

In its latest Global Risks Report, the organization also said an array of environmental risks pose the biggest threats in the longer term. The report was released ahead of the annual elite gathering of CEOs and world leaders in the Swiss ski resort town of Davos and is based on a survey of nearly 1,500 experts, industry leaders and policymakers.

The report listed misinformation and disinformation as the most severe risk over the next two years, highlighting how rapid advances in technology also are creating new problems or making existing ones worse.

The authors worry that the boom in generative AI chatbots like ChatGPT means that creating sophisticated synthetic content that can be used to manipulate groups of people won’t be limited any longer to those with specialized skills.

AI is set to be a hot topic next week at the Davos meetings, which are expected to be attended by tech company bosses including OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella and AI industry players like Meta’s chief AI scientist, Yann LeCun.

AI-powered misinformation and disinformation is emerging as a risk just as billions of people in a slew of countries, including large economies like the United States, Britain, Indonesia, India, Mexico, and Pakistan, are set to head to the polls this year and next, the report said.

“You can leverage AI to do deepfakes and to really impact large groups, which really drives misinformation,” said Carolina Klint, a risk management leader at Marsh, whose parent company Marsh McLennan co-authored the report with Zurich Insurance Group.

“Societies could become further polarized” as people find it harder to verify facts, she said. Fake information also could be used to fuel questions about the legitimacy of elected governments, “which means that democratic processes could be eroded, and it would also drive societal polarization even further,” Klint said.

The rise of AI brings a host of other risks, she said. It can empower “malicious actors” by making it easier to carry out cyberattacks, such as by automating phishing attempts or creating advanced malware.

With AI, “you don’t need to be the sharpest tool in the shed to be a malicious actor,” Klint said.

It can even poison data that is scraped off the internet to train other AI systems, which is “incredibly difficult to reverse” and could result in further embedding biases into AI models, she said.

The other big global concern for respondents of the risk survey centered around climate change.

Following disinformation and misinformation, extreme weather is the second-most-pressing short-term risk.

In the long term — defined as 10 years — extreme weather was described as the No. 1 threat, followed by three other environment-related risks: critical change to Earth systems; biodiversity loss and ecosystem collapse; and natural resource shortages.

“We could be pushed past that irreversible climate change tipping point” over the next decade as the Earth’s systems undergo long-term changes, Klint said.