EU Lawmakers Vote for Tougher AI Rules as Draft Moves to Final Stage

EU lawmakers on Wednesday voted for tougher landmark draft artificial intelligence rules that include a ban on the use of the technology in biometric surveillance and a requirement for generative AI systems like ChatGPT to disclose AI-generated content.

The lawmakers agreed to the amendments to the draft legislation proposed by the European Commission, which is seeking to set a global standard for the technology used in everything from automated factories to bots and self-driving cars.

Rapid adoption of Microsoft-backed OpenAI’s ChatGPT and other bots has led top AI scientists and company executives, including Elon Musk and OpenAI CEO Sam Altman, to warn of the potential risks posed to society.

“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” said Brando Benifei, co-rapporteur of the draft act.

Among other changes, European Union lawmakers want any company using generative tools to disclose copyrighted material used to train its systems, and want companies working on “high-risk applications” to conduct a fundamental rights impact assessment and evaluate environmental impact.

Microsoft, which has called for AI rules, welcomed the lawmakers’ agreement.

“We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said.

However, the Computer and Communications Industry Association said the amendments on high-risk AIs were likely to overburden European AI developers with “excessively prescriptive rules” and slow down innovation.

“AI raises a lot of questions – socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” EU industry chief Thierry Breton said.

The Commission announced its draft rules two years ago, aiming to set a global standard for a technology key to almost every industry and business, and to catch up with AI leaders the United States and China.

The lawmakers will now have to thrash out details with European Union countries before the draft rules become legislation. 

EU Regulators Order Google To Break up Digital Ad Business Over Competition Concerns

European Union antitrust regulators took aim at Google’s lucrative digital advertising business in an unprecedented decision ordering the tech giant to sell off some of its ad business to address competition concerns.

The European Commission, the bloc’s executive branch and top antitrust enforcer, said that its preliminary view after an investigation is that “only the mandatory divestment by Google of part of its services” would satisfy the concerns.

The 27-nation EU has led the global movement to crack down on Big Tech companies, but it has previously relied on issuing blockbuster fines, including three antitrust penalties for Google worth billions of dollars.

It’s the first time the bloc has ordered a tech giant to split up key parts of its business.

Google can now defend itself by making its case before the commission issues its final decision. The company didn’t immediately respond to a request for comment.

The commission’s decision stems from a formal investigation that it opened in June 2021, looking into whether Google violated the bloc’s competition rules by favoring its own online display advertising technology services at the expense of rival publishers, advertisers and advertising technology services.

YouTube was one focus of the commission’s investigation, which looked into whether Google was using the video sharing site’s dominant position to favor its own ad-buying services by imposing restrictions on rivals.

Google’s ad tech business is also under investigation by Britain’s antitrust watchdog and faces litigation in the U.S.

Brussels has previously hit Google with more than $8.6 billion worth of fines in three separate antitrust cases, involving its Android mobile operating system and shopping and search advertising services.

The company is appealing all three penalties. An EU court last year slightly reduced the Android penalty to 4.125 billion euros. EU regulators have the power to impose penalties worth up to 10% of a company’s annual revenue.

Big Amazon Cloud Services Recovering After Outage Hits Thousands of Users

Amazon.com said cloud services offered by its unit Amazon Web Services were recovering after a big disruption on Tuesday affected websites of the New York Metropolitan Transportation Authority and The Boston Globe, among others.

Several hours after Downdetector.com started showing reports of outages, Amazon said many AWS services had fully recovered and marked the issues resolved.

“We are continuing to work to fully recover all services,” AWS’ status page showed.

Tuesday’s impact, stretching from transportation to financial services businesses, underscores the adoption of Amazon’s newer Lambda service and the degree to which many of its cloud offerings are crucial to companies in the internet age.

According to research in the past year from the cloud company Datadog, more than half of organizations operating in the cloud use Lambda or rival services, known as serverless technology.

Nearly 12,000 users had reported issues with accessing the service, according to Downdetector, which tracks outages by collating status reports from a number of sources, including user-submitted errors on its platform.

The disruption appeared smaller in duration and breadth than the one the company suffered in 2017 to its data-hosting service known as Amazon S3, the bread and butter of its cloud business.

The outage appeared to extend to AWS’s own webpage describing disruptions in its operations, which at one point failed to load on Tuesday, according to Reuters witnesses.

“We quickly narrowed down the root cause to be an issue with a subsystem responsible for capacity management for AWS Lambda, which caused errors directly for customers and indirectly through the use by other AWS services,” Amazon said.

AWS Lambda is a service that lets customers run computer programs without having to manage any underlying servers.
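As a rough illustration of the serverless model described above: in Python, a Lambda function is just an entry-point function that AWS calls with an event payload each time it is triggered. The handler name, event shape and return format below are illustrative assumptions following a common convention, not details from the article.

```python
import json

def handler(event, context):
    # AWS invokes this entry point on demand; the developer never
    # provisions or manages the servers it runs on.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Capacity management for functions like this (how many copies run, and where) is handled by the service itself, which is the subsystem Amazon cited as the root cause of the outage.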

Twitter users expressed their frustration with the outage, with one user saying, “I don’t know, Alexa won’t tell me because #AWS and her services are down!”

Delta Air Lines also said it was facing problems but did not say whether they were related to the AWS outage. The airline did not immediately respond to a request for comment.

Other Amazon services such as Amazon Music and Alexa were also impacted, according to Downdetector.

Amazon had its last major outage in December 2021, when disruptions to its cloud services temporarily knocked out streaming platforms Netflix and Disney+, Robinhood, and Amazon’s e-commerce website ahead of Christmas.

McCartney: ‘Final Beatles Record’ Out This Year Aided by AI

A “final Beatles record”, created with the help of artificial intelligence, will be released later this year, Paul McCartney told the BBC in an interview broadcast on Tuesday.

“It was a demo that John (Lennon) had, and that we worked on, and we just finished it up,” said McCartney, who turns 81 next week.

The Beatles — Lennon, McCartney, George Harrison and Ringo Starr — split in 1970, with each going on to have solo careers, but they never reunited.

Lennon was shot dead in New York in 1980 aged 40 while Harrison died of lung cancer in 2001, aged 58.

McCartney did not name the song that has been recorded, but according to the BBC it is likely to be a 1978 Lennon composition called “Now And Then”.

The track — one of several on a cassette that Lennon had recorded for McCartney a year before his death — was given to McCartney by Lennon’s widow, Yoko Ono, in 1994.

Two of the songs, “Free As A Bird” and “Real Love”, were cleaned up by the producer Jeff Lynne, and released in 1995 and 1996.

An attempt was made to do the same with “Now And Then” but the project was abandoned because of background noise on the demo.

McCartney, who has previously talked about wanting to finish the song, said AI had given him a new chance to do so.

‘Now and Then’

Working with Peter Jackson, the film director behind the 2021 documentary series “The Beatles: Get Back”, McCartney used AI to separate Lennon’s voice from the piano on the demo.

“They tell the machine, ‘That’s the voice. This is a guitar. Lose the guitar’,” he explained.

“So when we came to make what will be the last Beatles’ record, it was a demo that John had (and) we were able to take John’s voice and get it pure through this AI.

“Then we can mix the record, as you would normally do. So it gives you some sort of leeway.”

McCartney performed a two-hour set at last year’s Glastonbury festival in England, playing Beatles’ classics to the 100,000-strong crowd.

The set included a virtual duet with Lennon of the song “I’ve Got a Feeling”, from the Beatles’ last album “Let It Be”.

Last month, Sting warned that “defending our human capital against AI” would be a major battle for musicians in the coming years.

The use of AI in music is the subject of debate in the industry, with some denouncing copyright abuses and others praising its prowess.

McCartney said the use of the technology was “kind of scary but exciting because it’s the future”, adding: “We’ll just have to see where that leads.”

India Denies Dorsey’s Claims It Threatened to Shut Down Twitter

India threatened to shut Twitter down unless it complied with orders to restrict accounts critical of the government’s handling of farmer protests, co-founder Jack Dorsey said, an accusation Prime Minister Narendra Modi’s government called an “outright lie.”

Dorsey, who quit as Twitter CEO in 2021, said on Monday that India also threatened the company with raids on employees if it did not comply with government requests to take down certain posts.

“It manifested in ways such as: ‘We will shut Twitter down in India’, which is a very large market for us; ‘we will raid the homes of your employees’, which they did. And this is India, a democratic country,” Dorsey said in an interview with the YouTube news show Breaking Points.

Deputy Minister for Information Technology Rajeev Chandrasekhar, a top ranking official in Modi’s government, lashed out against Dorsey in response, calling his assertions an “outright lie.”

“No one went to jail nor was Twitter ‘shut down’. Dorsey’s Twitter regime had a problem accepting the sovereignty of Indian law,” he said in a post on Twitter.

Dorsey’s comments again put the spotlight on the struggles faced by foreign technology giants operating under Modi’s rule. His government has often criticized Google, Facebook and Twitter for not doing enough to tackle fake or “anti-India” content on their platforms, or for not complying with rules.

The former Twitter CEO’s comments drew widespread attention as it is unusual for global companies operating in India to publicly criticize the government. Last year, Xiaomi in a court filing said India’s financial crime agency threatened its executives with “physical violence” and coercion, an allegation which the agency denied.

Dorsey also mentioned similar pressure from governments in Turkey and Nigeria, which had restricted the platform in their nations at different points over the years before lifting those bans.

Twitter was bought by Elon Musk in a $44 billion deal last year.

Chandrasekhar said Twitter under Dorsey and his team had repeatedly violated Indian law. He didn’t name Musk, but added Twitter had been in compliance since June 2022.

Big tech vs Modi

Modi and his ministers are prolific users of Twitter, but free speech activists say his administration resorts to excessive censorship of content it deems critical of its record. India maintains its content removal orders are aimed at protecting users and the sovereignty of the state.

The public spat with Twitter during 2021 saw Modi’s government seeking an “emergency blocking” of the “provocative” Twitter hashtag “#ModiPlanningFarmerGenocide” and dozens of accounts. Farmers’ groups had been protesting against new agriculture laws at the time, one of the biggest challenges faced by the Modi government.

The government later gave in to the farmers’ demands. Twitter initially complied with the government requests but later restored most of the accounts, citing “insufficient justification”, leading to officials threatening legal consequences.

In subsequent weeks, police visited a Twitter office as part of another probe linked to tagging of some ruling party posts as manipulated. Twitter at the time said it was worried about staff safety.

Dorsey said in his interview that many content takedown requests in India during the farmer protests were “around particular journalists that were critical of the government.”

Since Modi took office in 2014, India has slid from 140th in the World Press Freedom Index to 161st this year, out of 180 countries, its lowest ranking ever.

UN Chief Considering Watchdog Agency for AI   

U.N. Secretary-General Antonio Guterres said Monday that he will appoint a scientific advisory body in the coming days that will include outside experts on artificial intelligence, and said he is open to the idea of creating a new U.N. agency that would focus on AI.

“I would be favorable to the idea that we could have an artificial intelligence agency, I would say, inspired by what the International Atomic Energy Agency is today,” Guterres said of the U.N. nuclear watchdog agency.

He said he does not have the authority to create an IAEA-like agency — that is up to the organization’s 193 member states. But he said it has been discussed and he would see it as a positive development.

“What is the advantage of the IAEA — it is a very solid, knowledge-based institution,” Guterres told reporters. “And at the same time, even if limited, it has some regulatory functions. So, I believe this is a model that could be very interesting.”

The Vienna-based IAEA is the focal point for international nuclear cooperation. It has developed international nuclear safety standards and is both watchdog and advisor on the peaceful use of nuclear energy.

There are growing concerns about the power of artificial intelligence and how it can be abused for negative and even deadly purposes, including from Geoffrey Hinton, the scientist known as “the godfather of AI.”

Top U.S. cybersecurity officials have also warned of the growing dangers of AI. 

“I think ultimately there will have to be — and even industry is saying this — there will have to be some sort of regulation to govern the licensing and the use of these capabilities,” U.S. Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly told the Aspen Institute in Washington Monday. 

Easterly also emphasized the need for more dialogue on AI, pointing to proposals like the one being pursued by the U.N. 

“We can have conversations with our adversaries about nuclear weapons,” Easterly said. “I think we probably should think about having these conversations with our adversaries on AI which, after all, will be in my view the most powerful weapons of the century.” 

British Prime Minister Rishi Sunak announced last week plans for the UK to host the first major global summit on AI safety in the autumn.

Guterres said that in terms of regulating AI, an industry where things move very quickly, a set of norms established one day can be outdated the next, so something more flexible is necessary.

“We need a process, a constant process of intervention of the different stakeholders, working together to permanently establish a number of soft law mechanisms, a number of — I would say — norms, codes of conduct and others,” he said.

Guterres said the scientific advisory body he will soon create will also include the chief scientists from the U.N. Educational, Scientific and Cultural Organization (UNESCO) and the International Telecommunication Union (ITU), which is a specialized U.N. agency related to information and telecommunication technology.

He said outside experts, including two from the AI sphere, would be a part of the advisory body.

The UN chief also announced plans for a digital compact, a voluntary “code of conduct” he hopes technology companies and governments will adhere to, with the aim of curbing the spread of misinformation, disinformation and hate speech and making the internet a safer space.

“Its proposals are aimed at creating guardrails to help governments come together around guidelines that promote facts, while exposing conspiracies and lies, and safeguarding freedom of expression and information,” he said. “And to help tech companies navigate difficult ethical and legal issues and build business models based on a healthy information ecosystem.”

He said tech companies have done little to prevent their platforms from contributing to hate and violence, and he criticized governments for ignoring human rights and sometimes taking drastic measures, including sweeping internet shutdowns.

Guterres said he hopes to issue the code of conduct after discussions with member states and before the U.N. Summit of the Future, which is planned for September 2024.

VOA National Security Correspondent Jeff Seldin contributed to this report. 

Startup Firm Leads Kenya into World of High-Tech Manufacturing

A three-year-old startup company is leading Kenya into the world of high-tech manufacturing, building a sophisticated workforce capable of making the semiconductors and nanotechnology products that operate modern devices from mobile phones to refrigerators. VOA’s Africa correspondent Mariama Diallo visited the plant and has this story.

AI Chatbots Offer Comfort to the Bereaved

Staying in touch with a loved one after their death is the promise of several start-ups using the powers of artificial intelligence, though not without raising ethical questions.

Ryu Sun-yun sits in front of a microphone and a giant screen, where her husband, who died a few months earlier, appears.

“Sweetheart, it’s me,” the man on the screen tells her in a video demo. In tears, she answers him, and a semblance of conversation begins.

When Lee Byeong-hwal learned he had terminal cancer, the 76-year-old South Korean asked startup DeepBrain AI to create a digital replica using several hours of video.

“We don’t create new content” such as sentences that the deceased would have never uttered or at least written and validated during their lifetime, said Joseph Murphy, head of development at DeepBrain AI, about the “Rememory” program.

“I’ll call it a niche part of our business. It’s not a growth area for us,” he cautioned.

The idea is the same for StoryFile, a company that uses 92-year-old “Star Trek” actor William Shatner to market its site.

“Our approach is to capture the wonder of an individual, then use the AI tools,” said Stephen Smith, boss of StoryFile, which claims several thousand users of its Life service.

Entrepreneur Pratik Desai caused a stir a few months ago when he suggested people save audio or video of “your parents, elders and loved ones,” estimating that by “the end of this year” it would be possible to create an autonomous avatar of a deceased person, and that he was working on a project to this end.

The message posted on Twitter set off a storm, to the point that, a few days later, he denied being “a ghoul.”

“This is a very personal topic and I sincerely apologize for hurting people,” he said.

“It’s a very fine ethical area that we’re taking with great care,” Smith said.

After the death of her best friend in a car accident in 2015, Russian engineer Eugenia Kyuda, who emigrated to California, created a “chatbot” named Roman, modeled on her dead friend and fed with thousands of text messages he had sent to loved ones.

Two years later Kyuda launched Replika, which offers personalized conversational robots, among the most sophisticated on the market.

But despite the Roman precedent, Replika “is not a platform made to recreate a lost loved one,” a spokesperson said.

Somnium Space, based in London, wants to create virtual clones while users are still alive so that they then can exist in a parallel universe after their death.

“It’s not for everyone,” CEO Artur Sychov conceded in a video posted on YouTube about his product, Live Forever, which he is announcing for the end of the year.

“Do I want to meet my grandfather who’s in AI? I don’t know. But those who want that will be able to,” he added.

Thanks to generative AI, the technology is there to allow avatars of departed loved ones to say things they never said when they were alive.

“I think these are philosophical challenges, not technical challenges,” said Murphy of DeepBrain AI.

“I would say that is a line right now that we do not plan on crossing, but who knows what the future holds?” he added.

“I think it can be helpful to interact with an AI version of a person in order to get closure — particularly in situations where grief was complicated by abuse or trauma,” said Candi Cann, a professor at Baylor University who is currently researching the topic in South Korea.

Mari Dias, a professor of medical psychology at Johnson & Wales University, has asked many of her bereaved patients about virtual contact with their loved ones.

“The most common answer is ‘I don’t trust AI. I’m afraid it’s going to say something I’m not going to accept.’ … I get the impression that they think they don’t have control” over what the avatar does.

Apple, Defying the Times, Stays Quiet on AI

Resisting the hype, Apple defied most predictions this week and made no mention of artificial intelligence when it unveiled its latest slate of new products, including its Vision Pro mixed reality headset.

Generative AI has become the tech world’s biggest buzzword since Microsoft-backed OpenAI released ChatGPT late last year, revealing the capabilities of the emerging technology. 

ChatGPT opened the world’s eyes to the idea that computers can churn out complex, human-level content using simple prompts, giving amateurs the talents of tech geeks, artists or speechwriters. 

Apple has laid low as Microsoft and Google raced out announcements on how generative AI will revolutionize their products, from online search to word processing and retouching images.

During the recent earnings season, tech CEOs peppered mentions of AI into their every phrase, eager to reassure investors that they wouldn’t miss Silicon Valley’s next big chapter.

Apple has chosen to be much more discreet and, in its closely watched keynote address at its Worldwide Developers Conference in California, never once mentioned AI specifically.

“Apple ghosts the generative AI revolution,” said a headline in Wired Magazine after the event. 

‘Not necessarily AI?’

Arguments vary on why Apple has chosen a more subtle approach. 

For one, Apple follows other critics who have long been wary of the catchall “AI” term, believing that it is too vague and unhelpfully evokes dystopian nightmares of killer robots and human subjugation to machines. 

For this reason, some companies – including TikTok and Facebook parent Meta – roll out AI innovations without necessarily touting them as such. 

“We do integrate it into our products [but] people don’t necessarily think about it as AI,” Apple CEO Tim Cook told ABC News this week.

Indeed, AI was actually very much part of Apple’s annual jamboree on Monday, but it required a level of technical know-how to notice.

In one instance, Apple’s head of software said “on-device machine learning” would enhance autocorrect for iPhone messaging when he could have just as well said AI.

Apple’s autocorrect innovation drew giggles with the promise of iPhones no longer correcting common expletives.

“In those moments where you just want to type a ‘ducking’ word, well, the keyboard will learn it, too,” said Craig Federighi.

Autocorrect will also learn from your writing style, helping it guide suggestions, using AI technology similar to what powers ChatGPT.

In another example, a new iPhone app called Journal, an interactive diary, would use “on-device machine learning … to inspire your writing,” Apple said, again not referring to AI when other companies would have.

But AI will also play a major role in the Vision Pro headset when it is released next year, helping, for example, generate a user’s digital persona for video-conferencing.

‘Not much effort’

For some analysts, the non-mention of AI is an acknowledgement by Apple that it lost ground against rivals. 

“They haven’t put much effort into it,” independent tech analyst Rob Enderle told AFP. 

“I think they just kind of felt that AI was off into the future and it wasn’t anything surprising,” he added. 

The glitchy performance of Apple’s voice assistant Siri, which was launched a decade ago, has also fed the feeling that the smartphone giant doesn’t get AI. 

“I think most people would agree that Apple lost its edge with Siri. That’s probably the most obvious way they fell behind,” said Insider Intelligence principal analyst Yory Wurmser. 

But Wurmser also insisted that Apple is primarily a device company and that AI, which is software, will always be “the means rather than the ends for a great user experience” on its premium devices.

In this vein, for analyst Dan Ives of Wedbush Securities, the release of Apple’s Vision Pro headset was in itself an AI play, even if it wasn’t explicitly spelled out that way.

“We continue to strongly believe this is the first step in a broader strategy for Apple to build out a generative AI driven app ecosystem” on the Vision Pro, he said. 

Financial Institutions in US, East Asia Spoofed by Suspected North Korean Hackers

There are renewed concerns North Korea’s army of hackers is targeting financial institutions to prop up the regime in Pyongyang and possibly fund its weapons programs.

A report published Tuesday by the cybersecurity firm Recorded Future finds North Korean-aligned actors have been spoofing well-known financial firms in Japan, Vietnam and the United States, sending out emails and documents that, if opened, could grant the hackers access to critical systems.

“The targeting of investment banking and venture capital firms may expose sensitive or confidential information of these entities or their customers,” according to the report by Recorded Future’s Insikt Group.

“[It] may result in legal or regulatory action, jeopardize pending business negotiations or agreements, or expose information damaging to the company’s strategic investment portfolio,” it said.

The report said the most recent cluster of activity took place between September 2022 and March 2023, making use of three new internet addresses and two old addresses, and more than 20 domain names.

Some of the domains imitated those used by the targeted financial institutions.

Recorded Future named the group behind the attacks Threat Activity Group 71 (TAG-71), which is also known as APT38, Bluenoroff, Stardust Chollima and the Lazarus Group.

This past April, the U.S. sanctioned three individuals associated with the Lazarus Group, accusing them of helping North Korea launder stolen virtual currencies and turn them into cash.

U.S. Treasury officials levied additional sanctions just last month against North Korea’s Technical Reconnaissance Bureau, which develops tools and operations to be carried out by the Lazarus Group.

The Lazarus Group is believed to be responsible for the largest theft of virtual currency to date, stealing approximately $620 million connected to a popular online game in March 2022.

Earlier this month, U.S. and South Korean agencies issued a warning about another set of North Korean cyber actors impersonating think tanks, academic institutions and journalists in an ongoing attempt to collect intelligence.

Japan, Australia, US to Fund Undersea Cable Connection in Micronesia to Counter China’s Influence

Japan announced Tuesday that it has joined the United States and Australia in agreeing to fund a $95 million undersea cable project that will connect East Micronesian island nations, improving networks in the Indo-Pacific region, where China is increasingly expanding its influence.

The approximately 2,250-kilometer (1,400-mile) undersea cable will connect the state of Kosrae in the Federated States of Micronesia, Tarawa in Kiribati and Nauru to the existing cable landing point in Pohnpei, Micronesia, according to the Japanese Foreign Ministry.

Japan, the United States and Australia have stepped up cooperation with the Pacific Islands, apparently to counter efforts by Beijing to expand its security and economic influence in the region.

In a joint statement, the parties said next steps involve a final survey and design and manufacturing of the cable, whose width is about that of a garden hose. The completion is expected around 2025.

The announcement comes just over two weeks after leaders of the Quad, a security alliance of Japan, the United States, Australia and India, emphasized the importance of undersea cables as a critical component of communications infrastructure and the foundation for internet connectivity.

“Secure and resilient digital connectivity has never been more important,” Matthew Murray, a senior official in the U.S. State Department’s Bureau of East Asian and Pacific Affairs, said in a statement. “The United States is delighted to be part of this project bringing our region closer together.”

NEC Corp., which won the contract after a competitive tender, said the cable will ensure high-speed, high-quality and more secure communications for residents, businesses and governments in the region, while contributing to improved digital connectivity and economic development.

The cable will connect more than 100,000 people across the three Pacific countries, according to Kazuya Endo, director general of the international cooperation bureau at the Japanese Foreign Ministry.