Bill Gates Visits China for Health, Development Talks

Microsoft co-founder Bill Gates was in China on Thursday for what he said were meetings with global health and development partners who have worked with his charitable foundation.

“Solving problems like climate change, health inequity and food insecurity requires innovation,” Gates tweeted. “From developing malaria drugs to investing in climate adaptation, China has a lot of experience in that. We need to unlock that kind of progress for more people around the world.”

Gates said global crises have stifled progress in reducing child deaths and poverty, and that he will next travel to West Africa because African countries are particularly vulnerable “with high food prices, crushing debt, and increasing rates of TB and malaria.”

Reuters, citing two people familiar with the matter, said Gates would meet with Chinese President Xi Jinping.

Gates is the latest business figure to visit China this year, following Apple’s Tim Cook and Tesla’s Elon Musk.

Some information for this report came from The Associated Press, Agence France-Presse and Reuters.

Cambodian Facial Recognition Effort Raises Fears of Misuse

Experts are raising concerns that a recent Cambodian government order allocating around $1 million to a local company for a facial recognition technology project could pave the way for the technology to be used against citizens and human rights defenders.

The order, signed by Prime Minister Hun Sen and released in March in a recent tranche of government documents, would award the funds to HSC Co. Ltd., a Cambodian company led by tycoon Sok Hong that has previously printed Cambodian passports and installed CCTV cameras in Phnom Penh, Cambodia’s capital.

The Oct. 17 order appears to be the first direct indication of Cambodia’s interest in pursuing facial recognition, alarming experts who say such initiatives could eventually be used to target dissenters and build a stronger surveillance state similar to China’s. In recent months, the government has blocked the country’s main opposition party from participating in the July national elections, shut down independent media and jailed critics such as labor organizers and opposition politicians.

Neither the Interior Ministry nor the company would answer questions about what the project entails.

“This is national security and not everyone knows about how it works,” Khieu Sopheak, secretary of state and spokesperson for the Interior Ministry, told VOA by phone. “Even in the U.S., if you ask about the air defense system, they will tell you the same. This is the national security system, which we can’t tell everyone [about].”

The order names HSC, a company Sok Hong founded in 2007, as the funds’ recipient. HSC’s businesses span food and beverage, dredging and retail.

HSC also has close ties to the government: in addition to printing passports and providing CCTV cameras in Phnom Penh, it runs the system for national ID cards and has provided border checkpoint technology. Malaysian and Cambodian media identify Sok Hong as the son of Sok Kong, another tycoon who founded the conglomerate Sokimex Investment Group. Both father and son are oknhas or “lords,” a Cambodian honorific given to those who have donated more than $500,000 to the government.

When reached by phone, Sok Hong told VOA, “I think it shouldn’t be reported since it is related to national security.”

Cambodia’s history of repression, including monitoring dissidents in person and online, has raised suspicions that it could deploy such technology to target activists. Last year, labor leaders reported they were recorded via drones during protests.

“Authorities can use facial recognition technology to identify, track individuals and gather vast amounts of personal data without their consent, which could eventually lead to massive surveillance,” said Chak Sopheap, director of the Cambodian Center for Human Rights. “For instance, when a government uses facial recognition to monitor attendance at peaceful gatherings, these actions raise severe concerns about the safety of those citizens.”

In addition, giving control of facial recognition technology to a politically connected firm, and one that already has access to a trove of identity-related information, could centralize citizens’ data in a one-stop shop. That could make it easier to fine-tune algorithms quickly and later develop more facial recognition tools to be shared with the government in a mutually beneficial relationship, Joshua Kurlantzick, Council on Foreign Relations senior fellow for Southeast Asia, told VOA.

China — one of Cambodia’s oldest and closest allies — has pioneered collecting vast amounts of data to monitor citizens. In Xinjiang, home to about 12 million Uyghurs, Chinese authorities combine people’s biometric data and digital activities to create a detailed portrait of their lives.

In recent years, China has sought to influence Southeast Asia, “providing an explicit model for surveillance and a model for a closed and walled-garden internet,” Kurlantzick said, referring to methods of blocking or managing users’ access to certain content.

Some efforts have been formalized under the Digital Silk Road, China’s technology-focused subset of the Belt and Road initiative that provides support, infrastructure and subsidized products to recipient countries.

China’s investment in Cambodian monitoring systems dates back to the early days of the Digital Silk Road. In 2015, it installed an estimated $3 million worth of CCTV cameras in Phnom Penh and later promised more cameras to “allow a database to accumulate for the investigation of criminal cases,” according to reports at the time. There is no indication China is involved in the HSC project, however.

While dozens of countries use facial recognition technology for legitimate public safety uses, such investments must be accompanied by strict data protection laws and enforcement, said Gatra Priyandita, a cyber politics analyst at the Australian Strategic Policy Institute.

Cambodia does not have comprehensive data privacy regulations. The prime minister himself has monitored Zoom calls hosted by political foes, posting on Facebook that “Hun Sen’s people are everywhere.”

Given the country’s approach to digital privacy, housing facial recognition within a government-tied conglomerate is “concerning” but not surprising, Priyandita said.

“The long-term goal of these kinds of arrangements is the reinforcement of regime security, of course, particularly the protection of Cambodia’s main political and business families,” Priyandita said.

In the immediate future, Cambodia’s capacity to carry out mass surveillance is uncertain. The National Internet Gateway — a system for routing traffic through government servers which critics compared to China’s “Great Firewall” — was delayed in early 2022. Shortly before the scheduled rollout, the government advertised more than 100 positions related to data centers and artificial intelligence, sowing doubts about the technical knowledge behind the project.

Still, the government is pushing to strengthen its digital capabilities, fast-tracking controversial laws around cybercrime and cybersecurity and pursuing a 15-year plan to develop the digital economy, including a skilled technical workforce.

Sun Narin of VOA’s Khmer Service contributed to this report.

As Deepfake Fraud Permeates China, Authorities Target Political Challenges Posed by AI

Chinese authorities are cracking down on political and fraud cases driven by deepfakes, created with face- and voice-changing software that tricks targets into believing they are video chatting with a loved one or another trusted person.

How good are the deepfakes? Good enough to trick an executive at a Fuzhou tech company in Fujian province who almost lost $600,000 to a person he thought was a friend claiming to need a quick cash infusion.

The entire transaction took less than 10 minutes from the first contact via the phone app WeChat to police stopping the online bank transfer when the target called the authorities after learning his real friend had never requested the loan, according to Sina Technology.

Despite the public’s outcry about such AI-driven fraud, some experts say Beijing appears more concerned about the political challenges that deepfakes may pose, as shown by newly implemented regulations on “deep synthesis” management that outlaw activities that “endanger national security and interests and damage the national image.”

The rapid development of artificial intelligence technology has propelled cutting-edge technology to mass entertainment applications in just a few years.

In a 2017 demonstration of the risks, a video created by University of Washington researchers showed then-U.S. President Barack Obama saying things he hadn’t.

Two years later, Chinese smartphone apps like Zao let users swap their faces with celebrities so they could appear as if they were in a movie. Zao was removed from app stores in 2019 and Avatarify, another popular Chinese face-swapping app, was also banned in 2021, likely for violation of privacy and portrait rights, according to Chinese media.

Pavel Goldman-Kalaydin, head of artificial intelligence and machine learning at SumSub, a Berlin-based global antifraud company, explained how easy it is with a personal computer or smartphone to make a video in which a person appears to say things he or she never would.

“To create a deepfake, a fraudster uses a real person’s document, taking a photo of it and turning it into a 3D persona,” he said. “The problem is that the technology, it is becoming more and more democratized. Many people can use it. … They can create many deepfakes, and they try to bypass these checks that we try to enforce.”

Subbarao Kambhampati, professor at the School of Computing and Augmented Intelligence at Arizona State University, said in a telephone interview he was surprised by the apparent shift from voice cloning to deepfake video calling by scammers in China. He compared that to a rise in voice-cloning phone scams in the U.S.

“Audio alone, you’re more easily fooled, but audio plus video, it would be little harder to fool you. But apparently they’re able to do it,” Kambhampati said, adding that it is harder to make a video that appears trustworthy.

“Subconsciously we look at people’s faces … and realize that they’re not exactly behaving the way we normally see them behave in terms of their facial expressions.”

Experts say that AI fraud will become more sophisticated.

“We don’t expect the problem to go away. The biggest solution … is education, let people understand the days of trusting your ears and eyes are over, and you need to keep that in the back of your mind,” Kambhampati said.

The Internet Society of China issued a warning in May, calling on the public to be more vigilant as AI-driven face-changing and voice-changing scams and slander become common.

The Wall Street Journal reported on June 4 that local governments across China have begun to crack down on false information generated by artificial intelligence chatbots. Much of the false content designed as clickbait is similar to authentic material on topics that have already attracted public attention.

To regulate “deep synthesis” content, China’s administrative measures implemented on January 10 require service providers to “conspicuously mark” AI-generated content that “may cause public confusion or misidentification” so that users can tell authentic media content from deepfakes.

China’s practice of requiring technology platforms to “watermark” deepfake content has been widely discussed internationally.

Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace, noted that deepfake regulations place the onus on the companies that develop and operate these technologies.

“If enforced well, the regulations could make it harder for criminals to get their hands on these AI tools,” he said in an email to VOA Mandarin. “It could throw up some hurdles to this kind of fraud.”

But he also said that much depends on how Beijing implements the regulations and whether bad actors can obtain AI tools outside China.

“So, it’s not a problem with the technology,” said SumSub’s Goldman-Kalaydin. “It is always a problem with the usage of the technology. So, you can regulate the usage, but not the technology.”

James Lewis, senior vice president of the strategic technologies program at the Center for Strategic and International Studies in Washington, told VOA Mandarin, “Chinese law needs to be modernized for changes in technology, and I know the Chinese are thinking about that. So, the cybercrime laws you have will probably catch things like deepfakes. What will be hard to handle is the volume and the sophistication of the new products, but I know the Chinese government is very worried about fraud and looking for ways to get control of it.”

Others suggest that in regulating AI, political stability is a bigger concern for the Chinese government.

“I think they have a stronger incentive to work on the political threats than they do for fraud,” said Bill Drexel, an associate fellow for the Technology and National Security Program at the Center for a New American Security.

In May, the hashtag #AIFraudEruptingAcrossChina was trending on China’s social media platform Weibo. However, the hashtag has since been censored, according to the Wall Street Journal, suggesting authorities are discouraging discussion on AI-driven fraud.

“So even we can see from this incident, once it appeared that the Chinese public was afraid that there was too much AI-powered fraud, they censored,” Drexel told VOA Mandarin.

He continued, “The fact that official state-run media initially reported these incidents and then later discussion of it was censored just goes to show that they do ultimately care about covering themselves politically more than they care about addressing fraud.”

Adrianna Zhang contributed to this report.

Bill Gates in China to Meet President Xi on Friday – Sources 

Bill Gates, Microsoft Corp’s co-founder, is set to meet Chinese President Xi Jinping on Friday during his visit to China, two people with knowledge of the matter said.

The encounter would be Xi’s first meeting with a foreign private entrepreneur in recent years. The people said it may be one-on-one. A third source confirmed the two would meet, without providing details.

The sources did not say what the two might discuss. Gates tweeted on Wednesday that he had landed in Beijing for the first time since 2019 and that he would meet with partners who had been working on global health and development challenges with the Bill & Melinda Gates Foundation.

The foundation and China’s State Council Information Office, which handles media queries on behalf of the Chinese government, did not immediately respond to Reuters requests for comment. 

Gates stepped down from Microsoft’s board in 2020 to focus on philanthropic works related to global health, education and climate change. He quit his full-time executive role at Microsoft in 2008. 

The last reported meeting between Xi and Gates was in 2015, when they met on the sidelines of the Boao forum in Hainan province. In early 2020, Xi wrote a letter to Gates thanking him, and the Bill & Melinda Gates Foundation, for pledging assistance to China including $5 million for its fight against COVID. 

The meeting would mark the end of a long hiatus by Xi in recent years from meeting foreign private entrepreneurs and business leaders, after the Chinese president stopped traveling abroad for nearly three years as China shut its borders during the pandemic. 

Several foreign CEOs have visited China since it reopened early this year but most have mainly met with government ministers. 

Premier Li Qiang met a group of CEOs including Apple’s Tim Cook in March, and a source told Reuters that Tesla’s Elon Musk met Vice Premier Ding Xuexiang last month.

EU Lawmakers Vote for Tougher AI Rules as Draft Moves to Final Stage

EU lawmakers on Wednesday voted for tougher landmark draft artificial intelligence rules that include a ban on the use of the technology in biometric surveillance and a requirement that generative AI systems like ChatGPT disclose AI-generated content.

The lawmakers agreed to the amendments to the draft legislation proposed by the European Commission, which is seeking to set a global standard for the technology used in everything from automated factories to bots and self-driving cars.

Rapid adoption of Microsoft-backed OpenAI’s ChatGPT and other bots has led top AI scientists and company executives, including Elon Musk and OpenAI CEO Sam Altman, to warn of the potential risks posed to society.

“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose,” said Brando Benifei, co-rapporteur of the draft act.

Among other changes, European Union lawmakers want any company using generative tools to disclose copyrighted material used to train its systems and companies working on “high-risk applications” to do a fundamental rights impact assessment and evaluate environmental impact.

Microsoft, which has called for AI rules, welcomed the lawmakers’ agreement.

“We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson said.

However, the Computer and Communications Industry Association said the amendments on high-risk AIs were likely to overburden European AI developers with “excessively prescriptive rules” and slow down innovation.

“AI raises a lot of questions – socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” EU industry chief Thierry Breton said.

The Commission announced its draft rules two years ago, aiming to set a global standard for a technology key to almost every industry and business and to catch up with the AI leaders, the United States and China.

The lawmakers will now have to thrash out details with European Union countries before the draft rules become legislation. 

EU Regulators Order Google To Break up Digital Ad Business Over Competition Concerns

European Union antitrust regulators took aim at Google’s lucrative digital advertising business in an unprecedented decision ordering the tech giant to sell off some of its ad business to address competition concerns.

The European Commission, the bloc’s executive branch and top antitrust enforcer, said that its preliminary view after an investigation is that “only the mandatory divestment by Google of part of its services” would satisfy the concerns.

The 27-nation EU has led the global movement to crack down on Big Tech companies, but it has previously relied on issuing blockbuster fines, including three antitrust penalties for Google worth billions of dollars.

It’s the first time the bloc has ordered a tech giant to split up key parts of its business.

Google can now defend itself by making its case before the commission issues its final decision. The company didn’t immediately respond to a request for comment.

The commission’s decision stems from a formal investigation that it opened in June 2021, looking into whether Google violated the bloc’s competition rules by favoring its own online display advertising technology services at the expense of rival publishers, advertisers and advertising technology services.

YouTube was one focus of the commission’s investigation, which looked into whether Google was using the video sharing site’s dominant position to favor its own ad-buying services by imposing restrictions on rivals.

Google’s ad tech business is also under investigation by Britain’s antitrust watchdog and faces litigation in the U.S.

Brussels has previously hit Google with more than $8.6 billion worth of fines in three separate antitrust cases, involving its Android mobile operating system and shopping and search advertising services.

The company is appealing all three penalties. An EU court last year slightly reduced the Android penalty to 4.125 billion euros. EU regulators have the power to impose penalties worth up to 10% of a company’s annual revenue.

Big Amazon Cloud Services Recovering After Outage Hits Thousands of Users

Amazon.com said cloud services offered by its unit Amazon Web Services were recovering after a big disruption on Tuesday affected websites of the New York Metropolitan Transportation Authority and The Boston Globe, among others.

Several hours after Downdetector.com started showing reports of outages, Amazon said many AWS services were fully recovered and marked resolved.

“We are continuing to work to fully recover all services,” AWS’ status page showed.

Tuesday’s impact, stretching from transportation to financial services businesses, underscores the adoption of Amazon’s younger Lambda service and the degree to which many of its cloud offerings are crucial to companies in the internet age.

According to research in the past year from the cloud company Datadog, more than half of organizations operating in the cloud use Lambda or rival services, known as serverless technology.

Nearly 12,000 users had reported issues with accessing the service, according to Downdetector, which tracks outages by collating status reports from a number of sources, including user-submitted errors on its platform.

The disruption appeared shorter and less widespread than one the company suffered in 2017 involving its data-hosting service Amazon S3, the bread and butter of its cloud business.

The outage appeared to extend to AWS’s own webpage describing disruptions in its operations, which at one point on Tuesday failed to load, Reuters witnesses saw.

“We quickly narrowed down the root cause to be an issue with a subsystem responsible for capacity management for AWS Lambda, which caused errors directly for customers and indirectly through the use by other AWS services,” Amazon said.

AWS Lambda is a service that lets customers run computer programs without having to manage any underlying servers.
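As a rough sketch of what that means in practice, a Lambda-style program is just a handler function that AWS invokes with an event; the provider provisions and scales the servers behind it. The function and field names below are illustrative, not taken from the article or any specific deployment:

```python
# Minimal AWS Lambda-style handler sketch. In a real deployment, AWS
# calls this function for each invocation; no server management is
# needed by the customer. Names and fields here are hypothetical.

def lambda_handler(event, context):
    # Read a value from the triggering event and return a response
    # in the shape commonly used behind an HTTP endpoint.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the handler is just a function and can be called directly:
print(lambda_handler({"name": "AWS"}, None))
# → {'statusCode': 200, 'body': 'Hello, AWS!'}
```

Because the handler has no server-side state of its own, the provider can run as many copies in parallel as incoming traffic requires, which is what makes capacity management (the subsystem Amazon cited) central to the service.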

Twitter users expressed their frustration with the outage, with one user saying, “I don’t know, Alexa won’t tell me because #AWS and her services are down!”

Delta Air Lines also said it was facing problems but did not say if it was related to the AWS outage. The company did not immediately respond to a request for comment.

Other Amazon services such as Amazon Music and Alexa were also impacted, according to Downdetector.

Amazon had its last major outage in December 2021, when disruptions to its cloud services temporarily knocked out streaming platforms Netflix and Disney+, Robinhood, and Amazon’s e-commerce website ahead of Christmas.

McCartney: ‘Final Beatles Record’ Out This Year Aided by AI

A “final Beatles record”, created with the help of artificial intelligence, will be released later this year, Paul McCartney told the BBC in an interview broadcast on Tuesday.

“It was a demo that John (Lennon) had, and that we worked on, and we just finished it up,” said McCartney, who turns 81 next week.

The Beatles — Lennon, McCartney, George Harrison and Ringo Starr — split in 1970, with each going on to have solo careers, but they never reunited.

Lennon was shot dead in New York in 1980 aged 40 while Harrison died of lung cancer in 2001, aged 58.

McCartney did not name the song that has been recorded but according to the BBC it is likely to be a 1978 Lennon composition called “Now And Then”.

The track — one of several on a cassette that Lennon had recorded for McCartney a year before his death — was given to him by Lennon’s widow Yoko Ono in 1994.

Two of the songs, “Free As A Bird” and “Real Love”, were cleaned up by the producer Jeff Lynne, and released in 1995 and 1996.

An attempt was made to do the same with “Now And Then” but the project was abandoned because of background noise on the demo.

McCartney, who has previously talked about wanting to finish the song, said AI had given him a new chance to do so.

‘Now and Then’

Working with Peter Jackson, the film director behind the 2021 documentary series “The Beatles: Get Back”, McCartney used AI to separate Lennon’s voice from a piano.

“They tell the machine, ‘That’s the voice. This is a guitar. Lose the guitar’,” he explained.

“So when we came to make what will be the last Beatles’ record, it was a demo that John had (and) we were able to take John’s voice and get it pure through this AI.

“Then we can mix the record, as you would normally do. So it gives you some sort of leeway.”
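The separation McCartney describes, isolating one source from a mixed recording, can be illustrated in a vastly simplified form. Real systems like the one used on the demo learn the separation with neural networks; the toy sketch below instead uses a fixed frequency mask on synthetic tones, and assumes nothing about the actual tools Jackson’s team used:

```python
import numpy as np

# Toy illustration of the masking idea behind stem separation:
# mix two synthetic "sources" (a low tone standing in for piano,
# a high tone standing in for a voice), then recover the low one
# by zeroing the other's frequencies in the spectrum.

sr = 8000                                  # sample rate in Hz
t = np.arange(sr) / sr                     # one second of samples
piano = np.sin(2 * np.pi * 220 * t)        # low-frequency source
voice = np.sin(2 * np.pi * 880 * t)        # high-frequency source
mix = piano + voice                        # the "recording"

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
mask = freqs < 500                         # keep only low frequencies
piano_est = np.fft.irfft(spectrum * mask, n=len(mix))

error = np.max(np.abs(piano_est - piano))
print(f"max reconstruction error: {error:.2e}")
```

A fixed cutoff only works because the toy sources occupy disjoint frequency bands; a voice and a piano overlap heavily, which is why learned, time-varying masks were needed to do what a simple filter could not in the abandoned 1995 attempt.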

McCartney performed a two-hour set at last year’s Glastonbury festival in England, playing Beatles classics to the 100,000-strong crowd.

The set included a virtual duet with Lennon of the song “I’ve Got a Feeling”, from the Beatles’ last album “Let It Be”.

Last month, Sting warned that “defending our human capital against AI” would be a major battle for musicians in the coming years.

The use of AI in music is the subject of debate in the industry, with some denouncing copyright abuses and others praising its prowess.

McCartney said the use of the technology was “kind of scary but exciting because it’s the future”, adding: “We’ll just have to see where that leads.”

India Denies Dorsey’s Claims It Threatened to Shut Down Twitter

India threatened to shut Twitter down unless it complied with orders to restrict accounts critical of the government’s handling of farmer protests, co-founder Jack Dorsey said, an accusation Prime Minister Narendra Modi’s government called an “outright lie.”

Dorsey, who quit as Twitter CEO in 2021, said on Monday that India also threatened the company with raids on employees if it did not comply with government requests to take down certain posts.

“It manifested in ways such as: ‘We will shut Twitter down in India’, which is a very large market for us; ‘we will raid the homes of your employees’, which they did; And this is India, a democratic country,” Dorsey said in an interview with YouTube news show Breaking Points.

Deputy Minister for Information Technology Rajeev Chandrasekhar, a top-ranking official in Modi’s government, lashed out at Dorsey in response, calling his assertions an “outright lie.”

“No one went to jail nor was Twitter ‘shut down’. Dorsey’s Twitter regime had a problem accepting the sovereignty of Indian law,” he said in a post on Twitter.

Dorsey’s comments again put the spotlight on the struggles faced by foreign technology giants operating under Modi’s rule. His government has often criticized Google, Facebook and Twitter for not doing enough to tackle fake or “anti-India” content on their platforms, or for not complying with rules.

The former Twitter CEO’s comments drew widespread attention as it is unusual for global companies operating in India to publicly criticize the government. Last year, Xiaomi in a court filing said India’s financial crime agency threatened its executives with “physical violence” and coercion, an allegation which the agency denied.

Dorsey also mentioned similar pressure from governments in Turkey and Nigeria, which had restricted the platform in their nations at different points over the years before lifting those bans.

Twitter was bought by Elon Musk in a $44 billion deal last year.

Chandrasekhar said Twitter under Dorsey and his team had repeatedly violated Indian law. He didn’t name Musk, but added Twitter had been in compliance since June 2022.

Big tech vs Modi

Modi and his ministers are prolific users of Twitter, but free speech activists say his administration resorts to excessive censorship of content it deems critical. India maintains its content removal orders are aimed at protecting users and the sovereignty of the state.

The public spat with Twitter during 2021 saw Modi’s government seeking an “emergency blocking” of the “provocative” Twitter hashtag “#ModiPlanningFarmerGenocide” and dozens of accounts. Farmers’ groups had been protesting against new agriculture laws at the time, one of the biggest challenges faced by the Modi government.

The government later gave in to the farmers’ demands. Twitter initially complied with the government requests but later restored most of the accounts, citing “insufficient justification”, leading to officials threatening legal consequences.

In subsequent weeks, police visited a Twitter office as part of another probe linked to tagging of some ruling party posts as manipulated. Twitter at the time said it was worried about staff safety.

Dorsey in his interview said many of India’s content takedown requests during the farmer protests were “around particular journalists that were critical of the government.”

Since Modi took office in 2014, India has slid from 140th in the World Press Freedom Index to 161st this year, out of 180 countries, its lowest ranking ever.

Startup Firm Leads Kenya Into World of High-Tech Manufacturing

A three-year-old startup company is leading Kenya into the world of high-tech manufacturing, building a workforce capable of making semiconductors and nanotechnology products that operate modern devices from mobile phones to refrigerators. 

Anthony Githinji is the founder of Semiconductors Technologies Limited, or STL, located in Nyeri, about a three-hour drive from Nairobi.

He brought his know-how to Kenya from the United States, where he started work in 1997 on semiconductors — materials that conduct electricity and are used in thousands of products. 

He said the biggest barrier to entry in any high-tech business is finding a workforce with the right skills. In deciding to start a business in Kenya, his country of origin, Githinji said a meeting with the vice-chancellor of Dedan Kimathi University of Science and Technology, also known as DEKUT, was a game changer. 

“DEKUT and STL formed a partnership that allowed for us to engage STEM-related education and develop it, tool it and orient it toward our specific industry, which is the semiconductor and microchip space and so we started attaching students and having internships through STL, and it became very clear and very quickly that the level and caliber of the education system and the product of DEKUT, I believe most institutions of higher learning in Kenya are very high level,” Githinji said.

Female engineers

STL employs about 100 engineers, 70 percent of them women.

Irene Ngetich, a process engineer with a background in telecommunications and electrical engineering, graduated from DEKUT in 2019. She said she entered the STEM (Science, Technology, Engineering, and Mathematics) sector after reading an article recommended by her father about another woman in the field. 

“So, when I read through [it] … she mentioned that in her class there were only two ladies. Of course, I love doing challenging things; so that stood out for me,” she said.

Ngetich said the company’s goal is “to be the leading [computer] chip manufacturer in Africa.” 

Semiconductors are used in almost every sector of electronics. In consumer electronics, for example, they are used in microwave ovens, refrigerators, mobile phones, laptops, and video game consoles. 

Lorna Muturi, a mechatronics engineer who will be graduating from DEKUT this year, is just 22 years old, but already has been working at STL for two years.

“We build the semiconductor manufacturing machine within the plant and as a mechatronics engineer, I am involved in the automation of the system; [and] also involved in the diagnostics of the system in case there’s an issue,” she explained about her job.

Muturi said that at STL, she works with people who are comfortable with her and accept her as a woman engineer. Now she’s able to go out and inspire others to join the STEM field.

STL CEO Githinji said the company prides itself on being gender overbalanced on the female side. He said the company turned out that way because of an extremely vigilant human resource development program.

“What you see at STL, whether it’s deliberate or inadvertent, is the result of pretty rigorous attention to the human resource capacity of the individual. It so turns out that these young women in STEM at STL have a very compelling story to tell. They are extremely intelligent, they are doing exceptionally well, training very well and they are producing very well,” he told VOA.

He added, “We also do have a lot of young men who perform very well and are exceptional in what they do.”

Looking ahead

Githinji said the company is not profitable yet.

“We are still in the phase of building capacity, so there’s a lot of expense that sinks into creating that capacity,” he said. “The good news, though, is that we have customers, we have products, we have the view that these products are going to be more and more adaptable and compelling in the marketplace.”

The company is working to establish relationships with other universities in Kenya, such as Strathmore University and the University of Eldoret, as well as with universities in Uganda and Rwanda.

Githinji said he has also established a foundation named after his mother and his mother-in-law, with a goal of empowering underprivileged girls through STEM. With partners, he has built a computer lab with about 20 workstations in a remote village near Mount Kenya so children and their families can benefit.

UN Chief Considering Watchdog Agency for AI   

U.N. Secretary-General Antonio Guterres said Monday that he will appoint a scientific advisory body in the coming days that will include outside experts on artificial intelligence, and said he is open to the idea of creating a new U.N. agency that would focus on AI.

“I would be favorable to the idea that we could have an artificial intelligence agency, I would say, inspired by what the International Atomic Energy Agency is today,” Guterres said of the U.N. nuclear watchdog agency.

He said he does not have the authority to create an IAEA-like agency — that is up to the organization’s 193 member states. But he said the idea has been discussed and he would see it as a positive development.

“What is the advantage of the IAEA — it is a very solid, knowledge-based institution,” Guterres told reporters. “And at the same time, even if limited, it has some regulatory functions. So, I believe this is a model that could be very interesting.”

The Vienna-based IAEA is the focal point for international nuclear cooperation. It has developed international nuclear safety standards and is both watchdog and advisor on the peaceful use of nuclear energy.

There are growing concerns about the power of artificial intelligence and how it can be abused for harmful and even deadly purposes, including from Geoffrey Hinton, the scientist known as “the godfather of AI.”

Top U.S. cybersecurity officials have also warned of the growing dangers of AI. 

“I think ultimately there will have to be — and even industry is saying this — there will have to be some sort of regulation to govern the licensing and the use of these capabilities,” U.S. Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly told the Aspen Institute in Washington Monday. 

Easterly also emphasized the need for more dialogue on AI, pointing to proposals like the one being pursued by the U.N. 

“We can have conversations with our adversaries about nuclear weapons,” Easterly said. “I think we probably should think about having these conversations with our adversaries on AI which, after all, will be in my view the most powerful weapons of the century.” 

British Prime Minister Rishi Sunak announced plans last week for the U.K. to host the first major global summit on AI safety in the autumn.

Guterres said that in an industry that moves as quickly as AI, a set of norms established one day can be outdated the next, so a more flexible approach to regulation is necessary.

“We need a process, a constant process of intervention of the different stakeholders, working together to permanently establish a number of soft law mechanisms, a number of — I would say — norms, codes of conduct and others,” he said.

Guterres said the scientific advisory body he will soon create will also include the chief scientists from the U.N. Educational, Scientific and Cultural Organization (UNESCO) and the International Telecommunication Union (ITU), which is a specialized U.N. agency related to information and telecommunication technology.

He said outside experts, including two from the AI sphere, would be a part of the advisory body.

The U.N. chief also announced plans for a digital compact, a voluntary “code of conduct” that he hopes technology companies and governments will adhere to, with the aim of curbing the spread of misinformation, disinformation and hate speech and making the internet a safer space.

“Its proposals are aimed at creating guardrails to help governments come together around guidelines that promote facts, while exposing conspiracies and lies, and safeguarding freedom of expression and information,” he said. “And to help tech companies navigate difficult ethical and legal issues and build business models based on a healthy information ecosystem.”

He said tech companies have done little to prevent their platforms from contributing to hate and violence, and he criticized governments for ignoring human rights and sometimes taking drastic measures, including sweeping internet shutdowns.

Guterres said he hopes to issue the code of conduct after discussions with member states and before the U.N. Summit of the Future, which is planned for September 2024.

VOA National Security Correspondent Jeff Seldin contributed to this report. 

Startup Firm Leads Kenya into World of High-Tech Manufacturing

A three-year-old startup company is leading Kenya into the world of high-tech manufacturing, building a sophisticated workforce capable of making the semiconductors and nanotechnology products that operate modern devices from mobile phones to refrigerators. VOA’s Africa correspondent Mariama Diallo visited the plant and has this story.