Ex-Twitter Executives to Testify About Hunter Biden Story Before House Panel

Former Twitter employees are expected to testify next week before the House Oversight Committee about the social media platform’s handling of reporting on President Joe Biden’s son, Hunter Biden.

The scheduled testimony, confirmed by the committee Monday, will be the first time the three former executives appear before Congress to discuss the company’s decision to initially block a New York Post article about Hunter Biden’s laptop in the weeks before the 2020 election.

Republicans have said the story was suppressed for political reasons, though no evidence has been released to support that claim. The witnesses for the February 8 hearing are expected to be Vijaya Gadde, former chief legal officer; James Baker, former deputy general counsel; and Yoel Roth, former head of safety and integrity.

The hearing is among the first of many in a GOP-controlled House to be focused on Biden and his family, as Republicans wield the power of their new, albeit slim, majority.

The New York Post first reported in October 2020 that it had received from former President Donald Trump’s personal attorney, Rudy Giuliani, a copy of a hard drive of a laptop that Hunter Biden had dropped off 18 months earlier at a Delaware computer repair shop and never retrieved. Twitter initially blocked people from sharing links to the story for several days.

Months later, Twitter’s then-CEO Jack Dorsey called the company’s communications around the Post article “not great.” He added that blocking the article’s URL with “zero context” around why it was blocked was “unacceptable.”

The Post article at the time was greeted with skepticism due to questions about the laptop’s origins, including Giuliani’s involvement, and because top officials in the Trump administration already had warned that Russia was working to denigrate Joe Biden ahead of the 2020 election.

The Kremlin had interfered in the 2016 race by hacking Democratic emails that were subsequently leaked, and there were widespread fears across Washington that Russia would meddle again in the 2020 race.

“This is why we’re investigating the Biden family for influence peddling,” Rep. James Comer, chairman of the Oversight Committee, said at a press event Monday morning. “We want to make sure that our national security is not compromised.”

The White House has sought to discredit the Republican probes into Hunter Biden, calling them “divorced-from-reality political stunts.”

Nonetheless, Republicans now hold subpoena power in the House, giving them the authority to compel testimony and conduct an aggressive investigation. GOP staff has spent the past year analyzing messages and financial transactions found on the laptop that belonged to the president’s younger son. Comer has previously said the evidence they have compiled is “overwhelming,” but did not offer specifics.

Comer has pledged there won’t be hearings regarding the Biden family until the committee has the evidence to back up any claims of alleged wrongdoing. He also acknowledged the stakes are high whenever an investigation centers on the leader of a political party.

On Monday, the Kentucky Republican, speaking at a National Press Club event, said that he could not guarantee a subpoena of Hunter Biden during his term. “We’re going to go where the investigation leads us. Maybe there’s nothing there.”

Comer added, “We’ll see.” 

Microsoft Bakes ChatGPT-Like Tech into Search Engine Bing

Microsoft is fusing ChatGPT-like technology into its search engine Bing, transforming an internet service that now trails far behind Google into a new way of communicating with artificial intelligence.

The revamping of Microsoft’s second-place search engine could give the software giant a head start against other tech companies in capitalizing on the worldwide excitement surrounding ChatGPT, a tool that’s awakened millions of people to the possibilities of the latest AI technology.

Along with adding it to Bing, Microsoft is also integrating the chatbot technology into its Edge browser. Microsoft announced the new technology at an event Tuesday at its headquarters in Redmond, Washington.

Microsoft said a public preview of the new Bing was to launch Tuesday for users who sign up for it, with the technology scaling to millions of users in coming weeks.

Yusuf Mehdi, corporate vice president and consumer chief marketing officer, said the new Bing will go live on desktop in a limited preview, with everyone able to try a limited number of queries.

The strengthening partnership with ChatGPT-maker OpenAI has been years in the making, starting with a $1 billion investment from Microsoft in 2019 that led to the development of a powerful supercomputer specifically built to train the San Francisco startup’s AI models.

While it’s not always factual or logical, ChatGPT’s mastery of language and grammar comes from having ingested a huge trove of digitized books, Wikipedia entries, instruction manuals, newspapers and other online writings.

The shift to making search engines more conversational — able to confidently answer questions rather than offering links to other websites — could change the advertising-fueled search business, but also poses risks if the AI systems don’t get their facts right.

Their opaqueness also makes it hard to trace outputs back to the original human-made images and texts they’ve effectively memorized.

Google has been cautious about such moves. But in response to pressure over ChatGPT’s popularity, Google CEO Sundar Pichai on Monday announced a new conversational service named Bard that will be available exclusively to a group of “trusted testers” before being widely released later this year.

Google’s chatbot is supposed to be able to explain complex subjects such as outer space discoveries in terms simple enough for a child to understand. Google also says the service will perform more mundane tasks, such as providing tips for planning a party or suggesting lunch ideas based on what food is left in a refrigerator. Other tech rivals such as Facebook parent Meta and Amazon have also worked on similar technology, but Microsoft’s latest moves aim to position it at the center of the ChatGPT zeitgeist.

Microsoft disclosed in January that it was pouring billions more dollars into OpenAI as it looks to fuse the technology behind ChatGPT, the image-generator DALL-E and other OpenAI innovations into an array of Microsoft products tied to its cloud computing platform and its Office suite of workplace products like email and spreadsheets.

The most surprising might be the integration with Bing, which is the second-place search engine in many markets but has never come close to challenging Google’s dominant position.

Bing launched in 2009 as a rebranding of Microsoft’s earlier search engines and was run for a time by Satya Nadella, years before he took over as CEO. Its significance was boosted when Yahoo and Microsoft signed a deal for Bing to power Yahoo’s search engine, giving Microsoft access to Yahoo’s greater search share. Similar deals infused Bing into the search features for devices made by other companies, though users wouldn’t necessarily know that Microsoft was powering their searches.

By making it a destination for ChatGPT-like conversations, Microsoft could invite more users to give Bing a try.

On the surface, at least, a Bing integration seems far different from what OpenAI has in mind for its technology.

OpenAI has long voiced an ambitious vision for safely guiding what’s known as AGI, or artificial general intelligence, a not-yet-realized concept that harkens back to ideas from science fiction about human-like machines. OpenAI’s website describes AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

OpenAI started out as a nonprofit research laboratory when it launched in December 2015 with backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

That changed in 2018 when it incorporated a for-profit arm, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT model for generating human-like paragraphs of readable text.

OpenAI’s other products include the image-generator DALL-E, first released in 2021, the computer programming assistant Codex and the speech recognition tool Whisper.

Google Hopes ‘Bard’ Will Outsmart ChatGPT, Microsoft in AI

Google is girding for a battle of wits in the field of artificial intelligence with “Bard,” a conversational service aimed at countering the popularity of the ChatGPT tool backed by Microsoft.

Bard initially will be available exclusively to a group of “trusted testers” before being widely released later this year, according to a Monday blog post from Google CEO Sundar Pichai.

Google’s chatbot is supposed to be able to explain complex subjects such as outer space discoveries in terms simple enough for a child to understand. Google also says the service will perform more mundane tasks, such as providing tips for planning a party or suggesting lunch ideas based on what food is left in a refrigerator. Pichai didn’t say in his post whether Bard will be able to write prose in the vein of William Shakespeare, the playwright who apparently inspired the service’s name.

“Bard can be an outlet for creativity, and a launchpad for curiosity,” Pichai wrote.

Google announced Bard’s existence less than two weeks after Microsoft disclosed it’s pouring billions of dollars into OpenAI, the San Francisco-based maker of ChatGPT and other tools that can write readable text and generate new images.

Microsoft’s decision to up the ante on a $1 billion investment that it previously made in OpenAI in 2019 intensified the pressure on Google to demonstrate that it will be able to keep pace in a field of technology that many analysts believe will be as transformational as personal computers, the internet and smartphones have been in various stages over the past 40 years.

In a report last week, CNBC said a team of Google engineers working on artificial intelligence technology “has been asked to prioritize working on a response to ChatGPT.” Bard had been developed under a project called “Atlas” as part of Google’s “code red” effort to counter the success of ChatGPT, which has attracted tens of millions of users since its general release late last year while also raising concerns in schools about its ability to write entire essays for students.

Pichai has been emphasizing the importance of artificial intelligence for the past six years, with one of the most visible byproducts materializing in 2021 as part of a system called “Language Model for Dialogue Applications,” or LaMDA, which will be used to power Bard.

Google also plans to begin incorporating LaMDA and other artificial intelligence advancements into its dominant search engine to provide more helpful answers to the increasingly complicated questions posed by its billions of users. Without providing a specific timeline, Pichai indicated the artificial intelligence tools will be deployed in Google’s search in the near future.

In another sign of Google’s deepening commitment to the field, Google announced last week that it is investing in and partnering with Anthropic, an AI startup led by some former leaders at OpenAI. Anthropic has also built its own AI chatbot named Claude and has a mission centered on AI safety.

Ukraine’s Blackouts Force It to Embrace Greener Energy

As Russia’s targeted attacks on the Ukrainian energy infrastructure continue, Ukraine is forced to rethink its energy future. While inventing ways to quickly restore and improve the resilience of its energy system, Ukraine is also looking for green energy solutions. Anna Chernikova has the story from Irpin, one of the hardest-hit areas of the Kyiv region. Camera: Eugene Shynkar.

Technology Brings Hope to Ukraine’s Wounded

The war in Ukraine has left thousands of wounded soldiers, many of whom require the latest technologies to heal and return to normal life. For VOA, Anna Chernikova visited a rehabilitation center near Kyiv, where cutting edge technology and holistic care are giving soldiers hope. (Myroslava Gongadze contributed to this report. Camera: Eugene Shynkar )       

Ransomware Attacks in Europe Target Old VMware, Agencies Say

Cybersecurity agencies in Europe are warning of ransomware attacks exploiting a two-year-old computer bug as Italy experienced widespread internet outages. 

The Italian premier’s office said Sunday night the attacks affecting computer systems in the country involved “ransomware already in circulation” in a product made by cloud technology provider VMware. 

A Friday technical bulletin from a French cybersecurity agency said the attack campaigns target VMware ESXi hypervisors, which are used to run virtual machines. 

Palo Alto, California-based VMware fixed the bug back in February 2021, but the attacks are targeting older, unpatched versions of the product. 

The company said in a statement Sunday that its customers should take action to apply the patch if they have not already done so. 

“Security hygiene is a key component of preventing ransomware attacks,” it said. 

The U.S. Cybersecurity and Infrastructure Security Agency said Sunday it is “working with our public and private sector partners to assess the impacts of these reported incidents and providing assistance where needed.” 

The problem attracted particular public attention in Italy on Sunday because it coincided with a nationwide internet outage affecting telecommunications operator Telecom Italia. The outage interfered with streaming of the Spezia v. Napoli soccer match but appeared largely resolved by the time of the later Derby della Madonnina between Inter Milan and AC Milan. It was unclear whether the outages were related to the ransomware attacks. 

Seeing Is Believing? Global Scramble to Tackle Deepfakes

Chatbots spouting falsehoods, face-swapping apps crafting porn videos, and cloned voices defrauding companies of millions — the scramble is on to rein in AI deepfakes that have become a misinformation super spreader.

Artificial intelligence is redefining the proverb “seeing is believing,” with a deluge of images created out of thin air and people shown mouthing things they never said in real-looking deepfakes that have eroded online trust.

“Yikes. (Definitely) not me,” tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted expansive rules to regulate deepfakes but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch as they operate anonymously using AI-based software that was once touted as a specialized skill but is now widely available at low cost.

Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.

And British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.

“I remember just feeling like this video was going to go everywhere — it was horrendous,” Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.

The following month, the British government voiced concern about deepfakes and warned of a popular website that “virtually strips women naked.”

‘Information apocalypse’

With no barriers to creating AI-synthesized text, audio and video, the potential for misuse in identity theft, financial fraud and tarnishing reputations has sparked global alarm.

The Eurasia Group called the AI tools “weapons of mass disruption.”

“Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.

“Advances in deepfakes, facial recognition, and voice synthesis software will render control over one’s likeness a relic of the past.”

This week AI startup ElevenLabs admitted that its voice cloning tool could be misused for “malicious purposes” after users posted a deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s biography “Mein Kampf.”

The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse,” a scenario where many people are unable to distinguish fact from fiction.

“Experts fear this may lead to a situation where citizens no longer have a shared reality or could create societal confusion about which information sources are reliable,” Europol said in a report.

That was demonstrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a match.

Hamlin thanked medical professionals responsible for his recovery, but many who believed conspiracy theories that the COVID-19 vaccine was behind his on-field collapse baselessly labeled his video a deepfake.

‘Super spreader’

China enforced new rules last month that will require businesses offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately tagged to avoid “any confusion.”

The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability.”

In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.

The European Union, meanwhile, is locked in heated discussions over its proposed “AI Act.”

The law, which the EU is racing to pass this year, will require users to disclose deepfakes, but many fear the legislation could prove toothless if it does not cover creative or satirical content.

“How do you reinstate digital trust with transparency? That is the real question right now,” Jason Davis, a research professor at Syracuse University, told AFP.

“The [detection] tools are coming and they’re coming relatively quickly. But the technology is moving perhaps even quicker. So like cyber security, we will never solve this, we will only hope to keep up.”

Many are already struggling to comprehend advances such as ChatGPT, a chatbot created by the U.S.-based OpenAI that is capable of generating strikingly cogent texts on almost any topic.

In a study, media watchdog NewsGuard, which called it the “next great misinformation super spreader,” said most of the chatbot’s responses to prompts related to topics such as COVID-19 and school shootings were “eloquent, false and misleading.”

“The results confirm fears … about how the tool can be weaponized in the wrong hands,” NewsGuard said.

Musk Found Not Liable in Tesla Tweet Trial

Jurors on Friday cleared Elon Musk of liability for investors’ losses in a fraud trial over his 2018 tweets falsely claiming that he had funding in place to take Tesla private.

The tweets sent the Tesla share price on a rollercoaster ride, and Musk was sued by shareholders who said the tycoon acted recklessly in an effort to squeeze investors who had bet against the company.

Jurors deliberated for barely two hours before returning to the San Francisco courtroom to say they unanimously agreed that neither Musk nor the Tesla board perpetrated fraud with the tweets and in their aftermath.

“Thank goodness, the wisdom of the people has prevailed!” tweeted Musk, who had tried but failed to get the trial moved to Texas on the grounds jurors in California would be biased against him.

“I am deeply appreciative of the jury’s unanimous finding of innocence in the Tesla 420 take-private case.”

Attorney Nicholas Porritt, who represents Glen Littleton and other investors in Tesla, had argued in court that the case was about making sure the rich and powerful have to abide by the same stock market rules as everyone else.

“Elon Musk published tweets that were false with reckless disregard as to their truth,” Porritt told the panel of nine jurors during closing arguments.

Porritt pointed to expert testimony estimating that Musk’s claim about funding, which turned out not to be true, cost investors billions of dollars overall, and he argued that Musk and the Tesla board should be made to pay damages.

But Musk attorney Alex Spiro successfully countered that the billionaire may have erred on wording in a hasty tweet, but that he did not set out to deceive anyone.

Spiro also portrayed the mercurial entrepreneur, who now owns Twitter, as having had a troubled childhood and having come to the United States as a poor youth chasing dreams.

No joke

Musk testified during three days on the witness stand that his 2018 tweet about taking Tesla private at $420 a share was no joke and that Saudi Arabia’s sovereign wealth fund was serious about helping him do it.

“To Elon Musk, if he believes it or even just thinks about it then it’s true no matter how objectively false or exaggerated it may be,” Porritt told jurors.

Tesla and its board were also to blame, because they let Musk use his Twitter account to post news about the company, Porritt argued.

The case revolved around a pair of tweets in which Musk said “funding secured” for a project to buy out the publicly traded electric automaker, then in a second tweet added that “investor support is confirmed.”

“He wrote two words ‘funding secured’ that were technically inaccurate,” Spiro said of Musk while addressing jurors.

“Whatever you think of him, this isn’t a bad tweeter trial, it’s a ‘did they prove this man committed fraud?’ trial.”

Musk did not intend to deceive anyone with the tweets and had the connections and wealth to take Tesla private, Spiro contended.

During the trial playing out in federal court in San Francisco, Spiro said that even though the tweets may have been a “reckless choice of words,” they were not fraud.

“I’m being accused of fraud; it’s outrageous,” Musk said while testifying in person.

Musk said he fired off the tweets at issue after learning of a Financial Times story about a Saudi Arabian investment fund wanting to acquire a stake in Tesla.

The trial came at a sensitive time for Musk, who has dominated the headlines for his chaotic takeover of Twitter where he has laid off more than half of the 7,500 employees and scaled down content moderation. 

ChatGPT: The Promises, Pitfalls and Panic

Excitement around ChatGPT — an easy-to-use AI chatbot that can deliver an essay or computer code upon request and within seconds — has sent schools into panic and turned Big Tech green with envy.

The potential impact of ChatGPT on society remains complicated and unclear even as its creator Wednesday announced a paid subscription version in the United States.

Here is a closer look at what ChatGPT is (and is not):

Is this a turning point?  

It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.  

What is less clear is whether ChatGPT is actually a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investments from Microsoft.

Yann LeCun, Chief AI Scientist at Meta and professor at New York University, believes “ChatGPT is not a particularly interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.

LeCun, speaking to the Big Technology Podcast, said ChatGPT is void of “any internal model of the world” and is merely churning “one word after another” based on inputs and patterns found on the internet.

“When working with these AI models, you have to remember that they’re slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, the Silicon Valley venture capital firm.

“Every time you ask a question and pull the arm, you get an answer that could be marvelous… or not… The failures can be extremely unpredictable,” Huang wrote in Ars Technica, the tech news website.

Just like Google

ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot only uses a part of its capability.  

The true revolution is the humanlike chat, said Jason Davis, research professor at Syracuse University.

“It’s familiar, it’s conversational and guess what? It’s kind of like putting in a Google search request,” he said.

ChatGPT’s rockstar-like success even shocked its creators at OpenAI, which received billions in new financing from Microsoft in January.

“Given the magnitude of the economic impact we expect here, more gradual is better,” OpenAI CEO Sam Altman said in an interview to StrictlyVC, a newsletter.

“We put GPT-3 out almost three years ago… so the incremental update from that to ChatGPT, I felt like should have been predictable and I want to do more introspection on why I was sort of miscalibrated on that,” he said.

The risk, Altman added, was startling the public and policymakers. On Tuesday, his company unveiled a tool for detecting text generated by AI amid concerns from teachers that students may rely on artificial intelligence to do their homework.

What now?

From lawyers to speechwriters, from coders to journalists, everyone is waiting breathlessly to feel the disruption caused by ChatGPT. OpenAI just launched a paid version of the chatbot at $20 per month for an improved and faster service.

For now, officially, the first significant application of OpenAI’s tech will be for Microsoft software products.  

Though details are scarce, most assume that ChatGPT-like capabilities will turn up on the Bing search engine and in the Office suite.

“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I wanted to write with a prompt,” said Davis.

He believes influencers on TikTok and Twitter will be the earliest adopters of this so-called generative AI since going viral requires huge amounts of content and ChatGPT can take care of that in no time.

This of course raises the specter of disinformation and spamming carried out at an industrial scale.  

For now, Davis said the reach of ChatGPT is very limited by computing power, but once this is ramped up, the opportunities and potential dangers will grow exponentially.

And much like the ever-imminent arrival of self-driving cars that never quite happens, experts disagree on whether that is a question of months or years.

Ridicule

LeCun said Meta and Google have refrained from releasing AI as potent as ChatGPT out of fear of ridicule and backlash.

Quieter releases of language-based bots, like Meta’s BlenderBot or Microsoft’s Tay, were quickly shown capable of generating racist or inappropriate content.

Tech giants have to think hard before releasing something “that is going to spew nonsense” and disappoint, he said.

Zimbabwe Plans to Build $60 Billion ‘Cyber City’ to Ease Harare Congestion

Zimbabwe plans to build “Zim Cyber City,” a modern capital expected to cost up to $60 billion in raised funds and include new government buildings and a presidential palace. Critics are blasting the plan as wasteful when more than half the population lives in poverty and the government has let the current capital, Harare, fall apart. Columbus Mavhunga reports from Mount Hampden, Zimbabwe. Camera: Blessing Chigwenhembe
