Russian Malware Targeting Ukrainian Mobile Devices

Ukrainian troops using Android mobile devices are coming under attack from Russian hackers, who are using a new kind of malware to try to steal information critical to the ongoing counteroffensive.

Cyber officials from the United States, along with counterparts from Australia, Britain, Canada and New Zealand, issued a warning Thursday about the malware, named Infamous Chisel, which aims to scan files, monitor communications and “periodically steal sensitive information.”

The U.S. Cybersecurity and Infrastructure Security Agency, or CISA, describes the new malware as “a collection of components which enable persistent access to an infected Android device … which periodically collates and exfiltrates victim information.”


A CISA report published Thursday shared additional technical details about the Russian campaign, with officials warning the malware could be employed against other targets.

Thursday’s warning reflects “the need for all organizations to keep their Shields Up to detect and mitigate Russian cyber activity, and the importance of continued focus on maintaining operational resilience under all conditions,” said Eric Goldstein, CISA executive assistant director for cybersecurity, in a statement.

According to the report by the U.S. and its allies, the malware is designed to persist on a system by replacing legitimate system components with malicious versions delivered from outside the system, rather than relying on code directly attached to the malware itself.

It also said the malware’s components are of “low to medium sophistication and appear to have been developed with little regard to defense evasion or concealment of malicious activity.”

Ukraine’s SBU security agency first discovered the Russian malware earlier in August, saying it was being used to “gain access to the combat data exchange system of the Armed Forces of Ukraine.”

Ukrainian officials said at the time they were able to launch defensive cyber operations to expose and block the Russian efforts.

An SBU investigation determined that Russia was able to launch the malware attack after capturing Ukrainian computer tablets on the battlefield.

Ukraine attributed the attack to a cyber threat actor known as Sandworm, which U.S. and British officials have previously linked to the GRU, Russia’s military intelligence service.

FBI-Led Operation Dismantles Notorious Qakbot Malware

A global operation led by the FBI has dismantled one of the most notorious cybercrime tools used to launch ransomware attacks and steal sensitive data.

U.S. law enforcement officials announced on Tuesday that the FBI and its international partners had disrupted the Qakbot infrastructure and seized nearly $9 million in illicit cryptocurrency profits.

Qakbot, also known as Qbot, was a sophisticated botnet and malware that infected hundreds of thousands of computers around the world, allowing cybercriminals to access and control them remotely.

“The Qakbot malicious code is being deleted from victim computers, preventing it from doing any more harm,” the U.S. Attorney’s Office for the Central District of California said in a statement.

Martin Estrada, the U.S. attorney for the Central District of California, and Don Alway, the FBI assistant director in charge of the Los Angeles field office, announced the operation at a press conference in Los Angeles.

Estrada called the operation “the largest U.S.-led financial and technical disruption of a botnet infrastructure” used by cybercriminals to carry out ransomware, financial fraud, and other cyber-enabled crimes.

“Qakbot was the botnet of choice for some of the most infamous ransomware gangs, but we have now taken it out,” Estrada said.

Law enforcement agencies from France, Germany, the Netherlands, the United Kingdom, Romania, and Latvia took part in the operation, code-named Duck Hunt.

“These actions will prevent an untold number of cyberattacks at all levels, from the compromised personal computer to a catastrophic attack on our critical infrastructure,” Alway said.

As part of the operation, the FBI was able to gain access to the Qakbot infrastructure and identify more than 700,000 infected computers around the world, including more than 200,000 in the United States.

To disrupt the botnet, the FBI first seized Qakbot's servers and its command-and-control system. Agents then rerouted Qakbot traffic to servers controlled by the FBI, which in turn instructed infected computers to download a file created by law enforcement that uninstalled the Qakbot malware.
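The takedown sequence described above — seize the command-and-control (C2) servers, reroute bot traffic to law-enforcement servers, and answer each bot check-in with an uninstall command — can be sketched as a toy simulation. This is purely illustrative; the class and method names are hypothetical and bear no relation to actual Qakbot or FBI code.

```python
# Toy model of a botnet "sinkhole" takedown: infected hosts poll whatever
# server currently controls the C2 address; once that address is rerouted
# to a law-enforcement server, every check-in is answered with an
# uninstall command instead of criminal tasking.

class InfectedHost:
    def __init__(self, host_id):
        self.host_id = host_id
        self.infected = True

    def check_in(self, c2):
        # The bot contacts whoever currently holds the C2 address.
        command = c2.handle_checkin(self.host_id)
        if command == "uninstall":
            self.infected = False  # bot removes itself


class SeizedC2:
    """Stand-in for the rerouted C2: logs check-ins, serves the uninstaller."""

    def __init__(self):
        self.seen_hosts = set()

    def handle_checkin(self, host_id):
        self.seen_hosts.add(host_id)  # identify each infected machine
        return "uninstall"


def takedown(hosts):
    sinkhole = SeizedC2()
    for host in hosts:
        host.check_in(sinkhole)
    return sinkhole


hosts = [InfectedHost(i) for i in range(5)]
sinkhole = takedown(hosts)
print(len(sinkhole.seen_hosts), all(not h.infected for h in hosts))  # → 5 True
```

This mirrors how the operation reportedly both counted infected machines (more than 700,000 check-ins) and neutralized them in a single step.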

Meta Fights Sprawling Chinese ‘Spamouflage’ Operation

Meta on Tuesday said it purged thousands of Facebook accounts that were part of a widespread online Chinese spam operation trying to covertly boost China and criticize the West.

The campaign, which became known as “Spamouflage,” was active across more than 50 platforms and forums including Facebook, Instagram, TikTok, YouTube and X, formerly known as Twitter, according to a Meta threat report.

“We assess that it’s the largest, though unsuccessful, and most prolific covert influence operation that we know of in the world today,” said Meta Global Threat Intelligence Lead Ben Nimmo.

“And we’ve been able to link Spamouflage to individuals associated with Chinese law enforcement.”

More than 7,700 Facebook accounts, along with 15 Instagram accounts, were removed in what Meta described as the biggest-ever single takedown action on the tech giant’s platforms.

“For the first time we’ve been able to tie these many clusters together to confirm that they all go to one operation,” Nimmo said.

The network typically posted praise for China and its Xinjiang province and criticisms of the United States, Western foreign policies, and critics of the Chinese government including journalists and researchers, the Meta report says.

The operation originated in China and its targets included Taiwan, the United States, Australia, Britain, Japan, and global Chinese-speaking audiences. 

Facebook or Instagram accounts or pages identified as part of the “large and prolific covert influence operation” were taken down for violating Meta rules against coordinated deceptive behavior on its platforms.

Meta’s team said the network seemed to garner scant engagement, with viewer comments tending to point out bogus claims.

Clusters of fake accounts were run from various parts of China, with the cadence of activity strongly suggesting groups working from an office with daily job schedules, according to Meta.

‘Doppelganger’ campaign

Some tactics used in China were similar to those of a Russian online deception network exposed in 2019, which suggested the operations might be learning from one another, according to Nimmo.

Meta’s threat report also provided analysis of the Russian influence campaign called Doppelganger, which was first disrupted by the security team a year ago.

The core of the operation was to mimic websites of mainstream news outlets in Europe and post bogus stories about Russia’s war on Ukraine, then try to spread them online, said Meta head of security policy Nathaniel Gleicher.  

Companies involved in the campaign were recently sanctioned by the European Union.

Meta said Germany, France and Ukraine remained the most targeted countries overall, but that the operation had added the United States and Israel to its list of targets.

This was done by spoofing the domains of major news outlets, including The Washington Post and Fox News.

Gleicher described Doppelganger, which is intended to weaken support of Ukraine, as the largest and most aggressively persistent influence operation from Russia that Meta has seen since 2017.

Glitch Halts Toyota Factories in Japan

Toyota said Tuesday it has been hit by a technical glitch forcing it to suspend production at all 14 factories in Japan.

The world’s biggest automaker gave no further details on the stoppage, which began Tuesday morning, but said it did not appear to be caused by a cyberattack.

The company said the glitch prevented its system from processing orders for parts, forcing it to suspend a dozen factories, or 25 production lines, on Tuesday morning.

The company later decided to halt the afternoon shift of the two other operational factories, suspending all of Toyota’s domestic plants, or 28 production lines.

“We do not believe the problem was caused by a cyberattack,” the company said in a statement to AFP.

“We will continue to investigate the cause and to restore the system as soon as possible.”

The incident affected only Japanese factories, Toyota said.

It was not immediately clear exactly when normal production might resume. 

The news briefly sent Toyota’s stock into the red in the morning session before it recovered.

Last year, Toyota had to suspend all of its domestic factories after a subsidiary was hit by a cyberattack.

The company is one of the biggest in Japan, and its production activities have an outsized impact on the country’s economy.

Toyota is famous for its “just-in-time” production system of providing only small deliveries of necessary parts and other items at various steps of the assembly process.

This practice minimizes costs while improving efficiency and is studied by other manufacturers and at business schools around the world, but also comes with risks.

The auto titan retained its crown as the world’s top-selling automaker for the third year in a row in 2022 and aims to earn an annual net profit of $17.6 billion this fiscal year.

Major automakers are enjoying a robust surge of global demand after the COVID-19 pandemic slowed manufacturing activities.

Severe shortages of semiconductors had limited production capacity for a host of goods ranging from cars to smartphones.

Toyota has said chip supplies were improving and that it had raised product prices, while it worked with suppliers to bring production back to normal. 

However, the company was still experiencing delays in the deliveries of new vehicles to customers, it added.

ChatGPT Turns to Business as Popularity Wanes

OpenAI on Monday said it was launching a business version of ChatGPT as its artificial intelligence sensation grapples with declining usage nine months after its historic debut.

ChatGPT Enterprise will offer business customers a premium version of the bot, with “enterprise grade” security and privacy enhancements over previous versions, OpenAI said in a blog post.

The question of data security has become an important one for OpenAI, with major companies, including Apple, Amazon and Samsung, blocking employees from using ChatGPT out of fear that sensitive information will be divulged.

“Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data,” OpenAI said.

The ChatGPT business version resembles Bing Chat Enterprise, an offering by Microsoft, which uses the same OpenAI technology through a major partnership.

ChatGPT Enterprise will be powered by GPT-4, OpenAI’s highest performing model, much like ChatGPT Plus, the company’s subscription version for individuals, but business customers will have special perks, including better speed.

“We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the company said.

It added that companies including Carlyle, The Estée Lauder Companies and PwC were already early adopters of ChatGPT Enterprise.

The release came as ChatGPT struggles to maintain the excitement that made it the world’s fastest-downloaded app in the weeks after its release.

That distinction was taken over last month by Threads, the Twitter rival from Facebook-owner Meta.

According to analytics company Similarweb, ChatGPT traffic dropped by nearly 10% in June and again in July, declines the firm said could be attributed to school summer break.

Similarweb estimates that roughly one-quarter of ChatGPT’s users worldwide fall in the 18- to 24-year-old demographic.

OpenAI is also facing pushback from news publishers and other platforms — including X, formerly known as Twitter, and Reddit — that are now blocking OpenAI web crawlers from mining their data for AI model training.

A pair of studies by pollster Pew Research Center released on Monday also pointed to doubts about AI and ChatGPT in particular.

Two-thirds of the U.S.-based respondents who had heard of ChatGPT say their main concern is that the government will not go far enough in regulating its use.

The research also found that the use of ChatGPT for learning and work tasks has ticked up from 12% of those who had heard of ChatGPT in March to 16% in July.

Pew also reported that 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence.

New Study: Don’t Ask Alexa or Siri if You Need Info on Lifesaving CPR

Ask Alexa or Siri about the weather. But if you want to save someone’s life? Call 911 for that.

Voice assistants often fall flat when asked how to perform CPR, according to a study published Monday.

Researchers asked voice assistants eight questions that a bystander might pose in a cardiac arrest emergency. In response, the voice assistants said:

  • “Hmm, I don’t know that one.”

  • “Sorry, I don’t understand.”

  • “Words fail me.”

  • “Here’s an answer … that I translated: The Indian Penal Code.”

Only nine of 32 responses suggested calling emergency services for help — an important step recommended by the American Heart Association. Some voice assistants sent users to web pages that explained CPR, but only 12% of the 32 responses included verbal instructions.

Verbal instructions are important because immediate action can save a life, said study co-author Dr. Adam Landman, chief information officer at Mass General Brigham in Boston.

Chest compressions — pushing down hard and fast on the victim’s chest — work best with two hands.

“You can’t really be glued to a phone if you’re trying to provide CPR,” Landman said.

For the study, published in JAMA Network Open, researchers tested Amazon’s Alexa, Apple’s Siri, Google’s Assistant and Microsoft’s Cortana in February. They asked questions such as “How do I perform CPR?” and “What do you do if someone does not have a pulse?”

Not surprisingly, better questions yielded better responses. But when the prompt was simply “CPR,” the voice assistants misfired. One played news from a public radio station. Another gave information about a movie titled “CPR.” A third gave the address of a local CPR training business.

ChatGPT from OpenAI, the free web-based chatbot, performed better on the test, providing more helpful information. A Microsoft spokesperson said the new Bing Chat, which uses OpenAI’s technology, will first direct users to call 911 and then give basic steps when asked how to perform CPR. Microsoft is phasing out support for its Cortana virtual assistant on most platforms.

Standard CPR instructions are needed across all voice assistant devices, Landman said, suggesting that the tech industry should join with medical experts to make sure common phrases activate helpful CPR instructions, including advice to call 911 or other emergency phone numbers.

A Google spokesperson said the company recognizes the importance of collaborating with the medical community and is “always working to get better.” An Amazon spokesperson declined to comment on Alexa’s performance on the CPR test, and an Apple spokesperson did not provide answers to AP’s questions about how Siri performed.

Cybercrime Set to Threaten Canada’s Security, Prosperity, Says Spy Agency

Organized cybercrime is set to pose a threat to Canada’s national security and economic prosperity over the next two years, a national intelligence agency said on Monday.

In a report released Monday, the Communications Security Establishment (CSE) identified Russia and Iran as cybercrime safe havens where criminals can operate against Western targets.

Ransomware attacks on critical infrastructure such as hospitals and pipelines can be particularly profitable, the report said. Cyber criminals continue to show resilience and an ability to innovate their business model, it said.

“Organized cybercrime will very likely pose a threat to Canada’s national security and economic prosperity over the next two years,” said CSE, which is the Canadian equivalent of the U.S. National Security Agency.

“Ransomware is almost certainly the most disruptive form of cybercrime facing Canada because it is pervasive and can have a serious impact on an organization’s ability to function,” it said.

Official data show that in 2022, there were 70,878 reports of cyber fraud in Canada with over C$530 million ($390 million) stolen.

But Chris Lynam, director general of Canada’s National Cybercrime Coordination Centre, said very few crimes were reported and the real amount stolen last year could easily be C$5 billion or more.

“Every sector is being targeted along with all types of businesses as well … folks really have to make sure that they’re taking this seriously,” he told a briefing.

Russian intelligence services and law enforcement almost certainly maintain relationships with cyber criminals and allow them to operate with near impunity as long as they focus on targets outside the former Soviet Union, CSE said.

Moscow has consistently denied that it carries out or supports hacking operations.

Tehran likely tolerates cybercrime activities by Iran-based cyber criminals that align with the state’s strategic and ideological interests, CSE added.

Tesla Braces for Its First Trial Involving Autopilot Fatality

Tesla Inc TSLA.O is set to defend itself for the first time at trial against allegations that failure of its Autopilot driver assistant feature led to death, in what will likely be a major test of Chief Executive Elon Musk’s assertions about the technology.

Self-driving capability is central to Tesla’s financial future, according to Musk, whose own reputation as an engineering leader is being challenged by plaintiffs in one of two lawsuits, who allege that he personally leads the group behind the technology that failed. Wins by Tesla could raise confidence in, and sales of, the software, which costs up to $15,000 per vehicle.

Tesla faces two trials in quick succession, with more to follow.

The first, scheduled for mid-September in a California state court, is a civil lawsuit containing allegations that the Autopilot system caused owner Micah Lee’s Model 3 to suddenly veer off a highway east of Los Angeles at 65 miles per hour, strike a palm tree and burst into flames, all in the span of seconds.

The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year-old boy who was disemboweled. The lawsuit, filed against Tesla by the passengers and Lee’s estate, accuses Tesla of knowing that Autopilot and other safety systems were defective when it sold the car.

Musk ‘de facto leader’ of autopilot team

The second trial, set for early October in a Florida state court, arose out of a 2019 crash north of Miami where owner Stephen Banner’s Model 3 drove under the trailer of an 18-wheeler big rig truck that had pulled into the road, shearing off the Tesla’s roof and killing Banner. Autopilot failed to brake, steer or do anything to avoid the collision, according to the lawsuit filed by Banner’s wife.

Tesla denied liability for both accidents, blamed driver error and said Autopilot is safe when monitored by humans. Tesla said in court documents that drivers must pay attention to the road and keep their hands on the steering wheel.

“There are no self-driving cars on the road today,” the company said.

The civil proceedings will likely reveal new evidence about what Musk and other company officials knew about Autopilot’s capabilities – and any possible deficiencies. Banner’s attorneys, for instance, argue in a pretrial court filing that internal emails show Musk is the Autopilot team’s “de facto leader.”

Tesla and Musk did not respond to Reuters’ emailed questions for this article, but Musk has made no secret of his involvement in self-driving software engineering, often tweeting about his test-driving of a Tesla equipped with “Full Self-Driving” software. He has for years promised that Tesla would achieve self-driving capability only to miss his own targets.

Tesla won a bellwether trial in Los Angeles in April with a strategy of saying that it tells drivers that its technology requires human monitoring, despite the “Autopilot” and “Full Self-Driving” names. The case was about an accident where a Model S swerved into the curb and injured its driver, and jurors told Reuters after the verdict that they believed Tesla warned drivers about its system and driver distraction was to blame. 

Stakes higher for Tesla

The stakes for Tesla are much higher in the September and October trials, the first of a series related to Autopilot this year and next, because people died.

“If Tesla backs up a lot of wins in these cases, I think they’re going to get more favorable settlements in other cases,” said Matthew Wansley, a former general counsel of automated driving startup nuTonomy and an associate professor of law at Cardozo School of Law.

On the other hand, “a big loss for Tesla – especially with a big damages award” could “dramatically shape the narrative going forward,” said Bryant Walker Smith, a law professor at the University of South Carolina.

In court filings, the company has argued that Lee consumed alcohol before getting behind the wheel and that it is not clear whether Autopilot was on at the time of the crash.

Jonathan Michaels, an attorney for the plaintiffs, declined to comment on Tesla’s specific arguments, but said “we’re fully aware of Tesla’s false claims including their shameful attempts to blame the victims for their known defective autopilot system.”

In the Florida case, Banner’s attorneys also filed a motion arguing punitive damages were warranted. The attorneys have deposed several Tesla executives and received internal documents from the company that they said show Musk and engineers were aware of, and did not fix, shortcomings.

In one deposition, former executive Christopher Moore testified there are limitations to Autopilot, saying it “is not designed to detect every possible hazard or every possible obstacle or vehicle that could be on the road,” according to a transcript reviewed by Reuters.

In 2016, a few months after a fatal accident where a Tesla crashed into a semi-trailer truck, Musk told reporters that the automaker was updating Autopilot with improved radar sensors that likely would have prevented the fatality.

But Adam (Nicklas) Gustafsson, a Tesla Autopilot systems engineer who investigated both accidents in Florida, said that in the almost three years between that 2016 crash and Banner’s accident, no changes were made to Autopilot’s systems to account for cross-traffic, according to court documents submitted by plaintiff lawyers.

The lawyers tried to blame the lack of change on Musk. “Elon Musk has acknowledged problems with the Tesla autopilot system not working properly,” according to plaintiffs’ documents. Former Autopilot engineer Richard Baverstock, who was also deposed, stated that “almost everything” he did at Tesla was done at the request of “Elon,” according to the documents.

Tesla filed an emergency motion in court late on Wednesday seeking to keep deposition transcripts of its employees and other documents secret. Banner’s attorney, Lake “Trey” Lytal III, said he would oppose the motion.

“The great thing about our judicial system is Billion Dollar Corporations can only keep secrets for so long,” he wrote in a text message.

New Crew for Space Station Launches With Astronauts From 4 Countries

Four astronauts from four countries rocketed toward the International Space Station on Saturday.

They should reach the orbiting lab in their SpaceX capsule Sunday, replacing four astronauts who have been living up there since March.

A NASA astronaut was joined on the predawn liftoff from Kennedy Space Center by fliers from Denmark, Japan and Russia. They clasped one another’s gloved hands upon reaching orbit.

It was the first U.S. launch in which every spacecraft seat was occupied by a different country — until now, NASA had always included two or three of its own on its SpaceX taxi flights. A fluke in timing led to the assignments, officials said.

“We’re a united team with a common mission,” NASA’s Jasmin Moghbeli radioed from orbit. Added NASA’s Ken Bowersox, space operations mission chief: “Boy, what a beautiful launch … and with four international crew members, really an exciting thing to see.”

Moghbeli, a Marine pilot serving as commander, is joined on the six-month mission by the European Space Agency’s Andreas Mogensen, Japan’s Satoshi Furukawa and Russia’s Konstantin Borisov.

“To explore space, we need to do it together,” the European Space Agency’s director general, Josef Aschbacher, said minutes before liftoff. “Space is really global, and international cooperation is key.”

The astronauts’ paths to space couldn’t be more different.

Moghbeli’s parents fled Iran during the 1979 revolution. Born in Germany and raised on New York’s Long Island, she joined the Marines and flew attack helicopters in Afghanistan. The first-time space traveler hopes to show Iranian girls that they, too, can aim high. “Belief in yourself is something really powerful,” she said before the flight.

Mogensen worked on oil rigs off the West African coast after getting an engineering degree. He told people puzzled by his job choice that “in the future we would need drillers in space” like Bruce Willis’ character in the killer asteroid film “Armageddon.” He’s convinced the rig experience led to his selection as Denmark’s first astronaut.

Furukawa spent a decade as a surgeon before making Japan’s astronaut cut. Like Mogensen, he has visited the station before.

Borisov, a space rookie, turned to engineering after studying business. He runs a freediving school in Moscow and judges the sport, in which divers shun oxygen tanks and hold their breath underwater.

One of the perks of an international crew, they noted, is the food. Among the delicacies soaring with them: Persian herbed stew, Danish chocolate and Japanese mackerel.

SpaceX’s first-stage booster returned to Cape Canaveral several minutes after liftoff, an extra treat for the thousands of spectators gathered in the early-morning darkness.

Liftoff was delayed a day for additional data reviews of valves in the capsule’s life-support system. The countdown almost was halted again Saturday after a tiny fuel leak cropped up in the capsule’s thruster system. SpaceX engineers managed to verify the leak would pose no threat with barely two minutes remaining on the clock, said Benji Reed, the company’s senior director for human spaceflight.

Another NASA astronaut will launch to the station from Kazakhstan in mid-September under a barter agreement, along with two Russians.

SpaceX has now launched eight crews for NASA. Boeing was hired at the same time nearly a decade ago but has yet to fly astronauts. Its crew capsule is grounded until 2024 by parachute and other issues.

Thailand Threatens Facebook Shutdown Over Scam Ads

Thailand said this week it is preparing to sue Facebook in a move that could see the platform shut down nationwide over scammers allegedly exploiting the social networking site to cheat local users out of tens of millions of dollars a year.

The country’s minister of digital economy and society, Chaiwut Thanakamanusorn, announced the planned lawsuit after a ministry meeting on Monday.

Ministry spokesperson Wetang Phuangsup told VOA on Thursday the case would be filed in one to two weeks, possibly by the end of the month.

“We are in the stage of gathering information, gathering evidence, and we will file to the court to issue the final judgment on how to deal with Facebook since they are a part of the scamming,” he said.

Some of the most common scams, Wetang said, involve paid advertisements on the site urging people to invest in fake companies, often using the logo of Thailand’s Securities and Exchange Commission or sham endorsements from local celebrities to lure them in.

Of the roughly 16,000 online scamming complaints filed in Thailand last year, he said, 70% to 80% involved Facebook and cost users upwards of $100 million.

“We believe that Facebook has a responsibility,” Wetang said. “Facebook is taking money from advertising a lot, and basically even taking money from Thai society as a whole. Facebook should be more responsible to society, should screen the advertising. … We believe that by doing so it would definitely decrease the investment scam in Thailand on the Facebook.”

Wetang said the ministry had been urging the company to do more to screen and vet paid ads for the past year and was now turning to the courts to possibly shut the site down as a last resort.

“If you are supporting the crime, especially on the internet, you will be liable [for] the crime, and by the law, it’s possible the court can issue the shutdown of Facebook,” he said. “By law, we can ask the court to suspend or punish all the people who support the crime, of course with evidence.”

Neither Facebook nor its parent company, Meta, replied to VOA’s repeated requests for comment or interviews.

The Asia Internet Coalition, an industry association that counts Meta among its members, acknowledged that online scamming was a growing problem across the region. Other members include Google, Amazon, Apple and X, formerly known as Twitter.

“While it’s getting challenging from the scale perspective, it’s also getting complicated and sophisticated because of the technology that has been used when it comes to application on the platforms but also how this technology can be misused,” the coalition’s secretariat, Sarthak Luthra, told VOA.

Luthra would not speak for Meta or address Thailand’s specific complaints against Facebook but said tech companies were taking steps to thwart scammers, including teaching users how to spot them.

Last year, for example, Meta launched a #StayingSafeOnline campaign in Thailand “to raise awareness about some of the most common kinds of online scams, including helping people understand the different kinds of scamsters, their tricks, and tips to stay safe online,” according to the company’s website.

Luthra said tech companies have been facing a growing number of criminal and civil penalties for their content across the region while urging governments to give them more room to regulate themselves and to apply “safe harbor” rules that shield the companies from legal liability for content created by users.

Shutting down any platform on a nationwide scale is not the answer, he said, and he warned of the unintended consequences.

“It really, first, impacts the ease of doing business and also the perception around the digital economy development of a country, so shutting down a platform is of course not a solution to a challenge in this case,” Luthra said.

“A government really needs to think of how do we promote online safety while maintaining an open internet environment,” he said. “From the economic perspective, it does impact investment sentiment, business sentiment and the ability to operate in that particular country.”

At a recent company event in Thailand, Meta said there were some 65 million Facebook users in the country, which also has the second-largest economy in Southeast Asia.

Shutting down the platform would have a “huge” impact on the vast majority of people using the site to make money legally and honestly, said Sutawan Chanprasert, executive director of DigitalReach, a digital rights group based in Thailand.

She said a shutdown would cut off a vital channel for free speech in Thailand and an important tool for independent local media outlets.

“Some of them rely predominantly on Facebook because it’s the most popular social media platform in Thailand, so they publish their content on Facebook in order to reach out to audiences because they don’t have a means to set up … a full-fledged media channel,” she said.

Taking all that away to foil scammers would be “too extreme,” Sutawan said, suggesting the government focus instead on strengthening the country’s cybercrime and security laws and enforcing them.

Ministry spokesperson Wetang said the government was aware of the collateral damage a shutdown could cause but felt compelled to pursue the lawsuit that could bring it about.

“Definitely we are really concerned about the people on Facebook,” he said. “But since this is a crime that already happened, the evidence is so clear … it is impossible that we don’t take action.”

Meta Faces Backlash Over Canada News Block as Wildfires Rage

Meta is being accused of endangering lives by blocking news links in Canada at a crucial moment, when thousands have fled their homes and are desperate for wildfire updates that once would have been shared widely on Facebook.

The situation “is dangerous,” said Kelsey Worth, 35, one of nearly 20,000 residents of Yellowknife and thousands more in small towns ordered to evacuate the Northwest Territories as wildfires advanced.

She described to AFP how “insanely difficult” it has been for herself and other evacuees to find verifiable information about the fires blazing across the near-Arctic territory and other parts of Canada.

“Nobody’s able to know what’s true or not,” she said.

“And when you’re in an emergency situation, time is of the essence,” she said, explaining that many Canadians until now have relied on social media for news.

Meta on Aug. 1 started blocking the distribution of news links and articles on its Facebook and Instagram platforms in response to a recent law requiring digital giants to pay publishers for news content.

The company has been in a virtual showdown with Ottawa over the bill, which was passed in June but does not take effect until next year.

Building on similar legislation introduced in Australia, the bill aims to support a struggling Canadian news sector that has seen a flight of advertising dollars and hundreds of publications closed in the last decade.

It requires companies like Meta and Google to make fair commercial deals with Canadian outlets — or face binding arbitration — for the news and information shared on their platforms, estimated in a report to parliament to be worth US$250 million per year.

But Meta has said the bill is flawed and insisted that news outlets share content on its Facebook and Instagram platforms to attract readers, benefiting them and not the Silicon Valley firm.

Profits over safety

Canadian Prime Minister Justin Trudeau this week assailed Meta, telling reporters it was “inconceivable that a company like Facebook is choosing to put corporate profits ahead of (safety)… and keeping Canadians informed about things like wildfires.”

Almost 80% of all online advertising revenues in Canada go to Meta and Google, which has expressed its own reservations about the new law.

Ollie Williams, director of Cabin Radio in the far north, called Meta’s move to block news sharing “stupid and dangerous.”

He suggested in an interview with AFP that “Meta could lift the ban temporarily in the interests of preservation of life and suffer no financial penalty because the legislation has not taken effect yet.”

Nicolas Servel of Radio Taiga, a French-language station in Yellowknife, noted that some had found ways of circumventing Meta’s block.

They “found other ways to share” information, he said, such as taking screenshots of news articles and sharing them from personal — rather than corporate — social media accounts.

‘Life and death’

Several large newspapers in Canada such as The Globe and Mail and the Toronto Star have launched campaigns to try to attract readers directly to their sites.

But for many smaller news outlets, workarounds have proven challenging as social media platforms have become entrenched.

Public broadcaster CBC in a letter this week pressed Meta to reverse course.

“Time is of the essence,” wrote CBC president Catherine Tait. “I urge you to consider taking the much-needed humanitarian action and immediately lift your ban on vital Canadian news and information to communities dealing with this wildfire emergency.”

As more than 1,000 wildfires burn across Canada, she said, “The need for reliable, trusted, and up-to-date information can literally be the difference between life and death.”

Meta — which did not respond to AFP requests for comment — rejected CBC’s suggestion. Instead, it urged Canadians to use the “Safety Check” function on Facebook to let others know if they are safe or not.

Patrick White, a professor at the University of Quebec in Montreal, said Meta has shown itself to be a “bad corporate citizen.”

“It’s a matter of public safety,” he said, adding that he remains optimistic Ottawa will eventually reach a deal with Meta and other digital giants that addresses their concerns.