US Treasury: Chinese hackers remotely accessed workstations, documents

WASHINGTON — Chinese hackers remotely accessed several U.S. Treasury Department workstations and unclassified documents after compromising a third-party software service provider, the agency said Monday. 

The department did not provide details on how many workstations had been accessed or what sort of documents the hackers may have obtained, but it said in a letter to lawmakers revealing the breach that “at this time there is no evidence indicating the threat actor has continued access to Treasury information.” 

“Treasury takes very seriously all threats against our systems, and the data it holds,” the department said. “Over the last four years, Treasury has significantly bolstered its cyber defense, and we will continue to work with both private and public sector partners to protect our financial system from threat actors.” 

The department said it learned of the problem on Dec. 8, when third-party software service provider BeyondTrust flagged that hackers had stolen a key used by the vendor, which helped them override the system’s security and gain remote access to several employee workstations.

The compromised service has since been taken offline, and there’s no evidence that the hackers still have access to department information, Aditi Hardikar, an assistant Treasury secretary, said in the letter Monday to leaders of the Senate Banking Committee. 

The department said it was working with the FBI and the Cybersecurity and Infrastructure Security Agency, and that the hack had been attributed to Chinese culprits. It did not elaborate.

Venezuela fines TikTok $10 million over viral challenge deaths

Caracas, Venezuela — Venezuela’s highest court Monday fined TikTok $10 million in connection with viral challenges that authorities say left three adolescents dead from intoxication by chemical substances.

Supreme Tribunal of Justice Judge Tania D’Amelio said that the popular video-sharing app had been negligent in failing to implement “necessary and adequate measures” to stop the spread of content encouraging the challenges.

TikTok, which is owned by China’s ByteDance, was ordered to open an office in the South American country and given eight days to pay the fine or face “appropriate” measures.

Venezuela would use the money to “create a TikTok victims fund, intended to compensate for the psychological, emotional and physical damages to users, especially if these users are children and adolescents,” D’Amelio said.

The company told the court that it “understands the seriousness of the matter,” she said.

According to Venezuelan authorities, three adolescents died and 200 were intoxicated in schools across the country after ingesting chemical substances as part of social media “challenges.”

TikTok’s huge global success has been partly built on its challenges — calls inviting users to create videos featuring dances, jokes or games that sometimes go viral.

The app has been accused of putting users in danger with the spread of hazardous challenge videos.

TikTok’s official policy prohibits videos promoting self-harm and suicide.

In November, President Nicolas Maduro threatened “severe measures” against TikTok if it did not remove content related to what he called “criminal challenges.”

Parliament is considering laws regulating social networks, which Maduro said after his disputed reelection in July were being used to promote “hate,” “fascism” and “division.”

He has accused Elon Musk, the billionaire owner of social media platform X, of orchestrating “attacks against Venezuela.”

AI technology helps level playing field for students with disabilities

For Makenzie Gilkison, spelling is such a struggle that a word like rhinoceros might come out as “rineanswsaurs” or sarcastic as “srkastik.” 

The 14-year-old from suburban Indianapolis can sound out words, but her dyslexia makes the process so draining that she often struggles with comprehension.

“I just assumed I was stupid,” she recalled of her early grade school years. 

But assistive technology powered by artificial intelligence has helped her keep up with classmates. Last year, Makenzie was named to the National Junior Honor Society. She credits a customized AI-powered chatbot, a word prediction program and other tools that can read for her. 

“I would have just probably given up if I didn’t have them,” she said. 

New tech, countless possibilities

Artificial intelligence holds the promise of helping countless students with a range of visual, speech, language and hearing impairments to execute tasks that come easily to others. Schools everywhere have been wrestling with how and where to incorporate AI, but many are fast-tracking applications for students with disabilities. 

Getting the latest technology into the hands of students with disabilities is a priority for the U.S. Education Department, which has told schools they must consider whether students need tools like text-to-speech and alternative communication devices. New rules from the Department of Justice also will require schools and other government entities to make apps and online content accessible to those with disabilities. 

There is concern about how to ensure that students using AI — including those with disabilities — are still learning. 

Students can use artificial intelligence to summarize jumbled thoughts into an outline, summarize complicated passages, or even translate Shakespeare into common English. And computer-generated voices that can read passages for visually impaired and dyslexic students are becoming less robotic and more natural. 

“I’m seeing that a lot of students are kind of exploring on their own, almost feeling like they’ve found a cheat code in a video game,” said Alexis Reid, an educational therapist in the Boston area who works with students with learning disabilities. But in her view, it is far from cheating: “We’re meeting students where they are.” 

Programs fortify classroom lessons 

Ben Snyder, a 14-year-old freshman from Larchmont, New York, who was recently diagnosed with a learning disability, has been increasingly using AI to help with homework. 

“Sometimes in math, my teachers will explain a problem to me, but it just makes absolutely no sense,” he said. “So if I plug that problem into AI, it’ll give me multiple different ways of explaining how to do that.” 

He likes a program called Question AI. Earlier in the day, he asked the program to help him write an outline for a book report — a task he completed in 15 minutes that otherwise would have taken him an hour and a half because of his struggles with writing and organization. But he does think using AI to write the whole report crosses a line. 

“That’s just cheating,” Ben said. 

Schools weigh pros, cons 

Schools have been trying to balance the technology’s benefits against the risk that it will do too much. If a special education plan sets reading growth as a goal, the student needs to improve that skill. AI can’t do it for them, said Mary Lawson, general counsel at the Council of the Great City Schools. 

But the technology can help level the playing field for students with disabilities, said Paul Sanft, director of a Minnesota-based center where families can try out different assistive technology tools and borrow devices. 

“There are definitely going to be people who use some of these tools in nefarious ways. That’s always going to happen,” Sanft said. “But I don’t think that’s the biggest concern with people with disabilities, who are just trying to do something that they couldn’t do before.” 

Another risk is that AI will track students into less rigorous courses of study. And, because it is so good at identifying patterns, AI might be able to figure out a student has a disability. Having that disclosed by AI and not the student or their family could create ethical dilemmas, said Luis Perez, the disability and digital inclusion lead at CAST, formerly the Center for Applied Specialized Technology. 

Schools are using the technology to help students who struggle academically, even if they do not qualify for special education services. In Iowa, a new law requires students deemed not proficient — about a quarter of them — to get an individualized reading plan. As part of that effort, the state’s education department spent $3 million on an AI-driven personalized tutoring program. When students struggle, a digital avatar intervenes. 

Educators anticipate more tools 

The U.S. National Science Foundation is funding AI research and development, including tools to help children with speech and language difficulties. One such effort, the National AI Institute for Exceptional Education, is headquartered at the University at Buffalo, which did pioneering work on handwriting recognition that helped the U.S. Postal Service save hundreds of millions of dollars by automating mail processing. 

“We are able to solve the postal application with very high accuracy. When it comes to children’s handwriting, we fail very badly,” said Venu Govindaraju, the director of the institute. He sees it as an area that needs more work, along with speech-to-text technology, which isn’t as good at understanding children’s voices, particularly if there is a speech impediment. 

Sorting through the sheer number of programs developed by education technology companies can be a time-consuming challenge for schools. Richard Culatta, CEO of the International Society for Technology in Education, said the nonprofit launched an effort this fall to make it easier for districts to vet what they are buying and ensure it is accessible. 

Mother sees potential

Makenzie wishes some of the tools were easier to use. Sometimes a feature will inexplicably be turned off, and she will be without it for a week while the tech team investigates. The challenges can be so cumbersome that some students resist the technology entirely. 

But Makenzie’s mother, Nadine Gilkison, who works as a technology integration supervisor at Franklin Township Community School Corporation in Indiana, said she sees more promise than downside. 

In September, her district rolled out chatbots to help special education students in high school. She said teachers, who sometimes struggled to provide students the help they needed, became emotional when they heard about the program. Until now, students were reliant on someone to help them, unable to move ahead on their own. 

“Now we don’t need to wait anymore,” she said. 

Trump sides with Musk in H-1B visa debate, saying he supports program

WEST PALM BEACH, FLORIDA — President-elect Donald Trump on Saturday sided with key supporter and billionaire tech CEO Elon Musk in a public dispute over the use of the H-1B visa, saying he fully backs the program for foreign tech workers opposed by some of his supporters. 

Trump’s remarks followed a series of social media posts from Musk, the CEO of Tesla and SpaceX, who vowed late Friday to “go to war” to defend the visa program for foreign tech workers. 

Trump, who moved to limit the visas’ use during his first presidency, told The New York Post on Saturday he was likewise in favor of the visa program. 

“I have many H-1B visas on my properties. I’ve been a believer in H-1B. I have used it many times. It’s a great program,” he was quoted as saying.  

Musk, a naturalized U.S. citizen born in South Africa, has held an H-1B visa, and his electric-car company Tesla obtained 724 of the visas this year. H-1B visas are typically for three-year periods, though holders can extend them or apply for permanent residency. 

The dispute was set off earlier this week by far-right activists who criticized Trump’s selection of Sriram Krishnan, an Indian American venture capitalist, to be an adviser on artificial intelligence, saying he would have influence on the Trump administration’s immigration policies. 

Musk’s tweet was directed at Trump’s supporters and immigration hard-liners who have increasingly pushed for the H-1B visa program to be scrapped amid a heated debate over immigration and the place of skilled immigrants and foreign workers brought into the country on work visas. 

On Friday, Steve Bannon, a longtime Trump confidant, critiqued “big tech oligarchs” for supporting the H-1B program and cast immigration as a threat to Western civilization. 

In response, Musk and many other tech billionaires drew a line between what they view as legal immigration and illegal immigration. 

Trump has promised to deport all immigrants who are in the U.S. illegally, deploy tariffs to help create more jobs for American citizens, and severely restrict immigration. 

The visa issue highlights how tech leaders such as Musk — who has taken an important role in the presidential transition by advising on key personnel and policy areas — are now drawing scrutiny from Trump’s base. 

The U.S. tech industry relies on the government’s H-1B visa program to hire foreign skilled workers to help run its companies, a labor force that critics say undercuts wages for American citizens.  

Musk spent more than a quarter of a billion dollars helping Trump get elected in November. He has posted regularly this week about the lack of homegrown talent to fill all the needed positions in American tech companies. 

Internet is rife with fake reviews – will AI make it worse?

The emergence of generative artificial intelligence tools that allow people to efficiently produce novel and detailed online reviews with almost no work has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say. 

Phony reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded on private social media groups between fake review brokers and businesses willing to pay. Sometimes, such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback. 

But AI-infused text generation tools, popularized by OpenAI’s ChatGPT, enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts. 

The deceptive practice, which is illegal in the U.S., is carried out year-round but becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews to help them purchase gifts. 

Where fakes are appearing 

Fake reviews are found across a wide range of industries, from e-commerce, lodging and restaurants to services such as home repairs, medical care and piano lessons. 

The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said it started to see AI-generated reviews show up in large numbers in mid-2023 and they have multiplied ever since. 

For a report released this month, the Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were partly or entirely AI-generated. 

“It’s just a really, really good tool for these review scammers,” said Maury Blackman, an investor and adviser to tech startups, who reviewed the Transparency Company’s work and is set to lead the organization starting Jan. 1. 

In August, software company DoubleVerify said it was observing a “significant increase” in mobile phone and smart TV apps with reviews crafted by generative AI. The reviews often were used to deceive customers into installing apps that could hijack devices or run ads constantly, the company said. 

The following month, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the marketplace with fraudulent reviews. 

The FTC, which this year banned the sale or purchase of fake reviews, said some of Rytr’s subscribers used the tool to produce hundreds and perhaps thousands of reviews for garage door repair companies, sellers of “replica” designer handbags and other businesses. 

Likely on prominent online sites, too 

Max Spero, CEO of AI detection company Pangram Labs, said the software his company uses has detected with almost certainty that some AI-generated appraisals posted on Amazon bubbled up to the top of review search results because they were so detailed and appeared to be well thought out. 

But determining what is fake or not can be challenging. External parties can fall short because they don’t have “access to data signals that indicate patterns of abuse,” Amazon has said. 

Pangram Labs has done detection for some prominent online sites, which Spero declined to name because of nondisclosure agreements. He said he evaluated Amazon and Yelp independently. 

Many of the AI-generated comments on Yelp appeared to be posted by individuals who were trying to publish enough reviews to earn an “Elite” badge, which is intended to let users know they should trust the content, Spero said. 

The badge provides access to exclusive events with local business owners. Fraudsters also want it so their Yelp profiles can look more realistic, said Kay Dean, a former federal criminal investigator who runs a watchdog group called Fake Review Watch. 

To be sure, just because a review is AI-generated doesn’t necessarily mean it’s fake. Some consumers might experiment with AI tools to generate content that reflects their genuine sentiments. Some non-native English speakers say they turn to AI to make sure they use accurate language in the reviews they write. 

“It can help with reviews [and] make it more informative if it comes out of good intentions,” said Michigan State University marketing professor Sherry He, who has researched fake reviews. She says tech platforms should focus on the behavioral patterns of bad actors, which prominent platforms already do, instead of discouraging legitimate users from turning to AI tools. 
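The behavioral-pattern approach He describes can be sketched with a toy example: flagging accounts whose reviews arrive in suspicious bursts. The threshold and the sliding-window design below are illustrative assumptions, not any platform’s actual detection logic.

```python
from collections import defaultdict

DAY_SECONDS = 86_400

def flag_burst_reviewers(reviews, max_per_day=5):
    """Flag accounts that post more than `max_per_day` reviews in any
    24-hour window. `reviews` is a list of (account_id, unix_timestamp)
    pairs; the threshold is an illustrative assumption."""
    by_account = defaultdict(list)
    for account, ts in reviews:
        by_account[account].append(ts)

    flagged = set()
    for account, stamps in by_account.items():
        stamps.sort()
        start = 0
        for end in range(len(stamps)):
            # Shrink the window until it spans at most 24 hours.
            while stamps[end] - stamps[start] > DAY_SECONDS:
                start += 1
            if end - start + 1 > max_per_day:
                flagged.add(account)
                break
    return flagged
```

Real platforms combine many more signals (IP addresses, purchase history, device fingerprints), but the core idea is the same: judge the account’s behavior, not just the text.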

What companies are doing 

Prominent companies are developing policies for how AI-generated content fits into their systems for removing phony or abusive reviews. Some already employ algorithms and investigative teams to detect and take down fake reviews but are giving users some flexibility to use AI. 

Spokespeople for Amazon and Trustpilot, for example, said they would allow customers to post AI-assisted reviews as long as they reflect their genuine experience. Yelp has taken a more cautious approach, saying its guidelines require reviewers to write their own copy. 

“With the recent rise in consumer adoption of AI tools, Yelp has significantly invested in methods to better detect and mitigate such content on our platform,” the company said in a statement. 

The Coalition for Trusted Reviews, which Amazon, Trustpilot, employment review site Glassdoor, and travel sites Tripadvisor, Expedia and Booking.com launched last year, said that even though deceivers may put AI to illicit use, the technology also presents “an opportunity to push back against those who seek to use reviews to mislead others.” 

“By sharing best practice and raising standards, including developing advanced AI detection systems, we can protect consumers and maintain the integrity of online reviews,” the group said. 

The FTC’s rule banning fake reviews, which took effect in October, allows the agency to fine businesses and individuals who engage in the practice. Tech companies hosting such reviews are shielded from the penalty because they are not legally liable under U.S. law for the content that outsiders post on their platforms. 

Tech companies, including Amazon, Yelp and Google, have sued fake review brokers they accuse of peddling counterfeit reviews on their sites. The companies say their technology has blocked or removed a huge swath of suspect reviews and suspicious accounts. However, some experts say they could be doing more. 

“Their efforts thus far are not nearly enough,” said Dean of Fake Review Watch. “If these tech companies are so committed to eliminating review fraud on their platforms, why is it that I, one individual who works with no automation, can find hundreds or even thousands of fake reviews on any given day?” 

Spotting fake reviews 

Consumers can try to spot fake reviews by watching out for a few possible warning signs, according to researchers. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product’s full name or model number is another potential giveaway. 

When it comes to AI, research conducted by Balazs Kovacs, a Yale University professor of organizational behavior, has shown that people can’t tell the difference between AI-generated and human-written reviews. Some AI detectors may also be fooled by shorter texts, which are common in online reviews, the study said. 

However, there are some “AI tells” that online shoppers and service seekers should keep in mind. Pangram Labs says reviews written with AI are typically longer, highly structured and include “empty descriptors,” such as generic phrases and attributes. The writing also tends to include cliches like “the first thing that struck me” and “game-changer.”
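Those tells lend themselves to simple heuristics. The sketch below scores a review against the red flags mentioned above; the phrase lists and thresholds are invented for illustration and are not Pangram Labs’ actual method.

```python
import re

# Illustrative phrase lists: examples of the "tells" described above,
# not any detection company's actual vocabulary.
CLICHES = ("game-changer", "the first thing that struck me")
EMPTY_DESCRIPTORS = ("top-notch", "seamless", "truly elevates")

def ai_tell_score(review: str) -> int:
    """Count rough warning signs; a higher score means more suspicious."""
    text = review.lower()
    score = sum(phrase in text for phrase in CLICHES + EMPTY_DESCRIPTORS)
    if len(text.split()) > 150:                          # unusually long
        score += 1
    if len(re.findall(r"(?m)^\s*[-*\d]", review)) >= 3:  # heavily structured
        score += 1
    return score
```

As the Kovacs research suggests, heuristics like this are easily fooled, which is why serious detection also weighs behavioral signals rather than text alone.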

Trump asks court to delay possible TikTok ban until he can weigh in as president

U.S. President-elect Donald Trump asked the Supreme Court on Friday to pause the potential TikTok ban from going into effect until his administration can pursue a “political resolution” to the issue.

The request came as TikTok and the Biden administration filed opposing briefs to the court, in which the company argued the court should strike down a law that could ban the platform by January 19 while the government emphasized its position that the statute is needed to eliminate a national security risk.

“President Trump takes no position on the underlying merits of this dispute. Instead, he respectfully requests that the court consider staying the act’s deadline for divestment of January 19, 2025, while it considers the merits of this case,” said Trump’s amicus brief, which supported neither party in the case.

The filings come ahead of oral arguments scheduled for January 10 on whether the law, which requires TikTok to divest from its China-based parent company or face a ban, unlawfully restricts speech in violation of the First Amendment.

Earlier this month, a panel of three federal judges on the U.S. Court of Appeals for the District of Columbia Circuit unanimously upheld the statute, leading TikTok to appeal the case to the Supreme Court.

The brief from Trump said he opposes banning TikTok at this juncture and “seeks the ability to resolve the issues at hand through political means once he takes office.”

US proposes cybersecurity rules to limit impact of health data leaks

Health care organizations may be required to bolster their cybersecurity to better prevent sensitive information from being leaked by cyberattacks like the ones that hit Ascension and UnitedHealth, a senior White House official said Friday.

Anne Neuberger, the U.S. deputy national security adviser for cyber and emerging technology, told reporters that proposed requirements are necessary in light of the massive number of Americans whose data has been affected by large breaches of health care information. The proposals include encrypting data so it cannot be accessed, even if leaked, and requiring compliance checks to ensure networks meet cybersecurity rules.
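The compliance-check idea can be illustrated with a toy audit that compares a network’s configuration against a required-controls checklist. The control names below are invented examples, not language from the actual proposed HHS rule.

```python
# Toy compliance audit. Control names are illustrative stand-ins for the
# kinds of requirements described (encryption, access safeguards).
REQUIRED_CONTROLS = (
    "encrypt_at_rest",
    "encrypt_in_transit",
    "multifactor_auth",
)

def compliance_gaps(network_config: dict) -> list:
    """Return the required controls a network's configuration fails to satisfy."""
    return [c for c in REQUIRED_CONTROLS if not network_config.get(c, False)]
```

A real compliance regime would verify controls with evidence (scans, audit logs) rather than a self-reported flag, but the checklist structure is the same.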

The full proposed rule was posted to the Federal Register on Friday, and the Department of Health and Human Services posted a more condensed breakdown on its website.

Neuberger said that the health care information of more than 167 million people was affected in 2023 as a result of cybersecurity incidents.

The proposed rule from the Office for Civil Rights (OCR) within HHS would update standards under the Health Insurance Portability and Accountability Act and would cost an estimated $9 billion in the first year, and $6 billion in years two through five, Neuberger said.

“We’ve made some significant proposals that we think will improve cybersecurity and ultimately everyone’s health information, if any of these proposals are ultimately finalized,” an OCR spokesperson told Reuters late Friday. The next step in the process is a 60-day public comment period before any final decisions will be made.

Large health care breaches caused by hacking and ransomware have increased by 89% and 102%, respectively, since 2019, Neuberger said.

“In this job, one of the most concerning and really troubling things we deal with is hacking of hospitals, hacking of health care data,” she said.

Hospitals have been forced to operate manually and Americans’ sensitive health care data, mental health information and other information are “being leaked on the dark web with the opportunity to blackmail individuals,” Neuberger said.

Massive Chinese espionage scheme hit 9th telecom firm, US says

WASHINGTON — A sprawling Chinese espionage campaign hacked a ninth U.S. telecom firm, a top White House official said Friday.

The Chinese hacking blitz known as Salt Typhoon gave officials in Beijing access to private texts and phone conversations of an unknown number of Americans. The White House earlier this month said the attack affected at least eight telecommunications companies and dozens of nations.

Anne Neuberger, the deputy national security adviser for cyber and emerging technologies, told reporters Friday that a ninth victim was identified after the administration released guidance to companies about how to hunt for Chinese culprits in their networks.

The update from Neuberger is the latest development in a massive hacking operation that alarmed national security officials, exposed cybersecurity vulnerabilities in the private sector and laid bare China’s hacking sophistication.

The hackers compromised the networks of telecommunications companies to obtain customer call records and gain access to the private communications of “a limited number of individuals.” Although the FBI has not publicly identified any of the victims, officials believe senior U.S. government officials and prominent political figures are among those whose communications were accessed.

Neuberger said officials did not yet have a precise sense of how many Americans overall were affected by Salt Typhoon, in part because the Chinese were careful about their techniques, but a “large number” were in or near Washington.

Officials believe the goal of the hackers was to identify who owned the phones and, if they were “government targets of interest,” spy on their texts and phone calls, she said.

The FBI said most of the people targeted by the hackers are “primarily involved in government or political activity.”

Neuberger said the episode highlighted the need for required cybersecurity practices in the telecommunications industry, something the Federal Communications Commission is to take up at a meeting next month.

“We know that voluntary cybersecurity practices are inadequate to protect against China, Russia and Iran hacking of our critical infrastructure,” she said.

The Chinese government has denied responsibility for the hacking.

NASA spacecraft ‘safe’ after closest-ever approach to sun

NASA said on Friday that its Parker Solar Probe was “safe” and operating normally after successfully completing the closest-ever approach to the sun by any human-made object. 

The spacecraft passed 6.1 million kilometers from the solar surface on Tuesday, flying into the sun’s outer atmosphere — called the corona — on a mission to help scientists learn more about Earth’s closest star. 

The agency said the operations team at the Johns Hopkins Applied Physics Laboratory in Maryland received the signal, a beacon tone, from the probe just before midnight on Thursday. 

The spacecraft is expected to send detailed telemetry data about its status on January 1, NASA added. 

Moving at up to 692,000 kilometers per hour, the spacecraft endured temperatures of up to 982 degrees Celsius, according to the NASA website. 

“This close-up study of the sun allows Parker Solar Probe to take measurements that help scientists better understand how material in this region gets heated to millions of degrees, trace the origin of the solar wind (a continuous flow of material escaping the Sun), and discover how energetic particles are accelerated to near light speed,” the agency added. 

“We’re rewriting the textbooks on how the sun works with the data from this probe,” Dr. Joseph Westlake, NASA’s heliophysics director, told Reuters. 

“This mission was theorized in the fifties,” he said, adding that it is an “amazing achievement to create technologies that let us delve into our understanding of how the sun operates.” 

The Parker Solar Probe was launched in 2018 and has been gradually circling closer toward the sun, using flybys of Venus to gravitationally pull it into a tighter orbit with the sun. 

Westlake said the team is preparing for even more flybys in the extended mission phase, hoping to capture unique events. 

Ukraine tech company presents latest military simulators

Russia’s invasion has pushed Ukrainian tech companies working with defense simulation technology to seriously compete in global markets. One such company is SKIFTECH, which specializes in high-tech military simulators. Iryna Solomko visited the company’s production site in Kyiv. Anna Rice narrates the story. Camera: Pavlo Terekhov

Japan Airlines suffers delays after carrier reports cyberattack

TOKYO — Japan Airlines reported a cyberattack on Thursday that caused delays to domestic and international flights but later said it had found and addressed the cause.

The airline, Japan’s second biggest after All Nippon Airways (ANA), said 24 domestic flights had been delayed by more than half an hour.

Public broadcaster NHK said problems with the airline’s baggage check-in system had caused delays at several Japanese airports but no major disruption was reported.

“We identified and addressed the cause of the issue. We are checking the system recovery status,” Japan Airlines (JAL) said in a post on social media platform X.

“Sales for both domestic and international flights departing today have been suspended. We apologize for any inconvenience caused,” the post said.

A JAL spokesperson told AFP earlier the company had been subjected to a cyberattack.

Japanese media said it may have been a so-called distributed denial-of-service (DDoS) attack, aimed at overwhelming and disrupting a website or server.

Network disruption began at 7:24 a.m. Thursday (2224 GMT Wednesday), JAL said in a statement, adding that there was no impact on the safety of its operations.

Then “at 8:56 a.m., we temporarily isolated the router (a device for exchanging data between networks) that was causing the disruption,” it said.
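A DDoS attack of the kind Japanese media suspected floods a server with more requests than it can absorb, and one common first-line defense is rate limiting. The token bucket below is a minimal illustrative sketch of that defense, not a description of JAL’s actual mitigation.

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: requests spend tokens, which refill
    at a fixed rate. Parameters here are illustrative only."""

    def __init__(self, rate_per_sec: float, capacity: int, clock=time.monotonic):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice, large-scale floods are absorbed upstream by dedicated scrubbing services, but per-client token buckets like this remain a standard building block in routers and web servers.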

JAL shares fell as much as 2.5% in morning trade after the news emerged, before recovering slightly.

The airline is just the latest Japanese firm to be hit by a cyberattack.

Japan’s space agency JAXA was targeted in 2023, although no sensitive information about rockets or satellites was accessed.

The same year one of Japan’s busiest ports was hit by a ransomware attack blamed on the Russia-based Lockbit group.

In 2022, a cyberattack at a Toyota supplier forced the top-selling automaker to halt operations at domestic plants.

More recently, the popular Japanese video-sharing website Niconico came under a large cyberattack in June.

Report on January collision

Separately, a transport ministry committee tasked with probing a fatal January 2024 collision involving a JAL passenger jet released an interim report on Wednesday blaming human error for the incident that killed five people.

The collision at Tokyo’s Haneda Airport was with a coast guard plane carrying six crew members — of whom five were killed — that was on a mission to deliver relief supplies to a quake-hit central region of Japan.

According to the report, the smaller plane’s pilot mistook an air traffic control officer’s instructions to mean authorization had been given to enter the runway.

The captain was also “in a hurry” at the time because the coast guard plane’s departure was 40 minutes behind schedule, the report said.

The traffic controller failed to notice the plane had intruded into the runway, oblivious even to an alarm system warning against its presence.

All 379 people on board the JAL Airbus escaped just before the aircraft was engulfed in flames.

Iran cyberspace council votes to lift ban on WhatsApp

TEHRAN, IRAN — Iran’s top council responsible for safeguarding the internet voted Tuesday to lift a ban on the popular messaging application WhatsApp, which has been subject to restrictions for over two years, state media reported. 

“The ban on WhatsApp and Google Play was removed by unanimous vote of the members of the Supreme Council of Cyberspace,” the official IRNA news agency said. 

The council is headed by the president, and its members include the parliament speaker, the head of the judiciary and several ministers. 

It was not immediately clear when the decision would come into force. 

‘Restrictions … achieved nothing but anger’

The move has sparked a debate in Iran, with critics of the restrictions arguing the controls were costly for the country.  

“The restrictions have achieved nothing but anger and added costs to people’s lives,” presidential adviser Ali Rabiei said on X Tuesday. 

“President Masoud Pezeshkian believes in removing restrictions and does not consider the bans to be in the interest of the people and the country. All experts also believe that this issue is not beneficial to the country’s security,” Vice President Mohammad Javad Zarif said on Tuesday. 

Lifting restrictions ‘a gift to enemies’

Others, however, warned against lifting the restrictions.  

The reformist Shargh daily on Tuesday reported that 136 lawmakers in the 290-member parliament sent a letter to the council saying the move would be a “gift to [Iran’s] enemies.”  

The lawmakers called for allowing access to restricted online platforms only “if they are committed to the values of Islamic society and comply with the laws of” Iran.  

Iranian officials have in the past called for the foreign companies that own popular international apps to introduce representative offices in Iran. 

Meta, the American giant that owns Facebook, Instagram and WhatsApp, has said it has no intention of setting up offices in the Islamic republic, which remains under U.S. sanctions. 

Iranians have over the years grown accustomed to using virtual private networks, or VPNs, to bypass internet restrictions.  

Other popular social media platforms, including Facebook, X and YouTube, remain blocked after being banned in 2009. 

Telegram was also banned by a court order in April 2018. 

Instagram and WhatsApp were added to the list of blocked applications following nationwide protests that erupted after the September 2022 death in custody of Mahsa Amini.  

Amini, a 22-year-old Iranian Kurd, was arrested for an alleged breach of Iran’s dress code for women. 

Hundreds of people, including dozens of security personnel, were killed in the subsequent months-long nationwide protests, and thousands of demonstrators were arrested. 

Pezeshkian, who took office in July, had vowed during his campaign to ease the long-standing internet restrictions. 

In the past several years, Iran has introduced domestic applications to supplant popular foreign ones.