Facebook Caught in an Election-security Catch-22

When it comes to dealing with hate speech and attempted election manipulation, Facebook just can’t win.

If it takes a hands-off attitude, it takes the blame for undermining democracy and letting civil society unravel. If it makes the investment necessary to take the problems seriously, it spooks its growth-hungry investors.

That dynamic was on display in Facebook’s earnings report Tuesday, when the social network reported a slight revenue miss but stronger than expected profit for the July-September period.

Shares were volatile in after-hours trading, briefly dropping the most when executives discussed slowing revenue growth and rising expenses on the conference call.

With the myriad problems Facebook is facing, that passes for good news these days. It was definitely an improvement over three months ago, when Facebook shares suffered their worst one-day drop in history, wiping out $119 billion of its market value after executives predicted rising expenses to deal with security issues along with slowing growth.

“Overall, given all the challenges Facebook has faced this year, this is a decent earnings report,” said eMarketer analyst Debra Aho Williamson.

Facebook had 2.27 billion monthly users at the end of the quarter, below the 2.29 billion analysts were expecting. Facebook says it changed the way it calculates users, which reduced the total slightly. The company’s user base was still up 10 percent from 2.07 billion monthly users a year ago.

The company earned $5.14 billion, or $1.76 per share, up 9 percent from $4.71 billion, or $1.59 per share, a year earlier. Revenue was $13.73 billion, an increase of 33 percent, for the July-September period.

Analysts had expected earnings of $1.46 per share on revenue of $13.77 billion, according to FactSet.

During the earnings call, CEO Mark Zuckerberg said 2019 would be “another year of significant investment.” He added: “I know that we need to make sure our costs and revenue are better matched over time.”

The company had already warned last quarter that its revenue growth would slow significantly for at least the rest of this year and that expenses would continue to balloon as it spends on security, hires more content moderators around the world and develops its products, be they messaging apps, video or virtual reality headsets.

The following day the stock plunged 19 percent. Shares not only haven’t recovered; they’ve since fallen further amid a broader decline in tech stocks.

Facebook’s investors, users, employees and executives have been grappling not just with questions over how much money the company makes and how many people use it, but its effects on users’ mental health and worries over what it’s doing to political discourse and elections around the world. Is Facebook killing us? Is it killing democracy?

The problems have been relentless for the past two years. Facebook can hardly crawl its way out of one before another comes up. The trouble began with “fake news” and its effects on the 2016 presidential election (a notion Zuckerberg initially dismissed) and continued with claims from conservatives of bias against them that still haven’t relented.

Then there’s hate speech, hacks and a massive privacy scandal in which Facebook exposed the data of up to 87 million users to a data mining firm, along with resulting moves toward government regulation of social media. Amid all this, there have been sophisticated attempts from Russia and Iran to interfere with elections and stir up political discord in the U.S.

All this would be more than enough to deal with. But the business challenges are also piling up. Stricter privacy regulations in Europe limit how much data the company can collect on users, and Facebook and other tech companies face a new “digital tax” in the U.K.

On Tuesday, Arjuna Capital and the New York State Common Retirement Fund filed a shareholder proposal asking Facebook to publish a report on its policies for governing what is posted on its platform and explain what it is doing to “address content that threatens democracy, human rights, and freedom of expression.”

“Young users are deleting the app and all users are taking breaks from Facebook,” said Natasha Lamb, managing partner at Arjuna Capital. “When you start to see users turn away from the platform, that’s when investors get concerned.”

A recent Pew Research Center survey found that more than a quarter of U.S. Facebook users have deleted the app from their phones and 42 percent have taken a break for at least a few weeks. Younger users were much more likely to delete the app than their older counterparts.

Nonetheless, Facebook is still enjoying healthy user growth outside the U.S.

Facebook’s stock climbed $4.07, or 2.8 percent, to $150.29 in after-hours trading. The stock had closed at $146.22, down 17 percent year-to-date.

 

S. Korean Voting Machines at Center of DRC Election Dispute

As elections approach in the central African nation of the Democratic Republic of Congo, concerns have been raised over the integrity of the electronic voting machines, made by South Korea’s Miryu Systems, that will be used in the national poll. VOA’s Steve Miller reports from Seoul on the risks.

Google Spinoff to Test Truly Driverless Cars in California

The robotic car company created by Google is poised to attempt a major technological leap in California, where its vehicles will hit the roads without a human on hand to take control in emergencies.

The regulatory approval announced Tuesday allows Waymo’s driverless cars to cruise through California at speeds up to 65 miles per hour. 

The self-driving cars have traveled millions of miles on the state’s roads since Waymo began as a secretive project within Google nearly a decade ago. But a backup driver had been required to be behind the wheel until new regulations in April set the stage for the transition to true autonomy. 

Waymo is the first among the dozens of companies testing self-driving cars in California to persuade state regulators that its technology is safe enough to operate on public roads without a safety driver inside. An engineer still must monitor the fully autonomous cars from a remote location and be able to steer and stop the vehicles if something goes wrong.

Free rides in Arizona

California, however, won’t be the first state to have Waymo’s fully autonomous cars on its streets. Waymo has been giving rides to a group of volunteer passengers in Arizona in driverless cars since last year. It has pledged to deploy its fleet of fully autonomous vans in Arizona in a ride-hailing service open to all comers in the Phoenix area by the end of this year.

But California has a much larger population and far more congestion than Arizona, making it an even more challenging place for robotic cars to get around.

Waymo is moving into its next phase in California cautiously. To start, the fully autonomous cars will only give rides to Waymo’s employees and confine their routes to roads in its hometown of Mountain View, California, and four neighboring Silicon Valley cities — Sunnyvale, Los Altos, Los Altos Hills, and Palo Alto.

If all goes well, Waymo will then seek volunteers who want to be transported in fully autonomous vehicles, similar to its early rider program in Arizona. That could then lead to a ride-hailing service like the one Waymo envisions in Arizona.

Can Waymo cars be trusted?

But Waymo’s critics are not convinced there is enough evidence that the fully autonomous cars can be trusted to drive through neighborhoods without humans behind the wheel.

“This will allow Waymo to test its robotic cars using people as human guinea pigs,” said John Simpson, privacy and technology project director for Consumer Watchdog, a group that has repeatedly raised doubts about the safety of self-driving cars.

Those concerns escalated in March after a fatal collision involving a self-driving car being tested by the leading ride-hailing service, Uber. In that incident, an Uber self-driving car with a human safety driver struck and killed a pedestrian crossing a darkened street in a Phoenix suburb.

Waymo’s cars with safety drivers have been involved in dozens of accidents in California, but those have mostly been minor fender benders at low speeds.

All told, Waymo says its self-driving cars have collectively logged more than 10 million miles in 25 cities in a handful of states while in autonomous mode, although most of those trips have occurred with safety drivers.

Will Waymo save lives?

Waymo contends its robotic vehicles will save lives because so many crashes are caused by human motorists who are intoxicated, distracted or just bad drivers.

“If a Waymo vehicle comes across a situation it doesn’t understand, it does what any good driver would do: comes to a safe stop until it does understand how to proceed,” the company said Tuesday.

China Steps Up VPN Blocks Ahead of Major Trade and Internet Shows

Chinese authorities have stepped up efforts to block virtual private networks (VPN), service providers said Tuesday in describing a “cat-and-mouse” game with censors ahead of a major trade expo and internet conference.

VPNs allow internet users in China, including foreign companies, to access overseas sites that authorities bar through the so-called Great Firewall, such as Facebook Inc and Alphabet Inc’s Google.

Since Xi Jinping became president in 2013, authorities have sought to curb VPN use, with providers suffering periodic lags in connectivity because of government blocks.

“This time, the Chinese government seemed to have staff on the ground monitoring our response in real time and deploying additional blocks,” said Sunday Yokubaitis, the chief executive of Golden Frog, the maker of the VyprVPN service.

Authorities started blocking some of its services on Sunday, he told Reuters, although VyprVPN’s service has since been restored in China.

“Our countermeasures usually work for a couple of days before the attack profile changes and they block us again,” Yokubaitis said.

The latest attacks were more aggressive than the “steadily increasing blocks” the firm had experienced in the second half of the year, he added.

The Cyberspace Administration of China did not immediately respond to a faxed request from Reuters for comment.

Another provider, ExpressVPN, also acknowledged connectivity issues on its services in China on Monday that sparked user complaints.

“There has long been a cat-and-mouse game with VPNs in China and censors regularly change their blocking techniques,” its spokesman told Reuters.

Last year, Apple Inc dropped a number of unapproved VPN apps from its app store in China, after Beijing adopted tighter rules.

Although fears of a blanket block on services have not materialized, industry experts say VPN connections often face outages around the time of major events in China.

Xi will attend a huge trade fair in Shanghai next week designed to promote China as a global importer and calm foreign concern about its trade practices, while the eastern town of Wuzhen hosts the annual World Internet Conference to showcase China’s vision for internet governance.

Censors may be testing new technology that blocks VPNs more effectively, said Lokman Tsui, who studies freedom of expression and digital rights at the Chinese University of Hong Kong.

“It could be just a wave of experiments,” he said of the latest service disruptions.

Apple’s New iPads Embrace Facial Recognition

Apple’s new iPads will resemble its latest iPhones as the company ditches the home button and fingerprint sensor to make more room for the screen.

As with the iPhone XR and XS models, the new iPad Pro will use facial-recognition technology to unlock the device and authorize app and Apple Pay purchases.

Apple also unveiled new Mac models at an opera house in New York, where the company emphasized artistic uses for its products such as creating music, video and sketches. New Macs include a MacBook Air laptop with a better screen.

Research firm IDC says tablet sales have been declining overall, though Apple saw a 3 percent increase in iPad sales last year to nearly 44 million, commanding a 27 percent market share.

 

UN Human Rights Expert Urges States to Curb Intolerance Online

Following the shooting deaths of 11 worshippers at a synagogue in the eastern United States, a U.N. human rights expert urged governments on Monday to do more to curb racist and anti-Semitic intolerance, especially online.

“That event should be a catalyst for urgent action against hate crimes, but also a reminder to fight harder against the current climate of intolerance that has made racist, xenophobic and anti-Semitic attitudes and beliefs more acceptable,” U.N. Special Rapporteur Tendayi Achiume said of Saturday’s attack on a synagogue in Pittsburgh, Pennsylvania.

Achiume, whose mandate is the elimination of racism, racial discrimination, xenophobia and related intolerance, noted in her annual report that “Jews remain especially vulnerable to anti-Semitic attacks online.”

She said that Nazi and neo-Nazi groups exploit the internet to spread and incite hate because it is “largely unregulated, decentralized, cheap” and anonymous.

Achiume, a law professor at the University of California, Los Angeles (UCLA) School of Law, said neo-Nazi groups are increasingly relying on the internet and social media platforms to recruit new members.

Facebook, Twitter and YouTube are among their favorites.

On Facebook, for example, hate groups connect with sympathetic supporters and use the platform to recruit new members, organize events and raise money for their activities. YouTube, which has over 1.5 billion viewers each month, is another critical communications tool for propaganda videos and even neo-Nazi music videos. And according to one 2012 study cited in the special rapporteur’s report, the presence of white nationalist movements on Twitter has increased by more than 600 percent.

The special rapporteur noted that while digital technology has become an integral and positive part of most people’s lives, “these developments have also aided the spread of hateful movements.”

She said in the past year, platforms including Facebook, Twitter and YouTube have banned individual users who have contributed to hate movements or threatened violence, but ensuring the removal of racist content online remains difficult.

Some hate groups try to avoid raising red flags by using racially coded messaging, which makes it harder for social media platforms to recognize their hate speech and shut down their presence.

Achiume cited as an example the use of the cartoon character “Pepe the Frog,” which was appropriated by members of neo-Nazi and white supremacist groups and was widely displayed during a white supremacist rally in the southern U.S. city of Charlottesville, Virginia, in 2017.

The special rapporteur welcomed actions in several states to counter intolerance online, but cautioned it must not be used as a pretext for censorship and other abuses. She also urged governments to work with the private sector — specifically technology companies — to fight such prejudices in the digital space.

How Green Is My Forest? There’s an App to Tell You

A web-based application that monitors the impact of successful forest-rights claims can help rural communities manage resources better and improve their livelihoods, according to analysts.

The app was developed by the Indian School of Business (ISB) to track community rights in India, where the 2006 Forest Rights Act aimed to improve the lives of rural people by recognizing their entitlement to inhabit and live off forests.

With a smartphone or tablet, the app can be used to track the status of a community rights claim.

After the claim is approved, community members can use the app to collect and analyze data on tree cover, burned areas and other changes in the forest, said Arvind Khare of the Washington, D.C.-based advocacy group Rights and Resources Initiative (RRI).

“Even in areas that have made great progress in awarding rights, it is very hard to track the socio-ecological impact of the rights on the community,” said Khare, a senior director at RRI, which is testing the app in India.

“Recording the data and analyzing it can tell you which resources need better management, so that these are not used haphazardly, but in a manner that benefits them most,” he told the Thomson Reuters Foundation.

For example, community members can record data on forest products they use such as leaves, flowers, wood and sap, making it easier to ensure that they are not over-exploited, he said.

While indigenous and local communities own more than half the world’s land under customary rights, they have secure legal rights to only 10 percent, according to RRI.

Governments maintain legal and administrative authority over more than two-thirds of global forest area, leaving local communities with limited access.

In India, under the 2006 law, at least 150 million people could have their rights to about 40 million hectares (154,400 sq miles) of forest land recognized.

But rights to only 3 percent of that land have been granted, with states largely rejecting community claims, campaigners say.

While the app is being tested in India, Khare said it can also be used in countries including Peru, Mali, Liberia and Indonesia, where RRI supports rural communities in scaling up forest rights claims.

Data can be entered offline on the app, and then uploaded to the server when the device is connected to the internet. Data is stored in the cloud and accessible to anyone, said Ashwini Chhatre, an associate professor at ISB.

“All this while local communities have been fighting simply for the right to live in the forest and use its resources. Now, they can use data to truly benefit from it,” he said.
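The report does not describe how the app is built, but the offline-entry-and-upload flow Chhatre outlines can be sketched roughly as follows. This is a minimal illustration only: the local queue file, the upload endpoint and the data fields are assumptions made for the example, not details of ISB’s actual implementation.

    import json
    import os
    import requests  # any HTTP client would do; used here for the upload step

    QUEUE_FILE = "pending_observations.json"            # local store used while offline
    UPLOAD_URL = "https://example.org/api/observations"  # hypothetical endpoint, not ISB's real API

    def record_observation(observation: dict) -> None:
        """Save a field observation (e.g. tree cover, burned area) to the local queue."""
        queue = []
        if os.path.exists(QUEUE_FILE):
            with open(QUEUE_FILE) as f:
                queue = json.load(f)
        queue.append(observation)
        with open(QUEUE_FILE, "w") as f:
            json.dump(queue, f)

    def sync_when_online() -> int:
        """Upload queued observations; keep anything that fails for the next attempt."""
        if not os.path.exists(QUEUE_FILE):
            return 0
        with open(QUEUE_FILE) as f:
            queue = json.load(f)
        remaining, uploaded = [], 0
        for observation in queue:
            try:
                response = requests.post(UPLOAD_URL, json=observation, timeout=10)
                response.raise_for_status()
                uploaded += 1
            except requests.RequestException:
                remaining.append(observation)  # still offline or server error; retry later
        with open(QUEUE_FILE, "w") as f:
            json.dump(remaining, f)
        return uploaded

In this pattern, observations recorded in the forest accumulate locally and are pushed to shared storage the next time the device finds a connection, matching the offline-first behavior described above.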

Teen’s Program Could Improve Pancreatic Cancer Treatment

Pancreatic cancer treatment could become more advanced with help from 13-year-old Rishab Jain. He’s created a tool for doctors to locate the hard-to-find pancreas more quickly and precisely during cancer treatment. The teen recently won a prestigious young scientist award for his potentially game-changing idea. VOA’s Julie Taboh has more.

Q&A: Facebook Describes How It Detects ‘Inauthentic Behavior’

Facebook announced Friday that it had removed 82 Iranian-linked accounts on Facebook and Instagram. A Facebook spokesperson answered VOA’s questions about its process and efforts to detect what it calls “coordinated inauthentic behavior” by accounts pretending to be U.S. and U.K. citizens and aimed at U.S. and U.K. audiences.

Q: Facebook’s post says there were 7 “events hosted.” Any details about where, when, who?

A: Of seven events, the first was scheduled for February 2016, and the most recent was scheduled for June 2018. One hundred and ten people expressed interest in at least one of these events, and two events received no interest. We cannot confirm whether any of these events actually occurred. Some appear to have been planned to occur only online. The themes are similar to the rest of the activity we have described.

Q: Is there any indication this was an Iranian government-linked program?

A: We recently discussed the challenges involved with determining who is behind information operations. In this case, we have not been able to determine any links to the Iranian government, but we are continuing to investigate. Also, Atlantic Council’s Digital Forensic Research Lab has shared their take on the content in this case here.

Q: How long was the time between discovering this and taking down the pages?

A: We first detected this activity one week ago. As soon as we detected this activity, the teams in our elections war room worked quickly to investigate and remove these bad actors. Given the elections, we took action as soon as we’d completed our initial investigation and shared the information with U.S. and U.K. government officials, U.S. law enforcement, Congress, other technology companies and the Atlantic Council’s Digital Forensic Research Lab.

Q: How have you improved the reporting processes in the past year to speed the ability to remove such content?

A: Just to clarify, today’s takedown was a result of our teams proactively discovering suspicious signals on a page that appeared to be run by Iranian users. From there, we investigated and found the set of pages, groups and accounts that we removed today.

To your broader question on how we’ve improved over the past two years: To ensure that we stay ahead, we’ve invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence we detect many fake accounts, the root cause of so many issues, before they are even created. We’re also working more closely with governments, law enforcement, security experts and other companies because no one organization can do this on its own.

Q: How many people do you have monitoring content in English now? In Persian?

A: We have over 7,500 content reviewers globally. We don’t provide breakdowns of the number of people working in specific languages or regions because that alone doesn’t reflect the number of people working to review content for a particular country or region at any particular time.

Q: How are you training people to spot this content? What’s the process?

A: To be clear, today’s takedown was the result of an internal investigation involving a combination of manual work by our teams of skilled investigators and data science teams using automated tools to look for larger patterns to identify potentially inauthentic behavior. In this case, we relied on both of these techniques working together.

On your separate question about training content reviewers, here is more on our content reviewers and how we support them.

Q: Does Facebook have any more information on how effective this messaging is at influencing behavior?

A: We aren’t in a position to know.

Google Abandons Berlin Campus Plan After Locals Protest

Google is abandoning plans to establish a campus for tech startups in Berlin after protests from residents worried about gentrification.

The internet giant confirmed reports Thursday it will sublet the former electrical substation in the capital’s Kreuzberg district to two charitable organizations, Betterplace.org and Karuna.

Google has more than a dozen so-called campuses around the world. They are intended as hubs to bring together potential employees, startups and investors.

Protesters had recently picketed the Umspannwerk site with placards such as “Google go home.”

Karuna, which helps disadvantaged children, said Google will pay 14 million euros ($16 million) toward renovation and maintenance for the coming five years.

Google said it will continue to work with startups in Berlin, which has become a magnet for tech companies in Germany in recent years.

Google Abandons Planned Berlin Office Hub

Campaigners in a bohemian district of Berlin celebrated Wednesday after Internet giant Google abandoned strongly opposed plans to open a large campus there.

The US firm had planned to set up an incubator for start-up companies in Kreuzberg, one of the older districts in the west of the capital.

But the company’s German spokesman Ralf Bremer announced Wednesday that the 3,000-square-metre (3,590-square-yard) space, planned to host offices, cafes and communal work areas, would instead go to two local humanitarian associations.

Bremer did not say if local resistance to the plans over the past two years had played a part in the change of heart, although he had told the Berliner Zeitung daily that Google does not let protests dictate its actions.

“The struggle pays off,” tweeted “GloReiche Nachbarschaft”, one of the groups opposed to the Kreuzberg campus plan and part of the “F**k off Google” campaign.

Some campaigners objected to what they described as Google’s “evil” corporate practices, such as tax evasion and the unethical use of personal data.

Others opposed the gentrification of the district, saying it is pricing too many people out of the area.

A recent study carried out by the consultancy Knight Frank concluded that property prices are rising faster in Berlin than anywhere else in the world: they jumped 20.5 percent between 2016 and 2017.

In Kreuzberg over the same period, the rise was an astonishing 71 percent.

Kreuzberg, which was bordered by the Berlin Wall that divided East and West Berlin during the Cold War, has traditionally been a bastion of the city’s underground and radical culture.

Facebook Unveils Systems for Catching Child Nudity, ‘Grooming’ of Children

Facebook Inc said on Wednesday that company moderators during the last quarter removed 8.7 million user images of child nudity with the help of previously undisclosed software that automatically flags such photos.

The machine learning tool rolled out over the last year identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualized context.

A similar system also disclosed Wednesday catches users engaged in “grooming,” or befriending minors for sexual exploitation.

Facebook’s global head of safety Antigone Davis told Reuters in an interview that the “machine helps us prioritize” and “more efficiently queue” problematic content for the company’s trained team of reviewers.
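Facebook has not published the tool’s internals. Purely as an illustration of the approach described above, the sketch below shows how two machine learning signals (one for nudity, one for the presence of a child) could be combined to flag images and prioritize them for human review. The class names, thresholds and scoring are invented for the example and are not Facebook’s code.

    from typing import Iterable, List, NamedTuple

    class ImageScores(NamedTuple):
        image_id: str
        nudity_prob: float  # output of an adult-nudity classifier (assumed)
        minor_prob: float   # output of a "contains a child" classifier (assumed)

    # Thresholds are invented for illustration only.
    NUDITY_THRESHOLD = 0.8
    MINOR_THRESHOLD = 0.8

    def is_flagged(img: ImageScores) -> bool:
        """Flag an image only when both signals are present, as the article describes."""
        return img.nudity_prob >= NUDITY_THRESHOLD and img.minor_prob >= MINOR_THRESHOLD

    def build_review_queue(images: Iterable[ImageScores]) -> List[ImageScores]:
        """Order flagged images so the highest-confidence cases reach human reviewers first."""
        flagged = [img for img in images if is_flagged(img)]
        return sorted(flagged, key=lambda img: img.nudity_prob * img.minor_prob, reverse=True)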

The company is exploring applying the same technology to its Instagram app.

Under pressure from regulators and lawmakers, Facebook has vowed to speed up removal of extremist and illicit material.

Machine learning programs that sift through the billions of pieces of content users post each day are essential to its plan.

Machine learning is imperfect, and news agencies and advertisers are among those that have complained this year about Facebook’s automated systems wrongly blocking their posts.

Davis said the child safety systems would make mistakes but users could appeal.

“We’d rather err on the side of caution with children,” she said.

Facebook’s rules have for years banned even family photos of lightly clothed children uploaded with “good intentions,” out of concern about how others might abuse such images.

Before the new software, Facebook relied on users or its adult nudity filters to catch child images. A separate system blocks child pornography that has previously been reported to authorities.

Facebook has not previously disclosed data on child nudity removals, though some would have been counted among the 21 million posts and comments it removed in the first quarter for sexual activity and adult nudity.

Facebook said the program, which learned from its collection of nude adult photos and photos of clothed children, has led to more removals. It makes exceptions for art and history, such as the Pulitzer Prize-winning photo of a naked girl fleeing a Vietnam War napalm attack.

Protecting minors

The child grooming system evaluates factors such as how many people have blocked a particular user and whether that user quickly attempts to contact many children, Davis said.
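Facebook has not disclosed how such factors are weighted. The toy heuristic below only illustrates how signals like the ones Davis mentioned might be combined into a score used to queue accounts for human review; the weights, caps and threshold are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class AccountSignals:
        blocks_received: int       # how many people have blocked this account
        minors_contacted_24h: int  # distinct minors the account tried to contact in a day
        account_age_days: int

    def grooming_risk_score(s: AccountSignals) -> float:
        """Toy risk score in [0, 1]; weights and caps are invented for illustration."""
        block_signal = min(s.blocks_received / 20.0, 1.0)         # many blocks looks suspicious
        contact_signal = min(s.minors_contacted_24h / 10.0, 1.0)  # rapid outreach to many minors
        new_account_signal = 1.0 if s.account_age_days < 30 else 0.0
        return 0.5 * contact_signal + 0.35 * block_signal + 0.15 * new_account_signal

    def should_queue_for_review(s: AccountSignals, threshold: float = 0.6) -> bool:
        """Accounts above the threshold would be queued for human reviewers, not acted on automatically."""
        return grooming_risk_score(s) >= threshold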

Michelle DeLaune, chief operating officer at the National Center for Missing and Exploited Children (NCMEC), said the organization expects to receive about 16 million child porn tips worldwide this year from Facebook and other tech companies, up from 10 million last year.

With the increase, NCMEC said it is working with Facebook to develop software to decide which tips to assess first.

Still, DeLaune acknowledged that a crucial blind spot is encrypted chat apps and secretive “dark web” sites where much of new child pornography originates.

Encryption of messages on Facebook-owned WhatsApp, for example, prevents machine learning from analyzing them.

DeLaune said NCMEC would educate tech companies and “hope they use creativity” to address the issue.