Facebook Says Human Rights Report Shows It Should Do More in Myanmar

Facebook on Monday said a human rights report it commissioned on its presence in Myanmar showed it had not done enough to prevent its social network from being used to incite violence.

The report by San Francisco-based nonprofit Business for Social Responsibility (BSR) recommended that Facebook more strictly enforce its content policies, increase engagement with both Myanmar officials and civil society groups, and regularly release additional data about its progress in the country.

“The report concludes that, prior to this year, we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more,” Alex Warofka, a Facebook product policy manager, said in a blog post.

In the report, which Facebook released publicly, BSR also warned that the company must be prepared to handle a likely onslaught of misinformation during Myanmar’s 2020 elections, as well as new problems as use of its WhatsApp messaging service grows in the country.

A Reuters special report in August found that Facebook failed to promptly heed numerous warnings from organizations in Myanmar about social media posts fueling attacks on minority groups such as the Rohingya.

In August 2017 the military led a crackdown in Myanmar’s Rakhine State in response to attacks by Rohingya insurgents, driving more than 700,000 Rohingya Muslims to flee to neighboring Bangladesh, according to U.N. agencies.

Facebook in August removed several Myanmar military officials from the platform to prevent the spread of “hate and misinformation,” the first time it had banned a country’s military or political leaders.

It also removed dozens of accounts for engaging in a campaign that “used seemingly independent news and opinion pages to covertly push the messages of the Myanmar military.”

The move came hours after United Nations investigators said the army carried out mass killings and gang rapes of Muslim Rohingya with “genocidal intent.”

Facebook said it has begun correcting shortcomings.

Facebook said it now has 99 Myanmar-language specialists reviewing potentially questionable content. In addition, it has expanded its use of automated tools to reduce the distribution of violent and dehumanizing posts while they undergo review.

In the third quarter, the company said it “took action” on about 64,000 pieces of content that violated its hate speech policies. About 63 percent were identified by automated software, up from 52 percent in the prior quarter.

Facebook has roughly 20 million users in Myanmar, according to BSR, which warned that the company faces several unresolved challenges in the country.

Locating staff there, for example, could help Facebook understand how its services are used locally, BSR said, but those workers could be targeted by the country’s military, which the U.N. has accused of ethnic cleansing of the Rohingya.

Don’t Leave Half the World Offline and Behind, Urges Web Founder

British computer scientist Tim Berners-Lee, who invented the World Wide Web, appealed on Monday to companies and governments not to leave behind the half of the world’s population that does not yet have internet access, including billions of women and girls.

Berners-Lee told the opening of Europe’s largest technology conference that everyone had assumed his 1989 breakthrough, which connected humanity through technology, would lead to good things – and for a while it had.

But he said the internet was “coming of age” and going awry, with fake news, privacy problems, hate speech and political polarization, as well as a growing digital divide between richer and poorer countries.

He called on companies and governments to join a “contract for the web” by next May in order to rebuild trust in the internet and find new ways to monetize, regulate and ensure fair and affordable access to the online world.

“Everything we do … to make the web more powerful, it means we increase the digital divide,” Berners-Lee, 63, told the opening of the ninth edition of the Web Summit, dubbed “the Davos for geeks,” which attracts up to 70,000 people. “We’ve an obligation to look after both parts of the world.”

Berners-Lee highlighted studies showing that half of the world population will be online by next year – but the rate of take-up was slowing considerably, potentially leaving billions cut off from government services, education and public debate.

His concerns were echoed by U.N. Secretary-General Antonio Guterres, who stressed the need for a “digital future that is safe and beneficial to all” to meet the United Nations’ global goals of ending inequality and extreme poverty by 2030.

In 2016, the United Nations passed a resolution condemning the intentional disruption of internet access as a violation of human rights.

Google’s head of philanthropy, Jacqueline Fuller, said it was a huge milestone for the web to turn 30 next year, adding that her company was one of 50 organizations to have already signed up to the pact developed by Berners-Lee’s World Wide Web Foundation.

Other supporters include Facebook, British billionaire entrepreneur Richard Branson and the French government.

“This is also a great opportunity for us,” Fuller told the Web Summit. “Women and girls are much less likely to have access (to the internet).”

Despite the challenges, Berners-Lee said he was optimistic about the future of the internet.

“The ad-based funding model doesn’t have to work in the same way. It doesn’t have to create clickbait,” he said.

Musk Tweets New Video of LA-area Transportation Test Tunnel

Elon Musk has tweeted a new video of a tunnel constructed under a Los Angeles suburb to test a new type of transportation system.

Musk tweeted Saturday that he had walked the length of the tunnel and called it “disturbingly long.”

The tunnel runs about 2 miles (3.2 kilometers) under the streets of Hawthorne, where Musk’s SpaceX headquarters is located.

Musk envisions a transportation system in which vehicles or people pods are moved through tunnels on electrically powered platforms called skates.

He plans to show off the test tunnel with an opening party on Dec. 10 and offer free rides the next day.

Musk has proposed a tunnel across western Los Angeles and another between a subway line and Dodger Stadium.

As Americans Vote, Facebook Struggles With Misinformation

As U.S. voters prepare to head to the polls Tuesday, the election will also be a referendum on Facebook.

In recent months, the social networking giant has beefed up scrutiny of what is posted on its site, looking for fake accounts, misinformation and hate speech, while encouraging people to go on Facebook to express their views.

“A lot of the work of content moderation for us begins with our company mission, which is to build community and bring the world closer together,” Peter Stern, who works on product policy stakeholder engagement at Facebook, said at a recent event at St. John’s University in New York City.

Facebook wants people to feel safe when they visit the site, Stern said. To that end, it is on track to hire 20,000 people to tackle safety and security on the platform.

As part of its stepped-up effort, Facebook works with third-party fact-checkers and takes down misinformation that contributes to violence, according to a blog post by Mark Zuckerberg, Facebook’s CEO.

But the most popular content, often dubbed “viral,” is frequently the most extreme. Facebook devalues posts it deems incorrect, reducing their virality, or future views, by 80 percent, Zuckerberg said.
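
In practice, what Zuckerberg describes is down-ranking rather than removal. As a rough, purely illustrative Python sketch (Facebook’s actual ranking system is proprietary; the function, constant and figures below are hypothetical), the mechanic works like this:

    # Hypothetical illustration only; Facebook's real ranking pipeline is not public.
    DEMOTION_FACTOR = 0.2  # a flagged post keeps 20 percent of its normal reach

    def distribution_score(base_score: float, flagged_as_false: bool) -> float:
        """Return the score that decides how widely a post spreads in feeds."""
        if flagged_as_false:
            return base_score * DEMOTION_FACTOR  # the 80 percent reduction
        return base_score

    # A post projected to reach 10,000 feeds drops to 2,000 once flagged.
    print(distribution_score(10_000, flagged_as_false=False))  # 10000
    print(distribution_score(10_000, flagged_as_false=True))   # 2000.0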

Disinformation campaigns

Facebook recently removed accounts, followed by more than 1 million people, that it said were linked to Iran but made to look as if they had been created by people in the U.S. Some posted about the upcoming midterm elections.

The firm also removed hundreds of American accounts that it said were spamming political misinformation.

Still, Facebook is criticized for what at times appears to be flaws in its processes.

Vice News recently posed as all 100 U.S. senators and bought fake political ads on the site. Facebook approved them all and later said it had made a mistake.

Politicians in Britain and Canada have asked Zuckerberg to testify on Facebook’s role in spreading disinformation.

“I think they are really struggling and that’s not surprising, because it’s a very hard problem,” said Daphne Keller, who used to be on Google’s legal team and is now with Stanford University.

“If you think about it, they get millions, billions of new posts a day, most of them some factual claim or sentiment that nobody has ever posted before, so to go through these and figure out which are misinformation, which are false, which are intending to affect an electoral outcome, that is a huge challenge,” Keller said. “There isn’t a human team that can do that in the world, there isn’t a machine that can do that in the world.”

Transparency

While it has been purging its site of accounts that violate its policies, the company has also revealed more about how decisions are made in removing posts. In a 27-page document, Facebook described in detail what content it removes and why, and updated its appeals process. 

Stern, of Facebook, supports the company’s efforts at transparency.

“Having a system that people view as legitimate and basically fair even when they don’t agree with any individual decision that we’ve made is extremely important,” he said.

The stepped-up efforts to give users more clarity about the rules and the steps to challenge decisions are signs Facebook is moving in the right direction, Stanford’s Keller said.

“We need to understand that it is built into the system that there will be a fair amount of failure and there needs to be appeals process and transparency to address that,” she said.
