How Green Is My Forest? There’s an App to Tell You

A web-based application that monitors the impact of successful forest-rights claims can help rural communities manage resources better and improve their livelihoods, according to analysts.

The app was developed by the Indian School of Business (ISB) to track community rights in India, where the 2006 Forest Rights Act aimed to improve the lives of rural people by recognizing their entitlement to inhabit and live off forests.

The app can be used on a smartphone or tablet to track the status of a community rights claim.

After the claim is approved, community members can use it to collect and analyze data on tree cover, burned areas and other changes in the forest, said Arvind Khare of the Washington, D.C.-based advocacy group Rights and Resources Initiative (RRI).

“Even in areas that have made great progress in awarding rights, it is very hard to track the socio-ecological impact of the rights on the community,” said Khare, a senior director at RRI, which is testing the app in India.

“Recording the data and analyzing it can tell you which resources need better management, so that these are not used haphazardly, but in a manner that benefits them most,” he told the Thomson Reuters Foundation.

For example, community members can record data on the forest products they use, such as leaves, flowers, wood and sap, making it easier to ensure that these are not over-exploited, he said.

While indigenous and local communities own more than half the world’s land under customary rights, they have secure legal rights to only 10 percent, according to RRI.

Governments maintain legal and administrative authority over more than two-thirds of global forest area, leaving local communities with limited access.

In India, under the 2006 law, at least 150 million people could have their rights recognized to about 40 million hectares (154,400 sq miles) of forest land.

But rights to only 3 percent of that land have been granted, with states largely rejecting community claims, campaigners say.

While the app is being tested in India, Khare said it can also be used in countries including Peru, Mali, Liberia and Indonesia, where RRI supports rural communities in scaling up forest rights claims.

Data can be entered into the app offline and then uploaded to the server when the device is connected to the internet. The data is stored in the cloud, where anyone can access it, said Ashwini Chhatre, an associate professor at ISB.
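
Neither ISB nor RRI has published the app's code, but the offline-entry-then-sync pattern Chhatre describes is a common one. The Python sketch below is a hypothetical illustration only: the queue file name and upload endpoint are placeholders, not details of the actual app.

```python
# Minimal sketch of an offline-first client: observations are queued locally
# and flushed to a server when connectivity returns. The endpoint URL and
# file name are placeholders, not the ISB app's actual API.
import json
import os

import requests

QUEUE_FILE = "pending_observations.json"
UPLOAD_URL = "https://example.org/api/observations"  # placeholder endpoint


def record_observation(obs: dict) -> None:
    """Append an observation (e.g., tree cover, burned area) to the local queue."""
    queue = []
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            queue = json.load(f)
    queue.append(obs)
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f)


def sync_when_online() -> None:
    """Upload queued observations; keep any that fail for the next attempt."""
    if not os.path.exists(QUEUE_FILE):
        return
    with open(QUEUE_FILE) as f:
        queue = json.load(f)
    remaining = []
    for obs in queue:
        try:
            requests.post(UPLOAD_URL, json=obs, timeout=10).raise_for_status()
        except requests.RequestException:
            remaining.append(obs)  # still offline or server error; retry later
    with open(QUEUE_FILE, "w") as f:
        json.dump(remaining, f)
```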

“All this while local communities have been fighting simply for the right to live in the forest and use its resources. Now, they can use data to truly benefit from it,” he said.

App Taken Down After Pittsburgh Gunman Revealed as User

Gab, a social networking site often accused of being a haven for white supremacists, neo-Nazis and other hate groups, went offline Monday after several web hosting providers dropped it, following revelations that Pittsburgh synagogue shooting suspect Robert Bowers had used the platform to threaten Jews.

“Gab isn’t going anywhere,” said Andrew Torba, chief executive officer and creator of Gab.com. “We will exercise every possible avenue to keep Gab online and defend free speech and individual liberty for all people.”

Founded two years ago as an alternative to mainstream social networking sites like Facebook and Twitter, Gab was billed by Torba as a haven for free speech. The site soon began attracting members of the alt-right and adherents of other extremist ideologies unwelcome on other platforms.

“What makes the entirely left-leaning Big Social monopoly qualified to tell us what is ‘news’ and what is ‘trending’ and to define what ‘harassment’ means?” Torba wrote in a 2016 email to BuzzFeed News.

The tide swiftly turned against Gab after Bowers entered the Tree of Life synagogue Saturday morning with an assault rifle and several handguns, killing 11 and wounding six.

It came to light that Bowers had made several anti-Semitic posts on the site, including one the morning of the shooting that read “HIAS likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” HIAS, founded as the Hebrew Immigrant Aid Society, helps refugees resettle in the United States.

After national media picked up Bowers’ posts, PayPal and payment processor Stripe announced that they would end their relationships with Gab. Hosting providers followed soon after, and the website was nonfunctional by Monday morning.

In an interview with NPR aired Monday, Torba defended leaving up Bowers’ post from the morning of the shooting.

“Do you see a direct threat in there?” Torba said. “Because I don’t. What would you expect us to do with a post like that? You want us to just censor anybody who says the phrase ‘I’m going in’? Because that’s just absurd.”

Teen’s Program Could Improve Pancreatic Cancer Treatment

Pancreatic cancer treatment could become more advanced with help from 13-year-old Rishab Jain. He’s created a tool for doctors to locate the hard-to-find pancreas more quickly and precisely during cancer treatment. The teen recently won a prestigious young scientist award for his potentially game-changing idea. VOA’s Julie Taboh has more.

Plant Fibers Make Stronger Concrete

It may surprise you that cement is responsible for 7 percent of the world’s carbon emissions. That’s because it takes a lot of heat to produce cement, the basic powdery ingredient that eventually becomes concrete. But it turns out that simple fibers from carrots could not only reduce that carbon footprint but also make concrete stronger. VOA’s Kevin Enochs reports.

Q&A: Facebook Describes How It Detects ‘Inauthentic Behavior’

Facebook announced Friday that it had removed 82 Iranian-linked accounts on Facebook and Instagram. A Facebook spokesperson answered VOA’s questions about its process and efforts to detect what it calls “coordinated inauthentic behavior” by accounts pretending to be U.S. and U.K. citizens and aimed at U.S. and U.K. audiences.

Q: Facebook’s post says there were 7 “events hosted.” Any details about where, when, who?

A: Of seven events, the first was scheduled for February 2016, and the most recent was scheduled for June 2018. One hundred and ten people expressed interest in at least one of these events, and two events received no interest. We cannot confirm whether any of these events actually occurred. Some appear to have been planned to occur only online. The themes are similar to the rest of the activity we have described.

Q: Is there any indication this was an Iranian government-linked program?

A: We recently discussed the challenges involved with determining who is behind information operations. In this case, we have not been able to determine any links to the Iranian government, but we are continuing to investigate. Also, Atlantic Council’s Digital Forensic Research Lab has shared their take on the content in this case.

Q: How long was the time between discovering this and taking down the pages?

A: We first detected this activity one week ago. As soon as we detected this activity, the teams in our elections war room worked quickly to investigate and remove these bad actors. Given the elections, we took action as soon as we’d completed our initial investigation and shared the information with U.S. and U.K. government officials, U.S. law enforcement, Congress, other technology companies and the Atlantic Council’s Digital Forensic Research Lab.

Q: How have you improved the reporting processes in the past year to speed the ability to remove such content?

A: Just to clarify, today’s takedown was a result of our teams proactively discovering suspicious signals on a page that appeared to be run by Iranian users. From there, we investigated and found the set of pages, groups and accounts that we removed today.

To your broader question on how we’ve improved over the past two years: To ensure that we stay ahead, we’ve invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence we detect many fake accounts, the root cause of so many issues, before they are even created. We’re also working more closely with governments, law enforcement, security experts and other companies because no one organization can do this on its own.

Q: How many people do you have monitoring content in English now? In Persian?

A: We have over 7,500 content reviewers globally. We don’t provide breakdowns of the number of people working in specific languages or regions because that alone doesn’t reflect the number of people working to review content for a particular country or region at any particular time.

Q: How are you training people to spot this content? What’s the process?

A: To be clear, today’s takedown was the result of an internal investigation involving a combination of manual work by our teams of skilled investigators and data science teams using automated tools to look for larger patterns to identify potentially inauthentic behavior. In this case, we relied on both of these techniques working together.

On your separate question about training content reviewers, we have published more information on our content reviewers and how we support them.
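
Facebook does not disclose what its automated tools actually look for, but one crude example of the “larger patterns” the spokesperson mentions is many distinct accounts posting identical text within a short window. The Python sketch below is a toy illustration of that idea only; the field names and thresholds are assumptions, not Facebook’s signals.

```python
# Toy illustration: flag texts posted verbatim by many distinct accounts
# within a short time window. Field names ("account", "text", "time") and
# thresholds are assumptions for this sketch, not Facebook's actual signals.
from collections import defaultdict
from datetime import timedelta


def flag_coordinated_texts(posts, window=timedelta(minutes=10), min_accounts=5):
    """Return texts that at least `min_accounts` distinct accounts posted
    within `window` of one another. `posts` is an iterable of dicts with
    'account' (str), 'text' (str) and 'time' (datetime) keys."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[post["text"]].append(post)

    flagged = {}
    for text, group in by_text.items():
        group.sort(key=lambda p: p["time"])
        left = 0
        for right in range(len(group)):
            # Shrink the window from the left so it spans at most `window`.
            while group[right]["time"] - group[left]["time"] > window:
                left += 1
            accounts = {p["account"] for p in group[left:right + 1]}
            if len(accounts) >= min_accounts:
                flagged[text] = sorted(accounts)
                break
    return flagged
```

Such automated flags would only surface candidates; as the answer above notes, human investigators would still review them.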

Q: Does Facebook have any more information on how effective this messaging is at influencing behavior?

A: We aren’t in a position to know.

Study: Online Attacks on Jews Ramp Up Before Election Day

Far-right extremists have ramped up an intimidating wave of anti-Semitic harassment against Jewish journalists, political candidates and others ahead of next month’s U.S. midterm elections, according to a report released Friday by a Jewish civil rights group.

The Anti-Defamation League’s report says its researchers analyzed more than 7.5 million Twitter messages from Aug. 31 to Sept. 17 and found nearly 30 percent of the accounts repeatedly tweeting derogatory terms about Jews appeared to be automated “bots.”
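
The report’s methodology is not reproduced in the article, but the basic shape of such an analysis — filter a large tweet set against a term list, group matches by account, then apply an automation heuristic — can be sketched. Everything in the Python snippet below, including the placeholder term list, the posting-rate threshold and the field names, is an illustrative assumption rather than the ADL’s actual method.

```python
# Illustrative sketch of the analysis shape described above. The term list is
# a placeholder and the posting-rate heuristic is an assumption, not ADL's method.
from collections import Counter

SEARCH_TERMS = {"placeholder_term_a", "placeholder_term_b"}  # stand-ins for the report's lexicon


def repeat_offenders(tweets, min_matches=3):
    """Accounts that tweeted matching terms at least `min_matches` times.
    `tweets` is an iterable of dicts with 'account' and 'text' keys."""
    hits = Counter(
        t["account"]
        for t in tweets
        if any(term in t["text"].lower() for term in SEARCH_TERMS)
    )
    return {account for account, n in hits.items() if n >= min_matches}


def looks_automated(tweets_per_day: float, threshold: float = 72.0) -> bool:
    """Crude bot heuristic: sustained posting far above a human-plausible rate."""
    return tweets_per_day > threshold


def bot_share(offenders, daily_rates):
    """Fraction of flagged accounts whose posting rate looks automated.
    `daily_rates` maps account -> average tweets per day."""
    if not offenders:
        return 0.0
    flagged = sum(looks_automated(daily_rates.get(a, 0.0)) for a in offenders)
    return flagged / len(offenders)
```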

But accounts controlled by real-life humans often mount the most “worrisome and harmful” anti-Semitic attacks, sometimes orchestrated by leaders of neo-Nazi or white nationalist groups, the researchers said.

“Both anonymity and automation have been used in online propaganda offensives against the Jewish community during the 2018 midterms,” they wrote.

Billionaire philanthropist George Soros was a leading subject of harassing tweets. Soros, a Hungarian-born Jew demonized by right-wing conspiracy theorists, is one of the prominent Democrats who had pipe bombs sent to them this week.

The ADL’s study concludes that online disinformation and abuse are disproportionately targeting Jews in the U.S. “during this crucial political moment.”

“Prior to the election of President Donald Trump, anti-Semitic harassment and attacks were rare and unexpected, even for Jewish Americans who were prominently situated in the public eye. Following his election, anti-Semitism has become normalized and harassment is a daily occurrence,” the report says.

The New York City-based ADL has commissioned other studies of online hate, including a report in May that estimated about 3 million Twitter users posted or re-posted at least 4.2 million anti-Semitic tweets in English over a 12-month period ending Jan. 28. An earlier report from the group said anti-Semitic incidents in the U.S. in the previous year had reached the highest tally it has counted in more than two decades.

For the latest report, researchers interviewed five Jewish people, including two recent political candidates, who had faced “human-based attacks” against them on social media this year. Their experiences demonstrated that anti-Semitic harassment “has a chilling effect on Jewish Americans’ involvement in the public sphere,” their report says.

“While each interview subject spoke of not wanting to let threats of the trolls affect their online activity, political campaigns, academic research or news reporting, they all admitted the threats of violence and deluges of anti-Semitism had become part of their internal equations,” researchers wrote.

The most popular term in tweets containing the hashtag #TrumpTrain was “Soros.” The study also found a “surprising” abundance of tweets referencing “QAnon,” a right-wing conspiracy theory that started on an online message board and has been spread by Trump supporters.

“There are strong anti-Semitic undertones, as followers decry George Soros and the Rothschild family as puppeteers,” researchers wrote.

Facebook Removes Iran-Linked Accounts Spreading False Info

Facebook says it has removed 82 pages, accounts and groups linked to Iran from its service and from Instagram for spreading misinformation.

The company says the accounts were targeting U.S. and U.K. citizens and typically represented themselves to be American or British citizens, posting about politically charged topics such as race relations and opposition to President Donald Trump.

Facebook said Friday that a manual review of the accounts linked them to Iran. Facebook has traditionally relied heavily on automated checks to detect misinformation and other bad behavior on its service.

The company previously disclosed that it had found and removed similar activity originating in Iran in August.

The removals come less than two weeks before the U.S. midterm elections.