Apple’s New iPads Embrace Facial Recognition

Apple’s new iPads will resemble its latest iPhones as the company ditches the home button and fingerprint sensor to make room for a larger screen.

As with the iPhone XR and XS models, the new iPad Pro will use facial-recognition technology to unlock the device and authorize app and Apple Pay purchases.

Apple also unveiled new Mac models at an opera house in New York, where the company emphasized artistic uses for its products such as creating music, video and sketches. New Macs include a MacBook Air laptop with a better screen.

Research firm IDC says tablet sales have been declining overall, though Apple saw a 3 percent increase in iPad sales last year to nearly 44 million, commanding a 27 percent market share.

UN Human Rights Expert Urges States to Curb Intolerance Online

Following the shooting deaths of 11 worshippers at a synagogue in the eastern United States, a U.N. human rights expert urged governments on Monday to do more to curb racist and anti-Semitic intolerance, especially online.

“That event should be a catalyst for urgent action against hate crimes, but also a reminder to fight harder against the current climate of intolerance that has made racist, xenophobic and anti-Semitic attitudes and beliefs more acceptable,” U.N. Special Rapporteur Tendayi Achiume said of Saturday’s attack on a synagogue in Pittsburgh, Pennsylvania.

Achiume, whose mandate is the elimination of racism, racial discrimination, xenophobia and related intolerance, noted in her annual report that “Jews remain especially vulnerable to anti-Semitic attacks online.”

She said that Nazi and neo-Nazi groups exploit the internet to spread and incite hate because it is “largely unregulated, decentralized, cheap” and anonymous.

Achiume, a law professor at the University of California, Los Angeles (UCLA) School of Law, said neo-Nazi groups are increasingly relying on the internet and social media platforms to recruit new members.

Facebook, Twitter and YouTube are among their favorites.

On Facebook, for example, hate groups connect with sympathetic supporters and use the platform to recruit new members, organize events and raise money for their activities. YouTube, which has over 1.5 billion viewers each month, is another critical communications tool for propaganda videos and even neo-Nazi music videos. And according to one 2012 study cited in the special rapporteur’s report, the presence of white nationalist movements on Twitter has increased by more than 600 percent.

The special rapporteur noted that while digital technology has become an integral and positive part of most people’s lives, “these developments have also aided the spread of hateful movements.”

She said in the past year, platforms including Facebook, Twitter and YouTube have banned individual users who have contributed to hate movements or threatened violence, but ensuring the removal of racist content online remains difficult.

Some hate groups try to avoid raising red flags by using racially coded messaging, which makes it harder for social media platforms to recognize their hate speech and shut down their presence.

Achiume cited as an example the use of a cartoon character “Pepe the Frog,” which was appropriated by members of neo-Nazi and white supremacist groups and was widely displayed during a white supremacist rally in the southern U.S. city of Charlottesville, Virginia, in 2017.

The special rapporteur welcomed actions in several states to counter intolerance online, but cautioned that such action must not be used as a pretext for censorship and other abuses. She also urged governments to work with the private sector — specifically technology companies — to fight such prejudices in the digital space.

How Green Is My Forest? There’s an App to Tell You

A web-based application that monitors the impact of successful forest-rights claims can help rural communities manage resources better and improve their livelihoods, according to analysts.

The app was developed by the Indian School of Business (ISB) to track community rights in India, where the 2006 Forest Rights Act aimed to improve the lives of rural people by recognizing their entitlement to inhabit and live off forests.

Using a smartphone or tablet, community members can track the status of a community rights claim through the app.

After a claim is approved, community members can use the app to collect and analyze data on tree cover, burned areas and other changes in the forest, said Arvind Khare of the Washington, D.C.-based advocacy group Rights and Resources Initiative (RRI).

“Even in areas that have made great progress in awarding rights, it is very hard to track the socio-ecological impact of the rights on the community,” said Khare, a senior director at RRI, which is testing the app in India.

“Recording the data and analyzing it can tell you which resources need better management, so that these are not used haphazardly, but in a manner that benefits them most,” he told the Thomson Reuters Foundation.

For example, community members can record data on forest products they use such as leaves, flowers, wood and sap, making it easier to ensure that they are not over-exploited, he said.

While indigenous and local communities own more than half the world’s land under customary rights, they have secure legal rights to only 10 percent, according to RRI.

Governments maintain legal and administrative authority over more than two-thirds of global forest area, leaving local communities with limited access.

In India, under the 2006 law, at least 150 million people could have their rights recognized to about 40 million hectares (154,400 sq miles) of forest land.

But rights to only 3 percent of land have been granted, with states largely rejecting community claims, campaigners say.

While the app is being tested in India, Khare said it can also be used in countries including Peru, Mali, Liberia and Indonesia, where RRI supports rural communities in scaling up forest rights claims.

Data can be entered offline on the app, and then uploaded to the server when the device is connected to the internet. Data is stored in the cloud and accessible to anyone, said Ashwini Chhatre, an associate professor at ISB.

“All this while local communities have been fighting simply for the right to live in the forest and use its resources. Now, they can use data to truly benefit from it,” he said.

Teen’s Program Could Improve Pancreatic Cancer Treatment

Pancreatic cancer treatment could become more advanced with help from 13-year-old Rishab Jain. He’s created a tool for doctors to locate the hard-to-find pancreas more quickly and precisely during cancer treatment. The teen recently won a prestigious young scientist award for his potentially game-changing idea. VOA’s Julie Taboh has more.

Q&A: Facebook Describes How It Detects ‘Inauthentic Behavior’

Facebook announced Friday that it had removed 82 Iranian-linked accounts on Facebook and Instagram. A Facebook spokesperson answered VOA’s questions about its process and efforts to detect what it calls “coordinated inauthentic behavior” by accounts pretending to be U.S. and U.K. citizens and aimed at U.S. and U.K. audiences.

Q: Facebook’s post says there were 7 “events hosted.” Any details about where, when, who?

A: Of seven events, the first was scheduled for February 2016, and the most recent was scheduled for June 2018. One hundred and ten people expressed interest in at least one of these events, and two events received no interest. We cannot confirm whether any of these events actually occurred. Some appear to have been planned to occur only online. The themes are similar to the rest of the activity we have described.

Q: Is there any indication this was an Iranian government-linked program?

A: We recently discussed the challenges involved with determining who is behind information operations. In this case, we have not been able to determine any links to the Iranian government, but we are continuing to investigate. Also, Atlantic Council’s Digital Forensic Research Lab has shared their take on the content in this case here.

​Q: How long was the time between discovering this and taking down the pages?

A: We first detected this activity one week ago. As soon as we detected this activity, the teams in our elections war room worked quickly to investigate and remove these bad actors. Given the elections, we took action as soon as we’d completed our initial investigation and shared the information with U.S. and U.K. government officials, U.S. law enforcement, Congress, other technology companies and the Atlantic Council’s Digital Forensic Research Lab.

Q: How have you improved the reporting processes in the past year to speed the ability to remove such content?

A: Just to clarify, today’s takedown was a result of our teams proactively discovering suspicious signals on a page that appeared to be run by Iranian users. From there, we investigated and found the set of pages, groups and accounts that we removed today.

To your broader question on how we’ve improved over the past two years: To ensure that we stay ahead, we’ve invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence we detect many fake accounts, the root cause of so many issues, before they are even created. We’re also working more closely with governments, law enforcement, security experts and other companies because no one organization can do this on its own.

Q: How many people do you have monitoring content in English now? In Persian?

A: We have over 7,500 content reviewers globally. We don’t provide breakdowns of the number of people working in specific languages or regions because that alone doesn’t reflect the number of people working to review content for a particular country or region at any particular time.

Q: How are you training people to spot this content? What’s the process?

A: To be clear, today’s takedown was the result of an internal investigation involving a combination of manual work by our teams of skilled investigators and data science teams using automated tools to look for larger patterns to identify potentially inauthentic behavior. In this case, we relied on both of these techniques working together.

On your separate question about training content reviewers, here is more on our content reviewers and how we support them.

Q: Does Facebook have any more information on how effective this messaging is at influencing behavior?

A: We aren’t in a position to know.

Google Abandons Berlin Campus Plan After Locals Protest

Google is abandoning plans to establish a campus for tech startups in Berlin after protests from residents worried about gentrification.

The internet giant confirmed reports Thursday it will sublet the former electrical substation in the capital’s Kreuzberg district to two charitable organizations, Betterplace.org and Karuna.

Google has more than a dozen so-called campuses around the world. They are intended as hubs to bring together potential employees, startups and investors.

Protesters had recently picketed the Umspannwerk site with placards such as “Google go home.”

Karuna, which helps disadvantaged children, said Google will pay 14 million euros ($16 million) toward renovation and maintenance for the coming five years.

Google said it will continue to work with startups in Berlin, which has become a magnet for tech companies in Germany in recent years.

Google Abandons Planned Berlin Office Hub

Campaigners in a bohemian district of Berlin celebrated Wednesday after internet giant Google abandoned strongly opposed plans to open a large campus there.

The US firm had planned to set up an incubator for start-up companies in Kreuzberg, one of the older districts in the west of the capital.

But the company’s German spokesman, Ralf Bremer, announced Wednesday that the 3,000-square-metre (3,590-square-yard) space, planned to host offices, cafes and communal work areas, would instead go to two local humanitarian associations.

Bremer did not say whether local resistance to the plans over the past two years had played a part in the change of heart, although he told the Berliner Zeitung daily that Google does not allow protests to dictate its actions.

“The struggle pays off,” tweeted “GloReiche Nachbarschaft”, one of the groups opposed to the Kreuzberg campus plan and part of the “F**k off Google” campaign.

Some campaigners objected to what they described as Google’s “evil” corporate practices, such as tax evasion and the unethical use of personal data.

Others opposed the gentrification of the district, saying it was pricing too many people out of the area.

A recent study carried out by the consultancy Knight Frank concluded that property prices are rising faster in Berlin than anywhere else in the world: they jumped 20.5 percent between 2016 and 2017.

In Kreuzberg over the same period, the rise was an astonishing 71 percent.

Kreuzberg, which bordered the Berlin Wall dividing East and West Berlin during the Cold War, has traditionally been a bastion of the city’s underground and radical culture.

Facebook Unveils Systems for Catching Child Nudity, ‘Grooming’ of Children

Facebook Inc said on Wednesday that company moderators during the last quarter removed 8.7 million user images of child nudity with the help of previously undisclosed software that automatically flags such photos.

The machine learning tool rolled out over the last year identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualized context.

A similar system also disclosed Wednesday catches users engaged in “grooming,” or befriending minors for sexual exploitation.

Facebook’s global head of safety Antigone Davis told Reuters in an interview that the “machine helps us prioritize” and “more efficiently queue” problematic content for the company’s trained team of reviewers.

The company is exploring applying the same technology to its Instagram app.

Under pressure from regulators and lawmakers, Facebook has vowed to speed up removal of extremist and illicit material.

Machine learning programs that sift through the billions of pieces of content users post each day are essential to its plan.

Machine learning is imperfect, and news agencies and advertisers are among those that have complained this year about Facebook’s automated systems wrongly blocking their posts.

Davis said the child safety systems would make mistakes but users could appeal.

“We’d rather err on the side of caution with children,” she said.

Facebook’s rules for years have banned even family photos of lightly clothed children uploaded with “good intentions,” concerned about how others might abuse such images.

Before the new software, Facebook relied on users or its adult nudity filters to catch child images. A separate system blocks child pornography that has previously been reported to authorities.

Facebook has not previously disclosed data on child nudity removals, though some would have been counted among the 21 million posts and comments it removed in the first quarter for sexual activity and adult nudity.

Facebook said the program, which learned from its collection of nude adult photos and clothed children photos, has led to more removals. It makes exceptions for art and history, such as the Pulitzer Prize-winning photo of a naked girl fleeing a Vietnam War napalm attack.

Protecting minors

The child grooming system evaluates factors such as how many people have blocked a particular user and whether that user quickly attempts to contact many children, Davis said.

Michelle DeLaune, chief operating officer at the National Center for Missing and Exploited Children (NCMEC), said the organization expects to receive about 16 million child porn tips worldwide this year from Facebook and other tech companies, up from 10 million last year.

With the increase, NCMEC said it is working with Facebook to develop software to decide which tips to assess first.

Still, DeLaune acknowledged that a crucial blind spot is encrypted chat apps and secretive “dark web” sites where much of new child pornography originates.

Encryption of messages on Facebook-owned WhatsApp, for example, prevents machine learning from analyzing them.

DeLaune said NCMEC would educate tech companies and “hope they use creativity” to address the issue.

Apple CEO Backs Privacy Laws, Warns Data Being ‘Weaponized’

The head of Apple on Wednesday endorsed tough privacy laws for both Europe and the U.S. and renewed the technology giant’s commitment to protecting personal data, which he warned was being “weaponized” against users.

Speaking at an international conference on data privacy, Apple CEO Tim Cook applauded European Union authorities for bringing in a strict new data privacy law this year and said the iPhone maker supports a U.S. federal privacy law.

Cook’s remarks in Brussels, the European Union’s home base, along with comments due later from the top bosses of Google and Facebook, underscore how the U.S. tech giants are jostling to curry favor in the region as regulators tighten their scrutiny.

Data protection has become a major political issue worldwide, and European regulators have led the charge in setting new rules for the big internet companies. The EU’s new General Data Protection Regulation, or GDPR, requires companies to change the way they do business in the region, and a number of headline-grabbing data breaches have raised public awareness of the issue.

“In many jurisdictions, regulators are asking tough questions. It is time for the rest of the world, including my home country, to follow your lead,” Cook said.

“We at Apple are in full support of a comprehensive federal privacy law in the United States,” he said, to applause from hundreds of privacy officials from more than 70 countries.

In the U.S., California is moving to put in regulations similar to the EU’s strict rules by 2020 and other states are mulling more aggressive laws. That’s rattled the big tech companies, which are pushing for a federal law that would treat them more leniently.

Cook warned that technology’s promise to drive breakthroughs that benefit humanity is at risk of being overshadowed by the harm it can cause by deepening division and spreading false information. He said the trade in personal information “has exploded into a data industrial complex.”

“Our own information, from the everyday to the deeply personal, is being weaponized against us with military efficiency,” he said. Scraps of personal data are collected for digital profiles that let businesses know users better than they know themselves and allow companies to offer users increasingly extreme content that hardens their convictions, Cook said.

“This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them,” he said.

Cook’s appearance seems set to one-up his tech rivals and show off his company’s credentials in data privacy, which has become a weak point for both Facebook and Google.

“With the spotlight shining as directly as it is, Apple have the opportunity to show that they are the leading player and they are taking up the mantle,” said Ben Robson, a lawyer at Oury Clark specializing in data privacy. Cook’s appearance “is going to have good currency,” with officials, he added.

Facebook CEO Mark Zuckerberg and Google head Sundar Pichai were scheduled to address by video the annual meeting of global data privacy chiefs. Only Cook attended in person.

He has repeatedly said privacy is a “fundamental human right” and vowed his company wouldn’t sell ads based on customer data the way companies like Facebook do.

His speech comes a week after the iPhone maker unveiled expanded privacy protection measures for people in the U.S., Canada, Australia and New Zealand, including allowing them to download all personal data held by Apple. European users already had access to this feature after GDPR took effect in May. Apple plans to expand it worldwide.

The International Conference of Data Protection and Privacy Commissioners, held in a different city every year, normally attracts little attention but its Brussels venue this year takes on symbolic meaning as EU officials ratchet up their tech regulation efforts.

The 28-nation EU took on global leadership of the issue when it beefed up data privacy regulations by launching GDPR. The new rules require companies to justify the collection and use of personal data gleaned from phones, apps and visited websites. They must also give EU users the ability to access and delete data, and to object to data use.

GDPR also allows for big fines benchmarked to revenue, which for big tech companies could amount to billions of dollars.

In the first big test of the new rules, Ireland’s data protection commission, which is a lead authority for Europe as many big tech firms are based in the country, is investigating Facebook after a data breach let hackers access 3 million EU accounts.

Google, meanwhile, shut down its Plus social network this month after revealing it had a flaw that could have exposed personal information of up to half a million people.

Hi-tech Cameras Spy Fugitive Emissions

The technology used in space missions can be expensive but it has some practical benefits here on Earth. Case in point: the thousands of high resolution images taken from the surface of Mars, collected by the two Mars rovers – Spirit and Opportunity. Now researchers at Carnegie Mellon University, in Pittsburgh, are using the same technology to analyze air pollution here on our planet. VOA’s George Putic reports.

US Tech Companies Reconsider Saudi Investment

The controversy over the death of Saudi Arabian journalist Jamal Khashoggi has shined a harsh light on the growing financial ties between Silicon Valley and the world’s largest oil exporter.

As Saudi Arabia’s annual investment forum in Riyadh — dubbed “Davos in the Desert” — continues, representatives from many of the kingdom’s highest-profile overseas tech investments are staying away, joining other international business leaders in shunning the conference amid lingering questions over what role the Saudi government played in the killing of a journalist inside its consulate in Turkey.

Tech leaders such as Steve Case, the co-founder of AOL, and Dara Khosrowshahi, the chief executive of Uber, declined to attend this week’s annual investment forum in Riyadh. Even the CEO of Softbank, which has received billions of dollars from Saudi Arabia to back technology companies, reportedly has canceled his planned speech at the event.

But the Saudi controversy is focusing more scrutiny on the ethics of taking money from an investor who is accused of wrongdoing or whose track record is questionable.

Fueling the tech race

In the tech startup world, Saudi investment has played a key role in allowing firms to delay going public for years while they pursue a high-growth strategy without worrying about profitability. Those ties have only grown with the ascendancy of Crown Prince Mohammed bin Salman, the son of the Saudi king.

The kingdom’s Public Investment Fund has put $3.5 billion into Uber and has a seat on Uber’s 12-member board. Saudi Arabia also has invested more than $1 billion into Lucid Motors, a California electric car startup, and $400 million in Magic Leap, an augmented reality startup based in Florida.

Almost half of the Japanese Softbank’s $93 billion Vision Fund came from the Saudi government. The Vision Fund has invested in a Who’s Who list of tech startups, including WeWork, Wag, DoorDash and Slack. 

Now there are reports that, as the cloud hangs over the crown prince, Softbank’s plan for a second Vision Fund may be on hold. And Saudi money might have trouble finding a home in the future in Silicon Valley, where companies are competing for talented workers as well as customers.

The tech industry is not alone in questioning its relationship with the Saudi government in the wake of Khashoggi’s death or appearing to rethink its Saudi investments. Museums, universities and other business sectors that have benefited financially from their connections to the Saudis also are taking a harder look at those relationships.

Who are my investors?

Saudi money plays a large role in Silicon Valley, touching everything from ride-hailing firms to business-messaging startups, but it is not the only foreign investment in the region.

More than 20 Silicon Valley venture companies have ties to Chinese government funding, according to Reuters, with the cash fueling tech startups. The Beijing-backed funds have raised concerns that strategically important technology, such as artificial intelligence, is being transferred to China.

And Kremlin money has backed a prominent Russian venture capitalist in the Valley who has invested in Twitter and Facebook.

The Saudi controversy has prompted some in the Valley to question their investors about where those investors are getting their funding. Fred Wilson, a prominent tech venture capitalist, received just such an inquiry.

“I expect to get more emails like this in the coming weeks as the start-up and venture community comes to grip with the flood of money from bad actors that has found its way into the start-up/tech sector over the last decade,” he wrote in a blog post titled “Who Are My Investors?”

“‘Bad actors’ doesn’t simply mean money from rulers in the gulf who turn out to be cold-blooded killers,” Wilson wrote. “It also means money from regions where dictators rule viciously and restrict freedom.”

This may be a defining ethical moment in Silicon Valley, as it moves away from its libertarian roots to seeing the world in its complexity, said Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics at Santa Clara University.

“Corporate leaders are moving more quickly and decisively than the administration, and they realize they have a couple of hats here — one, they are the chief strategist of their organization, and they also play the role of the responsible person who creates space for the right conversations to happen,” she said.

Tech’s evolving ethics

Responding to demands from their employees and customers, Silicon Valley firms are looking more seriously at business ethics and taking moral stands.

In the case of Google, it meant discontinuing a U.S. Defense Department contract involving artificial intelligence. In the case of WeWork, the firm now forbids the consumption of meat at the office or purchased with company expenses, on environmental grounds.

The Vision Fund will “undoubtedly find itself in a more challenging environment in convincing startups to take its money,” Amir Anvarzadeh, a senior strategist at Asymmetric Advisors in Singapore, recently told Bloomberg. 

Low-tech Tools Can Fight Land Corruption, Experts Say

Technological solutions to prevent land corruption require resources, but they do not have to be expensive, land rights experts said Tuesday.

Satellite imagery, cloud computing and blockchain are among technologies with the potential to help many of the world’s more than 1 billion people estimated to lack secure property rights. But they can be expensive and require experts to be trained.

That’s where low-tech solutions such as Cadastre Registry Inventory Without Paper (CRISP) can be useful, said Ketakandriana Rafitoson, executive director of global anti-corruption watchdog Transparency International (TI) in Madagascar.

CRISP helps local activists in Madagascar, one of the world’s poorest countries, document land ownership using tablets with fingerprint readers and built-in cameras, which cost $20 a day to rent.

Users can photograph ID cards, location agreements, landowners, their neighbors and any witnesses who were present during land demarcation, Rafitoson told the International Anti-Corruption Conference.

Lack of trust

One challenge in Madagascar is a lack of trust in politicians, Rafitoson said, meaning it is better if local charities are involved, too.

“If we just leave the land authorities with the community, it doesn’t work because they don’t trust each other,” she said.

Corruption in land management ranges from local officials demanding bribes for basic administrative duties to high-level political decisions being unduly influenced, according to TI.

The Dashboard, a tool developed by the International Land Coalition (ILC), is also putting local people at the center of monitoring land deals, said Eva Hershaw, a data specialist at the ILC, a global alliance of nonprofit organizations working on improving land governance.

The Dashboard is being tested in Colombia, Nepal and Senegal, where it allows ILC’s local partners to collect data based on 30 core indicators, including monitoring legal frameworks and how laws are implemented.

Next week, TI Zambia will launch a new phone-based platform, which can advise Zambians on various aspects of land acquisition and guide them through processes around it.

Rueben Lifuka, president of TI Zambia, said users can also report corruption through the platform, including requests for bribes. 

Those affected by corruption can decide whether a copy will be sent to the local authorities, and TI can then track the response.

An improvement in internet coverage in Zambia means it is becoming easier to develop technologies such as the platform, which cost about $34,000 to develop, Lifuka said.