Head of WhatsApp to Leave Company

The head of popular messaging service WhatsApp is planning to leave the company because of a reported disagreement over how parent company Facebook is using customers’ personal data. 

WhatsApp’s billionaire chief executive, Jan Koum, wrote in a Facebook post Monday: “It’s been almost a decade since (co-founder) Brian (Acton) and I started WhatsApp, and it’s been an amazing journey with some of the best people. But it is time for me to move on.”

Koum did not give a date for his departure.

The Washington Post reported Monday that Koum is stepping down because of disagreements over Facebook’s attempts to use the personal data of WhatsApp customers, as well as efforts to weaken the app’s encryption. 

Acton left the company last fall and since then has become a vocal critic of Facebook, recently endorsing a #DeleteFacebook social media campaign.

The Post, citing people familiar with internal WhatsApp discussions, said Koum was worn down by the differences in approach to privacy and security between WhatsApp and Facebook.

When WhatsApp agreed to its $19 billion sale to Facebook in 2014, it said the service would remain independent and would not share its data with Facebook.

However, 18 months later, Facebook pushed WhatsApp to change its terms of service to give the social network access to the personal data of WhatsApp users. 

WhatsApp is the largest messaging service in the world with 1.5 billion monthly users. However, Facebook has been struggling to find ways to make enough money from the app to prove its investment was worth the cost. 

Facebook has faced intense criticism since March when news broke that the personal data of millions of Facebook users had been harvested without their knowledge by Cambridge Analytica, a British voter profiling company that U.S. President Donald Trump’s campaign hired to target likely supporters in 2016.

Facebook chief executive Mark Zuckerberg testified before Congress earlier this month and apologized for inadequately protecting the data of millions of social media platform users. 

Facebook also recently announced it would allow all its users to shut off third-party access to their apps and said it would set up “firewalls” to ensure users’ data was not unwittingly transmitted by others in their social network.

Some members of Congress said Facebook’s actions to rectify the situation did not go far enough and have called for greater regulation of the internet and social media.

Paper Plane Protesters Urge Russia to Unblock Telegram App

Thousands of people marched through Moscow on Monday, throwing paper planes and calling on authorities to unblock the popular Telegram instant messaging app.

Protesters chanted slogans against President Vladimir Putin as they launched the planes – a reference to the app’s logo.

“Putin’s regime has declared war on the internet, has declared war on free society… so we have to be here in support of Telegram,” one protester told Reuters.

Russia began blocking Telegram on April 16 after the app refused to comply with a court order to grant state security services access to its users’ encrypted messages.

Russia’s Federal Security Service, the FSB, has said it needs access to some of those messages for its work, which includes guarding against militant attacks.

In the process of blocking the app, state watchdog Roskomnadzor also cut off access to a slew of other websites.

Telegram’s founder, Russian entrepreneur Pavel Durov, called for “digital resistance” in response to the decision and promised to fund anyone developing proxies and VPNs to dodge the block.

More than 12,000 people joined the march on Monday, said White Counter, a volunteer group that counts people at protests.

“Thousands of young and progressive people are currently protesting in Moscow in defense of internet freedom,” Telegram’s Durov wrote on his social media page.

“This is unprecedented. I am proud to have been born in the same country as you. Your energy changes the world,” Durov wrote.

Telegram has more than 200 million global users and is ranked as the world’s ninth most popular mobile messaging service.

Iran’s judiciary has also banned the app to protect national security, Iranian state TV reported on Monday.

State TV: Iran’s Judiciary Bans Using Telegram App

Iran’s judiciary has banned the popular Telegram instant messaging app to protect national security, Iran’s state TV reported Monday.

“Considering various complaints against Telegram social networking app by Iranian citizens, and based on the demand of security organizations for confronting the illegal activities of Telegram, the judiciary has banned its usage in Iran,” state TV reported.

The order was issued days after Iran banned government bodies from using Telegram, which is widely used by Iranian state media, politicians, companies and ordinary Iranians.

A widespread government internet filter prevents Iranians from accessing many sites on the official grounds that they are offensive or criminal.

But many Iranians evade the filter through use of VPN software, which provides encrypted links directly to private networks based abroad, and can allow a computer to behave as if it is based in another country.

“The blocking of Telegram app should be in a way to prevent users from accessing it with VPN or any other software,” the Fars news agency said. The app had more than 40 million users in Iran.
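To illustrate the mechanism described above, here is a minimal Python sketch, assuming the third-party requests and pysocks packages and a hypothetical SOCKS proxy address: a request routed through a proxy or VPN endpoint abroad appears to the receiving server to originate from that country, which is why censors also try to block the proxies themselves. It is a generic illustration, not a depiction of any specific tool used in Iran.

```python
# Minimal sketch: a request sent directly vs. through a (hypothetical) proxy abroad.
# Requires the third-party packages "requests" and "pysocks".
import requests

GEO_URL = "http://ip-api.com/json/"                 # example public IP-geolocation endpoint
PROXY = "socks5h://user:pass@203.0.113.10:1080"     # hypothetical proxy address abroad

def apparent_country(proxies=None):
    """Return the country the server believes the request came from."""
    resp = requests.get(GEO_URL, proxies=proxies, timeout=10)
    return resp.json().get("country")

if __name__ == "__main__":
    print("Direct:", apparent_country())
    # Routed through the proxy, the same request appears to originate abroad,
    # which is why filtering systems also try to block the proxy endpoints themselves.
    print("Via proxy:", apparent_country({"http": PROXY, "https": PROXY}))
```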

ISS to Get a New Commander and AI Assistant

On June 6, a few months short of its 20th birthday, the International Space Station, or ISS, is scheduled to receive its newest crew, including its new commander, German astronaut Alexander Gerst. While Gerst and the other members of his team undergo rigorous training at NASA’s Johnson Space Center in Houston, Airbus engineers are preparing the first personal AI assistant to fly to space. VOA’s George Putic reports.

China Rapidly Expanding its Technology Sector

If you want your technology sector to expand rapidly, it pays to have strong support from the government, easy access to bank loans and a large market hungry for your products. All this is available in China, where technology companies are expanding at a rapid pace, making other countries, including the U.S., a bit uneasy. VOA’s George Putic reports.

Genetics Help Spot Food Contamination

A new approach for detecting food poisoning is being used to investigate the recent outbreak of E. coli bacteria in romaine lettuce grown in the U.S. state of Arizona. The tainted produce has sickened at least 84 people in 19 states. The new method, used by the Centers for Disease Control and Prevention, relies on genetic sequencing. And as Faiza Elmasry tells us, it has the potential to revolutionize the detection of food poisoning outbreaks. VOA’s Faith Lapidus narrates.

EU Piles Pressure on Social Media Over Fake News

Tech giants such as Facebook and Google must step up efforts to tackle the spread of fake news online in the next few months or potentially face further EU regulation, as concerns mount over election interference.

The European Commission said on Thursday it would draw up a Code of Practice on Disinformation for the 28-nation EU by July with measures to prevent the spread of fake news such as increasing scrutiny of advertisement placements.

EU policymakers are particularly worried that the spread of fake news could interfere with European elections next year, after Facebook disclosed that Russia tried to influence U.S. voters through the social network in the run-up to the 2016 U.S. election. Moscow denies such claims.

“These [online] platforms have so far failed to act proportionately, falling short of the challenge posed by disinformation and the manipulative use of platforms’ infrastructure,” the Commission wrote in its strategy for tackling fake news published on Thursday.

“The Commission calls upon platforms to decisively step up their efforts to tackle online disinformation.”

Advertisers and online platforms should produce “measurable effects” on the code of practice by October, failing which the Commission could propose further actions, including regulation “targeted at a few platforms.”

Companies will have to work harder to close fake accounts, take steps to reduce revenues for purveyors of disinformation and limit targeting options for political adverts.

The Commission, the EU’s executive, will also support the creation of an independent European network of fact-checkers and launch an online platform on disinformation.

Tech industry association CCIA said the October deadline for progress appeared rushed.

“The tech industry takes the spread of disinformation online very seriously…when drafting the Code of Practice, it is important to recognize that there is no one-size-fits-all solution to address this issue given the diversity of affected services,” said Maud Sacquet, CCIA Europe Senior Policy Manager.

Weaponizing fake news

The revelations that political consultancy Cambridge Analytica – which worked on U.S. President Donald Trump’s campaign – improperly accessed the data of up to 87 million Facebook users have further rocked public trust in social media.

“There are serious doubts about whether platforms are sufficiently protecting their users against unauthorized use of their personal data by third parties, as exemplified by the recent Facebook/Cambridge Analytica revelations,” the Commission wrote.

Facebook has stepped up fact-checking in its fight against fake news and is trying to make it uneconomical for people to post such content by lowering its ranking and making it less visible. The world’s largest social network is also working on giving its users more context and background about the content they read on the platform.
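As a rough illustration of how demotion in a ranked feed can work, the toy Python sketch below scales down a relevance score once fact-checkers flag an item; the scores and the demotion factor are invented, and this is not Facebook’s actual ranking system.

```python
# Toy illustration of ranking demotion; all numbers are invented.
posts = [
    {"id": "a", "relevance": 0.90, "flagged_false": False},
    {"id": "b", "relevance": 0.95, "flagged_false": True},   # fact-checkers flagged it
    {"id": "c", "relevance": 0.60, "flagged_false": False},
]

DEMOTION_FACTOR = 0.2  # assumed: flagged stories keep only 20% of their score

def feed_score(post):
    score = post["relevance"]
    if post["flagged_false"]:
        score *= DEMOTION_FACTOR  # lower ranking -> less visibility -> less ad revenue for the poster
    return score

feed = sorted(posts, key=feed_score, reverse=True)
print([p["id"] for p in feed])  # ['a', 'c', 'b'] -- the flagged post sinks to the bottom
```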

“The weaponization of online fake news and disinformation poses a serious security threat to our societies,” said Julian King, EU Commissioner for security. “The subversion of trusted channels to peddle pernicious and divisive content requires a clear-eyed response based on increased transparency, traceability and accountability.”

Campaign group European Digital Rights warned that the Commission ought not to rush into taking binding measures over fake news which could have an effect on the freedom of speech.

King rejected any suggestion that the proposal would lead to censorship or a crackdown on satire or partisan news.

“It’s a million miles away from censorship,” King told a news conference. “It’s not targeting partisan journalism, freedom of speech, freedom to disagree, freedom to be, in some cases, a bit disagreeable.”

Commission Vice-President Andrus Ansip said there had been some debate internally over whether to explicitly mention Russia in the fake news strategy.

“Some people say that we don’t want to name just one name. And other people say that ‘add some other countries also and then we will put them all on our list’, but unfortunately nobody is able to name those others,” the former Estonian prime minister said.

Facebook’s Rise in Profits, Users Shows Resilience 

Facebook Inc. shares rose Wednesday after the social network reported a surprisingly strong 63 percent rise in profit and an increase in users, with no sign that business was hurt by a scandal over the mishandling of personal data.

After easily beating Wall Street expectations, shares traded up 7.1 percent after the bell at $171, paring a month-long decline that began with Facebook’s disclosure in March that consultancy Cambridge Analytica had harvested data belonging to millions of users.

The Cambridge Analytica scandal, affecting up to 87 million users and prompting several apologies from Chief Executive Mark Zuckerberg, generated calls for regulation and for users to leave the social network, but there was no indication advertisers immediately changed their spending.

“Everybody keeps talking about how bad things are for Facebook, but this earnings report to me is very positive, and reiterates that Facebook is fine, and they’ll get through this,” said Daniel Morgan, senior portfolio manager at Synovus Trust Company. His firm holds about 73,000 shares in Facebook.

Facebook’s quarterly profit beat analysts’ estimates, as a 49 percent jump in quarterly revenue outpaced a 39 percent rise in expenses from a year earlier. The mobile ad business grew on a push to add more video content.

Facebook said monthly active users in the first quarter rose to 2.2 billion, up 13 percent from a year earlier and matching expectations, according to Thomson Reuters.

The company reversed last quarter’s decline in the number of daily active users in the United States and Canada, saying it had 185 million users there, up from 184 million in the fourth quarter.

Resilient business model

The results are a bright spot for the world’s largest social network amid months of negative headlines about the company’s handling of personal information, its role in elections and its fueling of violence in developing countries.

Facebook, which generates revenue primarily by selling advertising personalized to its users, has demonstrated for several quarters how resilient its business model can be as long as users keep coming back to scroll through its News Feed and watch its videos.

It is spending to ensure users are not scared away by scandals. Chief Financial Officer David Wehner told analysts on a call that expenses this year would grow between 50 percent and 60 percent, up from a prior range of 45 percent to 60 percent.

Spending on security

Much of Facebook’s ramp-up in spending is for safety and security, Wehner said. The category includes efforts to root out fake accounts, scrub hate speech and take down violent videos.

Facebook said it ended the first quarter with 27,742 employees, up 48 percent from a year earlier.

“So long as profits continue to grow at a rapid rate, investors will accept that higher spending to ensure privacy is warranted,” Wedbush Securities analyst Michael Pachter said.

It has been nearly two years since Facebook shares rose 7 percent or more during a trading day. They rose 7.2 percent on April 28, 2016, the day after another first-quarter earnings report.

Net income attributable to Facebook shareholders rose in the first quarter to $4.99 billion, or $1.69 per share, from $3.06 billion, or $1.04 per share, a year earlier.

Analysts on average were expecting a profit of $1.35 per share, according to Thomson Reuters.

Total revenue was $11.97 billion, above the analyst estimate of $11.41 billion.

Some details secret

The company declined to provide some details sought by analysts. It has not shared the revenue generated by Instagram, the photo-sharing app it owns, and it declined to provide details about time spent on Facebook. Facebook also owns the popular smartphone apps Messenger and WhatsApp.

Tighter regulation could make Facebook’s ads less lucrative by reducing the kinds of data it can use to personalize and target ads to users, although Facebook’s size means it could also be well positioned to cope with regulations.

Facebook and Alphabet Inc’s Google together dominate the internet ad business worldwide. Facebook is expected to take 18 percent of global digital ad revenue this year, compared with Google’s 31 percent, according to research firm eMarketer.

The company said it was increasing the amount of money authorized to repurchase shares by an additional $9 billion. It had initially authorized repurchases up to $6 billion.

YouTube Overhauls Kids’ App

YouTube is overhauling its kid-focused video app to give parents the option of letting humans, not computer algorithms, select what shows their children can watch.

The updates that begin rolling out April 26, 2018, are a response to complaints that the YouTube Kids app has repeatedly failed to filter out disturbing content.

Google-owned YouTube launched the toddler-oriented app in 2015. It has described it as a “safer” experience than the regular YouTube video-sharing service for finding “Peppa Pig” episodes or watching user-generated videos of people unboxing toys, teaching guitar lessons or experimenting with science.

Failure of screening system

In order to meet U.S. child privacy rules, Google says it bans kids under 13 from using its core video service. But its official terms of agreement are largely ignored by tens of millions of children and their families who don’t bother downloading the under-13 app.

Both the grown-up video service and the YouTube Kids app have been criticized by child advocates for their commercialism and for the failures of a screening system that relies on artificial intelligence. The app is engineered to automatically exclude content that’s not appropriate for kids, and recommend videos based on what children have watched before. That hasn’t always worked to parents’ liking — especially when videos with profanity, violence or sexual themes slip through the filters. 
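As a rough illustration of the gap described above, the Python sketch below contrasts an automated keyword filter, which can only reject what its rules anticipate, with a human-curated allowlist that admits only approved titles. The video titles and keyword list are invented, and this is not YouTube’s actual filtering code.

```python
# Illustrative sketch only: automated filtering vs. human curation.

# An automated filter can only reject what its rules anticipate.
BLOCKED_KEYWORDS = {"violence", "profanity", "gore"}  # toy rule set

def automated_filter(title: str) -> bool:
    """Allow a video unless a blocked keyword appears in its title."""
    return not any(word in title.lower() for word in BLOCKED_KEYWORDS)

# A curated allowlist admits only content a human reviewer has approved.
HUMAN_APPROVED = {"Sesame Street: Counting to 10", "PBS Kids: Shapes Song"}

def curated_filter(title: str) -> bool:
    return title in HUMAN_APPROVED

uploads = [
    "Sesame Street: Counting to 10",
    "Cartoon characters in a disturbing parody",  # nothing from the keyword list appears
]

for title in uploads:
    print(title, "| automated:", automated_filter(title),
          "| curated:", curated_filter(title))
# The parody slips past the automated filter but not the curated allowlist --
# the trade-off the app update now lets parents decide on.
```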

Updates give parents option

The updates allow parents to switch off the automated system and choose a contained selection of children’s programming such as “Sesame Street” and PBS Kids. But the automated system remains the default.

“For parents who like the current version of YouTube Kids and want a wider selection of content, it’s still available,” said James Beser, the app’s product director, in a blog post Wednesday. “While no system is perfect, we continue to fine-tune, rigorously test and improve our filters for this more-open version of our app.”

Beser also encouraged parents to block videos and flag them for review if they don’t think they should be on the app. But the practice of addressing problem videos after children have already been exposed to them has bothered child advocates who want the more controlled option to be the default. 

Cleaner, safer kids’ app

“Anything that gives parents the ability to select programming that has been vetted in some fashion by people is an improvement, but I also think not every parent is going to do this,” said Josh Golin, director of the Boston-based Campaign for a Commercial-Free Childhood. “Giving parents more control doesn’t absolve YouTube of the responsibility of keeping the bad content out of YouTube Kids.”

He said Google should aim to build an even cleaner and safer kids’ app, then pull all the kid-oriented content off the regular YouTube — where most kids are going — and onto that app. 

Golin’s group recently asked the Federal Trade Commission to investigate whether YouTube’s data collection and advertising practices violate federal child privacy rules. He said advocates plan to meet with FTC officials next week.

Will Robot Baristas Replace Traditional Cafes?

There has been a long tradition of making and drinking coffee across cultures and continents. Now, a tech company in Austin is adding to this tradition by creating robot baristas to make the coffee-drinking experience more convenient. For a price similar to that of a cup of Starbucks designer coffee, a robot can now make it, too. VOA’s Elizabeth Lee finds out whether robots will replace traditional baristas.

Flying Taxi Start-Up Hires Designer Behind Modern Mini, Fiat 500

Lilium, a German start-up with Silicon Valley-scale ambitions to put electric “flying taxis” in the air next decade, has hired Frank Stephenson, the designer behind iconic car brands including the modern Mini, Fiat 500 and McLaren P1.

Lilium is developing a lightweight aircraft powered by 36 electric jet engines mounted on its wings. It aims to travel at speeds of up to 300 kilometers (186 miles) per hour, with a range of 300 km on a single charge, the firm has said.

Founded in 2015 by four Munich Technical University students, the Bavarian firm has set out plans to demonstrate a fully functional vertical take-off electric jet by next year, with plans to begin online booking of commuter flights by 2025.

It is one of a number of companies, from Chinese automaker Geely to U.S. ride-sharing firm Uber, looking to tap advances in drone technology, high-performance materials and automated driving to turn aerial driving – long a staple of science fiction movies like “Blade Runner” – into reality.

Stephenson, 58, who holds American and British citizenship, will join the aviation start-up in May. He lives west of London and will commute weekly to Lilium’s offices outside of Munich.

His job is to design a plane on the outside and a car inside.

Famous for a string of hits at BMW, Mini, Ferrari, Maserati, Fiat, Alfa Romeo and McLaren, Stephenson will lead all aspects of Lilium design, including the interior and exterior of its jets, the service’s landing pads and even its departure lounges.

“With Lilium, we don’t have to base the jet on anything that has been done before,” Stephenson told Reuters in an interview.

“What’s so incredibly exciting about this is we’re not talking about modifying a car to take to the skies, and we are not talking about modifying a helicopter to work in a better way.”

Stephenson recalled working at Ferrari a dozen years ago and thinking it was the greatest job a grown-up kid could ever want.

But the limits of working at such a storied carmaker dawned on him: “I always had to make a car that looked like a Ferrari.”

His move to McLaren, where he worked from 2008 until 2017, freed him to design a new look and design language from scratch: “That was as good as it gets for a designer,” he said.

Lilium is developing a five-seat flying electric vehicle for commuters after tests in 2017 of a two-seat jet capable of a mid-air transition from hover mode, like drones, into wing-borne flight, like conventional aircraft.

Combining these two features is what separates Lilium from rival start-ups working on so-called flying cars or taxis that rely on drone or helicopter-like technologies, such as German rival Volocopter or European aerospace giant Airbus.

“If the competitors come out there with their hovercraft or drones or whatever type of vehicles, they’ll have their own distinctive look,” Stephenson said.

“Let the other guys do whatever they want. The last thing I want to do is anything that has been done before.”

The jet, with power consumption per kilometer comparable to an electric car, could offer passenger flights at prices taxis now charge but at speeds five times faster, Lilium has said.
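A back-of-envelope Python sketch of those figures, assuming an electric-car-like consumption of roughly 0.2 kWh per kilometer and an average taxi speed of 60 km/h (neither figure comes from Lilium), illustrates the time advantage the company is claiming:

```python
# Back-of-envelope sketch using the figures quoted above.
# The 0.2 kWh/km consumption and 60 km/h taxi speed are assumptions for illustration.
CRUISE_SPEED_KMH = 300       # stated top speed
RANGE_KM = 300               # stated range per charge
ASSUMED_KWH_PER_KM = 0.2     # roughly what a compact electric car uses
TYPICAL_TAXI_SPEED_KMH = 60  # assumed average taxi speed

trip_km = 50
flight_minutes = trip_km / CRUISE_SPEED_KMH * 60
taxi_minutes = trip_km / TYPICAL_TAXI_SPEED_KMH * 60
energy_kwh = trip_km * ASSUMED_KWH_PER_KM

print(f"{trip_km} km trip: ~{flight_minutes:.0f} min by jet vs ~{taxi_minutes:.0f} min by taxi")
print(f"Estimated energy: ~{energy_kwh:.0f} kWh, about {trip_km / RANGE_KM:.0%} of one charge")
```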

Nonetheless, flying cars face many hurdles, including convincing regulators and the public that their products can be used safely. Governments are still grappling with regulations for drones and driverless cars.

Lilium has raised more than $101 million in early-stage funding from backers including an arm of China’s Tencent, as well as Atomico and Obvious Ventures, the venture firms of the co-founders of Skype and Twitter, respectively.

 

Facebook Rules at a Glance: What’s Banned, Exactly?

Facebook has revealed for the first time just what, exactly, is banned on its service in a new Community Standards document released on Tuesday. It’s an updated version of the internal rules the company has used to determine what’s allowed and what isn’t, down to granular details such as what, exactly, counts as a “credible threat” of violence. The previous public-facing version gave a broad-strokes outline of the rules, but the specifics were shrouded in secrecy for most of Facebook’s 2.2 billion users.

Not anymore. Here are just some examples of what the rules ban. Note: Facebook has not changed the actual rules – it has just made them public.

Credible violence

Is there a real-world threat? Facebook looks for “credible statements of intent to commit violence against any person, groups of people, or place (city or smaller).” Is there a bounty or demand for payment? The mention or an image of a specific weapon? A target and at least two details such as location, method or timing? Or a statement of intent to commit violence against a vulnerable person or group such as “heads-of-state, witnesses and confidential informants, activists, and journalists”?

Also banned: instructions “on how to make or use weapons if the goal is to injure or kill people,” unless there is “clear context that the content is for an alternative purpose (for example, shared as part of recreational self-defense activities, training by a country’s military, commercial video games, or news coverage).”
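As a loose illustration of how a checklist like this could be expressed in code, the Python sketch below paraphrases the published criteria above; it is a simplification for readers, not Facebook’s actual enforcement system, and the example post is invented.

```python
# Simplified sketch of a "credible threat" checklist, paraphrasing the criteria above.
# Illustration of the published rules only -- not Facebook's implementation.
from dataclasses import dataclass

@dataclass
class Post:
    states_intent_to_harm: bool      # explicit statement of intent to commit violence
    offers_bounty: bool              # bounty or demand for payment
    mentions_specific_weapon: bool   # mention or image of a specific weapon
    names_target: bool               # identifiable person, group of people, or place
    details: set                     # e.g. {"location", "method", "timing"}
    targets_vulnerable_group: bool   # e.g. heads of state, witnesses, journalists

def is_credible_threat(post: Post) -> bool:
    if not post.states_intent_to_harm:
        return False
    if post.offers_bounty or post.mentions_specific_weapon:
        return True
    if post.names_target and len(post.details) >= 2:
        return True
    return post.targets_vulnerable_group

example = Post(True, False, False, True, {"location", "timing"}, False)
print(is_credible_threat(example))  # True: named target plus two supporting details
```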

Hate speech

“We define hate speech as a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status,” Facebook says. As to what counts as a direct attack, the company says it’s any “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.” There are three tiers of severity, ranging from comparing a protected group to filth or disease to calls to “exclude or segregate” a person or group based on the protected characteristics. Facebook does note that it does “allow criticism of immigration policies and arguments for restricting those policies.”

Graphic violence

Images of violence against “real people or animals” are prohibited when accompanied by comments or captions that express enjoyment of suffering or humiliation, speak positively of the violence, or indicate “the poster is sharing footage for sensational viewing pleasure.” Captions and context matter here because Facebook does allow such images in some cases where the violence is condemned, or where they are shared as news or in a medical setting. Even then, though, the posts must be limited so only adults can see them, and Facebook adds a warning screen.

Child sexual exploitation

“We do not allow content that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images,” Facebook says. Then, it lists at least 12 specific instances of children in a sexual context, saying the ban includes, but is not limited to these examples. This includes “uncovered female nipples for children older than toddler-age.”

Adult nudity and sexual activity

“We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring,” Facebook says. That said, the company says it “defaults” to removing sexual imagery to prevent the sharing of non-consensual or underage content. The restrictions apply to images of real people as well as digitally created content, although art – such as drawings, paintings or sculptures – is an exception.