Students Build Program That Sniffs Out Twitter ‘Bots’

For months, university students Ash Bhat and Rohan Phadte had been tracking about 1,500 political propaganda accounts on Twitter that appeared to have been generated by computers when they noticed something odd.

In the hours after the February school shooting in Parkland, Florida, the bots, short for robots, shifted into high gear, jumping into the debate about gun control.

The hashtag #guncontrol gained traction among the bot network. In fact, all of the top hashtags among the bots were about the Parkland shooting, Bhat and Phadte noticed.

Twitter under fire

Since the 2016 U.S. presidential election, technology companies have come under fire for how their services were used by foreign-backed operations to sow discord among Americans before and after the election.

Twitter, in particular, has been called out repeatedly for the sheer number of computerized accounts that tweet about controversial topics. The company itself has said 50,000 accounts on its service were linked to Russian propaganda efforts, and the company recently announced plans to curtail automated, computer-generated accounts.

On Monday, executives from Twitter are expected to be on Capitol Hill to brief the Senate Commerce Committee on how the service was manipulated in the wake of the Parkland shooting.

For Bhat and Phadte, students at the University of California, Berkeley, the growing public scrutiny of bots couldn’t come fast enough.

Figuring out Twitter fakes

Childhood friends from San Jose, Calif., the two work out of their shared apartment in Berkeley on ways to figure out what is real and fake on the internet and how to arm people with tools to tell the difference.

“Everyone’s realizing how big of a problem this is becoming,” Bhat, co-founder of RoBhat Labs, said. “And I think we’re also at a weird inflection point. It’s like the calm before the storm. We’re building up our defenses before the real effects of misinformation hit.”

One of their projects is Botcheck.me, a way for Twitter users to check whether an account on Twitter is real or fake. Users can download a Google Chrome extension, which puts a blue button next to every Twitter account, or they can run an account through the website botcheck.me.

Some of the characteristics of a fake Twitter persona? Hundreds of tweets over a 24-hour period is one. Another, mostly retweeting others. A third clue, thousands of followers even though the account may be relatively new.
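The article does not describe Botcheck.me’s actual model, but a minimal rule-of-thumb sketch of the signals mentioned above might look like the following. The thresholds, field names and two-of-three scoring are illustrative assumptions, not RoBhat Labs’ method.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    tweets_last_24h: int    # tweets posted in the past day
    retweet_ratio: float    # share of recent posts that are retweets (0.0-1.0)
    followers: int
    account_age_days: int

def looks_like_bot(stats: AccountStats) -> bool:
    """Flag an account that matches the rough signals described in the article.

    Thresholds here are illustrative guesses, not the tool's real scoring model.
    """
    signals = 0
    if stats.tweets_last_24h >= 100:       # hundreds of tweets in a 24-hour period
        signals += 1
    if stats.retweet_ratio >= 0.9:         # mostly retweeting others
        signals += 1
    if stats.followers >= 5000 and stats.account_age_days <= 90:
        signals += 1                       # thousands of followers on a young account
    return signals >= 2

# Example: a month-old account that retweets constantly and already has 8,000 followers
print(looks_like_bot(AccountStats(250, 0.95, 8000, 30)))  # True
```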

Polarizing the debate

The result is a digital robot army ready to jump into a national debate, they say.

“The conversation around gun control was a lot more polarizing in terms of for and against gun control, as opposed to seeing in the Parkland shooting other issues, such as mental illness,” Bhat said.

The two do not speculate about who may be behind the bots or what their motives may be. Their concern is to try to bring some authenticity back into online discussions.

“Instead of being aggravated and spending an hour tweeting and retweeting, or getting madder, you can find out it’s a bot and stop engaging,” Bhat said.

In recent months, the students say, many of the Twitter accounts they have been tracking have been suspended.

But as fast as Twitter can get rid of accounts, the students say, new ones are popping back up. And suspicious accounts are starting to look more like humans. They may tweet about the weather or cars for a while before switching to political content.

“You can sort of see these bots evolve,” Bhat said. “And the scary thing for us is that if we aren’t keeping up on their technological progress, it’s going to be impossible to tell the difference.”

Vero a Hot Instagram Alternative, but Will It Last?

Instagram users fed up with the service becoming more and more like Facebook are flocking to a hot new app called Vero.

Vero lets you share photos and video just like Instagram, plus it lets you talk about music, movies or books you like or hate. Though Vero has been around since 2015, its popularity surged in recent days, thanks in part to sudden, word-of-mouth interest from the cosplay community — comic book fans who like to dress up as characters. That interest then spread to other online groups.

There’s also growing frustration with Instagram over its flood of ads, dearth of privacy options and recent end to the chronological ordering of posts. Instagram users have been posting screenshots of Vero, asking their friends to join.

But don’t ring Instagram’s death knell just yet. Hot new apps pop up and fizzle by the dozen, so the odds are stacked against Vero. Remember Ello? Peach? Thought so.

“Young people are super fickle and nothing has caught on in the way that Snapchat or Instagram has,” said Debra Aho Williamson, an eMarketer analyst who specializes in social media.

From 2015 until this past week, Vero was little known, with fewer than 200,000 users, according to CEO Ayman Hariri. Then cosplay members started posting photos of elaborate costumes and makeup. Photographers, tattoo artists and others followed. As of Thursday, Vero was approaching 3 million users, Hariri said.

A fee, eventually

Vero has gotten so popular in recent days that some users have reported widespread outages and error messages. Vero says it’s working to keep up in response to “a large wave of new users.”

Vero works on Apple or Android mobile devices and is free, at least for now. The company eventually wants to charge a subscription fee.

There are no ads, and the service promises “no data mining. Ever.” That means it won’t try to sell you stuff based on your interests and habits, as revealed through your posts. Of course, Facebook started out without ads and “data mining,” and it’s now one of the top internet advertising companies. Facebook bought Instagram in 2012 and started showing ads there the following year.

Instagram’s privacy settings are all or nothing: You either make everything available to everyone on Instagram, or make everything visible only to approved friends. Vero lets you set the privacy level of individual posts. If you don’t want something available to all users, you can choose just close friends, friends or acquaintances.

Another big difference: Vero shows friends’ posts in chronological order rather than tailored to your perceived tastes, as determined by software. Instagram got rid of chronological presentation in 2016, a change that hasn’t gone over well with many users.

Founder was already wealthy

Facebook CEO Mark Zuckerberg became a billionaire after starting the service. Vero’s founder was already one.

Hariri is the son of former Lebanese Prime Minister Rafic Hariri and helped run the family’s now-defunct construction company in Saudi Arabia. He got a computer science degree from Georgetown and returned to Saudi Arabia after his father was assassinated in 2005. His half brother, Saad, is Lebanon’s current prime minister.

Hariri’s ties with the family business, Saudi Oger, have come into question. The company has been accused in recent years of failing to pay workers and stranding them with little food and access to medical care. Vero says Hariri hasn’t had any operational or financial involvement with the business since late 2013.

Hariri said he started the service not to replace Instagram but to give people “a more authentic social network.” Because Vero doesn’t sell ads, he said, it isn’t simply trying to get people to stay on longer. More important, he said, is “how you feel when you use [it] and how you feel it’s useful.”

Newcomers like Ello and Peach can quickly become popular as people fed up with bigger services itch for something new. But reality can set in when people realize that their friends are not on the new services or that these services aren’t all they promised to be.

Williamson, the eMarketer analyst, said it’s difficult for a new service to become something people use for more than a few weeks.

A rare exception is Snapchat, which launched in 2011, a year after Instagram. Unlike Instagram, it has remained an independent company and is still a popular service among younger people. But even Snapchat is having trouble growing more broadly.

EMarketer recently published a report predicting that 2 million people under 25 will leave Facebook for other apps this year. But that means going to Snapchat and the Facebook-owned Instagram, not necessarily to emerging services like Vero.

Another Flying Car Soon to Make Its Debut

Forget self-driving cars! Imagine a future filled with flying cars. The latest design comes from the Netherlands, where a company plans to officially unveil the newest combination of a gyrocopter and a sports car. VOA’s George Putic has more.

Facebook Ends Six-Country Test of Two Separate News Feeds

Facebook Inc on Thursday put an end to a test of splitting its signature News Feed into two, an idea that roiled how people consumed news in six countries where the test occurred and added to concern about Facebook’s power.

The test split posts into two separate streams. One focused on photos and other updates from friends and family; the second, called an “explore feed,” was dedicated to material from Facebook pages the user had liked, such as media outlets or sports teams.

The social media network decided to end the test and maintain one feed because people told the company in surveys they did not like the change, Adam Mosseri, head of the News Feed at Facebook, said in a statement.

“In surveys, people told us they were less satisfied with the posts they were seeing, and having two separate feeds didn’t actually help them connect more with friends and family,” Mosseri said.

The test began in October and took place in Bolivia, Cambodia, Guatemala, Serbia, Slovakia and Sri Lanka, and it quickly affected website traffic for smaller media outlets.

Mosseri said the company had also “received feedback that we made it harder for people in the test countries to access important information, and that we didn’t communicate the test clearly.”

He said Facebook would, in response, revise how it tests product changes, although he did not say how.

Chief Executive Mark Zuckerberg has unveiled other changes to the Facebook News Feed in the past two months to fight sensationalism and prioritize posts from friends and family.

The world’s largest social network and its competitors are under pressure from users and government authorities to make their services less addictive and to stem the spread of false news stories and hoaxes.

Reporting by David Ingram.

Equifax Finds Additional 2.4 Million Impacted by 2017 Breach

Equifax said Thursday that an additional 2.4 million Americans were impacted by last year’s data breach; however, these newly disclosed consumers had significantly less personal information stolen.

The company says the additional consumers had only their names and partial driver’s license numbers stolen by the attackers, unlike the original 145.5 million Americans, whose Social Security numbers were compromised. Attackers were unable to get the state where the license was issued, the date of issuance or its expiration date.

In total, roughly 147.9 million Americans have been impacted by Equifax’s data breach. It remains one of the largest breaches of personal information in history.

The company says it was able to find the additional 2.4 million Americans by cross-referencing names with partial driver’s license numbers using both internal and external data sources. These Americans were not identified in the original investigation because Equifax had focused on consumers whose Social Security numbers were affected. Individuals with stolen Social Security numbers are generally at greater risk for identity theft because of how widely Social Security numbers are used in identity verification.
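Equifax has not published its matching procedure. Purely as an illustration of what cross-referencing names against partial driver’s license numbers could look like, here is a sketch in which every column name and record is hypothetical:

```python
import pandas as pd

# Hypothetical, made-up records; Equifax has not described its actual data layout.
stolen = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "partial_license": ["X123", "Y987"],   # only a fragment was taken
})
reference = pd.DataFrame({
    "name": ["Jane Doe", "John Roe", "Ann Poe"],
    "license_number": ["X1234567", "Y9876543", "Z5554321"],
    "ssn_exposed_in_breach": [False, False, True],
})

# Match stolen fragments to known identities by name plus license prefix,
# then keep only people who were not already counted in the original breach.
merged = stolen.merge(reference, on="name")
prefix_match = merged.apply(
    lambda r: r["license_number"].startswith(r["partial_license"]), axis=1
)
newly_impacted = merged[prefix_match & ~merged["ssn_exposed_in_breach"]]
print(newly_impacted[["name", "license_number"]])
```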

Equifax Inc. says it will reach out to all newly impacted consumers and will provide the same credit monitoring and identity theft protection services it has been offering to the original victims.

Facebook Launches Job Search Feature for Low-Skilled Workers

Facebook wants to make it easier for people to find low-skilled jobs online.

After testing the new software in the U.S. and Canada since last year, Facebook added job postings Wednesday in another 40 countries across Europe and elsewhere.

The software works with both Apple and PC operating systems.

Users can find openings using the Jobs dashboard on Facebook’s web sidebar or its mobile app’s More section. The search can be filtered by location and industry, as well as by full-time or part-time work.

Users can automatically fill out applications with information from their Facebook profile, submit the applications and schedule interviews.

Businesses can post job openings using the Jobs tab on their page, and include advertisements.

Separately, Facebook announced the introduction of face recognition software that helps users quickly find photos they’re in but haven’t been tagged in. The new software will help users protect themselves against unauthorized use of their photos, as well as allow visually impaired users to learn who is in their photos and videos.

Moon to Get Its Own Mobile Network

Several high-tech companies are teaming up on a plan to put a mobile phone network on the moon next year.

Vodafone Germany, Nokia, and Audi are working on a mobile network and robotic vehicles that are part of a private expedition to the moon, timed to coincide with the 50th anniversary year of the first manned lunar landing.

The project with PTScientists in Germany would use a 4G network to send high-definition information from rovers back to a lunar lander, which would then be able to communicate it back to Earth. 

Project scientists say the system uses less energy than having rovers speak directly to Earth, leaving more power for scientific activities. 

They plan to launch the vehicles from Cape Canaveral next year on a SpaceX Falcon 9 rocket.

Facebook: No New Evidence Russia Interfered in Brexit Vote

Facebook Inc has told a British parliamentary committee that further investigations have found no new evidence that Russia used social media to interfere in the June 2016 referendum in which Britain voted to leave the European Union.

Facebook UK policy director Simon Milner, in a letter Wednesday, told the House of Commons Committee on Digital, Culture, Media and Sport that the latest investigation the company undertook in mid-January to try to “identify clusters of coordinated Russian activity around the Brexit referendum that were not identified previously” had been unproductive.

Using the same methodology that Facebook used to identify U.S. election-related social media activity conducted by a Russian propaganda outfit called the Internet Research Agency, Milner said the social network had reviewed both Facebook accounts and “the activity of many thousands of advertisers in the campaign period” leading up to the June 23, 2016 referendum.

He said they had “found no additional coordinated Russian-linked accounts or Pages delivering ads to the UK regarding the EU Referendum during the relevant period, beyond the minimal activity we previously disclosed.”

At a hearing on social media political activity that the parliamentary committee held in Washington earlier in February, Milner had promised the panel that the company would disclose more results of its latest investigation by the end of February.

At the same hearing, Juniper Downs, YouTube’s global head of public policy, said that her company had “conducted a thorough investigation around the Brexit referendum and found no evidence of Russian interference.”

In his letter to the committee, Facebook’s Milner acknowledged that the minimal results in the company’s Brexit review contrasted with the results of Facebook inquiries into alleged Russian interference in U.S. politics. The company’s U.S. investigation results, Milner said, “comport with the recent indictments” Justice Department special counsel Robert Mueller issued against Russian individuals and entities.

Following its Washington hearing, committee chairman Damian Collins MP said his committee expected to finish a report on its inquiry into Social Media and Fake News in late March and that the report was likely to include recommendations for new British laws or regulations regarding social media content.

These could include measures to clarify the companies’ legal liability for material they distribute and their obligations to address social problems the companies’ content could engender, he said.

ISS Astronauts Will Soon Get a Personal Assistant

Astronauts aboard the International Space Station will soon get a personal assistant, similar to Amazon’s Alexa and Apple’s Siri, but so smart that astronauts prefer to call it a “colleague.”

Its official name is CIMON, short for Crew Interactive Mobile Companion, and it will live partly in a five-kilogram ball built by Airbus. The ball has a video screen with rudimentary facial features, cameras with face recognition, microphones and speakers.

CIMON will move freely within the space station; however, its brain will be on Earth in IBM’s supercomputer, named Watson, loaded with a huge amount of scientific knowledge.

CIMON’s main human companion will be German astronaut Alexander Gerst, who will bring it aboard the ISS in June. The two are currently training together, as CIMON will have to be able to recognize Gerst’s voice and face, and also to navigate the complicated interior of the spacecraft.

For starters, Gerst and CIMON will cooperate on experiments with crystals and a complex medical experiment, and will also try to solve a Rubik’s Cube using only videos.

A larger experiment will be the interaction between human and artificial intelligence, especially in view of future deep-space missions.

CIMON’s developers would like to see whether an intelligent interactive assistant will help reduce astronauts’ stress during long flights and improve their efficiency.

Artificial Intelligence Poses Big Threat to Society, Warn Leading Scientists

Artificial intelligence is on the cusp of transforming our world in ways many of us can barely imagine. While there’s much excitement about emerging technologies, a new report by 26 of the world’s leading AI researchers warns of the potential dangers that could emerge over the coming decade as AI systems begin to surpass levels of human performance.

Automated hacking is identified as one of the most imminent applications of AI, especially so-called “phishing” attacks.

“That part used to take a lot of human effort – you had to study your target, make a profile of them, craft a particular message – that’s known as phishing. We are now getting to the point where we can train computers to do the same thing. So you can model someone’s topics of interest or preferences, their writing style, the writing style of a close friend, and have a machine automatically create a message that looks a lot like something they would click on,” says report co-author Shahar Avin of the Centre for the Study of Existential Risk at Britain’s University of Cambridge.

In an era of so-called “fake news,” the implications of AI for media and journalism are also profound.

Programmers from the University of Washington last year built an AI algorithm to create a video of Barack Obama, allowing them to program the “fake” former president to say anything they wished. It’s just the start, says Avin.

“You create videos and audio recordings that are pixel to pixel indistinguishable from real videos and real audio of people. We will need new technical measures. Maybe some kind of digital signatures, to be able to verify sources.”
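The report’s authors do not specify a particular scheme. As one hedged illustration of what such a digital signature could involve, a publisher might sign the raw bytes of a recording and let viewers or platforms check them against the publisher’s known public key; this sketch uses Ed25519 via the Python cryptography package purely as an example, not as anything proposed in the report.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A news outlet would hold the private key; viewers would know its public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the published recording..."
signature = private_key.sign(video_bytes)

# A viewer (or platform) checks that the recording matches what the outlet signed.
try:
    public_key.verify(signature, video_bytes)
    print("Recording matches the publisher's signature.")
except InvalidSignature:
    print("Recording was altered or did not come from this publisher.")
```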

There is much excitement over technology such as self-driving AI cars, with big tech companies alongside giant car makers vying to be the first to market. The systems, however, are only as secure as the environments in which they operate.

“You can have a car that is as good and better at navigating the world than your average driver. But you put some stickers on a ‘Stop’ sign and it thinks it’s ‘Go at 55 miles per hour.’ As long as we haven’t fixed that problem, we might have systems that are very safe, but are not secure. We could have a world filled with robotic systems that are very useful and very safe, but are also open to an attack by a malicious actor who knows what they are doing,” adds Avin.

The report warns that the proliferation of drones and other robotic systems could allow attackers “to deploy or re-purpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom.”

Avin says AI use in warfare is widely seen as one of the most disturbing possibilities, with so-called ‘killer robots’ and decision-making taken out of the hands of humans.

“You want to have an edge over your opponent by deploying lots and lots of sensors, lots and lots of small robotic systems, all of them giving you terabytes of information about what’s happening on the battlefield. And no human would be in a position to aggregate that information, so you would start having decision recommendation systems. At this point, do you still have meaningful human control?”

There is also the danger of AI being used in mass surveillance, especially by oppressive regimes.

The researchers stress the many positive applications of AI; however, they note that it is a dual-use technology, and assert that AI researchers and engineers should be proactive about the potential for its misuse.

The authors say AI itself will likely provide many of the solutions to the problems they identify.