Cross Continent Solar Car Race Sets Grueling Pace

Every two years, Australia holds the World Solar Challenge, a grueling 3,000-kilometer race across the Australian outback in cars powered only by the sun. Everyone from high school engineers to corporate-sponsored giants is free to compete, and with each race the cars go farther and faster than before. VOA’s Kevin Enochs reports.

Facebook Gets Real About Broadening Virtual Reality’s Appeal

Facebook CEO Mark Zuckerberg seems to be realizing a sobering reality about virtual reality: His company’s Oculus headsets that send people into artificial worlds are too expensive and confining to appeal to the masses.

Zuckerberg on Wednesday revealed how Facebook intends to address that problem, unveiling a stand-alone headset that won’t require plugging in a smartphone or a cord tethering it to a personal computer like the Oculus Rift headset does.

“I am more committed than ever to the future of virtual reality,” Zuckerberg reassured a crowd of computer programmers in San Jose, California, for Oculus’ annual conference.

Facebook’s new headset, called Oculus Go, will cost $199 when it hits the market next year. That’s a big drop from the Rift, which originally sold for $599 and required a PC costing at least $500 to become immersed in virtual reality, or VR.

Recent discounts lowered the Rift’s price to $399 at various times during the summer, a markdown Oculus now says will be permanent.

“The strategy for Facebook is to make the onboarding to VR as easy and inexpensive as possible,” said Gartner analyst Brian Blau. “And $199 is an inexpensive entry for a lot of people who are just starting out in VR. The problem is you will be spending that money on a device that only does VR and nothing else.”

Facebook didn’t provide many details on how the Oculus Go will work, but said it will include built-in headphones for audio and an LCD display.

Other headsets

The Oculus Go will straddle the market between the Rift and the Samsung Gear, a $129 headset that runs on some of Samsung’s higher-priced phones. It will be able to run the same VR software as the Samsung Gear, leading Blau to conclude that the Go will rely on the same Android operating system as the Gear and likely include processors similar to those in Samsung phones.

The Gear competes against other headsets, such as Google’s $99 Daydream View, that require a smartphone. Google is also working on a stand-alone headset that won’t require a phone, but hasn’t specified when that device will be released or how much it will cost.

Zuckerberg promised the Oculus Go will be “the most accessible VR experience ever,” and help realize his new goal of having 1 billion people dwelling in virtual reality at some point in the future.

Facebook and other major technology companies such as Google and Microsoft that are betting on VR have a long way to go.

About 16 million head-mounted display devices were shipped in 2016, a number expected to rise to 22 million this year, according to the research firm Gartner Inc. Those figures include headsets for what is known as augmented reality.

Zuckerberg, though, remains convinced that VR will evolve into a technology that reshapes the way people interact and experience life, much like smartphones and social networks already have. His visions carry weight, largely because Facebook now has more than 2 billion users and plays an influential role in how people communicate.

But VR so far has been embraced mostly by video game lovers, despite Facebook’s efforts to bring the technology into the mainstream since buying Oculus for $2 billion three years ago.

Facebook has shaken up Oculus’ management team since then in a series of moves that included the departure of founder Palmer Luckey earlier this year.

Former Google executive Hugo Barra now oversees Facebook’s VR operations.

California Moves Toward Public Access for Self-driving Cars

California regulators took an important step Wednesday to clear the road for everyday people to get self-driving cars.

The state’s Department of Motor Vehicles published proposed rules that would govern the technology within California, where for several years manufacturers have been testing hundreds of prototypes on roads.

That testing requires a trained safety driver behind the wheel, just in case the onboard computers and sensors fail. Though companies are not ready to unleash the technology for regular drivers — most say it remains a few years away — the state expects to have a final regulatory framework in place by June.

That framework would let companies begin testing prototypes with neither steering wheels nor pedals — and indeed nobody at all inside. The public is unlikely to get that advanced version of the technology until several years after the deployment of cars that look and feel more like traditional, human-controlled vehicles.

Consumers probably won’t be able to walk into a dealership and buy a fully driverless vehicle next year. Major automakers like Mercedes, BMW, Ford, Nissan and Volvo have all said it will be closer to 2020 before those vehicles are available, and even then, they could be confined to ride-hailing fleets and other shared applications.

Tesla Inc. says the cars it’s making now have the hardware they need for full self-driving. The company is still testing the software and won’t make it available to owners without regulatory approval.

Still, Wednesday’s announcement puts California on the verge of finalizing rules for public access, which were due more than two years ago. The delay reflects both the developing nature of the technology as well as how the federal government — which is responsible for regulating the safety of the vehicles — has struggled to write its own rules.

Legislation intended to clear away federal regulations that could impede a new era of self-driving cars has moved quickly through Congress. The House has passed a bill that would permit automakers to seek exemptions to safety regulations, such as to make cars without a steering wheel, so they could sell hundreds of thousands of self-driving cars. A Senate committee approved a similar measure last week by a voice vote.

California’s proposed rules must still undergo a 15-day public comment period, which could result in further changes, and then a protracted review by other state attorneys. Department of Motor Vehicles attorney Brian Soublet told reporters that the rules should be final by June, if not sooner.

Half of US, Japan Teens ‘Addicted’ to Smartphones

About half of teenagers in the United States and Japan say they are addicted to their smartphones.

University of Southern California (USC) researchers asked 1,200 Japanese parents and teenagers about their use of electronic devices. The researchers are with the USC Annenberg School for Communication and Journalism. Their findings were compared with an earlier study on digital media use among families in North America.

“Advances in digital media and mobile devices are changing the way we engage not only with the world around us, but also with the people who are the closest to us,” said Willow Bay, head of the Annenberg School.

The USC report finds that 50 percent of American teenagers and 45 percent of Japanese teens feel addicted to their mobile phones.

“This is a really big deal,” said James Steyer, founder of Common Sense Media, an organization that helped with the study. “Just think about it, 10 years ago we didn’t even have smart phones.”

Sixty-one percent of Japanese parents believe their children are addicted to the devices. That compares to 59 percent of the American parents who were asked.

Also, more than 1-in-3 Japanese parents feel they have grown dependent on electronic devices, compared to about 1-in-4 American parents.

Leaving your phone at home is ‘one of the worst things’

“Nowadays, one of the worst things that can happen to us is, like, ‘Oh, I left my phone at home,’” said Alissa Caldwell, a student at the American School in Tokyo. She spoke at the USC Global Conference 2017, which was held in Tokyo.

A majority of Japanese and American parents said their teenagers used mobile devices too much. But only 17 percent of Japanese teens agreed with that assessment. In the United States, 52 percent of teens said they are spending too much time on mobile devices.

Many respond immediately to messages

About 7-in-10 American teens said they felt a need to react quickly to mobile messages, compared to about half of Japanese teens.

In Japan, 38 percent of parents and 48 percent of teens look at and use their devices at least once an hour. In the United States, 69 percent of parents and 78 percent of teens say they use their devices every hour.

Naturally, that hourly usage stops when people are sleeping, the researchers said.

The devices are a greater cause of conflict among teens and parents in the United States than in Japan. One-in-3 U.S. families reported having an argument every day about mobile device use. Only about 1-in-6 Japanese families say they fight every day over mobile devices.

Care more about devices than your children?

But 20 percent of Japanese teens said they sometimes feel that their parents think their mobile device is more important than they are. The percentage of U.S. teens saying they feel this way is 6 percent.

In the United States, 15 percent of parents say their teens’ use of mobile devices worsens the family’s personal relationships. Eleven percent of teens feel their parents’ use of mobile devices is not good for their relationship.

The USC research was based on an April 2017 study of 600 Japanese parents and 600 Japanese teenagers. Opinions from American parents and teenagers were collected in a study done earlier by Common Sense Media.

Bay, the Annenberg School dean, said the research raises critical questions about the effect of digital devices on family life.

She said the cultural effects may differ from country to country, but “this is clearly a global issue.”

Facebook’s Zuckerberg Apologizes for Virtual Tour of Devastated Puerto Rico

Mark Zuckerberg has apologized for showcasing Facebook’s virtual reality capability with a tour of hurricane-ravaged Puerto Rico.

The Facebook founder and another executive discussed the platform’s virtual reality project through avatars in a video recorded live Monday.

The video begins with the avatars pictured on the roof of Facebook’s Mountain View, California, headquarters before heading to Puerto Rico by using a 360-degree video recorded by National Public Radio as a backdrop.

Zuckerberg later responded to critics, writing that his goal of showing “how VR can raise awareness and help us see what’s happening in different parts of the world” wasn’t clear. He says he’s sorry to anyone who was offended.

Facebook is also working to restore internet connectivity on the island and has donated money to the relief effort.

US Researchers Genetically Modify Corn to Boost Nutritional Value

U.S. researchers said this week they have discovered a way to genetically engineer corn, the world’s largest commodity crop, to produce a type of amino acid found in meat.

The result is a nutritionally rich food that could benefit millions worldwide, while also reducing the cost of animal feed. The breakthrough was reported in the Proceedings of the National Academy of Sciences, a peer-reviewed journal.

Researchers say the process involves infusing corn with a certain type of bacteria in order to produce methionine, an amino acid generally found in meat.

“We improved the nutritional value of corn, the largest commodity crop grown on Earth,” Thomas Leustek, professor in the Department of Plant Biology at Rutgers University and co-author of the study, told VOA. “Most corn is used for animal feed, but it lacks methionine — a key amino acid — and we found an effective way to add it.”

The new method works by adding a gene from E. coli bacteria into the genome of the corn plant, which then triggers methionine production in the plant’s leaves. According to the study, methionine in the corn kernels then increases by about 57 percent.

The scientists fed the genetically modified corn to chickens at Rutgers University in order to show it was nutritious for them, co-author Joachim Messing said.

Normally, chicken feed is prepared as a corn-soybean mixture, the authors said in a press release, but the mixture lacks methionine.

“Methionine is added because animals won’t grow without it. In many developing countries where corn is a staple, methionine is also important for people, especially children. It’s vital nutrition, like a vitamin,” Messing said.

If the genetically modified corn can be successfully deployed, those who live in developing countries “wouldn’t have to purchase methionine supplements or expensive foods that have higher methionine,” Leustek said.

Victor Beattie contributed to this report.

Turning Waste into Fuel and Food

While mountains of waste and trash keep growing in industrialized and developing countries, materials scientists are busy as ever experimenting with new methods for turning those scraps into something useful, from biofuel to food. VOA’s George Putic looks at some of the latest discoveries.

Fake News Still Here, Despite Efforts by Google, Facebook

Nearly a year after Facebook and Google launched offensives against fake news, they’re still inadvertently promoting it — often at the worst possible times.

Online services designed to engross users aren’t so easily retooled to promote greater accuracy, it turns out, especially with online trolls, pranksters and more malicious types scheming to evade new controls as they’re rolled out.

Fear and falsity in Las Vegas

In the immediate aftermath of the Las Vegas shooting, Facebook’s “Crisis Response” page for the attack featured a false article misidentifying the gunman and claiming he was a “far left loon.” Google promoted a similarly erroneous item from the anonymous prankster site 4chan in its “Top Stories” results.

A day after the attack, a YouTube search on “Las Vegas shooting” yielded, as its fifth result, a conspiracy-theory video claiming multiple shooters were involved in the attack. YouTube is owned by Google.

None of these stories were true. Police identified the sole shooter as Stephen Paddock, a Nevada man whose motive remains a mystery. The Oct. 1 attack on a music festival left 58 dead and hundreds wounded.

The companies quickly purged offending links and tweaked their algorithms to favor more authoritative sources. But their work is clearly incomplete — a different Las Vegas conspiracy video was the eighth result displayed by YouTube in a search Monday.

Engagement first

Why do these highly automated services keep failing to separate truth from fiction? One big factor: most online services tend to emphasize posts that engage an audience — exactly what a lot of fake news is specifically designed to do.

Facebook and Google get caught off guard “because their algorithms just look for signs of popularity and recency at first,” without first checking to ensure relevance, says David Carroll, a professor of media design at the Parsons School of Design in New York.
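The dynamic Carroll describes can be illustrated with a toy ranking function. This is purely a sketch, not any platform’s actual algorithm; the posts, engagement counts and scoring formula are invented for the example. When a score is built only from popularity and recency, a fresh, heavily shared fabrication outranks an older, verified report:

```python
from datetime import datetime, timedelta

def score(post, now):
    """Toy feed-ranking score: raw engagement discounted by age in hours.
    Illustrative only; real ranking systems weigh many more signals."""
    age_hours = (now - post["posted"]).total_seconds() / 3600
    return post["engagements"] / (1 + age_hours)

now = datetime(2017, 10, 2, 12, 0)
posts = [
    {"title": "Verified report from police briefing",
     "engagements": 800, "posted": now - timedelta(hours=6)},
    {"title": "Viral rumor misidentifying the gunman",
     "engagements": 5000, "posted": now - timedelta(hours=1)},
]

ranked = sorted(posts, key=lambda p: score(p, now), reverse=True)
print(ranked[0]["title"])  # the newer, more-engaged rumor ranks first
```

Nothing in the score checks whether a post is accurate, which is exactly why popularity-driven feeds struggle during breaking news.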

That problem is much bigger in the wake of disaster, when facts are still unclear and demand for information runs high.

Malicious actors have learned to take advantage of this, says Mandy Jenkins, head of news at social media and news research agency Storyful. “They know how the sites work, they know how algorithms work, they know how the media works,” she says.

Participants on 4chan’s “Politically Incorrect” channel regularly chat about “how to deploy fake news strategies” around major stories, says Dan Leibson, vice president of search at the digital marketing consultancy Local SEO Guide.

One such chat, just hours after the Las Vegas attack, urged readers to “push the fact this terrorist was a commie” on social media. “There were people discussing how to create engagement all night,” Leibson says.

Eye of the beholder

Thanks to political polarization, the very notion of what constitutes a “credible” source of news is now a point of contention.

Mainstream journalists routinely make judgments about the credibility of various publications based on their history of accuracy. That’s a much more complicated issue for mass-market services like Facebook and Google, given the popularity of many inaccurate sources among political partisans.

The pro-Trump Gateway Pundit site, for example, published the false Las Vegas story promoted by Facebook. But it has also been invited to White House press briefings and counts more than 620,000 fans on its Facebook page.

Facebook said last week it is “working to fix the issue” that led it to promote false reports about the Las Vegas shooting, although it didn’t say what it had in mind.

The company has already taken a number of steps since December; it now features fact-checks by outside organizations, puts warning labels on disputed stories and has de-emphasized false stories in people’s news feeds.

Getting algorithms right

Breaking news is also inherently challenging for automated filter systems. Google says the 4chan post that misidentified the Las Vegas shooter should not have appeared in its “Top Stories” feature, and was replaced by its algorithm after a few hours.

Outside experts say Google was flummoxed by two different issues. First, its “Top Stories” is designed to return results from the broader web alongside items from news outlets. Second, signals that help Google’s system evaluate the credibility of a web page — for instance, links from known authoritative sources — aren’t available in breaking news situations, says independent search optimization consultant Matthew Brown.

“If you have enough citations or references to something, algorithmically that’s going to look very important to Google,” Brown said. “The problem is an easy one to define but a tough one to resolve.”
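Brown’s point can be sketched with a minimal link-counting example. The page names and link data below are hypothetical, and counting inbound links is a drastic simplification of real link-analysis ranking, but it shows the failure mode: in a breaking story, a rumor echoed by many hastily created pages can look more “important” than a single authoritative source.

```python
# Toy link-based authority score: a page's importance is simply how many
# other pages link to it. Hypothetical data for illustration only.
links = {
    "rumor-blog": ["forum-a", "forum-b", "mirror-1", "mirror-2", "mirror-3"],
    "police-statement": ["local-news"],
}

# Count inbound links per page.
authority = {page: len(inbound) for page, inbound in links.items()}

top = max(authority, key=authority.get)
print(top)  # "rumor-blog" wins on citations, not on accuracy
```

The signal being counted is attention, not truth, and in the first hours after an event the authoritative page has had no time to accumulate links.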

More people, fewer robots

Federal law currently exempts Facebook, Google and similar companies from liability for material published by their users. But circumstances are forcing the tech companies to accept more responsibility for the information they spread.

Facebook said last week that it would hire an extra 1,000 people to help vet ads after it found a Russian agency bought ads meant to influence last year’s election. It’s also subjecting potentially sensitive ads, including political messages, to “human review.”

In July, Google revamped guidelines for human workers who help rate search results in order to limit misleading and offensive material. Earlier this year, Google also allowed users to flag so-called “featured snippets” and “autocomplete” suggestions if they found the content harmful.

The Google-sponsored Trust Project at Santa Clara University is also working to create tags that could serve as markers of credibility for individual authors. These would include items such as their location and journalism awards, information that could be fed into future algorithms, according to project director Sally Lehrman.

US House Committee Calls New Hearing on Kaspersky Software

A U.S. House of Representatives committee said Friday that it had scheduled a new hearing on Kaspersky Lab software as lawmakers review accusations that the Kremlin could use its products to conduct espionage.

Kaspersky Lab has strongly denied those allegations — which last month prompted the Trump administration to order civilian government agencies to purge the software from their networks — and agreed to send Chief Executive Eugene Kaspersky to Washington to testify before Congress.

The House Committee on Science, Space and Technology announced the October 25 hearing a day after reports that Russian government-backed hackers stole highly classified U.S. cybersecrets in 2015 from a National Security Agency contractor who had Kaspersky software installed on his laptop.

The House science committee did not say who would be called to testify at the hearing.

Eugene Kaspersky last month told Reuters that the committee had invited him to testify at a September 27 hearing and that he would attend if he could get an expedited visa to enter the United States.

Classified session

That hearing was later canceled, though the committee held a closed-door classified session on Kaspersky software on September 26.

Kaspersky said in a statement on Friday that he hoped to attend the hearing.

“I look forward to participating in the hearing once it’s rescheduled and having the opportunity to address the committee’s concerns directly,” he said.

An appearance before Congress would mark Kaspersky’s most high-profile attempt to dispel long-standing accusations that his firm may be conducting espionage on behalf of the Russian government.

The investigation into the 2015 NSA hack is focused on somebody who worked at the agency’s Tailored Access Operations unit, which uses computer hacking to gather intelligence, according to two people familiar with the classified probe.

Kaspersky anti-virus software was running on the contractor’s laptop at the time of the hack, and investigators are looking into whether hackers used the software to breach the computer and steal the data, said one of those sources.

Women in Tech Talk Change in Orlando

In Orlando, Florida, where tourists come for the palm trees, shopping and theme parks, 18,000 women converged recently on the city’s giant convention center to talk about technology.

Amid technical sessions on artificial intelligence and augmented reality, the main theme of the Grace Hopper Celebration, the largest gathering of women in technology worldwide, was simple: How to make the tech industry more welcoming to women.

With women making up nearly 23 percent of the U.S. tech industry’s workforce, they should be playing a bigger role in the industry than they currently do, said Melinda Gates, co-founder of the Bill & Melinda Gates Foundation.

“It’s time the world recognizes that the next Bill Gates may not look anything like the last one and that not every great idea comes wrapped in a hoodie,” said Melinda Gates, who worked at Microsoft earlier in her career.

This isn’t your typical technology conference.

First, its namesake, Grace Hopper, was a rear admiral in the U.S. Navy and a groundbreaking computer programmer.

The conference also provided childcare and all-gender bathrooms. At some of the career booths, women were offered lip balm embossed with a corporate name. At one booth, they were invited to vamp it up while promoting a new cloud computing service.

Chinyere Nwabugwu, a machine learning researcher at IBM Research in San Jose, California, said what she liked most was hearing about what successful women have done to get ahead.

“I’m just encouraged to work hard in my field, to be known for something, to put in my best, to be a good role model to others, mentor other people coming after me,” Nwabugwu said.

Town hall conference

Voice of America held a town hall at the conference where female leaders in technology talked about the progress that has been made and how far it has yet to go. There are concrete steps companies can take that will bring more women into the industry, the speakers said.

One simple thing companies can do is publicly announce job openings, rather than fill jobs from managers’ personal connections, said Danielle Brown, chief diversity and inclusion officer at Google.

Paula Tolliver, chief information officer at Intel, recently left one male-dominated industry — she was an executive at Dow Chemical — for the tech industry. But she said she was drawn by tech’s promise.

“Being CIO of Intel, and being at the middle of the ecosystem of Silicon Valley and working across many industries, it’s exciting,” Tolliver said. “And I, personally, want more women to be more representative of that.”

Deborah Berebichez, a data scientist and co-host of the Discovery Channel’s Outrageous Acts of Science, said that she pursued science despite the lack of support from her parents.

Gatherings such as the Grace Hopper Celebration are addressing two important problems in the tech industry, Berebichez said: how to interest more women in tech, and how to help women already in tech advance their careers.

Gender diversity issues

Both issues came to the forefront in August after a memo written by a male engineer at Google questioned the need for gender diversity programs in the industry.

In a 10-page internal memo that was leaked on social media, James Damore suggested fewer women are employed in the technology field because women “prefer jobs in social and artistic areas” due to “biological causes.”

Brown, who joined Google two weeks prior to the notorious memo, said that it upset both men and women at the company and didn’t reflect Google’s values. Damore was fired.

Berebichez’s message to women?  

“You’re the only one that can make your future,” Berebichez said. “Nobody else will do it for you so seek mentors, do whatever you have to do, study like crazy, be very entrepreneurial and craft your path, because you will be the only one that gets the fruits of your own labor.”