New Treatments Give Hope to People With Brain Tumors

Republican Senator John McCain is perhaps the best-known person with brain cancer. His is a glioblastoma, the deadliest type. Since McCain announced the news last year, he has had surgery and chemotherapy. There’s no cure for this type of cancer, and even with treatment, most people don’t live longer than three years after being diagnosed.

Surgeons often can’t remove the entire tumor because it might affect brain functions, or it might be attached to the spinal column. These tumors often grow tentacles that make them impossible to cut out completely.

Untreated, people have just months to live. But even with treatment, the two-year survival rate is just 30 percent, according to the American Brain Tumor Association.

What’s hopeful is that some new treatments are showing promise.

A case in point is Lori Mines. This 40-year-old wife and mother was diagnosed with stage four brain cancer two years ago. She had a severe headache followed by a stroke. When doctors ordered a brain scan, they found two large brain tumors, one on either side of her brain. One of the tumors was attached to the spinal column so it couldn’t be completely removed. After surgery, Mines had radiation.

“I didn’t even want to know anything about it. I just basically wanted to focus on trying to get better,” she said.

Even noncancerous brain tumors can be deadly if they interfere with portions of the brain responsible for vital bodily functions. Treatment often includes surgery, chemotherapy, radiation or a combination of these.

Glioblastomas are the most common type of cancerous brain tumors, and the five-year relative survival rate is less than 6 percent. 

Mines says she’s realistic, although she hopes she can live longer. She says she will just keep fighting for herself, for her husband, and for her young daughter.

“I have persisted because there’s no other option,” she said.

Scientists at Duke Health found they can increase the survival rate for some patients by injecting a modified polio virus directly into the tumor. Other researchers are trying to get the body’s immune system to attack the tumors.

Dr. Arnab Chakravarti heads the Department of Radiation Oncology at The Ohio State University where he specializes in brain cancers. Chakravarti says medical researchers are examining novel clinical trials, targeted therapies and immunotherapies.

“There’s a lot of hope for this patient population,” he said.

Chakravarti led a study on the genetic makeup of gliomas, brain tumors that can be cancerous or benign. The researchers found they could more than double the life expectancy among patients who had a distinctive biomarker, a cell or a molecule that is present with a particular type of tumor. It helps doctors decide what treatment can work best to shrink the tumor.

“It’s very important to personalize care for the individual patient and that’s why biomarkers, prognostic and predictive biomarkers are so important,” Chakravarti said. The study was published in JAMA Oncology. 

Experts say testing genetic markers will become the standard for patients with malignant brain tumors. They are also looking at targeted drug therapies as part of individualized treatment. The hope is that getting a diagnosis of brain cancer will no longer be an imminent death sentence.

Russian Search Engine Alerts Google to Possible Data Problem

The Russian Internet company Yandex said Thursday that its public search engine has been turning up dozens of Google documents that appear to be meant for private use, suggesting there may have been a data breach.

Yandex spokesman Ilya Grabovsky said that some Internet users contacted the company Wednesday to say that its public search engine was yielding what looked like personal Google files.

Russian social media users started posting scores of such documents, including an internal memo from a Russian bank, press summaries and company business plans.


Grabovsky said Yandex has alerted Google to the concerns.


It was unclear whether the authors of the files had meant them to be publicly viewable, or how many such files there were. Google did not comment.


Grabovsky said that a Yandex search only yields files that don’t require logins or passwords. He added that the files were also turning up in other search engines.

AI Robot Sophia Wows at Ethiopia ICT Expo

Sophia, one of the world’s most advanced and perhaps most famous artificial intelligence (AI) humanoid robots, was a big hit at this year’s Information & Communication Technology International Expo in Addis Ababa, Ethiopia. Visitors, including various dignitaries, were excited to meet the lifelike AI robot as she communicated with expo guests and displayed a wide range of facial expressions. As VOA’s Mariama Diallo reports, Sophia has become an international sensation.

How Much Artificial Intelligence Surveillance Is Too Much?

When a CIA-backed venture capital fund took an interest in Rana el Kaliouby’s face-scanning technology for detecting emotions, the computer scientist and her colleagues did some soul-searching — and then turned down the money.

“We’re not interested in applications where you’re spying on people,” said el Kaliouby, the CEO and co-founder of the Boston startup Affectiva. The company has trained its artificial intelligence systems to recognize if individuals are happy or sad, tired or angry, using a photographic repository of more than 6 million faces.

Recent advances in AI-powered computer vision have accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. But as these prying AI “eyes” find new applications in store checkout lines, police body cameras and war zones, the tech companies developing them are struggling to balance business opportunities with difficult moral decisions that could turn off customers or their own workers.

El Kaliouby said it’s not hard to imagine using real-time face recognition to pick up on dishonesty — or, in the hands of an authoritarian regime, to monitor reaction to political speech in order to root out dissent. But the small firm, which spun off from a Massachusetts Institute of Technology research lab, has set limits on what it will do.

The company has shunned “any security, airport, even lie-detection stuff,” el Kaliouby said. Instead, Affectiva has partnered with automakers trying to help tired-looking drivers stay awake, and with consumer brands that want to know whether people respond to a product with joy or disgust. 

New qualms

Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always-watching AI camera systems — even as authorities are growing more eager to use them.

In the immediate aftermath of Thursday’s deadly shooting at a newspaper in Annapolis, Maryland, police said they turned to face recognition to identify the uncooperative suspect. They did so by tapping a state database that includes mug shots of past arrestees and, more controversially, everyone who registered for a Maryland driver’s license.

Initial reports said police had turned to facial recognition because the suspect had damaged his fingerprints in an apparent attempt to avoid identification. That turned out to be incorrect; police said they used facial recognition because of delays in getting fingerprint identification.

In June, Orlando International Airport announced plans to require face-identification scans of passengers on all arriving and departing international flights by the end of this year. Several other U.S. airports have already been using such scans for some departing international flights.

Chinese firms and municipalities are already using intelligent cameras to shame jaywalkers in real time and to surveil ethnic minorities, subjecting some to detention and political indoctrination. Closer to home, the overhead cameras and sensors in Amazon’s new cashier-less store in Seattle aim to make shoplifting obsolete by tracking every item shoppers pick up and put back down.

Concerns over the technology can shake even the largest tech firms. Google, for instance, recently said it will exit a defense contract after employees protested the military application of the company’s AI technology. The work involved computer analysis of drone video footage from Iraq and other conflict zones.

Google guidelines

Similar concerns about government contracts have stirred up internal discord at Amazon and Microsoft. Google has since published AI guidelines emphasizing uses that are “socially beneficial” and that avoid “unfair bias.”

Amazon, however, has so far deflected growing pressure from employees and privacy advocates to halt Rekognition, a powerful face-recognition tool it sells to police departments and other government agencies. 

Saying no to some work, of course, usually means someone else will do it. The drone-footage project involving Google, dubbed Project Maven, aimed to speed the job of looking for “patterns of life, things that are suspicious, indications of potential attacks,” said Robert Work, a former top Pentagon official who launched the project in 2017.

While it hurts to lose Google because they are “very, very good at it,” Work said, other companies will continue those efforts.

Commercial and government interest in computer vision has exploded since breakthroughs earlier in this decade using a brain-like “neural network” to recognize objects in images. Training computers to identify cats in YouTube videos was an early challenge in 2012. Now, Google has a smartphone app that can tell you which breed.

A major research meeting — the annual Conference on Computer Vision and Pattern Recognition, held in Salt Lake City in June — has transformed from a sleepy academic gathering of “nerdy people” to a gold rush business expo attracting big companies and government agencies, said Michael Brown, a computer scientist at Toronto’s York University and a conference organizer.

Brown said researchers have been offered high-paying jobs on the spot. But few of the thousands of technical papers submitted to the meeting address broader public concerns about privacy, bias or other ethical dilemmas. “We’re probably not having as much discussion as we should,” he said.

Not for police, government

Startups are forging their own paths. Brian Brackeen, the CEO of Miami-based facial recognition software company Kairos, has set a blanket policy against selling the technology to law enforcement or for government surveillance, arguing in a recent essay that it “opens the door for gross misconduct by the morally corrupt.”

Boston-based startup Neurala, by contrast, is building software for Motorola that will help police-worn body cameras find a person in a crowd based on what they’re wearing and what they look like. CEO Max Versace said that “AI is a mirror of the society,” so the company chooses only principled partners.

“We are not part of that totalitarian, Orwellian scheme,” he said.

India Demands Facebook Curb Spread of False Information on WhatsApp

India has asked Facebook to prevent the spread of false texts on its WhatsApp messaging application, saying the content has sparked a series of lynchings and mob beatings across the country.

False messages about child abductors spread over WhatsApp have reportedly led to at least 31 deaths in 10 different states over the past year, including a deadly mob lynching Sunday of five men in the western state of Maharashtra.

In a strongly worded statement Tuesday, India’s Ministry of Electronics and Information Technology said the service “cannot evade accountability and responsibility” when messaging platforms are used to spread misinformation.

“The government has also conveyed in no uncertain terms that Whatsapp must take immediate action to end this menace and ensure that their platform is not used for such mala fide activities,” the ministry added.

Facebook and WhatsApp did not immediately respond to requests for comment, but WhatsApp previously told the Reuters news agency it is educating users to identify fake news and is considering changes to the messaging service.

The ministry said law enforcement authorities are working to apprehend those responsible for the killings.

WhatsApp has more than 200 million users in India, the messaging site’s largest market in the world.

Portuguese Tech Firm Uncorks a Smartphone Made Using Cork

A Portuguese tech firm is uncorking an Android smartphone whose case is made from cork, a natural and renewable material native to the Iberian country.

The Ikimobile phone is one of the first to use materials other than plastic, metal and glass and represents a boost for the country’s technology sector, which has made strides in software development but fewer in hardware manufacturing.

A “Made in Portugal” version of the phone is set to launch this year as Ikimobile completes a plant to transfer most of its production from China.

“Ikimobile wants to put Portugal on the path to the future and technologies by emphasizing this Portuguese product,” chief executive Tito Cardoso told Reuters at Ikimobile’s plant in the cork-growing area of Coruche, 80 km (50 miles) west of Lisbon.

“We believe the product offers something different, something that people can feel good about using,” he said. Cork is harvested only every nine years without hurting the oak trees and is fully recyclable.

Portugal is the world’s largest cork producer and the phone also marks the latest effort to diversify its use beyond wine bottle stoppers.

Portuguese cork exports have recently returned to peaks last seen 15 years ago, as cork stoppers clawed back market share from plastic and metal closures. Portugal also exports other cork products such as flooring, clothing and wind turbine blades.

A layer of cork covers the phone’s back, providing thermal, acoustic and anti-shock insulation. The cork comes in colors ranging from black to light brown, has certified antibacterial properties and protects against battery radiation.

Cardoso said Ikimobile is working with north Portugal’s Minho University to make the phone even “greener” and hopes to replace the plastic body base with natural materials soon. The material, agglomerated using only natural resins, required years of research and testing for use in phones.

The plant should churn out 1.2 million phones a year — a drop in the ocean compared to last year’s worldwide smartphone market shipments of almost 1.5 billion.

Most cell phones are produced in Asia but local manufacture helps take advantage of the availability of cork and the “Made in Portugal” brand appeals to consumers in Europe, Angola, Brazil and Canada, Cardoso said.

In 2017, Ikimobile sold 400,000 phones assembled in China, including simple feature phones. It hopes to surpass that amount with local production this year. Top-of-the-line cork models, costing 160-360 euros ($187-$420), make up 40 percent of sales.

2001: A Space Odyssey, 50 Years Later

It was 50 years ago that the sci-fi epic 2001: A Space Odyssey, by author Arthur C. Clarke and filmmaker Stanley Kubrick, opened in theaters across America to mixed reviews. The almost three-hour-long film was too cerebral and slow-moving to be appreciated by general audiences in 1968. Today, half a century later, the movie is one of the American Film Institute’s top 100 films of all time. VOA’s Penelope Poulou explores Space Odyssey’s power and its relevance 50 years after its creation.

I Never Said That! High-tech Deception of ‘Deepfake’ Videos

Hey, did my congressman really say that? Is that really President Donald Trump on that video, or am I being duped?


New technology on the internet lets anyone make videos of real people appearing to say things they’ve never said. Republicans and Democrats predict this high-tech way of putting words in someone’s mouth will become the latest weapon in disinformation wars against the United States and other Western democracies.


We’re not talking about lip-syncing videos. This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it’s hard to spot the phonies. Lawmakers and intelligence officials worry that the bogus videos — called deepfakes — could be used to threaten national security or interfere in elections.


So far, that hasn’t happened, but experts say it’s not a question of if, but when.


“I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.”


When an average person can create a realistic fake video of the president saying anything they want, Farid said, “we have entered a new world where it is going to be difficult to know how to believe what we see.” The reverse is a concern, too: people may dismiss genuine footage, say of a real atrocity, as fake in order to score political points.


Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now, it takes extensive analysis to identify phony videos. It’s unclear if new ways to authenticate images or detect fakes will keep pace with deepfake technology.


Deepfakes are so named because they utilize deep learning, a form of artificial intelligence. They are made by feeding a computer algorithm, or set of instructions, lots of images and audio of a certain person. The program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. With enough video and audio of someone, you can combine a fake video of the person with fake audio and make them appear to say anything you want.


So far, deepfakes have mostly been used to smear celebrities or as gags, but it’s easy to foresee a nation state using them for nefarious activities against the U.S., said Sen. Marco Rubio, R-Fla., one of several members of the Senate intelligence committee who are expressing concern about deepfakes.


A foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe, Rubio says. They could use a fake video of a U.S. soldier massacring civilians overseas, or one of a U.S. official supposedly admitting a secret plan to carry out a conspiracy. Imagine a fake video of a U.S. leader — or an official from North Korea or Iran — warning the United States of an impending disaster.


“It’s a weapon that could be used — timed appropriately and placed appropriately — in the same way fake news is used, except in a video form, which could create real chaos and instability on the eve of an election or a major decision of any sort,” Rubio told The Associated Press.


Deepfake technology still has a few hitches. For instance, people’s blinking in fake videos may appear unnatural. But the technology is improving.


“Within a year or two, it’s going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.


“This technology, I think, will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions,” Grotto said. He called for government leaders and politicians to clearly say it has no place in civilized political debate.


Crude videos have been used for malicious political purposes for years, so there’s no reason to believe the higher-tech ones, which are more realistic, won’t become tools in future disinformation campaigns.


Rubio noted that in 2009, the U.S. Embassy in Moscow complained to the Russian Foreign Ministry about a fake sex video it said was made to damage the reputation of a U.S. diplomat. The video showed the married diplomat, who was a liaison to Russian religious and human rights groups, making telephone calls on a dark street. The video then showed the diplomat in his hotel room, scenes that apparently were shot with a hidden camera. Later, the video appeared to show a man and a woman having sex in the same room with the lights off, although it was not at all clear that the man was the diplomat.


John Beyrle, who was the U.S. ambassador in Moscow at the time, blamed the Russian government for the video, which he said was clearly fabricated.


Michael McFaul, who was American ambassador in Russia between 2012 and 2014, said Russia has engaged in disinformation videos against various political actors for years and that he too had been a target. He has said that Russian state propaganda inserted his face into photographs and “spliced my speeches to make me say things I never uttered and even accused me of pedophilia.”

‘Insect Vision’ Hunts Down Asteroids

June 30 marks Asteroid Day, a U.N.-sanctioned campaign to promote awareness around the world of what’s up in the sky. In Milan, scientists are assembling a new telescope that uses “insect vision” to spot risky celestial objects. Faith Lapidus explains.