Orthopedic surgery can be tricky. So, researchers at Northwestern University are engineering new materials to make 3D printed bones that are a perfect match, ready for grafts. Faith Lapidus reports.
…
Category: eNews
Digital and technology news.
AI Robot Sophia Wows at Ethiopia ICT Expo
Sophia, one of the world’s most advanced and perhaps most famous artificial intelligence (AI) humanoid robots, was a big hit at this year’s Information & Communication Technology International Expo in Addis Ababa, Ethiopia. Visitors, including various dignitaries, were excited to meet the life-like AI robot as she communicated with expo guests and displayed a wide range of facial expressions. As VOA’s Mariama Diallo reports, Sophia has become an international sensation.
…
How Much Artificial Intelligence Surveillance Is Too Much?
When a CIA-backed venture capital fund took an interest in Rana el Kaliouby’s face-scanning technology for detecting emotions, the computer scientist and her colleagues did some soul-searching — and then turned down the money.
“We’re not interested in applications where you’re spying on people,” said el Kaliouby, the CEO and co-founder of the Boston startup Affectiva. The company has trained its artificial intelligence systems to recognize if individuals are happy or sad, tired or angry, using a photographic repository of more than 6 million faces.
Recent advances in AI-powered computer vision have accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. But as these prying AI “eyes” find new applications in store checkout lines, police body cameras and war zones, the tech companies developing them are struggling to balance business opportunities with difficult moral decisions that could turn off customers or their own workers.
El Kaliouby said it’s not hard to imagine using real-time face recognition to pick up on dishonesty — or, in the hands of an authoritarian regime, to monitor reaction to political speech in order to root out dissent. But the small firm, which spun off from a Massachusetts Institute of Technology research lab, has set limits on what it will do.
The company has shunned “any security, airport, even lie-detection stuff,” el Kaliouby said. Instead, Affectiva has partnered with automakers trying to help tired-looking drivers stay awake, and with consumer brands that want to know whether people respond to a product with joy or disgust.
New qualms
Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always-watching AI camera systems — even as authorities are growing more eager to use them.
In the immediate aftermath of Thursday’s deadly shooting at a newspaper in Annapolis, Maryland, police said they turned to face recognition to identify the uncooperative suspect. They did so by tapping a state database that includes mug shots of past arrestees and, more controversially, everyone who registered for a Maryland driver’s license.
Initial reports said that police had turned to facial recognition because the suspect had damaged his fingerprints in an apparent attempt to avoid identification. Those reports turned out to be incorrect, and police said they used facial recognition because of delays in getting fingerprint identification.
In June, Orlando International Airport announced plans to require face-identification scans of passengers on all arriving and departing international flights by the end of this year. Several other U.S. airports have already been using such scans for some departing international flights.
Chinese firms and municipalities are already using intelligent cameras to shame jaywalkers in real time and to surveil ethnic minorities, subjecting some to detention and political indoctrination. Closer to home, the overhead cameras and sensors in Amazon’s new cashier-less store in Seattle aim to make shoplifting obsolete by tracking every item shoppers pick up and put back down.
Concerns over the technology can shake even the largest tech firms. Google, for instance, recently said it will exit a defense contract after employees protested the military application of the company’s AI technology. The work involved computer analysis of drone video footage from Iraq and other conflict zones.
Google guidelines
Similar concerns about government contracts have stirred up internal discord at Amazon and Microsoft. Google has since published AI guidelines emphasizing uses that are “socially beneficial” and that avoid “unfair bias.”
Amazon, however, has so far deflected growing pressure from employees and privacy advocates to halt Rekognition, a powerful face-recognition tool it sells to police departments and other government agencies.
Saying no to some work, of course, usually means someone else will do it. The drone-footage project involving Google, dubbed Project Maven, aimed to speed the job of looking for “patterns of life, things that are suspicious, indications of potential attacks,” said Robert Work, a former top Pentagon official who launched the project in 2017.
While it hurts to lose Google because they are “very, very good at it,” Work said, other companies will continue those efforts.
Commercial and government interest in computer vision has exploded since breakthroughs earlier in this decade using a brain-like “neural network” to recognize objects in images. Training computers to identify cats in YouTube videos was an early challenge in 2012. Now, Google has a smartphone app that can tell you which breed.
A major research meeting — the annual Conference on Computer Vision and Pattern Recognition, held in Salt Lake City in June — has transformed from a sleepy academic gathering of “nerdy people” to a gold rush business expo attracting big companies and government agencies, said Michael Brown, a computer scientist at Toronto’s York University and a conference organizer.
Brown said researchers have been offered high-paying jobs on the spot. But few of the thousands of technical papers submitted to the meeting address broader public concerns about privacy, bias or other ethical dilemmas. “We’re probably not having as much discussion as we should,” he said.
Not for police, government
Startups are forging their own paths. Brian Brackeen, the CEO of Miami-based facial recognition software company Kairos, has set a blanket policy against selling the technology to law enforcement or for government surveillance, arguing in a recent essay that it “opens the door for gross misconduct by the morally corrupt.”
Boston-based startup Neurala, by contrast, is building software for Motorola that will help police-worn body cameras find a person in a crowd based on what they’re wearing and what they look like. CEO Max Versace said that “AI is a mirror of the society,” so the company chooses only principled partners.
“We are not part of that totalitarian, Orwellian scheme,” he said.
…
India Demands Facebook Curb Spread of False Information on WhatsApp
India has asked Facebook to prevent the spread of false texts on its WhatsApp messaging application, saying the content has sparked a series of lynchings and mob beatings across the country.
False messages about child abductors spread over WhatsApp have reportedly led to at least 31 deaths in 10 different states over the past year, including a deadly mob lynching Sunday of five men in the western state of Maharashtra.
In a strongly worded statement Tuesday, India’s Ministry of Electronics and Information Technology said the service “cannot evade accountability and responsibility” when messaging platforms are used to spread misinformation.
“The government has also conveyed in no uncertain terms that Whatsapp must take immediate action to end this menace and ensure that their platform is not used for such mala fide activities,” the ministry added.
Facebook and WhatsApp did not immediately respond to requests for comment, but WhatsApp previously told the Reuters news agency it is educating users to identify fake news and is considering changes to the messaging service.
The ministry said law enforcement authorities are working to apprehend those responsible for the killings.
WhatsApp has more than 200 million users in India, the messaging site’s largest market in the world.
…
Portuguese Tech Firm Uncorks a Smartphone Made Using Cork
A Portuguese tech firm is uncorking an Android smartphone whose case is made from cork, a natural and renewable material native to the Iberian country.
The Ikimobile phone is one of the first to use materials other than plastic, metal and glass and represents a boost for the country’s technology sector, which has made strides in software development but less in hardware manufacturing.
A “Made in Portugal” version of the phone is set to launch this year as Ikimobile completes a plant to transfer most of its production from China.
“Ikimobile wants to put Portugal on the path to the future and technologies by emphasizing this Portuguese product,” chief executive Tito Cardoso told Reuters at Ikimobile’s plant in the cork-growing area of Coruche, 80 km (50 miles) northeast of Lisbon.
“We believe the product offers something different, something that people can feel good about using,” he said. Cork is harvested only every nine years without hurting the oak trees and is fully recyclable.
Portugal is the world’s largest cork producer and the phone also marks the latest effort to diversify its use beyond wine bottle stoppers.
Portuguese cork exports have lately regained their peaks of 15 years ago as cork stoppers clawed back market share from plastic and metal. Portugal also exports other cork products such as flooring, clothing and wind turbine blades.
A layer of cork covers the phone’s back, providing thermal, acoustic and anti-shock insulation. The cork comes in colors ranging from black to light brown, has certified antibacterial properties and protects against battery radiation.
Cardoso said Ikimobile is working with north Portugal’s Minho University to make the phone even “greener” and hopes to replace a plastic body base with natural materials soon. The material, agglomerated using only natural resins, required years of research and testing for use in phones.
The plant should churn out 1.2 million phones a year — a drop in the ocean compared to last year’s worldwide smartphone market shipments of almost 1.5 billion.
Most cell phones are produced in Asia but local manufacture helps take advantage of the availability of cork and the “Made in Portugal” brand appeals to consumers in Europe, Angola, Brazil and Canada, Cardoso said.
In 2017, Ikimobile sold 400,000 phones assembled in China, including simple feature phones. It hopes to surpass that amount with local production this year. Top-of-the-line cork models, costing 160-360 euros ($187-$420), make up 40 percent of sales.
…
2001: A Space Odyssey, 50 Years Later
It was 50 years ago that the sci-fi epic 2001: A Space Odyssey, by author Arthur C. Clarke and filmmaker Stanley Kubrick, opened in theaters across America to mixed reviews. The almost three-hour-long film was too cerebral and slow-moving to be appreciated by general audiences in 1968. Today, half a century later, the movie is one of the American Film Institute’s top 100 films of all time. VOA’s Penelope Poulou explores Space Odyssey’s power and its relevance 50 years since its creation.
…
I Never Said That! High-tech Deception of ‘Deepfake’ Videos
Hey, did my congressman really say that? Is that really President Donald Trump on that video, or am I being duped?
New technology on the internet lets anyone make videos of real people appearing to say things they’ve never said. Republicans and Democrats predict this high-tech way of putting words in someone’s mouth will become the latest weapon in disinformation wars against the United States and other Western democracies.
We’re not talking about lip-syncing videos. This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it’s hard to spot the phonies. Lawmakers and intelligence officials worry that the bogus videos — called deepfakes — could be used to threaten national security or interfere in elections.
So far, that hasn’t happened, but experts say it’s not a question of if, but when.
“I expect that here in the United States we will start to see this content in the upcoming midterms and national election two years from now,” said Hany Farid, a digital forensics expert at Dartmouth College in Hanover, New Hampshire. “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.”
When an average person can create a realistic fake video of the president saying anything they want, Farid said, “we have entered a new world where it is going to be difficult to know how to believe what we see.” The reverse is a concern, too: people may dismiss genuine footage, say of a real atrocity, as fake in order to score political points.
Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. Right now, it takes extensive analysis to identify phony videos. It’s unclear if new ways to authenticate images or detect fakes will keep pace with deepfake technology.
Deepfakes are so named because they utilize deep learning, a form of artificial intelligence. They are made by feeding a computer an algorithm, or set of instructions, lots of images and audio of a certain person. The computer program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with a fake audio and get them to say anything you want.
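The shared-encoder, twin-decoder structure described above is the core of “face swap” style deepfakes, and can be sketched as a toy linear autoencoder. This is an illustration only: the dimensions and random vectors below are invented stand-ins for real face images, and actual deepfake systems use deep convolutional networks trained on thousands of frames.

```python
import numpy as np

# Toy sketch: one shared encoder learns features common to both people;
# each person gets their own decoder that reconstructs their face.
rng = np.random.default_rng(0)

dim, latent = 16, 4
faces_a = rng.normal(size=(64, dim))  # stand-in for images of person A
faces_b = rng.normal(size=(64, dim))  # stand-in for images of person B

enc = rng.normal(scale=0.1, size=(dim, latent))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(latent, dim))  # decoder for person A
dec_b = rng.normal(scale=0.1, size=(latent, dim))  # decoder for person B

def loss(x, dec):
    """Mean squared reconstruction error through the shared encoder."""
    return np.mean((x @ enc @ dec - x) ** 2)

lr = 0.01
start = loss(faces_a, dec_a) + loss(faces_b, dec_b)
for _ in range(500):
    for x, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = x @ enc
        err = z @ dec - x                     # reconstruction error
        g_dec = z.T @ err / len(x)            # gradient w.r.t. this decoder
        g_enc = x.T @ (err @ dec.T) / len(x)  # gradient w.r.t. shared encoder
        dec -= lr * g_dec
        enc -= lr * g_enc
end = loss(faces_a, dec_a) + loss(faces_b, dec_b)

# The "swap": encode person A's faces, then decode with person B's decoder.
swapped = faces_a @ enc @ dec_b
```

Swapping decoders at inference time is what maps one person’s expressions onto another’s face; scaling this idea up to convolutional networks and real video frames is what produces the fakes the article describes.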
So far, deepfakes have mostly been used to smear celebrities or as gags, but it’s easy to foresee a nation state using them for nefarious activities against the U.S., said Sen. Marco Rubio, R-Fla., one of several members of the Senate intelligence committee who are expressing concern about deepfakes.
A foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe, Rubio says. They could use a fake video of a U.S. soldier massacring civilians overseas, or one of a U.S. official supposedly admitting a secret plan to carry out a conspiracy. Imagine a fake video of a U.S. leader — or an official from North Korea or Iran — warning the United States of an impending disaster.
“It’s a weapon that could be used — timed appropriately and placed appropriately — in the same way fake news is used, except in a video form, which could create real chaos and instability on the eve of an election or a major decision of any sort,” Rubio told The Associated Press.
Deepfake technology still has a few hitches. For instance, people’s blinking in fake videos may appear unnatural. But the technology is improving.
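The blinking cue mentioned above can be turned into a crude detector. This is a hedged sketch, not a production method: the per-frame “eye openness” scores are assumed to come from some face-tracking tool, and the names and thresholds here are invented for illustration. Real people blink roughly 15-20 times a minute, while many early deepfakes blinked rarely or not at all.

```python
def count_blinks(openness, threshold=0.2):
    """Count transitions from open to closed eyes in a per-frame score sequence."""
    blinks, closed = 0, False
    for score in openness:
        if score < threshold and not closed:
            blinks += 1
            closed = True
        elif score >= threshold:
            closed = False
    return blinks

def looks_fake(openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(openness) / (fps * 60)
    return count_blinks(openness) / minutes < min_blinks_per_minute

# A real 10-second clip might contain two or three blinks; a fake may have none.
real = [1.0] * 100 + [0.1] * 5 + [1.0] * 100 + [0.1] * 5 + [1.0] * 90
fake = [1.0] * 300
```

Detectors like this are exactly what the arms race makes fragile: once the cue is published, the next generation of fakes learns to blink.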
“Within a year or two, it’s going to be really hard for a person to distinguish between a real video and a fake video,” said Andrew Grotto, an international security fellow at the Center for International Security and Cooperation at Stanford University in California.
“This technology, I think, will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions,” Grotto said. He called for government leaders and politicians to clearly say it has no place in civilized political debate.
Crude videos have been used for malicious political purposes for years, so there’s no reason to believe the higher-tech ones, which are more realistic, won’t become tools in future disinformation campaigns.
Rubio noted that in 2009, the U.S. Embassy in Moscow complained to the Russian Foreign Ministry about a fake sex video it said was made to damage the reputation of a U.S. diplomat. The video showed the married diplomat, who was a liaison to Russian religious and human rights groups, making telephone calls on a dark street. The video then showed the diplomat in his hotel room, scenes that apparently were shot with a hidden camera. Later, the video appeared to show a man and a woman having sex in the same room with the lights off, although it was not at all clear that the man was the diplomat.
John Beyrle, who was the U.S. ambassador in Moscow at the time, blamed the Russian government for the video, which he said was clearly fabricated.
Michael McFaul, who was American ambassador in Russia between 2012 and 2014, said Russia has engaged in disinformation videos against various political actors for years and that he too had been a target. He has said that Russian state propaganda inserted his face into photographs and “spliced my speeches to make me say things I never uttered and even accused me of pedophilia.”
…
Swedish Researchers Developing 3-D VR Model of Milky Way
Researchers at a public university in Sweden are creating a 3-D, virtual reality model of the Milky Way. They say their work could change how surgeons separated by oceans collaborate on medical examinations. VOA’s Arash Arabasadi reports.
…
‘Insect Vision’ Hunts Down Asteroids
June 30 marks Asteroid Day, a U.N.-sanctioned campaign to promote awareness around the world of what’s up in the sky. In Milan, scientists are assembling a new telescope that uses “insect vision” to spot risky celestial objects. Faith Lapidus explains.
…
Virtual Reality in Filmmaking Immerses Viewers in Global Issues
Melting glaciers and rising seas in Greenland; raging fires in Northern California; a relentless drought in Somalia and the disappearing Amazon forests. Famine, Feast, Fire and Ice are the four installments in a virtual reality (VR) documentary on climate change by filmmakers Eric Strauss and Danfung Dennis.
The series, showcased at AFI Docs, the American Film Institute’s Documentary festival in Washington, D.C., offers a 360-degree view of destructive phenomena brought by climate change on our planet. It immerses viewers into the extremes of Earth’s changing climate.
Eric Strauss told VOA he hopes that when someone watches the series, it drives home the idea that there is no hiding from global warming. “This is coming for all of us, regardless of where we live or what our income is; it’s going to affect everyone.”
Ken Jacobson, AFI’s Virtual Reality Programmer, says viewers – who watch the film wearing virtual reality headsets – react in many different ways to this fully immersive experience.
“Some people have a very visceral reaction where they jump, where they kind of yelp because they are very surprised by what they see, while other people, I think, are very reflective and can even be sad, depending on the content,” he said.
One of these viewers is James Willard, a film and TV production student at George Mason University. He describes his experience of watching the installment Feast, about the deforestation of the Amazon rainforests to make space for industrial-sized cattle ranches to satisfy the global appetite for beef.
“You are completely immersed in this whole situation,” he says, “You are facing these animals eye-to-eye and watching as they are marching towards their death.”
The film needs no dialogue. A few sentences set up the topic. “It is actually stripping away a lot of the information, putting you in environments that you then experience for yourself,” says Eric Strauss, “You are much more of a protagonist in some way in this type of stories than you would be in a traditional form of cinema.”
Another viewer, Patricia, has just watched Famine, the episode that looks at the extreme drought in Somalia. “It makes it even more powerful because you feel like you are there. I think, it’s a great medium to spread the word on critical subjects,” she says.
That’s what Strauss wants to hear. “That is the goal; to effect change, to effect positive change.”
VR films are becoming more accessible as the technology evolves, and are often viewed on smartphone applications. But VR Programmer Ken Jacobson says watching them through a virtual reality headset is the best way to experience them.
But can VR films ever replace traditional 2D or even 3D films?
“I think it is going to add another aspect on how we are going to watch movies,” says student James Willard. “Virtual reality can be very dangerous because you are completely immersing yourself within the story to the point where you don’t see anything else. At least in the movie theater you are fully aware that this is a screen in front of you, but if you look to your sides you don’t have another screen there completely immersing you within that story. And with virtual reality that’s exactly what it does. For some people, it will be okay to take off the goggles and go on with their lives, but for others it may be too much. I don’t think it will completely take over.”
Eric Strauss agrees that VR will not overtake traditional cinema, but he says virtual reality can allow viewers to relate deeply with socially conscious stories.
“The technology creates a situation where you truly feel transported to that location because you are not just witnessing something or watching it on a screen. You are occupying the space. And that creates an emotional connection where you can’t really turn away. I mean, there is no getting away from what you’ve allowed yourself to be teleported to and hopefully that will create a visceral, emotional response in viewers and what they are seeing will prompt them to want to get involved.”
…
Move Over UPS: Amazon Delivery Vans to Hit the Streets
Your Amazon packages, which usually show up in a UPS truck, an unmarked vehicle or in the hands of a mail carrier, may soon be delivered from an Amazon van.
The online retailer has been looking for a while to find a way to have more control over how its packages are delivered. With its new program rolling out Thursday, contractors around the country can launch businesses that deliver Amazon packages. The move gives Amazon more ways to ship its packages to shoppers without having to rely on UPS, FedEx and other package delivery services.
With these vans on the road, Amazon said more shoppers would be able to track their packages on a map, contact the driver or change where a package is left, none of which is possible when the package is in the back of a UPS or FedEx truck.
Amazon has beefed up its delivery network in other ways: It has a fleet of cargo planes it calls “Prime Air,” announced last year that it was building an air cargo hub in Kentucky and pays people as much as $25 an hour to deliver packages with their cars through Amazon Flex.
Recently, the company has come under fire from President Donald Trump who tweeted that Amazon should pay the U.S. Postal Service more for shipping its packages. Dave Clark, Amazon’s senior vice president of worldwide operations, said the new program is not a response to Trump, but a way to make sure that the company can deliver its growing number of orders. “This is really about meeting growth for our future,” Clark said.
Through the program, Amazon said it can cost as little as $10,000 for someone to start the delivery business. Contractors that participate in the program will be able to lease blue vans with the Amazon logo stamped on them, buy Amazon uniforms for drivers and get support from Amazon to grow their business.
Contractors don’t have to lease the vans, but if they do, those vehicles can only be used to deliver Amazon packages, the company said. The contractor will be responsible for hiring delivery people, and Amazon would be the customer, paying the business to pick up packages from its 75 U.S. delivery centers and drop them off at shoppers’ doorsteps. An Amazon representative declined to give details on how much it will pay for the deliveries.
Olaoluwa Abimbola, who was part of Amazon’s test of the program, said that the number of packages Amazon needs delivered keeps his business busy. He’s hired 40 workers in five months.
“We don’t have to go make sales speeches,” Abimbola said. “There’s constant work, every day. All we have to do is show up.”
…
Former US Defense Official Says Google Has Stepped Into a ‘Moral Hazard’
A former top U.S. Defense Department official is questioning the morality of Google’s decision not to renew a partnership with the Pentagon.
“I believe the Google employees have created a moral hazard for themselves,” former Deputy Defense Secretary Bob Work said Tuesday.
Google announced earlier this month that it would not renew its contract for Project Maven, after 13 employees resigned and more than 4,600 employees signed a petition objecting to their work being used for warfare.
Project Maven seeks to use artificial intelligence, or AI, to help detect and identify images captured using drones.
Many of the Google employees who objected to the project cited Google’s principle of ensuring its products are not used to do harm. But Work, who served as deputy defense secretary from 2014 through July 2017, described Google’s thinking as short-sighted. “It might wind up with us taking a shot, but it could easily save lives,” he told an audience at the Defense One Tech Summit in Washington.
Work also described Google as hypocritical, given the company’s endeavors with other countries, such as China. “Google has opened an AI [artificial intelligence] center in China,” he said. “Anything that’s going on in the AI center in China is going to the Chinese government and then will ultimately end up in the hands of the Chinese military.”
The Pentagon’s Project Maven, approved under Work’s watch in 2016, had an initial budget of about $70 million. Google officials had told employees the company was earning less than $10 million, though the deal could lead to additional work.
Current military officials have declined to comment on Google’s decision to not renew the contract, explaining the tech giant is not the main contractor.
“It would not be appropriate for us to comment on the relationship between a prime and sub-prime contractor holder,” Pentagon spokeswoman Maj. Audricia Harris told VOA in an email.
“We value all of our relationships with academic institutions and commercial companies involved with Project Maven,” she added. “Partnering with the best universities and commercial companies in the world will help preserve the United States’ critical lead in artificial intelligence.” VOA has asked Google for a response, but has received no reply.
While declining to comment directly on Google and Project Maven, the executive director of the Defense Innovation Board said the hope is that, eventually, ethical considerations will push tech companies to work with the military.
“AI [artificial intelligence] done properly is really, really dangerous,” said Josh Marcuse. “We want to work with these companies, these engineers.”
“We are going to have to defend these democracies against adversaries or competitors who see the world very differently,” he said at the same conference in Washington as Work. “I don’t want to show up with a dumb weapon on a smart battlefield.”
But experts say questions of ethics and business viability are likely to continue to plague Google and other big tech companies that are asked to work with the Pentagon.
“Their customer base is not just the United States,” said Heather Roff with the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. “Aiding the U.S. defense industry will potentially hinder their economic success or viability in other countries.”
Still, Paul Scharre, a former Defense Department official who worked on emerging technologies, said he was disappointed by Google’s decision.
“There are weapons companies that build weapons – I understand why Google might not want to be part of that,” said Scharre, now with the Center for a New American Security.
“I don’t think Project Maven crosses the line at all,” he added. “It’s clearly not a weapons technology. It’s helping people better understand the battle space. If you are only worried about civilian and collateral damage, that’s only good.”
VOA’s Michelle Quinn contributed to this report. Some information from Reuters was used in this report.
…