1 July 2018

Why Project Maven is a ‘moral hazard’ for Google

By: Justin Lynch

Capt. Sean Heritage strode to the stage on Tuesday wearing a white Navy uniform, holding up a black hacker hoodie with military stripes on the sleeve. The home-brew sweatshirt represented a fusion of two mindsets that can have different priorities ― hacker and sailor. Like the hoodie, Heritage is a combination of two worlds: he is acting head of the Defense Innovation Unit ― Experimental, a military project investing in Silicon Valley.

But despite the peace offering from Heritage, disagreements between the two communities over the uses of artificial intelligence were on display Tuesday at a Defense One conference in Washington, D.C. Current and former military officials criticized tech giant Google for “creating a moral hazard” by dropping out of a top military program called Project Maven, and called on the company to rethink its decision.


Project Maven uses machine learning to recognize objects in moving or still imagery, according to the Defense Department, and that learning could be used to develop targets for drone strikes. But Google reportedly plans on ending its partnership with the program in 2019 after a backlash from the company’s employees, who argued in an open letter signed by more than 3,000 workers that the company should not “build warfare technology.”

Google employees “say, look, this data could potentially, down the line, at some point, cause harm to human life. And I say, ‘Yeah. But it might save 500 Americans or 500 allies, or 500 innocent civilians from being attacked,’” said Robert Work, the former deputy secretary of defense, at the conference. He called the Google employees’ logic “a bank shot,” adding that the AI data would be reviewed by humans before it was used in a strike.

Work also criticized Google employees for apparently not speaking out against an artificial intelligence center that the company is planning in China.

“Anything that is going in the AI center in China is going to the Chinese government and then will ultimately end up in the hands of the Chinese military,” Work said. “I didn’t see any Google employee saying, ‘hmm … maybe we shouldn’t do that.’”

Josh Marcuse, executive director of the Defense Innovation Board, which is also charged with outreach to new tech communities, said that those who take issue with the government’s AI approach should work with the government to ensure that ethics and safety are incorporated.

“With respect to privacy and respect to civil liberties, we are going to have to defend these democracies against adversaries or competitors who see the world very differently,” Marcuse said.

Heritage downplayed tensions with Silicon Valley and called the issues “small in number.” 

On Capitol Hill, Google’s chief scientist for AI and machine learning, Fei-Fei Li, did not appear to directly address the company’s operations in China during testimony to lawmakers. But she said “it is critical that we have ethical guidelines. Different institutions, from government to academia to industry, will have to participate in this dialogue together and also by themselves.”

Li’s testimony took place at the same time as the comments from the current and former defense officials. She was speaking in her capacity as co-founder of AI4ALL, a group that promotes artificial intelligence. She did not immediately respond to a request for comment.

The disagreement reflects longstanding tension between government officials and Silicon Valley entrepreneurs. It is a setback for top intelligence officials, who said last week that the government needs to work with the private sector to defend against cyberattacks and other threats.

“The government has to figure out how we get good at finishing with the commercial entities,” said Rob Joyce, senior adviser at the NSA, adding the public sector should be playing a supporting role for the private sector.

Other officials warned that expectations for AI needed to be tempered.

For AI “to have to be perfect is not going to work. We are never going to be able to trust automation if perfection is what we are going for,” said Stacey Dixon, deputy director of the Intelligence Advanced Research Projects Activity.

Lawmakers on Tuesday said that the U.S. could fall behind in the race for artificial intelligence.

“By some accounts, China is investing $7 billion in AI through 2030,” said Rep. Lamar Smith, R-Texas, during a hearing. “Yet, the Department of Defense’s unclassified investment in AI was only $600 million in 2016, while federal spending on quantum totals about $250 million a year.”
