
28 December 2019

Artificial Intelligence and the Adversary

Samantha Ravich

The potential benefits of artificial intelligence are proclaimed loudly, for all to hear. The dangers, however, are discussed quietly among national-security experts. The time has come to bring the general population into the discussion.

The benefits are enticing. With AI, the future promises longer life expectancy, increased productivity, and better preservation of precious resources. You will be able to take a picture of a mole on your leg and send it electronically to a dermatologist, who will use deep neural networks to determine whether it is skin cancer. Data-driven sensors and drones will determine the perfect amount of pesticide and water to promote agricultural diversity and counter monocropping. The AI revolution in transportation will herald autonomous planes, trains and automobiles. Music will be created to improve not only mood but heart rate and brain activity.

But we should know by now that advanced technology can also be used for ill. The whispered worst-case scenarios stem from malign actors gaining control of the massive data sets that will train machines to compute faster, better and perhaps with more-penetrating insight.


A fierce contest between the U.S. and China is under way over who will dominate this new frontier. The Chinese Communist Party has proclaimed that it will become the world’s leader in AI by 2030. Already China is hard at work, building out 5G networks world-wide and launching a new cryptocurrency as part of a strategy of “eco-political terraforming,” or building a world that will enable it to control massive amounts of information and use it for political and economic advantage. Beijing already hoards vast quantities of data about its own 1.4 billion people, none of whom have privacy rights under the Communist regime.

Nevertheless, Beijing isn’t satisfied. It has turned its sights on the U.S. and has already exfiltrated some of the most sensitive information on the American people and military. Its operations include repeated breaches of medical systems and databases since 2013, and the decadelong targeting of the U.S. Navy’s ship-maintenance records and the names and personal details of 100,000 of its personnel.

In time, through artificial intelligence, China will be able to use Americans’ data against us. Personalized medical records could become personalized bioweapons, for instance. In 2017 Zhang Shibo, a retired Chinese general, wrote that biotech could provide China an offensive capability through the creation of “specific ethnic genetic attacks.” As for the stolen Navy data, understanding how the U.S. maintains its fleets will help China pinpoint vulnerabilities in U.S. weapons systems and ship design to exploit during a confrontation.

To fight back, America needs very large data sets to train its own deep-learning models. The U.S. military collects and holds more than 35 million individual records on service members, employees, contractors, retirees and family members. These records include standard personal details, financial information and training records, as well as cognitive, physical and moral aptitude-test results. Even more data is kept on the development, testing, fielding and maintenance of weaponry.

No doubt some of the $4 billion the Pentagon budgeted for AI and machine-learning research and development in 2020 will go toward using that data to build better military hardware, predict maintenance needs, optimize troop deployment, and develop advanced battlefield medicine. The more data fed to these training models, the more accurate they become. But if the military’s data sets are breached, that information is as likely to harm the U.S. as to help it. The enemy will be inside American barracks, cockpits and strategy sessions, knowing what we know and seeing what we see.

The Defense Innovation Board has released a new report on AI principles for the Pentagon. It stresses that AI systems must have their “safety, security, and robustness” tested repeatedly over their lifetime of use. Given the importance of this data and the role it will play in America’s future, the military must build in security at the foundation of the hardware, software and storage needed to build tomorrow’s AI.

If the U.S. repeats the mistakes of the past by operating insecure systems and allowing the adversary to hack its data or infect its supply chain with malicious software and counterfeit hardware, beware. The AI that our top scientists and engineers are now building to advance American security and prosperity could instead sow the seeds of our demise.

While data integrity and supply-chain security may not sound as tantalizing as building the algorithm that develops biotronic robots, they may be even more important. If we don’t secure the data we use to propel our AI revolution, we are building an AI capacity for the adversary.

Ms. Ravich is chairman of the Center on Cyber and Technology Innovation at the Foundation for Defense of Democracies and serves as a co-chairman of the Secretary of Energy Advisory Board’s Artificial Intelligence Working Group.
