4 March 2019

President Trump’s Executive Order on Artificial Intelligence

By Jim Baker 

On Feb. 11, President Trump issued a new executive order regarding artificial intelligence (AI). Darrell West from Brookings wrote a brief analysis of the order, Caleb Watney from R Street critiqued it on Lawfare, and major media outlets have provided some reporting and commentary on the rollout. Rather than repeat what the order says or what others have said about it, I offer below three compliments and three concerns based on my initial review of the order.

First, here are three things I like:

1. The president actually issued the order. No one is really sure exactly how transformative AI will be; there is a lot of potential in AI, but there is also a lot of hype. Still, because AI might have major impacts on the economy, national security and other facets of society, we need to stay focused on it. Other countries, especially China, are investing heavily in AI and related fields, such as high-speed computing, sensors and robotics (including autonomous vehicles and weapons systems). The U.S. Department of Defense and elements of the U.S. intelligence community seem to be fully seized of the AI issue and are actively pursuing an array of initiatives in the field.

The executive order clearly recognizes the potentially significant implications of AI for the U.S. economy and national security, and posits that the United States must be an AI leader. The order declares: “It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy.” If this vision is implemented fully (a big “if”), maintaining and enhancing U.S. leadership on AI would be a major policy achievement. The executive order thus prioritizes AI for federal departments and agencies and establishes a multifaceted framework for the executive branch to implement the administration’s AI policy. This is good.

Executive orders have the potential to focus bureaucracies in ways that other executive pronouncements (such as tweets) do not. But realizing that potential will require sustained leadership and commitment from the White House. I expect that over the coming months there will be numerous follow-up meetings and working groups established across the executive branch to implement the executive order. All of that is good and, if done right, will maintain needed federal focus on the AI issue.

2. The link between AI and big data. Numerous provisions in the executive order address the link between AI and big data. For example, one of the strategic objectives set forth in the order is the following: “Enhance access to high-quality and fully traceable Federal data, models, and computing resources to increase the value of such resources for AI R&D, while maintaining safety, security, privacy, and confidentiality protections consistent with applicable laws and policies.” This is important. AI algorithms learn from having access to relevant data. The more data made accessible, the more learning can occur. Exactly how all of this learning happens is complex and messy, and real success is hard to achieve. But the point is that in order to improve our AI systems, we need to provide developers with lawful access to large datasets.

Many directives in the order, like the one above, encourage government agencies to make the data they possess accessible to AI scientists and developers in ways that protect privacy (more on privacy below). China has a disproportionate advantage over other countries in the volume of human-behavior data available to its AI developers. That advantage stems from its intensive (and repressive) collection of data on the activities of its very large population, as well as from its likely theft of data from countries around the world. If we expect to keep up with China, it makes sense for the administration to encourage U.S. government departments and agencies to make more of their data available to U.S.-based AI developers.
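
To make the data point a bit more concrete, here is a minimal, illustrative sketch; it is my own and is not drawn from the order. It assumes scikit-learn, its small public digits dataset and a plain logistic regression model purely as a convenient stand-in, and it simply shows that a classifier trained on progressively larger slices of the same dataset tends to score better on held-out test data. Real AI systems and federal datasets are, of course, far larger and messier.

# Illustrative only: a simple classifier's test accuracy generally improves
# as more training data becomes available. This is a toy stand-in for the
# order's point about data access, not anything the order itself prescribes.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Small, public dataset used purely for demonstration.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on progressively larger fractions of the training set and
# report test accuracy for each fraction.
for fraction in (0.1, 0.25, 0.5, 1.0):
    n = int(len(X_train) * fraction)
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"training examples: {n:4d}  test accuracy: {accuracy:.3f}")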

3. Protecting the AI assets of the U.S. and its allies. For some time, I have been particularly concerned that the U.S. government is inadequately protecting its AI assets—people, technology, data—and those of its allies from serious threats. (I wrote about that concern last year in a four-part series on Lawfare: Part I, Part II, Part III and Part IV.) The executive order rightly discusses the need for federal departments and agencies to protect those assets. One of the five principles that guide the policy set forth in the executive order includes the goal of “protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.” Indeed, doing so is essential for the long-term national security and economic well-being of the United States. A related objective the order describes is to “[e]nsure that technical standards [developed by the federal government pursuant to the order] minimize vulnerability to attacks from malicious actors.” Protecting the integrity of AI technology from a physical and cybersecurity perspective is also essential in order to make sure that our AI systems work as intended.

In addition, the order requires that federal agencies implementing it:

Develop and implement an action plan, in accordance with the National Security Presidential Memorandum of February 11, 2019 (Protecting the United States Advantage in Artificial Intelligence and Related Critical Technologies) (the NSPM) to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.

The administration has not publicized the NSPM, but its name and the language of this objective further suggest that the White House is taking seriously its obligation to protect the country’s AI assets. The order also includes the following section specifically focused on securing the country’s AI assets:

Sec. 8. Action Plan for Protection of the United States Advantage in AI Technologies.

(a) As directed by the NSPM, the Assistant to the President for National Security Affairs, in coordination with the OSTP Director and the recipients of the NSPM, shall organize the development of an action plan to protect the United States advantage in AI and AI technology critical to United States economic and national security interests against strategic competitors and adversarial nations.

(b) The action plan shall be provided to the President within 120 days of the date of this order, and may be classified in full or in part, as appropriate.

(c) Upon approval by the President, the action plan shall be implemented by all agencies who are recipients of the NSPM, for all AI-related activities, including those conducted pursuant to this order.

Relevant agencies of course need to devote the appropriate time, effort and resources to safeguarding the United States' AI assets. In addition to protecting technology, they must focus on protecting the people who have relevant AI expertise and making sure that our immigration policies allow us to attract, educate and retain the best AI minds in the world. The government must also consider carefully the second- and third-order consequences of any actions it takes to protect U.S. AI assets, such as the risk of sliding into economic protectionism under the banner of protecting U.S.-based AI capability. But the overall policy highlights the importance of the AI security issue, and that makes a lot of sense.

Second, here are three things about the order that concern me:

1. Show me the money. Section 4 of the order is entitled “Federal Investment in AI Research and Development.” The executive order directs federal agencies to review their AI-related budgets and develop funding requests for the Office of Management and Budget (OMB) in future fiscal years. But the executive order does not and cannot appropriate new funding for AI research and development, education and security. That requires action from Congress. If the administration is truly serious about addressing the AI issue, it will need to work with the Hill to obtain significant and sustained AI funding, now and well into the future. That will be tough in an era of nearly $1 trillion budget deficits. The private sector and academia are clearly incentivized to spend substantial sums on AI, but no one has as much money as Uncle Sam. The administration and Congress need to work together closely to find appropriate levels of government funding to maintain the U.S. lead in the AI field. Is this something they can all agree on?

2. Where is the attorney general’s role on privacy protection? At several points, the executive order discusses the need to more effectively leverage federal data to improve AI systems while simultaneously protecting the privacy of Americans. For example, the order highlights the role that agencies’ senior privacy officers should play in that endeavor. Maybe I’m biased, but I think the attorney general and the Department of Justice also should play a central role in making sure that all federal agencies adhere strictly to the Constitution and laws of the United States that protect the rights of Americans. AI development requires access to big data, but we don’t want things to get out of hand—for example, we don’t want agencies making decisions on important privacy questions based on narrow-minded and overly aggressive legal analysis that fails to take into consideration the broader legal, policy and reputational interests potentially at stake. With numerous agencies and subcomponents thereof accessing and releasing large datasets to federal contractors, universities and private companies, the potential for abuse is significant. To be effective in the inevitable bureaucratic battles to follow, the Justice Department will need the White House to emphasize that the department has an important seat at the table. The administration should issue some kind of supplemental directive to bolster the Justice Department’s role.

3. What is artificial intelligence? The executive order is about artificial intelligence. But how are executive branch agencies supposed to know what, exactly, AI is? The executive order does contain a definition of AI. Here it is:

As used in this order:

(a) the term “artificial intelligence” means the full extent of Federal investments in AI, to include: R&D of core AI techniques and technologies; AI prototype systems; application and adaptation of AI techniques; architectural and systems support for AI; and cyberinfrastructure, data sets, and standards for AI[.]

With all due respect, that is not a definition of artificial intelligence. I have great sympathy for the drafters of the order because it is hard to put a precise definition on AI. But if federal agencies are supposed to figure out how to execute their responsibilities under the new executive order, they need to know what the order is about. The failure to offer a better definition is a real miss, and I fear it will have collateral consequences for the government’s ability to adequately fund research and development of AI systems, protect the rights of Americans as related to AI and secure AI systems from hostile foreign actors. How do we know what to prioritize, fund, regulate and protect if we can’t define it? The administration should supplement the order in some way to provide more guidance to the government and the public about what technology the order actually covers.
