4 November 2023

The Biden Administration’s Executive Order on Artificial Intelligence

James Andrew Lewis, Emily Benson and Michael Frank

The discussion of rules for the use of artificial intelligence is a crowded space. The United Kingdom is hosting an AI Safety Summit later this week, the European Union is moving forward with its AI Act to regulate AI, the United Nations is creating a Digital Compact, the Organization for Economic Cooperation and Development and the G7 have issued guidelines, and there are countless public and private sector conferences on managing the risks of AI. Stanford University found that 37 AI-related laws were passed across 127 countries in 2022 alone. Most of these guidelines say more or less the same thing: that any rules must balance the potential risks of AI systems against the risk of losing the economic and social benefits the new technology can bring.

The United States is the latest entrant into the crowded field, with the Biden administration releasing its Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. The United States leads in developing AI technologies (also a crowded space), so these rules are consequential, if not always groundbreaking. The EO is levelheaded: it avoids phrases like "existential risk" and focuses on concrete problems of security and safety, privacy, and discrimination. Its approach to managing risk relies on increased transparency and the use of testing, tools, and standards. In recent months, many federal agencies have pressed ahead with AI-related rules, and this EO establishes a requirement to engage in AI rulemaking that had lagged since President Trump's initial February 2019 order. Perhaps its defining feature is its executive-led approach to regulation, in contrast with the legislative approach of the European Union.

Key Points from the Executive Order

The EO has a very broad scope. Its emphasis on developing standards for critical infrastructure and on using AI tools to fix software vulnerabilities reinforces goals set in the National Cybersecurity Strategy and recognizes the potential benefits of AI. One important new requirement is to apply standards to federally funded biological synthesis projects, an area that many experts see as the most likely near-term risk from AI. Limiting standards to federally funded projects also limits their scope, but in this and other areas, the EO highlights the need for international cooperation in making the use of AI less risky. Harmonizing the EO's many guidelines with the laws of international partners may be one of the most important tasks the administration can undertake. Unless Congress departs dramatically from the regulatory principles in the EO, the order all but assures that the United States will decline to follow the risk classification embedded in the EU AI Act. Even so, the principles in the EO are not as incompatible with the EU AI Act as the European Union's General Data Protection Regulation is with the laissez-faire U.S. approach to privacy, and the language the EO uses on privacy, civil rights, and workers' rights is consistent with the European Union's approach.

Another major element of the EO is its focus on "red-teaming," which entails stress-testing programs for potential holes or safety oversights. Invoking the Defense Production Act, the EO states that "companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests." While this provision is likely to invite pushback from the private sector, which will argue that such measures risk leaking trade secrets, it will encourage developers to consider more seriously the net effects of their products. Embedded in this section, however, is also the growing "mission creep" of the administration's economic security agenda, which increasingly melds economic and trade policy with national security objectives. In today's more decentralized, diffuse, and digital environment, that may prove prescient over time.

Some of the proposed solutions are not new. The United States has tried, with little success, to promote "privacy-preserving" technologies since the Obama administration. Efforts to avoid AI-enabled fraud through detection, authentication, and watermarking are also not new and have had limited uptake by users; this might be a place where greater use of AI tools can help. Proponents of immigration reform are not shy about linking U.S.-China technology competition to making more visas available for skilled foreign talent. There are many world-class AI researchers eager to join U.S. universities and AI labs, but the EO's plan to modernize the visa process, while a welcome improvement, is not a game changer without an increase in the cap for highly skilled applicants.

The EO raises many of the social issues AI will create, such as the risk of discrimination and the effect on workers, but does not really address them. The sections on discrimination and on workers call for studies and for work to develop principles and best practices. Other sections, like the section on innovation, echo some of the administration's pet themes, such as helping small businesses and encouraging the Federal Trade Commission, but these ideas are not linked to any specific action. There are no easy solutions to some of the most enduring ills of the social media era, such as the link between content moderation, misinformation, and online harassment; these issues reflect society rather than represent problems to be resolved through executive action. Most of the actions in the EO will need to be developed further, making the EO as much a workplan as a rule.

Congress and AI Regulation

Congress is the biggest obstacle to progress in making AI safer. EOs have the force of law, but they are limited to authorities Congress has already approved. While some executive agencies are confident that their existing authorities are adequate to regulate AI, in other cases, particularly privacy, there is a limit to what an EO can achieve unless Congress passes legislation, and this is unlikely. Congress has so far abdicated its role in regulating privacy; instead, California's Consumer Privacy Act has become the largest piece in a patchwork of state privacy policies. More broadly, the reliance on executive authorities to regulate AI points to a major hurdle, since Congress has been unable to legislate on the potential harms of advanced digital technologies. There have been hard lessons learned from the United States' inability to regulate social media, and U.S. lawmakers are keen to avoid a similar scenario as AI becomes increasingly prominent.

The inability of Congress to agree on legislation has forced the Biden administration to rely on its executive powers. This action on AI, for example, follows an earlier EO from August 2023 that limited U.S. investment in Chinese AI with potential military and intelligence uses. The failure to legislate leaves the administration creating strategic roadmaps for understaffed U.S. agencies to implement.

Enforcement of the Executive Order

Another question for the EO is how it will be enforced. The EO builds on earlier voluntary commitments from private sector behemoths, but securing their buy-in to this agenda will be necessary to effectuate many of these changes. On the same day the EO was released, G7 leaders put out a statement announcing Guiding Principles for all actors in the AI ecosystem and a Code of Conduct for Organizations Developing Advanced AI Systems. The G7 process aligns with the EO, calling for red-teaming and for authentication methods such as watermarking. The text of the EO factsheet affirms the goal of complementing and supporting international efforts, including the G7 Hiroshima AI Process. However, the Hiroshima AI Process is more narrowly focused than the EO, primarily targeting advanced foundation models and generative AI. The G7 documents are also unique in that they are living documents intended to be updated as AI models evolve, an important feature given the rapid pace of AI's advancement.

Parts of the private sector were quick to support the EO. Microsoft president Brad Smith said the following on X: "Today's executive order is another critical step forward in the governance of AI technology. . . . and we look forward to working with U.S. officials to fully realize the power and promise of this emerging technology." However, NetChoice, a coalition of trade associations and technology firms, published a press release condemning the EO as a "red tape wishlist" that would harm the AI marketplace, stifle innovation and investment, and ultimately damage the United States' standing as a tech leader.

If the United States is to regulate AI, it will need to secure durable buy-in from both the private sector and international partners. That is no small feat, and the risks are high if it fails.
