3 November 2023

Agencies get marching orders as White House issues AI-safety directive

PATRICK TUCKER

The White House hopes to guide how technologists develop artificial intelligence and how the government promotes and adopts AI tools, under a new executive order to be unveiled Monday.

The order lays out some basic safety rules to prevent AI-enabled consumer fraud, requires red-team testing of AI software for safety, and issues guidance on privacy protections. The White House will also pursue new multilateral agreements on AI safety with partner nations and accelerate AI adoption within the government, according to a fact sheet provided to reporters.

The order comes amid growing public concern about the effects of rapidly advancing artificial intelligence tools on public life, the future of employment, education, and more. Those concerns are at odds with warnings from key business leaders and others that China’s growing investment in AI could give it an economic, technological, and military advantage in the coming decades. The new executive order attempts to address concerns about the use of AI in dangerous settings and the misuse of AI while simultaneously encouraging its advancement and adoption.

White House Deputy Chief of Staff Bruce Reed called the order “the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks.”

On safety, the order directs the National Institute of Standards and Technology, or NIST, to draft standards for red-team exercises to test the safety of AI tools before they’re released.

“The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks,” according to the White House fact sheet.

The order also stands up a new cybersecurity program to explore how AI could enable attacks, requires developers of “the most powerful AI systems” to share safety test results with the government, and calls on the Department of Commerce to develop practices for detecting AI-generated content that could be used for fraud or disinformation.

It calls on the National Science Foundation to further develop cryptographic tools and other technologies to protect personal and private data that could be collected by AI tools, and it sets guidelines to prevent organizations and institutions from using AI in discriminatory ways. It also calls on the government to do more research on AI’s effects on the labor force.

Additionally, a large portion of the order looks at how the government can better embrace AI and forge new partnerships and working strategies with like-minded democratic nations to do so.

“The administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK,” the fact sheet said. The order calls on the State and Commerce departments to “lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety.”

Still, according to the fact sheet, “More action will be required, and the administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

Gary Marcus—a neuroscientist, author, and AI entrepreneur who routinely argues in favor of greater regulation and public scrutiny of AI tools and products—described the move as a positive step, but he worries enforcement might be lacking. “Companies will poke for loopholes and surely find them,” he wrote in an email newsletter. “They will probably try to argue that their specific products don’t meet the risk threshold, and even if they do agree, if I understand correctly, they only have to share results of internal testing, no matter how bad the risks might be. This is a long long way from the kind of FDA approval process that I advocated for here and in the Senate.”

The Information Technology and Innovation Foundation, a technology think tank, said some of the proposed fixes the White House outlined don’t exist yet, and will be difficult to create. “For example, the EO calls for new standards for red teaming, biological synthesis screening, and detecting AI-generated content. These are all active areas of research where there are no simple solutions. Policymakers often forget that the reason industry hasn’t already adopted certain solutions is because those solutions don’t yet exist. This is one reason why it will be essential for the United States to continue to fund critical AI research in these areas.”

But the group still gave the order high marks for setting guidelines that would allow industry to begin to move forward.

“Amid a sea of chaotic chatter about how to implement appropriate guardrails for AI, today’s executive order (EO) sets a clear course for the United States. It provides industry with long-awaited guidance for AI oversight, including advising tech companies to adhere to the NIST AI risk management framework, watermark AI-generated content, consider the data used in model training, and incorporate red-teaming into testing…With this EO, the United States is demonstrating it takes AI oversight seriously.”
