Standards-based approach to AI safety and fairness

The Biden Administration’s new executive order on Safe, Secure, and Trustworthy Artificial Intelligence gets it right: for the U.S. to forge its own path on AI, that path must run through technical standards.

The United States stands at a pivotal moment in the evolution of Artificial Intelligence (AI). AI is rapidly integrating into our daily lives and economy, from generative-AI-powered chatbots to the synthesis of novel antibiotics. The question of how best to embrace the promise of this technology while ensuring unbiased and ethical deployment is pressing.

The European Union’s AI Act, expected to pass by year-end, is poised to establish one standard for global AI regulation. Its approach is principles-based, setting out ethical guidelines for the safe use of AI.

Although the U.S. and the EU agree, by and large, on the principle of “do no harm” with AI, the U.S. does not favor the EU’s path because of challenges with interpretability and enforcement. Take, for example, the principle of AI transparency. Current AI models use neural networks to make decisions, and, not unlike the human brain, how the AI arrives at its decision is often unknown, a challenge sometimes called the AI black-box problem. It will therefore be arduous to translate the broad intentions behind “transparency” into clear guidelines for industry to follow.

The U.S. is taking a different approach with the Biden Administration’s new Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. Rather than emphasizing broad principles such as “transparency,” the EO tasks entities such as the National Institute of Standards and Technology (NIST) with the explicit responsibility of developing standards and tests that industries can use to assess the societal impact of their algorithms.

In Europe, bureaucrats set the rules; in America, it's time to unleash engineers to shape the future. 

Making automobiles safe

Standards-informed policymaking is not new. In the early days of the personal automobile, federal and state legislation on technical safety standards was nonexistent. Lawmakers instead focused on human behavior, as with Connecticut’s statewide speed-limit law, passed in 1901, or New York’s 1910 anti-drunk-driving law.

The absence of lawmaker attention, however, did not translate into a deficiency in safety standards.

In parallel with these state-level policies, the Society of Automotive Engineers (SAE), founded in 1905, set engineering and other technical standards.

It took fifty years for Congress to authorize the formation of the Department of Transportation (DOT) to manage and streamline car safety and traffic rules, and another sixteen years after that before Congress directed the DOT to create what would become the Federal Motor Vehicle Safety Standards.

The key lesson for AI is that legislators did not attempt to codify technical definitions and standards in legislation; instead, they drew on safety standards already developed by the SAE to set minimum safety requirements.

This process illustrates a viable pathway for regulating AI. Policymakers should draw from technical standards and best practices, such as those developed by NIST, and then create bespoke, harm-based policies to navigate the complex challenges posed by this advancing technology.

Standards: The Bedrock of Fairness and Safety

The United States has always been a leader in technical standards. Because standards are grounded in what is technically possible, they offer a clearer benchmark for developers, users, and policymakers.

NIST’s work on AI already gets to the heart of policymakers’ goals for clear, fair, and ethical AI rules and regulations.

The NIST AI Risk Management Framework focuses on risks to consumers and organizations in the design and deployment of AI. NIST’s ongoing work in AI measurement and evaluation seeks to identify measurable standards for AI bias, transparency, and accuracy. Finally, NIST is a global leader in developing technical standards and is working across sectors to design and implement standards for trustworthy and responsible AI development. It is coordinating with federal agencies such as the Office of Personnel Management (OPM) and the National Security Agency (NSA) to align government AI use with NIST standards.

By adhering to established standards, industries can be confident in the ethics and safety of their AI applications. Standards also evolve more quickly than legislation: as technology advances and new challenges emerge, standards can adapt, ensuring continued relevance.

Concluding Thoughts: Standards First, Legislation Next

The EU is pursuing a legislation-first, standards-second model. However, given the rapidly evolving nature of AI and its myriad applications, reversing that order may be the most effective way for the U.S. to manage the harms of AI.

With clear standards in place for AI development and deployment, Congress’s role in AI policy becomes straightforward: address any consumer- or competition-focused gaps and, where necessary, enact the most crucial standards into law.

This legislation wouldn’t replace or supersede the standards but would complement them, identifying gaps or areas where standards alone might not be enough and ensuring a comprehensive regulatory framework.

The future of AI in the United States hinges on achieving a delicate balance between innovation and regulation. Embracing an approach grounded in standards and complemented by targeted legislation ensures that AI thrives ethically and safely.

Jordan Shapiro