US Maps Plan for Developing Artificial Intelligence Guidelines
According to the Department of Commerce’s National Institute of Standards and Technology (NIST), standards for artificial intelligence (AI) should have the flexibility to stimulate innovation, yet enough structure to prevent the technology from causing harm. To that end, on August 9th, NIST unveiled “a plan for prioritizing federal agency engagement in the development of standards for artificial intelligence.”
The plan, presented in a document titled “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools,” aims to strike that balance. Without the proper tools and guidelines, regulating AI could prove to be a tough job.
Executive Order to Develop AI
“Continued American leadership in artificial intelligence is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our nation’s values, policies and priorities,” said President Trump in Executive Order 13859, issued last February.
That executive order launched the American AI Initiative. The purpose of the initiative is to use government resources to further develop artificial intelligence “in order to increase our Nation’s prosperity, enhance our national and economic security, and improve quality of life for the American people,” as the executive order declared.
In a statement released by NIST announcing the new federal plan to help establish AI standards, U.S. Chief Technology Officer Michael Kratsios said, “Public trust, security and privacy considerations remain critical components of our approach to setting AI technical standards. As put forward by NIST, federal guidance for AI standards development will support reliable, robust and trustworthy systems and ensure AI is created and applied for the benefit of the American people.”
NIST released the plan last week in response to February’s executive order.
“The government’s meaningful engagement … is necessary, but not sufficient, for the nation to maintain its leadership in this competitive realm,” said NIST officials. “Active involvement and leadership by the private sector, as well as academia, is required.”
What’s in the NIST Plan?
According to the agency, future development of AI guidelines should have the flexibility to adapt to new and emerging technology while “minimizing bias” and safeguarding privacy.
“The degree of potential risk presented by particular AI technologies and systems will help to drive decision-making about the need for specific AI standards and standards-related tools,” NIST officials said.
The NIST plan stated that some existing standards for other technologies are applicable to AI, noting that “standards related to data formats, testing methodology, transfer protocols, cybersecurity, and privacy are examples.” Standards related to trustworthiness, however, are a newer and still-developing area.
“Trustworthiness standards include guidance and requirements for: accuracy, explainability, resiliency, safety, reliability, objectivity, and security,” stated the NIST plan.
One of the key components of NIST’s plan is to establish a set of principles to guide AI standardization and development.
“Standards flow from principles, and a first step toward standardization will be reaching broad consensus on a core set of AI principles,” the NIST plan states.
The NIST plan also called for a group of “specialists trained in law and ethics” to weigh the legal, societal, and ethical implications of AI standardization.
NIST Proposes That Government Officials Partner to Coordinate and Develop AI Standards
NIST officials emphasized the importance of timing in the development of AI regulations. Standards adopted too early could stifle innovation; standards adopted too late could make it a challenge to get the industry on board with the guidelines.
“It is important for those participating in AI standards development to be aware of, and to act consistently with, U.S. government policies and principles, including those that address societal and ethical issues, governance and privacy,” NIST leaders outlined in the plan. “While there is broad agreement that these issues must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions.”