US Maps Plan for Developing Artificial Intelligence Guidelines

Artificial Intelligence (Photo courtesy of Pixabay)

According to the Department of Commerce’s National Institute of Standards and Technology (NIST), standards for artificial intelligence (AI) should have the flexibility to stimulate innovation, yet enough structure to prevent the technology from causing harm. To that end, on August 9 NIST released “a plan for prioritizing federal agency engagement in the development of standards for artificial intelligence.”

The plan, presented in a document titled “U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools,” aims to strike that balance. Without the proper tools and clearer guidelines, setting AI standards could prove to be a tough job.

Executive Order to Develop AI

“Continued American leadership in artificial intelligence is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our nation’s values, policies and priorities,” President Trump stated in Executive Order 13859, issued in February.

That executive order launched the American AI Initiative. The purpose of the initiative is to use government resources to further develop artificial intelligence “in order to increase our Nation’s prosperity, enhance our national and economic security, and improve quality of life for the American people,” as the executive order declared.

In a statement released by the NIST announcing the new federal plan to help establish AI standards, U.S. Chief Technology Officer Michael Kratsios said, “Public trust, security and privacy considerations remain critical components of our approach to setting AI technical standards. As put forward by NIST, federal guidance for AI standards development will support reliable, robust and trustworthy systems and ensure AI is created and applied for the benefit of the American people.”

The plan, released last week, grew out of that February executive order.

“The government’s meaningful engagement … is necessary, but not sufficient, for the nation to maintain its leadership in this competitive realm,” said NIST officials. “Active involvement and leadership by the private sector, as well as academia, is required.”

What’s in the NIST Plan?

According to the agency, future development of AI guidelines should have the flexibility to adapt to new and emerging technology while “minimizing bias” and safeguarding privacy.

“The degree of potential risk presented by particular AI technologies and systems will help to drive decision-making about the need for specific AI standards and standards-related tools,” NIST officials said.

The NIST plan noted that some existing standards for other technologies are applicable to AI: “standards related to data formats, testing methodology, transfer protocols, cybersecurity, and privacy are examples.” Standards related to trustworthiness, however, are newer and still developing.

“Trustworthiness standards include guidance and requirements for: accuracy, explainability, resiliency, safety, reliability, objectivity, and security,” stated the NIST plan.

One key component of the NIST plan is establishing a set of principles to guide AI standardization and development.

“Standards flow from principles, and a first step toward standardization will be reaching broad consensus on a core set of AI principles,” the plan states.

The plan also calls for a group of “specialists trained in law and ethics” to weigh the legal, societal, and ethical implications of AI standardization.

NIST Proposes That Government Officials Partner to Coordinate and Develop AI Standards

NIST officials emphasized that timing matters in developing AI standards. Standards set too early could stifle innovation; standards set too late could make it difficult to get the industry on board with the guidelines.

“It is important for those participating in AI standards development to be aware of, and to act consistently with, U.S. government policies and principles, including those that address societal and ethical issues, governance and privacy,” NIST leaders outlined in the plan. “While there is broad agreement that these issues must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions.”

Leighanna Shirey

Leighanna graduated with a degree in English from Pensacola Christian College. After teaching high school English for five years, she decided to pursue her dream of writing and editing. When not working, she enjoys traveling with her husband, spending time with her dogs, and drinking way too much coffee.

1 Comment

Larry N Stout, August 19, 2019

Every technological advancement is immediately co-opted by the military and by criminal masterminds.
