At long last, NIST released its much-anticipated guidance on how the U.S. government should approach developing technical and ethical standards for artificial intelligence (AI). While the guidance doesn’t detail any specific regulations or policies, the plan outlines multiple initiatives that would guide the U.S. government in promoting the responsible use of AI. Most critically, it details an array of high-level principles that should inform any future standards for the technology. The quandary this creates, though, is how to craft standards that keep AI safe without stifling innovation. The following article provides more detail on the standards and shares some of NIST’s concerns about the approach.
Federal standards for artificial intelligence must be strict enough to prevent the technology from harming humans, yet flexible enough to encourage innovation and get the tech industry on board, according to the National Institute of Standards and Technology.
However, without better standards for measuring the performance and trustworthiness of AI tools, officials said, the government could have a tough time striking that balance.