I’m all about the federal government in this week’s newsletter. To start with, the National Institute of Standards and Technology (NIST) has released a risk management framework for artificial intelligence (AI). The goal of the framework is to help companies and users assess and manage risks associated with AI.
It also offers a good lens through which to view how society regards AI and how industry and governments are experiencing AI. The good news is that the adoption of AI will mirror other modern technologies such as IoT. The bad news is that managing risk in AI isn’t really a technological problem, but a cultural one.
The framework spends a lot of time on why it is so complicated to measure the risks associated with AI. One reason is that the potential harms of an algorithm can be hard to see until it is widely deployed. Another is that even once harms occur, they can be hard to measure.
Is it harmful if an AI determines that homes located in a certain area are more likely to house people who smoke? Maybe not until a police department uses a new algorithm to correlate smokers with those who are more likely to commit crimes and subsequently beefs up enforcement. Then, through poorly combined algorithms, the original AI that calculated whether or not a person smoked could lead to increased police enforcement in a specific neighborhood.
It's also hard to measure the risks AI presents because we can't control how data gets used, or fully understand the agendas of those producing and deploying the AI that gathers it. This risk will be familiar to anyone involved in the API economy: when you build software, hardware, or AI models using software, data, or services from third parties, you lose control over some of the inputs into your final product or use case.
After the discussion of risks, NIST digs into what makes a trusted AI. Again, these are pretty common ideas already in play across the internet of things and a myriad of other businesses. Any deployed AI must be reliable, secure, robust, privacy-protecting, accountable, resilient, etc.
Of course, those factors will change to a certain degree depending on whether someone is building an AI model or deploying it. Accountability when building the model might require an audit of the data set used to train it, but once the model is deployed, accountability might shift to ensuring that outside observers can verify that it behaves in a consistent and unbiased manner.
But the crux of this framework document is that building trusted AI and using it in a trustworthy manner needs to start at the top. The determining factor in how AI gets used will be an organization's culture. How will an organization track bias, if it even considers bias a priority? Will an organization deploying AI prioritize accountability and safety over trade secrets?
We often look at technology as a neutral element that simply provides facts or data-derived conclusions. But with AI, we have to recognize that any insights or conclusions derived by an algorithm are completely reliant on the desires and biases of the humans who built and implemented it.
That’s what these guidelines ultimately offer: a chance for the individuals responsible for building and managing AI to recognize that this technology isn’t neutral, nor is it a way to abdicate making hard calls when it comes to figuring out an organization’s priorities. In many ways, an AI will only expose those priorities more clearly than an organization may want.
NIST can’t force anyone to follow the framework. But the government often writes NIST guideline documents into laws, and if nothing else, the document provides a good amalgamation of the current thinking around the various challenges associated with building and deploying AI models.