Analysis

What new NIST guidelines on trusted AI tell us about AI

I’m all about the federal government in this week’s newsletter. To start with, the National Institute of Standards and Technology (NIST) has released a risk management framework for artificial intelligence (AI). The goal of the framework is to help companies and users assess and manage risks associated with AI.

It also offers a good lens through which to view how society regards AI, and how industry and governments are experiencing it. The good news is that the adoption of AI will mirror that of other modern technologies such as IoT. The bad news is that managing risk in AI isn’t really a technological problem, but a cultural one.

We are only at the beginning of understanding how we should use AI, and how it may work for good and bad.

The framework spends a lot of time on why measuring the risks associated with AI is so complicated. One reason is that it can be hard to see the potential harms of an algorithm until it is widely deployed. Another is that even when an algorithm does cause harm, that harm can be hard to quantify.

Is it harmful if an AI determines that homes located in a certain area are more likely to house people who smoke? Maybe not until a police department uses a new algorithm to correlate smokers with those who are more likely to commit crimes and subsequently beefs up enforcement. Then, through poorly combined algorithms, the original AI that calculated whether or not a person smoked could lead to increased police enforcement in a specific neighborhood.
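To make that compounding concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the neighborhood names, the rates, the scoring weights); the point is only that when one model’s output becomes another model’s input, the second model inherits and amplifies whatever assumptions the first one baked in.

```python
# Hypothetical sketch: chaining two models so that one model's output
# becomes the other's input, concentrating enforcement on one neighborhood.

def predict_smoker_rate(neighborhood: str) -> float:
    """Model A: estimates the share of smokers per neighborhood.
    In reality this would be a trained model; here it's a stub."""
    rates = {"northside": 0.35, "southside": 0.12}  # made-up numbers
    return rates[neighborhood]

def patrol_priority(smoker_rate: float) -> float:
    """Model B: naively treats the smoking estimate as a crime-risk proxy.
    The correlation is spurious, but the pipeline has no way to know that."""
    return 0.2 + 0.9 * smoker_rate  # arbitrary illustrative weights

for hood in ("northside", "southside"):
    score = patrol_priority(predict_smoker_rate(hood))
    print(f"{hood}: patrol-priority score = {score:.2f}")
```

Run it and northside scores markedly higher, not because of any crime data, but because two individually plausible models were glued together without anyone asking whether the link between them made sense.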

The inability to control how data gets used, or to understand the agenda of those producing and deploying the AI that gathers such data, also makes the risks of AI hard to measure. It’s a risk familiar to anyone involved in the API economy: when you build software, hardware, or AI models using software, data, or services from third parties, you lose control over some of the inputs to your final product or use case.

After the discussion of risks, NIST digs into what makes a trusted AI. Again, these are pretty common ideas already in play across the internet of things and a myriad of other businesses. Any deployed AI must be reliable, secure, robust, privacy-protecting, accountable, resilient, etc.

Of course, those factors will shift somewhat depending on whether someone is building an AI model or deploying it. Accountability when building the model might require an audit of the training data set, but once the model is deployed, accountability might shift to letting outside observers verify that it behaves in a consistent and unbiased manner.
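As an illustration (not anything NIST prescribes), here’s a rough Python sketch of those two accountability modes. The data, group labels, and the 10 percent tolerance are all invented; the monitoring check is a simple demographic-parity-style comparison of approval rates across groups.

```python
from collections import Counter

def audit_training_data(records: list[dict]) -> dict:
    """Pre-deployment accountability: check how well each group
    is represented in the training data."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def outcomes_look_consistent(decisions: list[dict], tolerance: float = 0.1) -> bool:
    """Post-deployment accountability: flag the model if approval rates
    diverge across groups by more than `tolerance`."""
    by_group: dict[str, list[int]] = {}
    for d in decisions:
        by_group.setdefault(d["group"], []).append(d["approved"])
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates) <= tolerance

training = [{"group": "a"}, {"group": "a"}, {"group": "b"}]
live = [
    {"group": "a", "approved": 1}, {"group": "a", "approved": 1},
    {"group": "b", "approved": 0}, {"group": "b", "approved": 1},
]
print(audit_training_data(training))    # group "a" is over-represented
print(outcomes_look_consistent(live))   # False: approval rates differ by 0.5
```

The same model can pass the first check and fail the second, which is exactly why accountability can’t be a one-time audit.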

But the crux of this framework document is that building trusted AI, and using it in a trustworthy manner, needs to start at the top. The determining factor in how AI gets used will be an organization’s culture. How will an organization track bias, if it even considers bias a priority? Will an organization deploying AI prioritize accountability and safety over trade secrets?

We often look at technology as a neutral element that simply provides facts or data-derived conclusions. But with AI, we have to recognize that any insights or conclusions derived by an algorithm are completely reliant on the desires and biases of the humans who built and implemented it.

That’s what these guidelines ultimately offer: a chance for the individuals responsible for building and managing AI to recognize that this technology isn’t neutral, nor is it a way to abdicate the hard calls involved in setting an organization’s priorities. In many ways, an AI may expose those priorities more clearly than an organization would like.

NIST can’t force anyone to follow the framework. But the government often writes NIST guideline documents into laws, and if nothing else, the document provides a good amalgamation of the current thinking around the various challenges associated with building and deploying AI models.

Stacey Higginbotham

