What new NIST guidelines on trusted AI tell us about AI

February 6, 2023 by Stacey Higginbotham

I’m all about the federal government in this week’s newsletter. To start with, the National Institute of Standards and Technology (NIST) has released a risk management framework for artificial intelligence (AI). The goal of the framework is to help companies and users assess and manage risks associated with AI.

It also offers a good lens through which to view how society regards AI and how industry and governments are experiencing it. The good news is that the adoption of AI will mirror that of other modern technologies, such as the IoT. The bad news is that managing risk in AI isn’t really a technological problem, but a cultural one.

We are only at the beginning of understanding how we should use AI, and how it may be used for good and for ill.

The framework spends a lot of time on why it’s so complicated to measure the risks associated with AI. One reason is that the potential harms of an algorithm can be hard to see until it is widely deployed. Another is that the harms an algorithm does cause can be hard to measure.

Is it harmful if an AI determines that homes located in a certain area are more likely to house people who smoke? Maybe not until a police department uses a new algorithm to correlate smokers with those who are more likely to commit crimes and subsequently beefs up enforcement. Then, through poorly combined algorithms, the original AI that calculated whether or not a person smoked could lead to increased police enforcement in a specific neighborhood.

The inability to control how data gets used, or to understand the agenda of those producing and deploying the AI that gathers it, also makes the risks of AI hard to measure. It’s a risk familiar to anyone involved in the API economy: when you build software, hardware, or AI models using third-party software, data, or services, you lose control over some of the inputs into your final product or use case.

After the discussion of risks, NIST digs into what makes a trusted AI. Again, these are pretty common ideas already in play across the internet of things and a myriad of other businesses. Any deployed AI must be reliable, secure, robust, privacy-protecting, accountable, resilient, etc.

Of course, those factors will change to a certain degree depending on whether someone is building an AI model or deploying it. Accountability when building the model might require an audit of the data set used to train it, but once the model is deployed, accountability might shift to ensuring that outside observers can verify that it behaves in a consistent and unbiased manner.
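To make that concrete, here is a minimal sketch of the kind of post-deployment check an outside observer might run against a log of a model’s decisions. It is written in Python, and everything in it is hypothetical: the demographic_parity_gap helper, the neighborhood groupings, and any acceptance threshold are illustrations of one common fairness measure, not anything the NIST framework itself prescribes.

    # A hypothetical post-deployment bias check: compare the rate of
    # favorable decisions a model produces across groups. The helper and
    # data below are illustrative, not part of the NIST framework.
    from collections import defaultdict

    def demographic_parity_gap(decisions):
        """Return the largest gap in favorable-outcome rates across groups.

        decisions is a list of (group, outcome) pairs, where outcome is
        1 for a favorable decision and 0 otherwise.
        """
        totals = defaultdict(int)
        favorable = defaultdict(int)
        for group, outcome in decisions:
            totals[group] += 1
            favorable[group] += outcome
        rates = {g: favorable[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical audit log of model decisions, grouped by neighborhood.
    log = [("north", 1), ("north", 1), ("north", 0),
           ("south", 1), ("south", 0), ("south", 0)]

    gap, rates = demographic_parity_gap(log)
    print(f"favorable rates by group: {rates}")
    print(f"demographic parity gap: {gap:.2f}")  # flag if above a policy threshold

A persistent gap like the one above doesn’t prove the model is biased, but it gives an auditor a consistent, repeatable number to track over time, which is the kind of accountability the framework has in mind.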

But the crux of this framework document is that building trusted AI and using it in a trustworthy manner needs to start at the top. The determining factor in how AI gets used will be an organization’s culture. How will an organization track bias, if it even considers bias a priority? Will an organization deploying AI prioritize accountability and safety over trade secrets?

We often look at technology as a neutral element that simply provides facts or data-derived conclusions. But with AI, we have to recognize that any insights or conclusions derived by an algorithm are completely reliant on the desires and biases of the humans who built and implemented it.

That’s what these guidelines ultimately offer: a chance for the individuals responsible for building and managing AI to recognize that this technology isn’t neutral, nor is it a way to abdicate making hard calls when it comes to figuring out an organization’s priorities. If anything, an AI may expose those priorities more clearly than an organization would like.

NIST can’t force anyone to follow the framework. But the government often writes NIST guidance into law, and if nothing else, the document provides a good amalgamation of the current thinking around the various challenges associated with building and deploying AI models.
