
NXP opens a window into how ML algorithms work

As we hand more decision-making over to machine learning (ML) algorithms, companies are investing in technologies that let those algorithms “show their work” so we can avoid errors. I wrote about this need back in January for IEEE Spectrum, and this month I had a conversation with Gowri Chindalore, head of technology and business strategy for NXP’s microcontrollers business, about how the chip giant is trying to help data scientists build what it calls explainable AI.

At its core, NXP is trying to help machine learning models alert data scientists when they are working from a compromised image (maybe it’s blurry or heavily cropped) or from an image unlike any they have encountered before. To understand why this helps, we should talk about how machine learning models are built and what actually happens when an AI identifies an object or recognizes an anomaly.

NXP wants to improve ML models so they can “show their work.”

When building a model, a data scientist inputs a lot of data into a computer. In image recognition, for example, to teach a computer to “see” COVID-19 in lungs, people first annotate the data (tell the computer what it’s looking at) and then the data scientist feeds those images into the computer. From there, the computer starts spitting out suggestions, and the data scientist tweaks the way the computer weighs different values to get it closer to an accurate (and potentially COVID-19 positive) diagnosis.
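To make that annotate-feed-tweak loop concrete, here’s a minimal sketch of what it looks like in code. This is an illustration, not NXP’s pipeline: the tiny model, the random tensors standing in for annotated lung X-rays, and the labels are all placeholders.

```python
# A minimal sketch of the annotate-feed-tweak training loop, using PyTorch.
# Random tensors stand in for annotated lung X-rays; nothing here is NXP's code.
import torch
import torch.nn as nn

# Placeholder "dataset": 64 grayscale 64x64 images with human-provided labels
# (1 = COVID-19 findings, 0 = clear). The labels are the annotation step.
images = torch.randn(64, 1, 64, 64)
labels = torch.randint(0, 2, (64,))

# A deliberately tiny classifier; real diagnostic models are far deeper.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)          # the computer "spits out suggestions"
    loss = loss_fn(logits, labels)  # how far those suggestions are from the annotations
    loss.backward()
    optimizer.step()                # the weights get tweaked toward better answers
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```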

But the data scientist doesn’t really “know” how the computer draws its conclusions about what it sees. Indeed, this black-box situation can result in hilarity when machine learning algorithms come to radically different conclusions than people do. It’s also worth noting that with machine learning, a computer doesn’t come to a definitive conclusion, but to a probability. So in our example, the computer will look at an image of X-rayed lungs, run that image through its algorithm, and declare that it is 90% sure the lungs in the image match those of people diagnosed with COVID-19.
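That “90% sure” figure comes out of the model’s final layer: the raw scores the network produces are squashed into probabilities, typically with a softmax function, and the model reports the class with the highest one. A minimal sketch of that step (the raw scores below are placeholder numbers chosen to reproduce the 90% figure, not output from a real model):

```python
# How a classifier turns raw scores into a probability, not a verdict.
import torch

logits = torch.tensor([0.4, 2.6])     # placeholder raw scores for [clear, covid]
probs = torch.softmax(logits, dim=0)  # squash scores into probabilities that sum to 1
print(probs)                          # tensor([0.0998, 0.9002]) -> ~90% sure it's "covid"
```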

The higher that percentage, the more confident the computer is. But sometimes even a really confident computer gets it wrong. And when it does, the folks at NXP think it’s being led astray in one of two ways. The first is when it’s given bad input; for example, the lung X-ray may be blurry. The second is when it encounters something new. Maybe the lung X-ray is from someone who had a rare form of cancer that disfigured their lungs pre-COVID, and it’s unlikely the model was trained on a similar lung image.

Both kinds of errors can still leave the computer highly certain, which leads to misclassification. To stop the misclassification, NXP researchers have come up with a way to teach a machine learning model to “tell” data scientists that the input data is wonky, or that it has just encountered something new and is using an “educated guess” to reach its ultimate conclusion.
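NXP didn’t detail its exact mechanism in our conversation, but to make the idea concrete, here is a minimal sketch of two common ways a model can raise its hand: a blur check on the input (the variance of an image’s Laplacian drops when fine detail is smoothed away) and a novelty check on the output (if no class earns a confident probability, the model is effectively guessing). The function names and thresholds are my placeholders, not NXP’s method.

```python
# Two illustrative checks a model could run to flag trouble; NXP's actual
# approach isn't public, so treat these functions and thresholds as placeholders.
import numpy as np
from scipy.ndimage import laplace

def looks_blurry(image: np.ndarray, threshold: float = 100.0) -> bool:
    """Bad-input check: a blurry image has little high-frequency detail,
    so the variance of its Laplacian is low."""
    return laplace(image.astype(float)).var() < threshold

def looks_unfamiliar(probs: np.ndarray, threshold: float = 0.6) -> bool:
    """Novelty check: if no class gets a confident probability, the input
    may be unlike anything in the training data and the answer is a guess."""
    return probs.max() < threshold

image = np.random.rand(64, 64) * 255  # placeholder X-ray pixel values
probs = np.array([0.45, 0.55])        # placeholder model output

if looks_blurry(image):
    print("warning: input image quality is poor")
if looks_unfamiliar(probs):
    print("warning: model is guessing; input may be unlike its training data")
```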

The idea is that NXP’s efforts will help machines do a better job of taking in data in the real world, where conditions are messy, and still return accurate answers. So when it comes to a COVID-19 diagnosis from a lung X-ray, an algorithm could let data scientists know that the X-ray from the 30-year-old machine deployed in a rural hospital is very different from the data it was trained on at a well-funded university hospital. Or it can let data scientists know that it has never encountered a piece of data before and essentially guessed what it was so it could continue making a prediction.

NXP’s learnings should help in the effort to make machines both smarter and a little more explicable to those relying on their results. That’s going to be important for COVID-19, but also for alerting data scientists when their models exhibit racial biases.

And for that work, there is no time like the present.

Stacey Higginbotham
