Let’s see what the Davos set has to say about the IoT

This story was originally published on Jan. 20, 2023 in my weekly IoT newsletter.

The glitterati of global government and business were hobnobbing in Davos as part of the World Economic Forum’s annual meeting this week. I wasn’t there (I couldn’t afford the $250,000 that attendees were reportedly charged for a ticket), but I did get a copy of the WEF’s State of the Connected World 2023 report. Let’s take a look, shall we?

Nothing in the 49-page report will surprise readers of this newsletter, but I want to highlight a big area where I wish we’d stop talking about the problem and actually do something about it. The report identifies the two biggest governance gaps in the IoT as privacy practices and the ethical use of the technology, with cybersecurity coming in third. I’m optimistic we’re well on our way to closing the governance gap when it comes to cybersecurity, but we’re completely failing users when it comes to privacy, and that failure will hobble the IoT.

[Chart from the WEF report: survey results showing that 82% of respondents lacked confidence in the protection of privacy and the responsible use of data generated from connected devices.]

Eighty-two percent of respondents indicated they lacked confidence in the protection of privacy and the responsible use of data generated from connected devices. That means more than four out of every five people aren’t ready to trust connected devices or the ethics of those producing or deploying them.

This is an astonishing fact, and one that will stymie voluntary adoption of connected devices and reduce the potential benefits of widespread use of such devices. I write constantly about the importance of trust in the internet of things, whether it’s between consumers and smart speaker vendors or suppliers and the manufacturers they work with.

By making it easy to “see” what’s going on in the home or inside a piece of equipment, the internet of things can shed light on habits or actions one may want to keep hidden. It might expose someone’s IBS or a company’s trade secrets. Layering algorithms on top of the internet of things’ sensing capabilities requires even more trust.

Consumers and businesses must be able to trust that the algorithm is measuring what it’s supposed to and drawing conclusions that benefit all parties to some extent. This isn’t simply about eliminating bias; it also involves making sure the decisions made by the algorithm take into account the needs of both buyers and sellers, or of varying business partners. Many people blithely assume an algorithm is neutral, but every decision involves tradeoffs, which means businesses will likely want to understand the impacts of any AI and negotiate how it works when entering into agreements tied to IoT data and AI.

For consumers, this will be more of a challenge. The WEF report suggests that when it comes to sensing, transparency will be key, and governments will have to regulate this in ways they haven’t yet. On the transparency side, the WEF calls out two elements: the first is telling users what data is gathered and who has access to it; the second is sharing what inferences can be made with that data. From the report:

“While companies tend to provide an “informed consent” standard to verify and ensure that users are fully aware of the rules and limits of a software or platform, the current model does not effectively educate users of the implication of their choices.”

I think helping consumers understand the implications of their choices is a laudable goal, but it’s also impossible. For example, heart rate data from my wearable might be used to monitor my health, but it could also be used to track whether or not I’m paying attention while I drive. Every day I get emails from companies about new ways to parse sensor data to generate a new biomarker or efficiency metric. (I got one the other day that tried to explain how volume in an office setting correlated with productivity.)

In a more serious example, that same heart rate data might be used in court if I were charged with a crime. How does a company or government explain all these things to a consumer? How does a consumer even begin to understand the potential implications? I’d like to see the WEF focus on how to build legislation that protects consumers and businesses from harmful outcomes of using this data.

We need impartial studies that measure whether the algorithms and the sensor data they rely on actually indicate what the AI claims to measure, and then we need conversations about individual privacy weighed against the public good of using that AI or data.
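To make that first step concrete, here’s a minimal, hypothetical sketch (mine, not the WEF’s) of the kind of validation such a study might begin with: checking whether a sensor-derived score actually tracks an independent ground-truth measure before anyone bases decisions on it. The heart-rate-to-attention framing and all of the data below are made up for illustration.

```python
# Hypothetical validation sketch: does a wearable's heart-rate-derived
# "attention score" actually track attention measured independently
# (say, via an eye-tracking study)? All data here is synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=42)

n_sessions = 200
# Independent ground-truth attention measurements for each session.
ground_truth = rng.uniform(0.0, 1.0, n_sessions)
# Simulate a sensor score that is only weakly related to the truth.
sensor_score = 0.3 * ground_truth + rng.normal(0.0, 0.25, n_sessions)

r, p = pearsonr(sensor_score, ground_truth)
print(f"correlation r = {r:.2f}, p = {p:.3g}")

# If the correlation is weak, the inference shouldn't be driving
# consequential decisions, whatever the vendor claims.
if abs(r) < 0.5:
    print("Signal too weak to support the claimed inference.")
```

A study like this only checks one link in the chain, of course; the harder conversations about privacy tradeoffs come after the signal is shown to be real.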

Otherwise, we risk people avoiding connected devices because the perceived harms for the individual far outweigh any benefits for us all. And that would be a shame, because having a better understanding of our world and how we interact with it could help us solve some real problems.

Stacey Higginbotham
