
The internet of things is becoming a surveillance tool

What do the owners of Cadillac Escalades listen to? GM can tell you. Image courtesy of Cadillac.

Whenever I’m away from home, I like to check in on my family in the morning by looking at the doorbell camera, where I can watch them leave for work and school. In the evening, I can generally tell if my daughter is upstairs in her room based on the lighting scenes there; I can also tell when my husband is working out. Sometimes, if I try to call them and they aren’t home, I check in on our car to see where it (and they) might be.

While my family is aware of all of the devices we have, sometimes my use of those devices does creep them out a bit. But they know me, love me, and understand that I do this in order to connect with them when I am on the road. And if they asked me to stop checking in, I would.

I would adhere to my family’s desires because I love and respect them. I’m also not looking in on them for anything other than a bit of comfort. But these devices don’t just offer me a window into what my family is doing or where they are; they offer Nest, Google, Chamberlain, Tesla, and a variety of third parties that might also have access to my family’s data a glimpse of them as well.

Unlike me, who will stop looking if asked, those companies won’t. They don’t even offer a way for us to ask them to stop looking in on our lives. And the data they capture gets sucked into their servers, leaving us with no sense of how well it’s protected, how it’s used, or even how long it will remain accessible to them or to any third parties.

In most cases, the only way to avoid these draconian data capture policies is to stop using the product, though I wonder if in a few years non-connected versions of products will even exist. In some cases, such as when a patient was asked to sign away her rights to some of her private information just moments before going into a long-awaited doctor’s appointment, the choice of whether or not to avoid data collection isn’t really a choice at all.

This isn’t a new concern, but it is one that is gaining in urgency. With the EU’s General Data Protection Regulation (GDPR) taking effect in May, companies are already changing how they talk to consumers about data use and giving consumers more of a say in the process.

Among the positives of the regulation are the right to write to a company’s data protection officer to see what data that business has on you, the right to reclaim your data from its servers, and even some language that forces data collectors to say why they want the data they are gathering. Is it perfect? No. There’s plenty of evidence that the rules benefit larger companies over smaller ones, and they may even help entrench the tech giants by handing them data monopolies.

But we as a society have to enact regulations, and we have to do it now. The amount of data companies can gather when they put computers in things and connect those things to the internet is phenomenal. By storing data from different sources in the cloud and perfecting algorithms to make sense of it, they gain an unprecedented ability to track individuals, market to them, and even assess their health and well-being.

That is the core of what the internet of things is: the ability to cheaply gather ever more granular data, analyze it, and draw conclusions. When applied to industrial processes, it can save on energy use. When applied to medicine, it can help allocate resources such as nurses or equipment. When applied to cities, it can reduce traffic congestion and pollution.

But when applied to marketing, to price-setting for insurance or loans, or to determining who can have access to jobs or services, it becomes nefarious. It’s a problem in marketing because a pitch can be customized to the point where it’s not really a pitch anymore, but a psychological manipulation that people may not even realize is influencing them to buy the product. In health care, such data may be used to decide which patients deserve care and which don’t. One reason I’m worried is that all of these algorithms are built with human biases that influence how machines act. The other reason is that the data feeding these decisions will come from sources consumers won’t necessarily even know they gave their data to.

Plus, in some cases, the companies gathering that information may not have a plan for it. Or they have a plan but don’t communicate to users exactly what that plan is. There are a lot of companies out there collecting data just for the sake of collecting data. Recently, a GM executive presented findings on GM drivers’ radio listening habits to advertisers. GM had sampled which radio stations its car owners were listening to, and where they were listening to them, without telling the car owners. The data was gathered and sent to GM over the cars’ Wi-Fi connections.
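
To get a sense of how granular this kind of collection is, consider what a single minute-by-minute sample might look like. GM hasn’t published its data format, so this is a purely hypothetical Python sketch; every field name here is an assumption, not GM’s actual schema:

```python
import json
from datetime import datetime, timezone

def build_listening_sample(vin: str, station: str, lat: float, lon: float) -> str:
    """Build one hypothetical telemetry record: which station a car is
    tuned to, and where. These field names are illustrative assumptions,
    not GM's actual schema."""
    record = {
        "vin": vin,  # identifies the vehicle, and in practice its owner
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "radio_station": station,  # e.g. an FM frequency or satellite channel
        "location": {"lat": lat, "lon": lon},
    }
    return json.dumps(record)

# One sample; taken every minute, a car would upload roughly
# 1,440 of these records per day over its Wi-Fi connection.
print(build_listening_sample("1GYS4HKJ5MR000000", "101.1 FM", 42.33, -83.05))
```

Each record looks innocuous on its own. The point is scale: one sample per minute becomes a continuous log of where a person drives and what they listen to along the way.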

I have no doubt that when they bought their GM cars, those customers clicked “yes” on a long agreement saying they would share that data with GM and maybe even with some third parties, too. But when you are about to purchase a car, or have already purchased one and are asked to acknowledge the data-usage terms before connecting it to Wi-Fi, it’s too late. Consumers who read the policy and decide they don’t like it can’t easily return the car for the full purchase price. And few would actually change their vehicle choice when faced with such an opaque agreement.

GM clearly knows it has the advantage here. When asked why it sampled its customers’ listening habits, GM’s Saejin Park, director of global digital transformation, said, “We sampled (the behavior) every minute just because we could.”

Consumers, on the other hand, don’t understand what they are giving up. They don’t understand how the aggregation and analysis of such data can manipulate them. And sometimes they sign away their rights to their data under duress, such as after the purchase of a vehicle or five minutes before a doctor’s appointment that they really want to keep.

I bring this up because the U.S. is holding hearings on data usage and privacy in preparation for new laws. In an ideal world, those laws will address privacy at a macro level, giving consumers the power to take back their data and control how it is used. I’d also like to see guidance on how companies communicate their use of data, to make sure they offer clear, transparent terms before someone has purchased a product, so that consumers aren’t left to click through a 40-page agreement on the screen of an oven or a car’s dashboard computer.

We also need a process in place for analyzing and auditing the results of companies’ algorithms, many of which can result in redlining, substandard care, or a lack of access to essential services at a reasonable price. Companies, however, often argue that audit methods requiring an understanding of what data is used, and how, infringe on their intellectual property. Their algorithms, they say, are proprietary.

But by auditing the results, we can challenge companies whose algorithms are biased without forcing any of them to divulge their secrets. Will it be a messy process? Of course it will. But we’ve previously established agencies capable of handling such highly technical review processes in a democratic manner; the Federal Communications Commission is just one example. And while, yes, its current leadership is giving the agency a black eye, the FCC also has a highly respected engineering and technical staff that does make unbiased policy decisions.
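
As an illustration of what an outcome audit could look like, here is a minimal sketch. It assumes only that an auditor can see each applicant’s demographic group and the algorithm’s yes/no decision (never the model itself), and it computes per-group approval rates plus the “four-fifths” disparate-impact ratio borrowed from employment law. The data and threshold are illustrative assumptions:

```python
from collections import defaultdict

def audit_outcomes(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute per-group approval rates from (group, approved) pairs.
    Requires only the algorithm's outputs, not its internals."""
    approved: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest approval rate; values below
    0.8 are a common red flag (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions as an auditor would observe them.
observed = [("A", True)] * 80 + [("A", False)] * 20 + \
           [("B", True)] * 50 + [("B", False)] * 50
rates = audit_outcomes(observed)
print(rates, disparate_impact(rates))  # {'A': 0.8, 'B': 0.5} 0.625 -> flagged
```

A regulator running this kind of check doesn’t need the company’s training data or model weights, which is exactly why outcome audits sidestep the trade-secret objection.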

The Federal Trade Commission would likely be the right agency to handle such data audits and the related review of privacy policies. It, too, has technical experts on staff. Its role should be to make sure consumers can take advantage of internet-connected devices without signing away their privacy and, in the process, handing companies the tools to manipulate or discriminate against them as they try to live their lives.

This isn’t too much to ask, and if we don’t ask it, the internet of things will become a surveillance tool that benefits overzealous governments and companies’ bottom lines. And consumers will be the ones paying for it.

Stacey Higginbotham
