
Unprecedented times call for thoughtful conversations

Over the last month and a half, as I've read about cities enforcing curfews and police monitoring Black Lives Matter protestors, I've realized that the surveillance network we worry about building with the internet of things is already in place. And as we bring people back to work amid the threat of COVID-19, that surveillance network is only going to expand from the state to our employers.

This scares me. It's not what I hoped for, and it's not what I envisioned when I connected my light bulbs to Twitter to track sentiment, hoping it would make stories easier to spot. Now police are using a similar idea to find protestors. And while a connected thermometer can help us see the spread of a disease a few days earlier, I worry that in the hands of an unscrupulous employer, private health data could become a liability.

The future we've built, and are continuing to invest in, is not one many people want to live in. And we need to talk about it.

Citi Bikes were remotely rendered inoperable during New York City's curfew last month.

Let's focus on the existing ways our connected devices can be used to control or track citizens, ways that have been exposed by the Black Lives Matter protests. New York City Mayor Bill de Blasio shut down Citi Bike last month during curfews put in place in response to the Black Lives Matter protests, leaving some people unable to get to work or other destinations (including protests).

Shutting down the bike network is similar to shutting down public transportation, which is well within a mayor's rights, but it does raise two questions. The first is about betraying the promise of a future of shared resources enabled by connectivity on every device. If a shared resource can be remotely deactivated at the behest of a government official, then prudent citizens will likely decide to use their own resources rather than share. So the question is, should we rely on a network of shared connected devices, or fall back to individual ownership?

The second question is about how far a government can reach to control a connected device. Citi Bike clearly uses public infrastructure, available on the streets of NYC only because the city government has allowed the company to offer bikes there. But what about private vehicles that are connected? Could a government ask companies to help enforce a curfew by shutting off power to an individual's connected car?

And what if a police officer wanted access to a building in order to follow protesters? Could the officer get a warrant to enter the building via its electronic locks? What if police wanted to search an apartment building that had connected locks installed? Would the building's management open them up? Would the lock company? What might be different if the police needed access to apartments in public (government-owned) housing?

If you're an access company, are you prepared for these requests? Do you know where you'd draw the line when it comes to cooperating with law enforcement? What about smart camera companies? Could they fight a subpoena for access to their cameras in a geographic area, or in a backyard where a crime was committed? It's not just connected lock and camera makers that should think about this. Already, Google is trying to walk the line between an individual's rights to privacy and protection from unreasonable search and seizure, and requests from police departments for data from cell phones used near crime scenes.

Are the companies building connected products ready to ask themselves these questions? Are they ready if government officials, whether police or immigration officers, come to them with these requests?

In the meantime, because of COVID-19, society is encouraging employers to build out intra-company surveillance networks to monitor their offices for occupancy and ensure that workers socially distance. The pandemic is also driving health professionals to deploy more technology for remote monitoring, without necessarily understanding how some of these tools might collect and share anonymized patient data (or report that data to insurers).

I'm also concerned about how employers might use occupancy sensors to track employees in private places, such as bathrooms. If someone spends a lot of time in the restroom because they have a health condition, but their work is fine, a manager may never notice. But if, in an effort to help with occupancy sensing or contact tracing, companies start using that data to put together reports, such personal habits might become clear. And invite action.

A more insidious threat could also emerge. Wearables to ensure social distancing are already being marketed in manufacturing and factory environments. And some employers are turning to consumer wearables that people already own to track fevers or sleep as indicators of potential infection. But if those employers start looking closely at that data, they might see other habits that should remain private.

Additionally, it's worth remembering that actual people have access to that data. And in some cases, those people aren't from the HR department or specially trained for the job. That symptom survey or temperature-tracking wand might be wielded by a random 25-year-old receptionist or office manager with time on their hands.

It's one thing to share health data with a professional who has some training and discretion, but another to have it go to a random individual who may or may not keep it to themselves. A similar worry exists around video surveillance in the office. I recently read about using AI to detect when people behave badly in elevators, and I wondered what it might mean to have a camera and an algorithm constantly monitoring and flagging bizarre elevator behavior. I also think giving an underpaid security guard access to that footage is a problem.

We’re deploying a lot of new technology without spending a lot of time thinking about who can use it and how they can use it. We need to start thinking about both of those issues. We also need to start thinking about legal and ethical frameworks that can protect individuals when their every action at home and at work is potentially captured by a computer and rendered both intelligible and searchable.

I don't have the answers, beyond a growing awareness that we need to seek consent from those affected before deploying this technology, and that companies should build with anonymity in mind rather than simply anonymizing data after the fact.
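
As one concrete illustration of that last point, here's a minimal sketch, assuming a purely hypothetical occupancy system, of what building with anonymity in mind might look like: the device aggregates presence events into per-room head counts at the edge and discards the raw readings, so an individual-level record never exists to be anonymized, leaked, or subpoenaed later. The names here (Reading, report_crowding, the capacity thresholds) are my own illustrations, not any real product's API.

```python
# Hypothetical sketch of privacy-by-design occupancy sensing.
# Only aggregate counts ever leave the device; no identities are captured.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Reading:
    room: str
    person_present: bool  # from a presence sensor; no identity is recorded


def aggregate(readings: list[Reading]) -> dict[str, int]:
    """Collapse raw presence events into per-room head counts.

    Because only counts are computed, there is nothing to de-anonymize
    later: the raw events can be discarded on-device.
    """
    counts: dict[str, int] = defaultdict(int)
    for r in readings:
        if r.person_present:
            counts[r.room] += 1
    return dict(counts)


def report_crowding(readings: list[Reading],
                    capacity: dict[str, int]) -> dict[str, bool]:
    """Report only whether each room exceeds its distancing capacity."""
    counts = aggregate(readings)
    return {room: counts.get(room, 0) > cap for room, cap in capacity.items()}


if __name__ == "__main__":
    demo = [Reading("lobby", True), Reading("lobby", True), Reading("lab", False)]
    print(report_crowding(demo, {"lobby": 1, "lab": 4}))
    # {'lobby': True, 'lab': False} -- crowding is flagged, no one is tracked
```

The design choice that matters is where the aggregation happens: if only counts ever leave the device, there is nothing for an overzealous manager, or that bored receptionist, to mine later. What else should we do?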

Stacey Higginbotham
