The COVID-19 pandemic may finally be waning as increasing numbers of people get vaccinated, but in the meantime it’s pushing a lot of surveillance technologies to the forefront — whether for contact tracing or for counting the number of people in a building. We’re also learning more about how algorithms might be shaping the course of our lives, such as by way of automated resume scanning services or software that determines who gets (or doesn’t get) bail.
But we don’t always have to sacrifice privacy in exchange for protection, or to gain transparency about why some AI acted the way it did. Thanks to a conversation with Thijs Turèl, who both heads up the Responsible Sensing Lab and manages the Responsible Urban Digitization program at the Amsterdam Institute for Advanced Metropolitan Solutions, I’ve encountered two concepts I’d like to see implemented in more places to help make our reliance on digital data more private and more transparent.
The first concept is what Turèl calls a “provokotype,” a prototype designed to show the user how an AI might behave and then explain why it made the decision(s) it did. The goal is twofold: transparency for the model, and education for the citizen encountering the model.
The first provokotype built for the lab is an electric car charger that tries to optimize car charging times based on the type of vehicle and the person driving it. For example, it might be obvious to people that an ambulance or police vehicle gets priority charging speeds, but less obvious that a car owned by someone who lives farther away might charge faster as well.
The goal with the electric car provokotype is to put one on city streets and let citizens see how the electric car-charging algorithm decides whose car gets charged first. While using it, they will not only see their rate of charge, but will learn how other vehicles or driving needs might factor into the time it takes for them to “fill up.”
The speed of charging might depend on the amount of renewable energy available, planned trips, the occupation of the car’s owner, and much more. By explaining how these things work, the Dutch hope to avoid people getting frustrated when, say, their neighbor’s car is fully charged and theirs isn’t.
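To make the tradeoffs concrete, here is a minimal sketch of how such a prioritization might be scored. The factor names, weights, and thresholds are invented for illustration; the lab's actual algorithm is not described in this detail.

```python
# Hypothetical charging-priority score. Higher scores charge faster.
# All factors and weights here are illustrative assumptions, not the
# Responsible Sensing Lab's real model.

def charging_priority(is_emergency_vehicle: bool,
                      km_to_next_trip: float,
                      hours_until_departure: float,
                      renewable_share: float) -> float:
    """Score a car at the charger; higher means faster charging."""
    if is_emergency_vehicle:
        return float("inf")  # ambulances and police always come first
    # Urgency: a long planned trip that starts soon raises the score.
    urgency = km_to_next_trip / max(hours_until_departure, 0.25)
    # When more renewable energy is available, scores rise across the
    # board, so more total power gets dispensed.
    return urgency * (0.5 + renewable_share)

# A commuter leaving soon for a long trip outranks a car parked all day:
far_and_soon = charging_priority(False, 120, 2, 0.6)   # 66.0
parked = charging_priority(False, 10, 12, 0.6)         # ~0.92
assert far_and_soon > parked
```

A provokotype would surface exactly these inputs to the driver at the charge point, so that a slow charge reads as a policy decision rather than a malfunction.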
The idea that explaining the tradeoffs to people might make a world governed by algorithms a little less scary is somewhat charming. Yet as we're discovering with the rollout of vaccines, what many people want isn't transparency but to have their needs met. We may pay lip service to equity, or to a set of ideals about who is most at risk, but when push comes to shove, the U.S. is embracing more of a free-for-all.
I expect the same will be true of electric car charging. However, trying to make policy decisions transparent is great, because that transparency helps us align our stated goals for society with actual results. It also ensures that when we’re talking about setting policy we understand how difficult it can be to make decisions and what tradeoffs the decisions we make will require.
The second concept the Responsible Sensing Lab is working to implement is reducing the surveillance associated with public cameras. As Turèl points out, many smart city implementations focus on cutting costs and optimizing for efficiency, but if cities focus only on those two areas, the people who live in them lose out. That was part of the rationale for creating the Responsible Sensing Lab earlier this year: to ensure that as technology gets deployed in Amsterdam, the city can practice values-based design instead of simply choosing the cheapest or easiest option.
To that end, the lab is looking at using millimeter-wave (mmWave) sensors to count people in public spaces instead of optical cameras. With mmWave sensors, people register as disruptions in an RF field; an algorithm then figures out which disruptions are people as opposed to dogs, delivery robots, or whatever else might be moving along a public street. This approach preserves citizens' privacy but still lets the city understand how many people are in a given place, which can help with public safety.
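The classification step might work something like the sketch below, which sorts detected disruptions by rough size and speed. The `Disruption` type and every threshold are assumptions made for illustration, not the lab's actual pipeline.

```python
# Illustrative sketch: decide which mmWave "disruptions" look like
# pedestrians. Thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class Disruption:
    height_m: float    # estimated height of the reflecting object
    speed_mps: float   # how fast it is moving

def count_people(disruptions: list[Disruption]) -> int:
    """Count disruptions that plausibly look like walking people."""
    def looks_human(d: Disruption) -> bool:
        # Roughly person-sized, moving at walking pace or standing still.
        return 1.0 <= d.height_m <= 2.2 and d.speed_mps <= 3.0
    return sum(1 for d in disruptions if looks_human(d))

street = [
    Disruption(1.7, 1.4),   # pedestrian
    Disruption(0.4, 2.0),   # dog
    Disruption(1.8, 1.1),   # pedestrian
    Disruption(0.9, 1.5),   # delivery robot
]
assert count_people(street) == 2
```

Crucially, only the count leaves the sensor; no image of any individual is ever captured, which is the privacy win.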
Another privacy-centric project is dedicated to camera shutters for CCTV. In the pilot project, researchers outfitted a single camera with a privacy shutter (like the one you might have on a webcam): when the camera records, the shutter opens; when it doesn't, the shutter covers the lens. The idea is that public cameras might not need to record constantly during daylight hours when people are around, and could instead open only at night. And because the shutter is visibly open whenever the camera is recording, people have a way to know that they are being watched.
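A time-based shutter policy like the one described could be as simple as the sketch below. The specific night window is an assumption for illustration; the pilot's actual schedule isn't specified.

```python
# Sketch of a shutter policy that records only during a night window.
# The window hours (10 pm to 6 am) are illustrative assumptions.
from datetime import time

NIGHT_START = time(22, 0)   # 10 pm
NIGHT_END = time(6, 0)      # 6 am

def shutter_open(now: time) -> bool:
    """True when the camera should record (shutter visibly open)."""
    # The recording window wraps around midnight.
    return now >= NIGHT_START or now < NIGHT_END

assert shutter_open(time(23, 30))       # late evening: recording
assert not shutter_open(time(14, 0))    # daytime: lens stays covered
```

Because the shutter is a physical, visible signal, the policy is auditable by anyone walking past, not just by whoever administers the camera.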
When it comes to smarter cities, we have choices. Projects like these are trying to help us make good choices by making clear the tradeoffs associated with pulling in more municipal data and automating city services. Others are challenging the very notion that a smart city is automatically a surveilled city. I’d love to see Amsterdam’s approach mirrored in other cities around the world.