This story was originally published on Friday, May 12, in my weekly newsletter. To receive the newsletter, sign up here.
Apple wants to build a health coach with the ability to track people’s vital signs as well as their emotions. Meanwhile, Amazon is testing a smarter version of its Astro robot, including giving it the ability to track and remember what’s happening in people’s homes so it can better monitor them.
On the web itself, people are becoming aware that their chats with ChatGPT and other generative AIs might accidentally leak private information or even competitive intelligence. Even Amazon’s employees were concerned about the privacy implications of the company’s plans to build a health coach. Based on all of this, I have to ask: Do we really want the smarter devices and services that companies are building?
I initially got excited about the potential for the internet of things to add sensing and connectivity to everyday objects and places. With the information those devices gather and access to cheap cloud computing for data analysis, I’d hoped we’d make the invisible visible. Specifically, I’d hoped that we would expose some of the invisible externalities associated with pollution or industrial processes and use that visibility to effect positive change.
Maybe it would be an NGO holding a factory accountable for air pollution, or a city enacting different zoning laws to prevent people from living on top of toxic areas. Maybe a factory would use the data it gathers to reduce the amount of harmful materials in its product, or to make a new product that lasts longer.
While I’m seeing a bit of the latter, especially when it comes to cutting carbon emissions or reducing waste, I don’t see much of anything related to using sensor data to hold private enterprise accountable. The IoT is making the invisible visible, but so far it’s doing so in a manner that benefits only the bottom line.
And when it comes to the smart home and consumers, I have to ask: Do we really want to make fully visible what we currently keep invisible to tech companies, data brokers, and the companies and governments that buy from them?
With the hype around generative AI such as ChatGPT, big tech companies are embedding their own large language models or other generative models into their products. This week at Google I/O, the search company showed off the use of new models in search, image generation, security, and medicine, and also as a way to help “jumpstart” creativity.
Amazon has talked about improving Alexa with a new large language model, and Business Insider is reporting that it has a project code-named Burnham that will give its Astro robot the ability to remember things and answer questions. But do I want a little robot armed with a camera patrolling my home looking for problems?
Amazon pitches it as a way to make sure the stove is turned off, or a means of spotting things like broken glass and letting you know something is wrong. But it will also be a way for families to surveil folks at home and, depending on the privacy features (it should provide local storage and encryption for cloud data), a way for companies to get far more information about life at home than we can begin to imagine. That information may feel benign, like the type of toilet paper you use or the number of cats in your home, but tech firms can monetize it in so many ways, and none of those ways have the consumer’s interests at heart.
For example, when users asked Alexa about yoga, Amazon offered them ads for its Halo wearable product, assuming that the person was interested in fitness. With something like a smarter Astro, the consumer doesn’t have to actively ask a device that they know is connected to the cloud about a yoga mat; the robot can rove the home, note a collection of weights or fitness gear, and then send Amazon data that gets the consumer’s demographic profile tagged for health and wellness.
A more disturbing example is how some of the user data from smart devices gets sold to data brokers. Today the big fuss is over location data from cellphones, especially in the wake of multiple states criminalizing women’s health care. In August, the Federal Trade Commission (FTC) sued a company called Kochava over the sale of location data as part of an overall effort to rein in data collection on consumers.
Last week, a federal judge in Idaho dismissed the FTC’s suit, arguing that the agency didn’t prove that the collection and sale of the data had caused “substantial injury” to consumers. The judge did agree that the collection and sale of such personal data had the potential to cause harm, but said the FTC needed to provide additional facts proving that harm.
This is both a blow and an opportunity. For many, proving substantial injury from a loss of privacy will be challenging. While we will undoubtedly see a few cases where the sale of consumer data leads to substantial harm, such as an arrest after an abortion or actual injury from a violent partner or stalker, the law doesn’t recognize the creeping harms of a life under constant and invisible surveillance. But it could.
At a time when we don’t fully understand the risks of adding our data to some of the newer models, and when companies want to add more cameras and more devices to our homes that aim to “understand” or coach us, making our full selves so visible is a risk. But that’s exactly what we’ll do with some of these newer gadgets and services.
As much as I like technology and the convenience of some of my smart devices, the combination of smarter services, more cameras, more sensors, and “smarter” AI concerns me. I think it should concern you, too.