This story was originally published in my weekly newsletter on Friday July 21, 2023. You can sign up for my newsletter here.
This week, the Biden administration shared its plans for a label for consumer connected devices that would indicate a device had passed certain cybersecurity criteria. I subsequently explained why we need such a label, how the program would work, and what sort of security criteria might get baked into it.
But I’m also frustrated, because the administration failed to take what was a golden opportunity to push for some baseline privacy regulations associated with connected devices. Such regulations are especially important now because the country can’t seem to pass a federal privacy law, and given all of the connected devices currently snagging data about us, we desperately need one.
This label is a good thing. There are several ways it can be watered down to do less on the cybersecurity front, but ultimately I think it will become a necessary baseline that makes consumer connected devices more secure. And that is good for everyone.
But I wanted this label to do a bit more on the privacy front, by offering some indications about the types of sensors inside the device as well as how the manufacturer shares that data, both inside their organization and with third parties. I’m not sure we’ll get that, even though there’s clear demand from consumers for more transparency around data gathering by connected devices.
I’m not alone in my focus on privacy. The Connectivity Standards Alliance (CSA) created a Data Privacy Working Group earlier this year, and at the event unveiling the label, researchers from Carnegie Mellon University’s (CMU’s) CyLab Security and Privacy Institute presented research from earlier this year showing that consumers want more information about the types of sensors on devices and who has access to the data.
Privacy is a cybersecurity concern. When a camera-containing device gets hacked, the government may be most worried about the device being used as part of a botnet to launch a distributed denial-of-service attack on critical infrastructure, while the consumer worries about images of their home or child being shared on the internet.
Both are legitimate security concerns. And while a consumer might recognize that their video doorbell has a camera inside, the camera on their robotic vacuum cleaner might be a surprise. Manufacturers often include sensors inside a product just in case they want to one day turn them on. As a result, those sensors aren’t disclosed unless the manufacturer later decides to activate them, often after the consumer already has the device in their home.
That’s why I think some transparency around the sensors inside a device should be part of the cybersecurity label. CMU Associate Professor Yuvraj Agarwal told me he believes the sensor information and some basic data about how the data is shared should be on a label, adding that a survey of 500 consumers completed this year indicated they want the same thing.
Plans for the U.S. Cyber Trust Mark label also include a QR code through which a consumer can get more information. Both Agarwal and the CyLab group want to see much more granular information about what data gets shared behind that QR code. However, I don’t see device manufacturers agreeing to that for two reasons.
The first reason is that they probably don’t want to disclose the details of their data collection and sales. The second is that keeping such a database up to date would be a big job.
Steve Hanna, a distinguished engineer with Infineon and chair of the CSA’s Matter security working group, is also on the CSA’s privacy working group. He said the privacy working group is working on a plan tied to disclosure or transparency around the sensors located on a device. He also has an idea for how to make keeping any database about security or privacy capabilities up to date and accessible.
To start, the database should be programmatic. Companies should be able to log into a single database and update their device specs as they turn on features or change their terms of service. And this programmatic database should be open to others via Application Programming Interfaces (APIs), which would allow other companies to build tools for consumers or even enterprises that check the database and ensure the devices on a network meet whatever baseline criteria the user has set.
For example, if I only want devices in my home that don’t have microphones, when a company turns on a microphone inside my alarm keypad or hub, I would get an alert and take whatever action I’d want. Hanna isn’t the only one with this idea. Agarwal and I discussed something similar.
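The alerting idea above can be sketched in a few lines. This is purely illustrative: no such database or API exists yet, so the record format, field names, and the banned-sensor policy below are all assumptions, and a real tool would fetch these records from the shared database rather than hardcode them.

```python
# Hypothetical sketch of a consumer tool that watches a programmatic
# device-spec database for newly enabled sensors. The record format
# ("active_sensors" field, device IDs) is invented for illustration.

BANNED_SENSORS = {"microphone"}  # example household policy: no mics allowed


def newly_enabled_sensors(old_spec: dict, new_spec: dict) -> set:
    """Return sensors active in the new spec that weren't in the old one."""
    return set(new_spec["active_sensors"]) - set(old_spec["active_sensors"])


def policy_violations(spec: dict, banned: set) -> set:
    """Return any active sensors that the household policy forbids."""
    return set(spec["active_sensors"]) & banned


# Simulated database records for one device, before and after the
# manufacturer flips on a dormant microphone via a software update.
before = {"device_id": "alarm-keypad-01", "active_sensors": ["motion"]}
after = {"device_id": "alarm-keypad-01", "active_sensors": ["motion", "microphone"]}

changed = newly_enabled_sensors(before, after)
violations = policy_violations(after, BANNED_SENSORS)
if violations:
    print(f"ALERT: {after['device_id']} enabled banned sensors: {sorted(violations)}")
```

In a real deployment, the tool would poll the database's API (or subscribe to change notifications) for each device on the home network and run this comparison whenever a spec changes.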
However great I think a programmatic database of sensors and data sharing might be, it will not be part of this label. I don’t even think it would become part of any of the potential specifications from the CSA, simply because large industry players don’t want that level of transparency. And since many consumers wouldn’t avail themselves of such information — or have a deep understanding of how these products can erode their privacy to begin with — the demand simply isn’t there.
I could see demand coming from employers that don’t want their employees working in home offices, hotels, or coworking spaces with potential privacy risks, but that still feels far-fetched. So I think we should focus on what we can most likely get with the U.S. Cyber Trust Mark. And what I think we can get is a list of the sensors inside a device.
If we want to push the issue, I’d like the U.S. Cyber Trust Mark to ensure that the company is treating the data pulled from a device securely. That would mean there was a process around who could access the data, and only certain employees would have access to it. It would also mean that if data was sent out to third parties, the company would audit those companies to ensure the sensor data wasn’t leaking.
Outside my privacy concerns, I worry about the label’s effectiveness for two reasons. The first is that the Federal Communications Commission (FCC) will handle the program when it’s not clear that it has the authority to do so. Just a few weeks ago, the FCC announced a privacy task force that aims to stop wireless providers from selling consumer data. But even at the unveiling of the task force, lawyers were voicing doubts about the FCC’s authority to do even that.
There are similar concerns about the FCC’s authority to enforce any compliance with this voluntary labeling program — or even manage it. Historically, the Federal Trade Commission (FTC) has been able to build and manage labeling schemes (did you know the care labels inside your clothing are part of an FTC-managed labeling scheme?). Should the Trust Mark overreach on privacy, I don’t know if the FCC has the legal authority to ensure private companies comply.
My second concern is that while the NIST 8425 security framework for consumer devices is supposed to act as a guiding document for setting the security criteria, the actual criteria could get watered down through the FCC’s upcoming rule-making process. As I described earlier this week, the FCC will spend several months deciding what the actual security criteria will be for a company to get the label.
As part of that process, companies, consumers, and others will submit their thoughts on what should be included. The FCC will read those comments, draft a set of rules, and then vote on them. There’s plenty of time and opportunity to take NIST’s ideas and water them down.
So keep an eye on the process, send in your comments, and don’t expect too much from this label on the privacy front. I’ll be doing the same.