When I was at CES last week, I noticed several companies showing off video cameras and demonstrating their AI capabilities by superimposing a stick figure on top of the images of people recorded as they passed the booths. As each person walked by, a stick figure superimposed over their image illustrated how well the computer’s AI was capturing their actions.
I saw the stick figures on a fall-detection-and-person-tracking lamp from a company called Nobi and in demos from at least two companies trying to sell cameras to retailers so they could track where in the store customers paused. And as I saw these stick figures, I wondered if they were the key to bringing more privacy to a world determined to put cameras everywhere.
While many of us are familiar with the bounding box in AI demonstrations that gets placed around cars or people to indicate what the computer is tracking and trying to identify, few if any of us know what the computer “sees” when AI is used to track activities using a camera.
The image above, however, shows what cameras can see and what they can strip away while still delivering relevant information. Sony Semiconductor provided the image in conjunction with its launch this week of a new AI imaging platform with Microsoft that will let companies deploy, train, and manage AI models on cameras equipped with Sony’s AITRIOS sensing platform.
I’m writing about it because, after the casual use of cameras and AI capabilities at CES, which left me feeling like I was in a reality TV show or a dystopian sci-fi novel, I was ready both to see more privacy-protecting versions of camera tech and to understand how they work.
I’ve been hesitant about the proliferation of cameras in the IoT for a long time, but have grown to accept them, largely because I don’t feel like I have much choice. Cameras can deliver a ton of information, and at a lower cost than other sensors, which means that, increasingly, they will be deployed everywhere.
Indeed, we’ve all become used to our image being captured, whether by doorbell cameras, municipal cameras, dash cams, or even average people wielding their smartphones. But once those images are captured, they are saved to the cloud, where they can be easily searched and matched to our identity. In other words, being in public now carries the risk that anything you do may be captured, stored, and disseminated without context, without consent, and then remain searchable forever.
Being in public, in short, has become a much riskier proposition than it was just 20 or 30 years ago. Going to the store or dropping off a casserole has the potential to get an individual as much attention as a celebrity walking the red carpet.
I don’t know that people can really live like this. I don’t want to, and my stakes are incredibly low. Sure, I’ve tripped on a doorstep and had an angry moment in a crowded subway platform, clips of which I’d hate to see plastered across the internet. But I’m not hiding my sexual orientation from a conservative boss, dodging a stalker, or seeking asylum in another country to avoid political persecution at home. Cameras in public places can serve myriad worthwhile functions, but they can also cause irreparable harm.
So I was keen to see the stick figures at CES and learn about Sony’s latest technology, which promotes the processing of images on the camera itself so they don’t get sent to the cloud. Companies that elect to use cameras with Sony’s AITRIOS technology, or developers building models for AITRIOS-enabled cameras, don’t have to choose the most privacy-preserving settings, but I like that Sony is making them more accessible.
Oftentimes, developers or camera buyers simply pluck whatever is easy and available off the shelf for their use case. Having local machine learning handle image processing in a privacy-protecting way, on a camera that’s guaranteed to support the specific algorithm, gives end users an option they may not have had before; the sketch below shows roughly what that can look like.
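To make that concrete, here’s a minimal sketch of on-device pose extraction. It uses the open-source MediaPipe pose model purely as a stand-in (the library choice, the loop, and the send_to_analytics stub are my illustration, not Sony’s or any vendor’s actual pipeline): each frame is analyzed locally, only the stick-figure keypoints are kept, and the raw pixels never leave the device.

```python
# Illustrative only: on-device pose extraction that discards the raw frame.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose


def extract_keypoints(frame, pose):
    """Run pose estimation locally; return only (x, y, visibility) per landmark."""
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return None
    return [(lm.x, lm.y, lm.visibility) for lm in results.pose_landmarks.landmark]


def send_to_analytics(keypoints):
    # Stand-in for whatever leaves the camera: keypoints only, never pixels.
    print(f"sent {len(keypoints)} keypoints")


def main():
    cap = cv2.VideoCapture(0)  # on a real product, the camera's own sensor feed
    with mp_pose.Pose(min_detection_confidence=0.5) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            keypoints = extract_keypoints(frame, pose)
            if keypoints:
                send_to_analytics(keypoints)
            # The frame itself is dropped here; nothing recognizable is stored.
    cap.release()


if __name__ == "__main__":
    main()
```

Everything downstream, whether counting people, flagging a fall, or measuring dwell time, only needs those keypoints over time, which is exactly why the stick figure is enough.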
I’m writing this because I want everyone to know that they should take that option. Watching a bunch of stick figures move across a trade show floor or pick up their groceries can still be used to count people, monitor for suspicious behavior, track customer interest, and even detect safety problems, but without creating a recognizable image that could haunt a person forever. Let’s all take that option and make it the norm.