This story originally ran on Friday, July 28, 2023 in my weekly IoT newsletter. You can sign up for the newsletter here.
Bloomberg this week called the end of the smartphone era and the dawning of the AI era, all based on the fact that chip manufacturing firm TSMC now has a “double-digit percentage gap” in quarterly revenue between sales of high-performance computing chips and chips destined for smartphones. Bloomberg may have been stretching that data point to support its thesis, but if this is the beginning of the AI era, should I be writing about AIoT instead of IoT?
No. Because what we call it doesn’t matter. What matters is what we do with AIoT or AI or IoT. And so far, we’re making the same mistakes we always make when it comes to technology innovations.
To me the IoT has always been about AI, simply because putting millions of connected sensors around the world does nothing without some way to make sense of the data those sensors generate. In every presentation I give, I start off by explaining that the arrival of cheaper sensors, ubiquitous connectivity, and inexpensive cloud computing are driving IoT.
I then explain that we should think of the physical sensors and devices as the body of the Internet of Things, and the insights provided by AI and analytics as the soul of the IoT. So I’ve never been one to lean into acronyms like AIoT because the IoT includes AI. There’s no point in deploying sensors if they don’t provide new insights and information.
And it isn’t clear that we are actually done deploying the physical infrastructure that comprises the IoT body. Connecting devices is still tough. Security concerns still drive CIOs up the wall.
Moreover, most companies thought that installing connected devices and layering analytics on top would lead to seemingly magical business changes. Instead they learned that once the IoT delivers new insights, employees and managers must do something with that information. There usually isn’t a closed-loop automated solution to a complex business issue. Instead, management has to make judgement calls, prioritizing assets or costs to do the best it can with the new insights.
IoT can help, but it can’t replace making a tough business decision. (It might be able to replace the judgement call if a business only wants to maximize profits, but that could quickly become untenable to employees, environmental regulators, or any moral individual.) As is often the case when we add technology to a business problem, we assume the technology can solve the problem on its own, when in most cases it can only identify the problem and, at best, suggest solutions that might fix it.
And here’s where I see our current obsession with AI leading us. Once again we have a technological tool that looks powerful and amazing, and we assume it will take over many of our jobs or solve our problems. But we neglect the fact that if we want it to solve problems, we still have to make the tough business decisions that weigh the value of profits against the cost to people or the environment. We need people to deliver judgements that computers simply can’t make.
We also often fail to see the limits of the technology itself until we start to use it in everyday business situations and measure where it falls short. In the early days of the IoT, companies might have deployed sensors that helped reduce the need for an employee to run around reading meters, but then failed to consider battery changes or how robust the equipment needed to be.
With AI, we see countless examples of people, governments, and businesses placing their faith in a technology despite its well-known limitations. This is why I’m not concerned that AI might replace my job right away. Instead it helps me do about 30% of my job more quickly, allowing me time to do what I can do best.
If a company eager to maximize profits thinks AI can replace journalists, then it will take steps to make that happen. But in optimizing for profits, it will also negatively impact its brand as the content produced by the AI makes mistakes. On the other hand, perhaps the news site’s owners want to focus only on the lowest common denominator content that AI might credibly produce. There’s nothing to stop a race to the bottom for AI-generated content on sites that are barely differentiated.
So as I talk to people in the IoT about AI and the new hotness of generative AI, I caution them to think about the use of AI as a tool, with existing limitations and an inability to solve complex problems absent a human or organization making hard calls about what to optimize for.
And when it comes to regulating AI, we need to keep that top of mind. We can’t have closed-loop systems where AI is ingesting information, delivering an insight, and then letting an automated system implement that insight without room for human judgement. That’s how we get lawyers quoting made-up cases in court, nurses who must administer drugs they know will not help a patient, and police officers arresting the wrong man because of an inaccurate AI.
We’re always so quick to think that technology can cut out humans, that it can scale beyond our emotionally charged and inefficient brains. But while technology can certainly help us scale, we will always need some form of human judgement in the loop to tell the technology how to prioritize resources and behave, well, humanely.
And when companies tell you differently, they are either unsophisticated and don’t recognize the limitations of AI or they are actively parroting this viewpoint so it can act as a smokescreen to shield them while they optimize for profits at the expense of humans. If this is the era of AI, please let it also be the era when we wake up to how to use this technology appropriately.