Analysis

The next IoT workload is autonomy

Sometimes it’s hard to know where we are heading with IoT. I feel like it’s clear that we’re able to gather more information from more places and analyze that information more cheaply, but sometimes it’s hard to see where all of that gets us. Yes, we’re able to optimize production or predict some failures before they happen. We’re able to track assets and can make the invisible visible in ways that could help us get a handle on pollution. But much like the early aughts, when e-commerce and email were the clear winners of the broadband revolution, we’re still in the early days of IoT.

Last week, however, I had a glimpse of where the IoT might take us in five years, and it’s pretty awesome. We’re heading into an era that will see machines act autonomously in the world on our behalf — freeing us up to do even more interesting things with our time and energy.

Toyota is investing in automation via robots designed for the home. Image courtesy of TRI.

Autonomy is the next big workload for IoT, and how we react to it will help us set a course for a better future. In the next few years, we will hit an inflection point with regard to robotics in cars, homes, and factories, and it’s up to all of us to choose how to use the time robots can free up for us. We can become the blobby people mindlessly consuming content and food from “WALL-E,” or we can use that autonomy to become more connected and creative.

But first, let’s talk about what we need to get there. Last week, ARM showed off three new chip designs in its AE line of silicon, which it launched two years ago. The AE stands for automotive enhanced; the original chips were designed to ensure that vehicles containing them didn’t suffer the sort of computer glitch that would, for example, cause a car to malfunction or take an unexpected action while in autonomous mode.

Chet Babla, VP of the automotive business at ARM, says that after the AE chips were launched for automotive customers, industrial companies let the chip design firm know that this sort of silicon was necessary for their workloads. Indeed, robots, computer vision algorithms, and other automation are becoming more common in industrial settings, especially as COVID hollows out an already stretched workforce.

In factories, autonomy can help robots handle dangerous or dull tasks formerly completed by humans while also freeing up workers to take on some of the more fun or creative aspects of the job. But yes, regardless of what boosters say, autonomy will also mean that the folks who would otherwise take on the dirty, dull, and dangerous tasks most people don’t want will lose those jobs.

The new ARM silicon is designed so that the computing cores run either as a pair that works in lockstep, with one core double-checking the other’s work, or as a larger cluster, where every few cycles a core takes a break to check the work being done by the rest. With its newest chips, ARM has also announced a hybrid mode that helps offset some of the performance hit both of these methods demand. To provide that extra level of assurance, ARM is launching a high-performance CPU core and two graphics core designs.
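
To make the redundancy idea concrete, here is a minimal sketch in Python. This is not ARM’s actual design, just an illustration of the lockstep concept: two redundant “cores” run the same computation on the same input, and a comparator only accepts the result when both agree.

```python
# Illustrative sketch of lockstep-style redundancy, not ARM's implementation.
# Two redundant "cores" run the same computation; a comparator accepts the
# result only when both copies agree.

from dataclasses import dataclass


@dataclass
class LockstepResult:
    value: float
    fault: bool  # True if the two cores disagreed


def core_compute(sensor_reading: float) -> float:
    """Stand-in for whatever control computation a core performs."""
    return sensor_reading * 0.5 + 1.0


def lockstep_step(sensor_reading: float) -> LockstepResult:
    primary = core_compute(sensor_reading)
    checker = core_compute(sensor_reading)  # redundant copy of the same work
    if primary != checker:
        # A real safety system would raise a fault and fall back to a safe
        # state here rather than use the suspect result.
        return LockstepResult(value=0.0, fault=True)
    return LockstepResult(value=primary, fault=False)


if __name__ == "__main__":
    print(lockstep_step(42.0))  # LockstepResult(value=22.0, fault=False)
```

The trade-off the hybrid mode addresses is visible even in this toy version: every checked result costs a second, redundant computation, which is why pure lockstep halves effective throughput.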

These chips will help in computer vision use cases, industrial robots, and yes, autonomous vehicles. For ARM, the bottom line is that there is a looming autonomy boom. And it wants its silicon to help take automation from theory to reality while also ensuring workplace safety.

ARM’s designs won’t make it into cars or industrial robots for another three to five years, which feels incredibly far away. But on Wednesday of last week, I got a glimpse of the robotic future thanks to the Toyota Research Institute, which held a virtual open house to show off some of the multi-purpose robots it’s building for the home.

Probably the most mind-blowing part was Toyota’s decision to mount a robot on a gantry for use in Japan’s small and cluttered homes. The image of a house-cleaning robot dangling from the ceiling like a bat was like walking through a mirror world. After I got over that aspect, I was excited to see the progress Toyota was making in robotics.

The company is using virtual reality to train robots as quickly as possible with the knowledge required to operate in individual homes. Homes are difficult environments. They are more akin to a roadway than to a factory, where every motion is prescribed and the infrastructure is relatively static. In a home, there are people, pets, last night’s game of Monopoly on the floor, and an unknown number of unexpected routines. Any robot designed for that environment must be able to adapt to an almost endless set of parameters. Toyota’s solution, beyond using VR for training, is to equip the robots with padded grips so they can’t hurt anyone. These pads also have cameras embedded inside to further teach the robots how to handle things found in the home, from dishes to a piece of fruit.

Another solution is to slow the robots down. But these robots already move incredibly slowly so as to allow people and pets to avoid them. And having seen more than one small dog face down a Roomba, I’m not sure how effective this will be. But slow robots are probably better than fast ones. Toyota researchers stressed that this technology wasn’t ready for the real world yet, but it did seem a lot closer than it ever has been.

For me, Toyota’s version of the robotic future resonated. Instead of being cast as robotic caretakers, these robots were imagined as assistants for the elderly, machines that could handle the caretaking chores while a person used the available time for interactive tasks such as health checks, or even chatting with a lonely senior over a cup of tea. Obviously, I’m not so naive as to think that a senior living facility with a cleaning and caretaking robot would actually keep the same number of staff as before, but I would hope it wouldn’t reduce its staff to the bare minimum, leaving seniors mostly alone with the occasional robotic visitor.

I want to ask all of us to think about the future. Autonomous robots are clearly coming to our roads, factories, and homes. So do we want to use the time that these devices free up for us to watch Netflix and buy new things? Or do we want to train more people to join the creative class and the knowledge workers who are already going to get an advantage from the rise of robots? Do we want to ensure that robots take on the tasks humans can’t do, or that are too dangerous for them, and in the process leave roles open for humans and empower those humans to interact with others in ways that boost their overall experience?

As we head towards this era of autonomy, let’s think about what we want to optimize for — beyond just the bottom line.

Stacey Higginbotham
