
Forget sleep and wake; it’s time to embrace ambient mode

It’s been almost a decade since Apple first put a processor designed to handle sensor data inside an iPhone. The motion co-processor appeared in the iPhone 5s in 2013 and marked a subtle shift in how manufacturers thought about battery-powered devices. Now, as we embrace wearables, always-on sensors with batteries, and energy-harvesting devices, it’s time to call out the rise of an ambient state in devices.

Forget binary sleep and wake. When we ask a fitness tracker to measure heart rate, or a battery-powered earbud to listen for a wake word without needing a fresh charge every hour, we’re taking advantage of ambient mode. The same is true when our phones use a low-power sensor to wait until a face is detected before “waking up” a primary processor for the heavy lifting of facial recognition. It’s use cases like these that are driving the rise of “ambient mode” in processors.
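The pattern behind all of these examples is the same: a cheap, always-on check runs continuously, and only a positive hit wakes the expensive primary processor. A minimal sketch of that gating loop, with entirely hypothetical names and stand-in logic:

```python
# Sketch of the ambient-mode pattern: a cheap always-on check
# gates an expensive wake-up. All names and values here are
# illustrative, not any real device's API.

def cheap_face_present(frame_brightness: float) -> bool:
    # Stand-in for a low-power sensor check: a simple threshold
    # test in place of real face detection.
    return frame_brightness > 0.5

def expensive_recognition(frame_id: int) -> str:
    # Stand-in for the heavy facial-recognition workload that
    # runs on the primary processor once it has been woken.
    return f"recognized-{frame_id}"

def ambient_loop(frames):
    wakes = []
    for i, brightness in enumerate(frames):
        if cheap_face_present(brightness):          # ambient-mode work
            wakes.append(expensive_recognition(i))  # full wake
    return wakes

# Only the two bright frames trigger the expensive path.
print(ambient_loop([0.1, 0.9, 0.2, 0.7]))  # → ['recognized-1', 'recognized-3']
```

The point of the split is that the main processor’s duty cycle, not the sensor’s, dominates the battery budget.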

Ambient mode is an option between sleep and wake for your devices.

Another example was on display this week with Qualcomm’s new wearables platform. The chip maker said it had shunted several “jobs” over to a lower-power, always-on processor so that a wearable could spend 95% of its time in ambient mode. Those jobs included audio playback, always-on wake word detection, sensor tracking, and notifications. Notably, the always-on processor even includes a smaller GPU that drives the screen in ambient mode without sacrificing image quality.

This growing trend of using ambient mode creates both opportunities and complications for device makers.

On one hand, designing a battery-powered device gets more challenging when you stop thinking in binaries (awake/asleep) and start evaluating power-consumption and cost tradeoffs for an entirely new state. On the other hand, spending time up front considering use cases and designing for an ambient mode will result in a device that can go longer on a single charge without sacrificing function.

Another challenge is that for certain classes of devices, figuring out which jobs belong in ambient mode will be tough, especially for new features or entirely new devices. We know that always-on wake word detection is important for earbuds, glasses, and phones. But what is the next use case?

Engineers will need a solid understanding of the jobs they want in ambient mode and the power and performance those jobs require. Because they will be dealing with the limited capacity of a low-power, always-on processor, they will also want to stick with jobs that people already want in their devices. This sort of calculation will be tougher to make for a wearable or a phone than for a limited-purpose sensor, because there are so many potential jobs.

It also means that the most innovative features won’t hit ambient mode right away, unless one of them is a killer feature for a device. Adding ambient to our existing two modes of computing opens up many new ways of thinking about the tradeoff between performance and the energy it requires. But there are also tools that will help.

Machine learning on microcontrollers will help bring more workloads into ambient mode. Always-on voice activation, for example, is powered by a simple wake word detection algorithm. These sorts of TinyML jobs tend to revolve around simple use cases such as face detection, recognizing a specific image, wake word recognition, or even something like anomaly detection.
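At this scale, a wake-word detector is often just a small classifier run over audio features. A toy sketch of the idea, comparing an incoming feature vector against a stored template with cosine similarity; real TinyML detectors use small neural networks over MFCC features, and the template, threshold, and vectors here are illustrative only:

```python
import math

# Toy wake-word check in the TinyML spirit: the always-on core
# runs only this cheap comparison, and a hit is what wakes the
# main processor. All values below are made up for illustration.

WAKE_TEMPLATE = [0.9, 0.1, 0.4, 0.8]  # hypothetical stored features
THRESHOLD = 0.95

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def heard_wake_word(features) -> bool:
    return cosine(features, WAKE_TEMPLATE) >= THRESHOLD

print(heard_wake_word([0.9, 0.1, 0.4, 0.8]))  # identical features → True
print(heard_wake_word([0.1, 0.9, 0.8, 0.1]))  # dissimilar features → False
```

The whole job fits in a few kilobytes of code and state, which is exactly why it can live on the low-power core.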

This is one reason I’m so keen on use cases for TinyML. The more things we can figure out how to do on low-power processors, the more we can shunt to an ambient mode on a device. That means we could give sensors, wearables, and other battery-powered devices superpowers without shortening their battery life.

As we put computing in more places, we will need to embrace ambient mode on processors and in devices, but we will also need to embrace the idea of an ambient intelligence inside our homes. Having always-on sensing platforms and local ML process inputs to deliver a better user experience is a powerful idea, but it will also require us to think about privacy, computing, and data in new ways.

Are we ready to make that leap?

Stacey Higginbotham
