
The future is here and tech firms own it

Google’s line of new hardware, including stuff that used to have nothing to do with computing.

My obsession with the internet of things is in part an obsession with understanding the future. It’s awesome when a moment comes along that perfectly encapsulates the future you’ve vaguely envisioned. Wednesday’s Google event was such a moment for me.

It wasn’t the phones or computers that clarified the future; it was the new speakers, the earbuds that can translate 40 languages in real time, and the weird snippet-taking camera device. I’ve spent years talking about how connectivity and machine learning (or AI) will drive a business transformation for everyone.

I usually focus on business models and what it means for companies when they have more access to data, but Google showed what it means for product development. In doing so, it also clarified what it means to embed technology into everyday products, a phrase that gets tossed around without much meaning attached. These three devices show how connectivity and machine learning change everything.

With the $399 stand-alone Home Max speaker, Google has drawn on its research in machine learning and audio to build a speaker that understands where it is in the room and what is happening around it. It then adjusts its sound accordingly.

Essentially, Google has made a context-aware speaker that adapts to its environment. That’s amazing. Other than Sonos, I can’t see another speaker company coming close to truly changing the game on speakers. Even if Google doesn’t sell many of these speakers, it has clearly applied technology that should push every other speaker company to think differently about how it improves the audio experience.
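To make “context-aware” a little more concrete: the core idea is that the speaker listens to how the room colors its own output and corrects for it. Google hasn’t said how the Home Max actually does this, so what follows is only a minimal Python sketch of the general pattern (room-adaptive EQ); the three bands, the flat target response, and the toy microphone signal are all assumptions for illustration.

```python
import numpy as np

# Assumed bands and sample rate for illustration; real products use many more bands.
BANDS = [(20, 250), (250, 2_000), (2_000, 20_000)]   # bass, mids, treble (Hz)
RATE = 48_000

def band_energies(mic_signal, rate, bands):
    """Energy the microphone picked up in each frequency band."""
    spectrum = np.abs(np.fft.rfft(mic_signal)) ** 2
    freqs = np.fft.rfftfreq(len(mic_signal), d=1.0 / rate)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

def correction_gains_db(measured, target, max_db=6.0):
    """Boost bands the room swallows, cut bands it exaggerates, within safe limits."""
    gains = 10.0 * np.log10(target / np.maximum(measured, 1e-12))
    return np.clip(gains, -max_db, max_db)

# Toy demo: a "room" that muffles treble relative to bass and mids.
t = np.arange(RATE) / RATE
mic = (np.sin(2 * np.pi * 100 * t)              # strong bass reaches the mic
       + 0.9 * np.sin(2 * np.pi * 1_000 * t)
       + 0.3 * np.sin(2 * np.pi * 5_000 * t))   # treble arrives attenuated
measured = band_energies(mic, RATE, BANDS)
target = np.full(len(BANDS), measured.mean())   # assume a flat response is the goal
print("per-band EQ correction (dB):", correction_gains_db(measured, target).round(1))
```

The hard part, and presumably where Google’s machine learning comes in, is estimating that correction continuously from ordinary playback rather than from a clean test signal.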

Why didn’t JBL invent the Home Max speaker? A similar question might be asked of the Apple AirPods or Google’s Pixel Buds, both of which turn a Bluetooth headset into an extension of your computer. Why didn’t Bose invent something like the Pixel Buds with their real-time translation capabilities?

In the case of the headphones, it’s likely because, as owners of the phone platforms, both Apple and Google can easily offload compute-intensive tasks to the smartphone while limiting others’ access to that hardware. But in other cases, like the speakers and the Google Clips camera, it’s a question of culture and a lack of deep technical expertise outside their core business.
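That offload split is worth spelling out, because it’s the structural advantage JBL and Bose don’t have. Below is a minimal sketch of the pattern; every function is a hypothetical stand-in rather than Google’s actual pipeline, but it shows how the cheap work stays on the earbud while the expensive work lands on the phone that the earbud maker doesn’t control.

```python
# Minimal sketch of the earbud-to-phone offload pattern; every function body
# below is a hypothetical stand-in, not Google's actual pipeline.

def capture_speech_on_earbud() -> bytes:
    """The earbud only records; it has no room for speech or translation models."""
    return b"\x00" * 3_200   # pretend: 100 ms of 16 kHz, 16-bit PCM audio

def transcribe_on_phone(audio: bytes, lang: str) -> str:
    """Stand-in for phone-side (or cloud-assisted) speech recognition."""
    return "where is the train station"

def translate_on_phone(text: str, src: str, dst: str) -> str:
    """Stand-in for the phone-side translation model."""
    return "où est la gare"

def play_through_earbud(text: str) -> None:
    """Synthesized speech is streamed back to the earbud over Bluetooth."""
    print(f"earbud plays: {text!r}")

audio = capture_speech_on_earbud()             # the cheap work stays on the earbud
text = transcribe_on_phone(audio, lang="en")   # the expensive work lands on the phone
play_through_earbud(translate_on_phone(text, src="en", dst="fr"))
```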

For example, Bose has innovated plenty with noise-cancellation technology that reacts to the volume of the world around the headphone wearer. These Bose hearables also use algorithms to tune headphones so people can hear better in crowded or noisy environments. So why stop there and not recognize that making context-aware, in-room speakers might also improve sound quality? One possibility is that the Home Max is a gimmick that won’t change the experience much. Another is that Bose simply doesn’t have the data or the data scientists needed to make the tech work, because it hasn’t had connectivity in its products for as long and hasn’t thought about using them that way.

And this leads us to Google Clips, a tiny $249 camera that captures photos and short video snippets at the press of a shutter button, or on its own. It’s designed so a user can set it up and forget about it. Clips has the “smarts” to recognize the people who matter to the user, as well as the ability to recognize when to take a photo (so says Google). What’s notable is that all of those smarts run locally on the device.
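Google hasn’t published how Clips decides what to keep, but the shape of the pipeline it describes is a local scoring loop: a small on-device model rates each frame, and only high-scoring moments get saved. Here is a minimal Python sketch of that loop, with a hypothetical interestingness() scorer standing in for the real vision model:

```python
import collections
import numpy as np

def interestingness(frame: np.ndarray) -> float:
    """Hypothetical scorer; the real device runs a compact vision model here."""
    return float(frame.std() / 255.0)   # toy proxy: visually busy frames score higher

def capture_loop(frames, threshold=0.2, buffer_len=90):
    """Keep a rolling buffer and save a clip whenever a frame scores high enough.
    Everything happens locally; no frame ever leaves this function."""
    buffer = collections.deque(maxlen=buffer_len)   # ~3 seconds at 30 fps
    saved = []
    for frame in frames:
        buffer.append(frame)
        if interestingness(frame) >= threshold:
            saved.append(list(buffer))   # persist the moments leading up to the trigger
            buffer.clear()
    return saved

# Toy demo: a mostly static scene with two brief "interesting" moments.
rng = np.random.default_rng(0)
frames = np.zeros((300, 8, 8), dtype=np.uint8)
frames[100] = rng.integers(0, 256, size=(8, 8))
frames[220] = rng.integers(0, 256, size=(8, 8))
print(f"saved {len(capture_loop(frames))} clip(s) out of {len(frames)} frames")
```

Nothing in that loop needs a network connection, which is exactly the point Google was making.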

Clips is confusing because it’s expensive and the market seems limited. Yes, parents might think this is neat, but parents generally have connected baby monitors snapping photos of their kid, as well as a fast finger on the smartphone camera button. But when viewed as a research project in shrinking computer vision models onto a device, or as a way to get more training data to help computers learn what makes a good picture, Clips makes sense.

Few companies can invest in producing a piece of hardware with limited market value just to get the right kind of data to train and test a new computer vision model, or a smaller one that fits on a device. This is why, as tech invades more and more of our everyday products, the giants of the technology world are stretching to build products outside of computers.

Does this mean we’ll get that Apple television or a Google washer? Maybe not anytime soon, although Amazon has applied for a patent on a spoilage-sensing fridge. Tech is coming for everyone’s business, and it’s not clear that anyone outside of tech has the resources it takes to win.

Stacey Higginbotham
