This week began my quest for a Masters degree in Computer Science at Georgia Tech. What does that have to do with the Internet of Things? Nothing by itself. But my first and only class this semester is Human-Computer Interaction (HCI), and I’m finding that much of HCI can be applied to the IoT. In particular, IoT device makers have a lot to learn (as do I!) about designing effective interfaces.
Obviously, since the semester has just started, I only have cursory formal knowledge on this topic. However, I have a decade of user interface experience with IoT devices and a few more decades when it comes to computer interfaces in general. And one of my first takeaways from class is a lesson IoT companies also need to learn: when designing, remember that “you are not your users.”
Are IoT interfaces designed for you or someone else?
On the surface that sounds like a generic catchphrase without much meaning. Think about it though. When you buy an IoT device or download a mobile app to control one, do you think the interface was designed specifically for you? Of course it wasn’t.
Instead, in the best-case scenario, the interface was created by teams with user experience (UX) and/or user interface (UI) skills. Those designers would create an interface and then observe how users interact with it to accomplish their tasks. Based on that observable feedback, the interface would be adjusted. Then it’s back to the user testing and more feedback, and so on.
In a worst-case situation though? It’s likely that software engineers supporting the hardware created the interface.
This point is summarized nicely in one of our required texts, Don Norman’s best-selling book, “The Design of Everyday Things”:
Engineers are trained to think logically. As a result, they come to believe that all people must think this way, and they design their machines accordingly. When people have trouble, the engineers are upset, but often for the wrong reason. “What are these people doing?” they will wonder. “Why are they doing that?”
Reflecting on my own IoT device interface experiences, there have been some excellent ones and some that are lacking. I know that’s a subjective statement. We all have different values: You may not care about a clunky interface if the connected device controlled by it gets the job done the way you want.
Others clearly do:
LG makes a great TV but their OS is outdated horrible junk. They should do a deal with Apple or Roku and get out of the software business https://t.co/uRwuoqYgSe
— Dave Winer (@davewiner) August 24, 2021
So I’d say there is much room for improvement across this market as a whole. And that brings opportunities for future IoT interface designs.
What should an interface accomplish?
Obviously, the main goal for any HCI is to create a way for a human to interact with some computerized device. To be honest, it’s easy to create an interface. It’s difficult to create an effective one that adds value and, for lack of a better word, joy, to the experience.
This is where companies such as Apple hang their hats.
Love it or hate it, Apple’s interface design tries to accomplish both goals: value and joy. We can disagree if it really accomplishes that. In my opinion, Apple’s Home interface isn’t perfect, but it’s up there when it comes to IoT interfaces designed for users.
I’d add Eve to this list as well: The unexpected addition of Thread network information in the Eve app literally brought a smile to my face.
But even if you agree that Apple’s IoT interface is great, it’s still lacking a key HCI component. That’s because HCI doesn’t just provide a useful interface to end users; it can also change users’ behavior.
An example of this would be a monthly report of energy usage and potential savings ideas from a smart thermostat.
I get these from Ecobee and I know other connected thermostat makers provide these as well. This is meant to change your behavior in a way intended by the product designers. Sure, it’s to your benefit to act upon this data. You may save money and reduce your energy consumption by doing so.
However, this prodding isn’t (yet) part of the IoT device interface itself; at least not on any connected thermostat I’m aware of. Why wait a month, after the fact, to try to change future behavior with historical IoT data?
Instead, device makers could use data at the individual level and compare it against data from all owners of that device to offer real-time suggestions. Think about it: A simple red indicator on that smart thermostat to suggest that your home is using more energy than everyone else with the same thermostat. There may be valid reasons for that, but at least you have an interface designed to raise your awareness in real time. See it enough times and maybe you investigate why your home is such an outlier.
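To make the idea concrete, here is a minimal sketch of how that red-indicator logic might work: compare one home’s recent energy use to the fleet of identical devices and flag the outliers. The function name, the percentile threshold, and the sample numbers are all my own illustrative assumptions, not anything a thermostat maker actually ships.

```python
# Hypothetical sketch: decide when a smart thermostat should show a
# "high usage" indicator by comparing one home's recent energy use to
# all owners of the same device. Names and thresholds are illustrative.
from statistics import quantiles

def should_show_usage_alert(home_kwh: float, fleet_kwh: list[float],
                            percentile: int = 90) -> bool:
    """Return True if this home's usage exceeds the given percentile
    of usage across the whole fleet of identical devices."""
    # quantiles(n=100) yields the 1st..99th percentile cut points.
    cutoffs = quantiles(fleet_kwh, n=100)
    return home_kwh > cutoffs[percentile - 1]

# Example: a fleet where most homes use 20-40 kWh per day.
fleet = [22, 25, 28, 30, 31, 33, 35, 36, 38, 40]
print(should_show_usage_alert(55, fleet))  # well above the fleet
print(should_show_usage_alert(30, fleet))  # typical usage
```

In practice the hard part isn’t this comparison; it’s normalizing for home size, climate, and occupancy so the indicator flags genuine outliers rather than big houses in hot regions.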
Given that consumers are rightfully concerned about how much personal data they want to share from their smart homes, maybe this isn’t an idea with wide support.
What about voice interfaces?
I’m not sure my class will tackle voice interfaces, although I hope it does. When Siri launched on iOS in 2011, I dubbed voice “the invisible interface.” I said that because the spoken word is nearly universal.
Yes, there are hundreds of languages, but most people have the common ability to speak. I say “most” because I realize there are people who have no physical voice due to medical challenges or other issues.
I still believe in voice interfaces but after 10 years of using them, first with phones and then with smart homes and connected devices, I’m disappointed. Shame on me though for thinking voice interfaces would be easy.
As we’ve learned in this age of smart speakers and voice assistants, there are many nuances around speech and intent. Progress has been made for sure, but if I were an IoT device maker, I’d be investing in better ways to use voice across my product line. Again, my hope is that we tackle this interface during the semester, and if we do, I may share additional thoughts on this topic.
Three takeaways for IoT device makers
Although we’ve seen great progress with IoT interfaces, we still have much to learn. I’d recommend that device makers who aren’t thinking more about UI start doing so and get outside feedback. With that feedback, the cycle of improvement will accelerate.
Connected devices also need to move beyond the basics of control and offer opportunities to modify behavior using data. These can be optional for end-users because not everyone wants to be bossed around by their smart home. However, it could lead to new revenue models and homes that are more proactive.
Lastly, while many device makers rely on the large voice assistant platforms, there’s room for improvement. Building better in-house voice interface controls as opposed to following the crowd could bring more joy and functionality to your products.
JD Roberts says
Thank you for acknowledging that not everyone has the ability to vocalize. This comes up in one of the accessible technology groups I’m in. Many of us are highly dependent on voice control, but there are some who can’t vocalize so they are looking for different solutions. Choice is good— particularly in control interfaces. 😎
Congratulations on starting the Masters program! Sounds like it should be very rewarding.
Kevin C. Tofel says
Years ago, I would have inadvertently forgotten this segment. Thanks for speaking up (no pun intended) with many insightful comments on our posts here. I sincerely appreciate it. Cheers!