We often discuss Artificial Intelligence (AI) and Machine Learning (ML) here on the site and in our weekly IoT Podcast. And we should: These technologies take data from sensors and other inputs, turning that data into actionable “smarts”. But for many, ML and AI are nebulous terms, akin to some dark magic taught at Hogwarts School of Witchcraft and Wizardry.
That’s why I was excited to read about a new beta service called Clevr. It was started by Landon Garrison, a student whose goal was to democratize these mystic arts. And it’s not just for technical folks and programmers. Clevr is designed to make AI and ML approachable to anyone.
With Clevr, you can tinker with AI and ML through a simple web interface, although there are options for those comfortable writing code as well. In a very short time and with very little effort, you can use Clevr to create a simple learning model that recognizes pictures or textual information.
You can sign up for a beta account on Clevr’s site to use it at no charge. There are paid plans available based on how much you want to use the service, but for a gentle introduction to AI and ML, the free beta plan should easily suffice. And while Clevr does offer APIs for its service, useful for programmers, you don’t even need to use those. This is essentially a no-code/low-code solution, meaning a graphical interface that’s easy for anyone to use.
Here’s an example of using the Clevr dashboard to upload pictures to train the system for image classification and recognition.
Once as few as three to five images are uploaded, Clevr creates an AI model that estimates the likelihood that some other image belongs to this set of machine learning data. The more images you add to train Clevr, the more accurate it becomes.
In the example below, images of different animals were uploaded. Based on the resulting learning model, the system correctly assigned a high probability that a test picture of a lion is indeed a lion.
Since tigers and cheetahs share some similar qualities with lions, they appeared in the results as well. Notice, however, that the likelihood scores for those matches are much lower. That’s because of attributes unique to lions, which the AI has learned over time. More data to learn from means more accurate recognition.
This is really the most basic concept of both AI and ML, whether it’s applied to images, text, or smart home devices. Lots of data is gathered and classified to identify something. The system only needs a little help from users in the beginning to know what that something is.
That’s where the first handful of images come into play with Clevr or any basic AI/ML system. You’re essentially uploading images of a lion, for example, so the system knows what a lion looks like. Then you might upload a few pictures of tigers and cheetahs. As more images are added, the system figures out the probability of a test image being a lion, tiger, or cheetah. And the more images or data you provide, the more accurate those probabilities become.
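Clevr doesn’t publish how its models work under the hood, but the general idea can be sketched in a few lines of plain Python. This toy example uses made-up two-dimensional “feature” vectors (real systems extract hundreds of features from each image) and a simple nearest-centroid approach: average the features for each label, then score a test image by how close it lands to each label’s average.

```python
import math

def centroid(vectors):
    """Average the feature vectors for one label."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(training, test_vec):
    """Return a label -> probability mapping, based on distance to each centroid."""
    scores = {}
    for label, vecs in training.items():
        dist = math.dist(centroid(vecs), test_vec)
        scores[label] = 1.0 / (1.0 + dist)   # closer centroid -> higher score
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# Made-up 2-D features (think: mane size, stripe count) for illustration only.
training = {
    "lion":    [[0.90, 0.10], [0.80, 0.20], [0.95, 0.05]],
    "tiger":   [[0.20, 0.90], [0.30, 0.80]],
    "cheetah": [[0.10, 0.40], [0.20, 0.50]],
}

probs = classify(training, [0.85, 0.15])  # a lion-like test image
print(max(probs, key=probs.get))          # prints "lion"
```

Every label still gets a nonzero score, which is why tigers and cheetahs show up in Clevr’s results too, just with lower probabilities. Adding more training vectors per label nudges each centroid toward the true “average lion,” improving the scores over time.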
Accuracy also increases when you correct the model. You’ve probably seen this technique if you have a smart webcam or doorbell camera that supports face recognition. The app for the smart camera might import images of your contacts as seed information, or it might start with an empty model. It may even ask whether you know someone (the approach Nest takes) before adding that person to the machine learning model.
In any case, you can typically view all of the faces seen by the camera and tweak the model by confirming that the AI correctly identified someone, or by telling it when it got an identification wrong.
My Nest Doorbell used to confuse a neighbor with my wife whenever the neighbor wore a hat, for example. Why? Because the model used to identify my wife had an image with a similar hat. I simply removed that image from the app and machine learning did its thing: The two were never confused again.
Since AI and ML are designed to work with little to no effort from end-users, why bother learning about them at all?
I stated one reason above: to demystify AI and ML by gaining a basic understanding of how they work. But there’s a second reason. We already know that some smart home devices are expected to let users customize their own training.
When Wyze announced its $100 million funding round in September, it said some of that money would be invested in improved AI. Next year, Wyze camera owners will be able to train their webcams to recognize whatever they want. So you won’t be limited to what Wyze can detect; you can train the camera to see if there are one or two cars in your garage, for example.
Other smart device makers are likely to follow suit. Not only will that let you get more value out of your connected devices, but it will also require you to understand the basics of AI and ML.
There are other options besides Clevr for doing so. We’ve previously covered Qeexo, which offers a similar solution, although it requires the purchase of some inexpensive hardware and sensors. That same hardware, or a cheap Raspberry Pi paired with a little Python code and a Google Coral TPU, can accomplish the same thing. And if you don’t have or want a Coral TPU but have a Raspberry Pi 4, you could use Edge Impulse with your hardware to create some learning models.
With all of these choices and custom device training coming next year, now’s the time to dip a toe in the waters of AI and ML.