Every now and then I see technology that’s so impressive, I can’t wait to write about it, even if no one else finds it cool. I had that experience last week while watching a demonstration of a machine learning platform built by Qeexo. In the demo, I watched CEO and Co-Founder Sang Won Lee spend roughly five minutes teaching Qeexo’s AutoML software to distinguish between the gestures associated with playing the drums and playing a violin.
The technology is designed to take data from existing sensors, synthesize the information in the cloud, and then spit out a machine learning model that can run on a low-end microcontroller. It could let ordinary developers train certain kinds of machine learning models quickly and then deploy them in the real world.
The demonstration consisted of three parts: the Qeexo software running on a laptop; an STMicroelectronics SensorTile.box acting as the sensor, gathering accelerometer and gyroscope data and sending it to the laptop; and Lee holding the SensorTile.box while playing air drums or air violin. First, Lee left the sensor on the table to capture background data, and saved that to the Qeexo software. Then he played the drums for 20 seconds to "teach" the software what that motion looked like, and saved that. Finally, he played the violin for 20 seconds to let the software learn that motion, and saved that.
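Qeexo hasn't published its training internals, but the workflow Lee demonstrated — record labeled windows of motion data for each class, then fit a classifier — can be sketched in a few lines. Everything below (the mean/standard-deviation features, the nearest-centroid model, the synthetic data) is an assumption for illustration, not Qeexo's actual pipeline.

```python
import numpy as np

def window_features(samples):
    """Collapse a (N, 3) window of accelerometer samples into simple stats.

    Per-axis mean and standard deviation: a cheap, common feature set for
    motion classification (an assumption here, not Qeexo's features).
    """
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

class NearestCentroid:
    """Tiny classifier: one feature centroid per gesture class."""

    def fit(self, windows_by_label):
        self.labels = list(windows_by_label)
        self.centroids = np.array([
            np.mean([window_features(w) for w in windows], axis=0)
            for windows in windows_by_label.values()
        ])
        return self

    def predict(self, window):
        # Match the incoming window against the closest learned centroid.
        dists = np.linalg.norm(self.centroids - window_features(window), axis=1)
        return self.labels[int(np.argmin(dists))]

# Synthetic stand-ins for the demo's three recordings.
rng = np.random.default_rng(0)
background = [rng.normal(0.0, 0.05, (100, 3)) for _ in range(10)]  # sensor at rest
drums      = [rng.normal(0.0, 2.0,  (100, 3)) for _ in range(10)]  # large, jerky motion
violin     = [rng.normal(0.5, 0.5,  (100, 3)) for _ in range(10)]  # smaller bowing motion

model = NearestCentroid().fit(
    {"background": background, "drums": drums, "violin": violin}
)
print(model.predict(rng.normal(0.0, 2.0, (100, 3))))  # prints "drums"
```

The same structure explains why the demo only needed 20 seconds per gesture: with coarse statistical features, a handful of labeled windows per class is enough to separate very different motions.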
After a little bit of processing, the models were ready to test. (For the sake of time, Lee turned off a few software settings that would have produced better models, noting that in a real-world setting these would add about 30 minutes to the learning process.) I watched as the model easily switched back and forth, identifying Lee's drumming hands or violin movements instantly.
When he stopped, the software recognized the background state. It's unclear how much subtlety the platform is capable of (drumming is very different from playing an imaginary violin), but even at relatively blunt settings, the opportunities for Qeexo are clear. You could use the technology to teach software to turn on a light with a series of knocks, as Qeexo did in this video. You could use it to train a device to recognize different gestures (Lee says the company is in talks with a toy company to create a personal wand for which people could build customized gestures to control items in their home). And in industrial settings, it could be used to develop anomaly detection in-house, which would be especially useful for older machines or in companies where data scientists are hard to find. Lee says that while Qeexo has raised $4.5 million in funding so far, it is already profitable from working with clients, so it's clear there is real demand for the platform.
The company started out trying to provide machine learning services for companies, but quickly realized that the way it was solving client problems wasn't scalable, so it transitioned to building a platform that could learn. It has been active since 2016, providing software for Huawei that distinguishes various types of finger touches on phone screens. One of its competitive advantages is that the software converts the Python code behind the original models into C code, which is smaller and can run on constrained devices.
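Qeexo hasn't described how that Python-to-C conversion works, but the general idea behind this kind of model codegen — bake the learned parameters into a generated C source file that a small, fixed-function runtime can compile in — can be sketched as follows. The output format and names here are invented for illustration.

```python
import numpy as np

def emit_c_model(labels, centroids):
    """Generate a C source fragment embedding trained model parameters.

    A toy stand-in for model codegen: the learned per-class centroids
    become a const float table that a tiny C inference loop can scan,
    with no Python runtime needed on the device.
    """
    rows = ",\n".join(
        "    {" + ", ".join(f"{v:.6f}f" for v in row) + "}"
        for row in centroids
    )
    names = ", ".join(f'"{label}"' for label in labels)
    return (
        f"#define N_CLASSES {len(labels)}\n"
        f"#define N_FEATURES {centroids.shape[1]}\n"
        f"static const char *labels[N_CLASSES] = {{{names}}};\n"
        f"static const float centroids[N_CLASSES][N_FEATURES] = {{\n"
        f"{rows}\n}};\n"
    )

# Two classes, two features each -- placeholder values.
centroids = np.array([[0.0, 0.05], [0.1, 2.0]])
source = emit_c_model(["background", "drums"], centroids)
print(source)
```

Generating C tables like this is why the result can be so small: the shipped artifact is just data plus a short inference loop, with the entire training stack left behind on the server.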
Lee says the models are designed to run on chips that have as little as 100 kilobytes of memory. Today those chips handle only inference, that is, matching live sensor data against a model already stored on the chip, but Lee says the plan is to offer training on the chip itself later this year.
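To see how a model squeezes into 100 KB, consider a standard trick (illustrative here, and not confirmed as Qeexo's technique): quantizing float32 parameters to int8, which cuts parameter memory by four while keeping predictions close to the original.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric int8 quantization: int8 values plus one float scale.

    Each float32 parameter (4 bytes) becomes a single int8 (1 byte),
    so a model that needed 400 KB of weights fits in about 100 KB.
    """
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

# 10,000 float32 parameters as a stand-in model.
w = np.linspace(-1.0, 1.0, 10_000, dtype=np.float32)
q, scale = quantize_int8(w)

print(w.nbytes, q.nbytes)  # prints "40000 10000": a 4x memory reduction
max_err = float(np.max(np.abs(dequantize(q, scale) - w)))
print(max_err <= scale / 2 + 1e-6)  # rounding error stays within half a step
```

On-chip training is a much harder problem than on-chip inference, which is one reason the second half of Lee's claim is notable: the memory above covers only storing and reading the model, not the gradients and optimizer state that training typically requires.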
That’s a pretty significant claim, as it would let someone put the software on a device and do away with sending data to the cloud, which reduces the need for connectivity and helps boost privacy. For the last few years, on-device training has been the holy grail of machine learning at the edge, but so far it hasn’t been done. It will be, though, and we’ll see if Qeexo is the one that makes it happen.