
The next step in IoT is vision, so let’s give computers depth perception

The computer-generated map of an environment from stereo cameras on a drone. Taken at Qualcomm’s robotics lab last week. Yes, that is me taking the picture.

I talk a lot about computer vision because I think it’s a core enabling technology for vastly more efficient understanding and use of the world around us. When a computer can see, it can apply its intense analytical powers to the images and offer insights humans can’t always match. Plus, when combined with actuators, computers can direct things in the real world to respond immediately to the data they “see.”

Thus, computer vision is a huge stepping stone to the promise of the internet of things. John Deere’s purchase this week of Blue River Technology, a company that makes a computer vision system to identify weeds on farms, is an excellent example of this in action.

John Deere is no stranger to connected tractors. It’s one of the early adopters of the internet of things and was implementing IoT before the phrase was even popular. It has been using GPS data, connectivity and sensors in fields to gather all kinds of data about land conditions and crops, and to make driving such bulky equipment more autonomous.

With this acquisition, it’s adding what Willy Pell, director of new technology at Blue River, calls “real-time perception” to the reams of data the ag firm already provides. That perception comes in the form of computer vision. The tractors can now pull a trailer behind them that snaps pictures of each plant and prescribes actions, like dropping pesticide on it. By automating the task, John Deere can offer farms a weed-killing solution that scales cheaply and performs the same way every time, while still treating each plant individually.

Computer vision is going to pop up everywhere, in part because as humans we are incredibly visual. If dogs were building the internet of things, I bet they’d build sensors that could detect the chemicals that make up various scents and then translate that back into code a computer could read. While dogs would likely focus on pheromones, we focus on pixels.

And this is an important thing to remember: computers don’t see like we do. Every image is translated into pixels, with data associated with each one. The computer then applies math to figure out distances between feature points and determines what it is seeing. Right now, a lot of the focus is on teaching computers to use videos, which a computer reads as “flat.” While we can look at a video of an office and estimate a building’s depth, or at least infer it has depth, a computer doesn’t necessarily do that. That’s why facial recognition using cameras can be spoofed by a photo or makeup that disguises contours.
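For those who want to see what that looks like in practice, here is a minimal sketch using the open-source OpenCV library (the filenames are hypothetical placeholders): it loads two flat photos as grids of pixel values, then finds and matches feature points between them. Nothing in it knows anything about depth yet.

```python
# Illustrative sketch only: load two flat photos as grids of pixel values,
# then find and match feature points between them. Requires the
# opencv-python package; the filenames are hypothetical placeholders.
import cv2

img1 = cv2.imread("office_view_1.jpg", cv2.IMREAD_GRAYSCALE)  # a 2-D array of pixel intensities
img2 = cv2.imread("office_view_2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)            # detect distinctive corners and blobs
kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints plus descriptors for each image
kp2, des2 = orb.detectAndCompute(img2, None)

# Pair up feature points whose descriptors look alike across the two images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} feature points matched between the two flat images")
```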

Computers need depth perception to see as well as humans do. With self-driving cars, consumer products like the Lighthouse personal assistant, some drones and even the anticipated 3-D sensor on the iPhone, computer vision with depth is hitting the mainstream. So I thought I’d show the picture above, which is a drone mapping out the world using two cameras in stereo, and explain the different ways we’re giving computers depth perception.

Old-school depth perception is basically a moving version of a View-Master. It requires two cameras set slightly apart, plus the processing power and algorithms to do the math that turns the two offset images into a sense of depth. When the result is shown on a monitor, the edges of objects look softer and less defined. In some use cases, especially as cameras come down in cost and processing requirements, this can suffice. For example, some drones could use this approach.
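If you’re curious about the math, here is a rough sketch of stereo depth using the open-source OpenCV library; the filenames and camera numbers are made-up placeholders, not values from any real drone. The idea is that the farther a point shifts between the left and right images (its disparity), the closer it is, and depth falls out of the focal length, the spacing between the cameras and that disparity.

```python
# Rough sketch of stereo depth, assuming the opencv-python and numpy packages.
# The filenames and camera parameters are made-up placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far its patch shifted sideways
# between the two views (the disparity, measured in pixels).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

focal_length_px = 700.0  # placeholder: focal length expressed in pixels
baseline_m = 0.12        # placeholder: spacing between the two cameras, in meters

# Pinhole-camera relationship: depth = focal length * baseline / disparity.
# A bigger shift means the point is closer.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]
```

Those soft, fuzzy edges show up in a sketch like this as pixels where the block matcher can’t find a confident disparity.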

For everything else, there are 3-D depth sensors, which come in three different types. A familiar one is the laser range finder, which shoots out calibrated laser beams and records what they bounce off of. It’s like sonar for light. This is the type of sensor found in LIDAR. These are extremely accurate at most things, but they’re also expensive and require moving parts.

The other two types also use light. One, which generated the image above, is called a structured light camera. It works by sending out a known pattern of light, usually in infrared. The camera then “sees” by figuring out how that pattern was disrupted. The first well-known structured light 3-D sensor was probably the Microsoft Kinect, which launched in 2010. These sensors are cheaper, but they don’t work well outdoors, where sunlight washes out the infrared pattern.
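Conceptually, the math is close to the stereo case, except one of the “eyes” is the projector: the sensor knows where every infrared dot should land on a flat reference surface at a known distance, so any sideways shift of a dot reveals how much nearer or farther the real surface is. Here is a toy sketch of that idea, with entirely made-up numbers:

```python
# Toy sketch of the structured-light idea, using only numpy. It compares
# where projected infrared dots actually land against where they land on a
# flat reference plane at a known distance, then turns that shift into depth.
# Every number below is made up for illustration.
import numpy as np

focal_length_px = 580.0   # placeholder focal length, in pixels
baseline_m = 0.075        # placeholder projector-to-camera spacing, in meters
reference_depth_m = 2.0   # depth of the flat calibration surface, in meters

# Horizontal shift (in pixels) of a few observed dots relative to the
# positions recorded on the reference plane.
dot_shift_px = np.array([0.0, 4.2, 9.8, 21.5])

# A positive shift means the surface is closer than the reference plane:
# 1/depth = 1/reference_depth + shift / (focal_length * baseline)
inverse_depth = 1.0 / reference_depth_m + dot_shift_px / (focal_length_px * baseline_m)
depth_m = 1.0 / inverse_depth
print(np.round(depth_m, 3))  # dots with bigger shifts resolve to nearer surfaces
```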

The final light sensor is a time-of-flight camera, which shoots out precisely timed bursts of light and then measures how long they take to come back. It calculates the differences between the returning pulses to generate a sense of the shape of the object in front of it. These sensors are similar to what might be used in the next-generation iPhone, because they work well in a variety of lighting situations but aren’t as expensive as a laser range finder.
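The arithmetic behind that is the simplest of the three, and it’s the same principle as the laser range finders above: light travels at a known speed, so half the round-trip time multiplied by the speed of light gives the distance at each pixel. A minimal sketch with made-up timings:

```python
# Minimal sketch of time-of-flight depth, using only numpy. Each pixel
# records how long a light pulse took to go out and come back, and
# distance = speed of light * round-trip time / 2. The timings are made up.
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

# Round-trip times, in nanoseconds, for a tiny 2x3 patch of pixels.
round_trip_ns = np.array([
    [6.7, 6.7, 13.4],
    [6.7, 10.0, 13.4],
])

depth_m = SPEED_OF_LIGHT * (round_trip_ns * 1e-9) / 2.0
print(np.round(depth_m, 2))  # roughly one-, one-and-a-half- and two-meter surfaces
```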

As computers gain depth perception, they can become more accurate at a variety of tasks, from robots that can better manipulate objects to perform complicated jobs to cameras for high-quality biometric security systems.

And what is the IoT really, except the search for better data and ways to manipulate it?

Update: This story was updated on 9-12-2017 to correct the spelling of Lighthouse and to note that lasers also use light. I could really use an editor in cases like that.

Stacey Higginbotham
