I think a lot about scale. How do we as humans navigate a world where we’re adding more computers and relying on those computers to automate more functions? And how do we change our tools to help computers do those things? Such questions may seem academic, but in their answers lies the key to success in the digital era.
As we connect more devices to the internet and ask them to handle more tasks, we need to develop new tools that help us scale. That’s because relying on people to handle communications, programming, or even data gathering would limit computers’ ability to help us solve big problems.
For example, one of Google’s obsessions in the early days was trying to figure out how to change the ratio of system administrators to servers. Where one person used to manage 50 or maybe 100 computers in the data center, Google kept pushing and pushing until that ratio was closer to one person for every 1,000 servers. I’m not sure what the ratio is today, but I’m sure Google and other cloud vendors are still pushing it.
They have to; otherwise the benefits of digitization are capped by the limits of humans. By adding intelligence and even some agency to our devices — by turning thermostats, light bulbs, and tractors into computers, for example — we are in many ways redefining what we mean by scale. Best estimates have Google currently owning some 2.5 million servers, but there are already about 14 million connected thermostats deployed in the U.S.
On a factory floor, one can expect thousands of connected devices. In a connected city, one might find millions. But today, our ability to maintain those sensors, or even to program them with our existing methodologies, is limited by how many people have the necessary skills to perform those tasks.
What we need are tools and devices that help us take the people out of the internet of things. One example of this is Viv, a startup that Samsung purchased in 2016. Viv has created an AI that can help program APIs between web services and devices without human intervention. I found another, similar example this week in a story from the IEEE about researchers building a camera that “sees” like a computer does.
While today’s cameras are built to produce images for people, the growing focus on computer vision — and on placing cameras everywhere as an all-purpose sense — means we should rethink their design. To that end, researchers have developed a “camera” that doesn’t have a lens. Instead, it has a photodetector behind a piece of glass. The photodetector-glass combo creates an image that works for computers but would look fuzzy and blurry to the human eye.
However, a computer doesn’t need all of the information we do to categorize an image. A camera designed around that fact could cost less, be used in more places, and still do what we want a camera to do. And since connected cameras already create far more data than a human will ever be able to see, a camera designed for computers becomes practical.
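To make that point concrete, here is a toy sketch (not the researchers’ actual method) showing that a machine can still tell images apart after heavy blurring renders them useless to a human eye. The synthetic bar images, the box-blur standing in for a lensless capture, and the nearest-centroid classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(kind):
    # 32x32 synthetic image: a bright bar at a random position, plus noise
    img = np.zeros((32, 32))
    pos = rng.integers(8, 24)
    if kind == "horizontal":
        img[pos - 1:pos + 2, :] = 1.0
    else:
        img[:, pos - 1:pos + 2] = 1.0
    return img + rng.normal(0, 0.05, img.shape)

def blur(img, k=9):
    # Heavy box blur standing in for a diffuse, lensless capture.
    # The result looks like a smear to a person, but orientation survives.
    padded = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# "Train" a nearest-centroid classifier on blurred examples of each class.
classes = ["horizontal", "vertical"]
centroids = {c: np.mean([blur(make_image(c)).ravel() for _ in range(20)], axis=0)
             for c in classes}

def classify(img):
    flat = blur(img).ravel()
    return min(classes, key=lambda c: np.linalg.norm(flat - centroids[c]))

# Held-out samples: the machine still separates the blurred classes.
correct = sum(classify(make_image(c)) == c for c in classes for _ in range(10))
print(correct, "/ 20 correct on heavily blurred images")
```

The blurred images would be unrecognizable to us, yet a trivial classifier separates them, which is the whole bet behind building cameras for machines rather than people.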
A fair number of images captured by cameras today are never seen by the human eye, says Rajesh Menon, associate professor of electrical and computer engineering at the University of Utah. They’re seen only by algorithms processing security camera feeds or videos from a factory floor, or by autonomous vehicle image sensors. And the number of images never seen by humans is increasing.
So, Menon asks, “If machines are going to be seeing these images and video more than humans, then why don’t we think about redesigning the cameras purely for machines? Take the human out of the loop entirely, and think of cameras purely from a non-human perspective.”
This is the sort of thinking we will need to apply across a variety of devices and systems as we start to depend more on computers to evaluate the world and make basic decisions on our behalf. It’s a new way of thinking, but there are plenty of things that computers can do better than we can, and letting them do those things will open up new avenues of understanding.
For example, modeling a sensor and its software on how dogs sniff out cancer or infections could be in our future. Or perhaps we can rethink how our old-school weather tracking devices work, so we can design them to be more sensitive and accurate. Let’s get creative.