There’s a lot of hype around digital twins these days. Most companies use the phrase to mean a collection of digital assets associated with a real-world device. They use these twins to understand how outside forces might affect their real-world counterparts, to track problems in manufacturing processes, and even to predict outright machine failures.
The phrase was supposedly coined by NASA scientists trying to model spacecraft for the moon missions. But after visiting the Industrial IoT Lab at Finland’s Aalto University, I was shocked to realize that my understanding of digital twins was shallow at best.
I had two misconceptions. The first concerned what actually makes a digital twin. There are a lot of companies out there using the phrase digital twin to describe what are really equipment-monitoring programs. The companies behind these faux digital twin packages are essentially using sensors on machines to show operators how those machines are performing in real time. But a fully realized digital twin is much more than a digital model that tracks how an asset is functioning.
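To make the contrast concrete, here’s a minimal sketch of what such a monitoring package boils down to: raw sensor values pushed to an operator’s screen, with no model behind them. Everything in it (the machine ID, the sensor channel, the readings) is hypothetical, invented purely for illustration.

```python
# A minimal sketch of a "faux digital twin": plain real-time telemetry.
# All names and values are hypothetical, for illustration only.

import random
import time


def read_sensor(channel: str) -> float:
    """Stand-in for a real sensor read; returns a fake temperature in degC."""
    return 60.0 + random.uniform(-2.0, 2.0)


def monitor(machine_id: str, cycles: int = 5, interval_s: float = 1.0) -> None:
    """Show operators how the machine is doing right now -- and nothing more."""
    for _ in range(cycles):
        temp = read_sensor(f"{machine_id}/spindle_temp")
        print(f"{machine_id}: spindle temp = {temp:.1f} degC")
        time.sleep(interval_s)


monitor("press-07")
```

There’s no model here at all, just numbers scrolling past. That’s the whole distinction.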
A true digital twin has several layers of digital information, ranging from the original CAD drawings of the machine it’s twinning to detailed data about how that machine was manufactured. The goal of using a digital twin is to deeply simulate an individual machine and, by extension, predict how it will behave in certain situations or environments. As time passes, data gets added to the machine’s digital twin, reflecting maintenance records, prior sensor readings, breakdown history, and so on.
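As a rough sketch of those layers, the record below shows the kind of structure a true twin accumulates over an asset’s life, from the CAD reference down to the breakdown history. The field names are my own invention, not any vendor’s schema.

```python
# A rough sketch of the layered record a true digital twin accumulates.
# Field names are hypothetical; real twin platforms define their own schemas.

from dataclasses import dataclass, field


@dataclass
class DigitalTwinRecord:
    asset_id: str
    cad_model_ref: str                                       # original CAD drawings
    manufacturing_data: dict = field(default_factory=dict)   # how it was built
    maintenance_log: list = field(default_factory=list)      # added over time
    sensor_history: list = field(default_factory=list)       # prior readings
    breakdown_history: list = field(default_factory=list)    # failures to date


twin = DigitalTwinRecord(
    asset_id="turbine-0042",
    cad_model_ref="cad/turbine_v3.step",
    manufacturing_data={"batch": "2019-14", "weld_process": "TIG"},
)
# The twin grows as the machine lives its life:
twin.maintenance_log.append({"date": "2019-06-01", "work": "bearing swap"})
twin.sensor_history.append({"time": "2019-06-02T10:00Z", "vibration_mm_s": 1.8})
```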
The model of the machine should also be available offline. That means the data should be stored in a file that can be accessed even without connectivity. Finally, the digital twin should also model the relationship between the data and the metadata associated with the machine. So the twin must model temperature, but it also has to “understand” that the temperature for this particular asset is measured in Fahrenheit. That level of capability gives engineers greater flexibility when it comes to building models, and lets them compare data from digital twins across regions or industries.
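Here’s one way that unit-awareness and offline storage might look in practice. The JSON layout below is an assumption of mine, not any standard; the point is only that the reading carries its own metadata and lives in a plain local file.

```python
# A sketch of unit-aware readings plus offline storage, assuming a simple
# JSON layout (hypothetical; real platforms define their own formats).

import json

reading = {
    "asset_id": "press-07",
    "measurement": "temperature",
    "value": 140.0,
    "metadata": {"unit": "degF"},  # the twin carries its own units
}


def to_celsius(r: dict) -> float:
    """Normalize units so twins from different regions can be compared."""
    if r["metadata"]["unit"] == "degF":
        return (r["value"] - 32.0) * 5.0 / 9.0
    return r["value"]  # assume degC otherwise


# Offline availability: persist to a local file readable without connectivity.
with open("press-07_twin.json", "w") as f:
    json.dump(reading, f)

with open("press-07_twin.json") as f:
    print(to_celsius(json.load(f)))  # 60.0 degC
```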
My second misconception about digital twins was a bit more subtle. I hadn’t considered how much of the data derived from a digital twin is simulated. At the Aalto lab, I watched a machine track the movement of a rotor so it could learn how best to manufacture something like a wind turbine, or predict the movement of a roll of paper.
Watching it prompted me to wonder how much data it takes to create a digital twin. If I had a digital simulacrum of a machine and could apply different environmental or mechanical factors to it, how large would that original simulacrum have to be? It turns out that’s not how digital twins work. They aren’t virtual doppelgangers. They are actually a series of algorithms that describe how a machine moves or behaves. In other words, a twin isn’t a twin so much as it’s…math.
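A toy example of what “math” means here: a first-order model that predicts a bearing’s temperature from load and ambient conditions. The equation and its coefficients are invented for illustration; a real twin would fit them from the asset’s own operating history.

```python
# A toy "twin" of one machine behavior: pure math, no 3D model involved.
# The first-order model and its coefficients are invented for illustration;
# a real twin would be fitted from the asset's own operating history.

import math


def predict_bearing_temp_c(ambient_c: float, load_fraction: float,
                           hours_running: float) -> float:
    """Predicted bearing temperature under a given load and environment."""
    steady_rise_c = 35.0 * load_fraction           # rise at steady state
    warmup = 1.0 - math.exp(-hours_running / 0.5)  # exponential warm-up
    return ambient_c + steady_rise_c * warmup


# Ask "what if" by changing the environment, not the physical machine:
print(predict_bearing_temp_c(ambient_c=20.0, load_fraction=0.8, hours_running=4.0))
print(predict_bearing_temp_c(ambient_c=35.0, load_fraction=0.8, hours_running=4.0))
```

Swap in a hotter ambient temperature or a heavier load and the “twin” answers instantly, which is exactly the kind of question you can’t ask a dashboard of raw sensor feeds.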
And yes, for humans, that math can be laid out over a visual model of a machine, but the digital twin isn’t that visual image. To get all metaphysical about it, the visual image is like a body, while the digital twin/algorithm is the soul. I don’t expect Microsoft, Siemens, or PTC to embrace this analogy as part of their digital twin product marketing anytime soon, but it’s a useful one because it provides a necessary level of subtlety.
Why does all of this matter? In manufacturing, health care, and even commercial real estate, people are talking about how digital twins can help us solve vexing problems and deal with an overflow of information. And when applied properly, they probably can. But if you buy a digital doppelganger and not a digital twin, it’s not going to be as useful.