Reaching the next frontier for digital twins

Real-time analytics and machine learning can enable the intelligent monitoring of large systems 
 


Digital twins are fast becoming an essential component within software systems that design, monitor and improve our physical devices – from thermostats to jet engines and skyscrapers.

The concept of the digital twin was first proposed in 2002 by Michael Grieves for use in the field of product lifecycle management (PLM). Some trace the idea back to the 1960s, when NASA used ground-based replicas and simulations of its Apollo spacecraft, an approach that later helped engineers bring the Apollo 13 astronauts home safely in 1970.

Digital twins have typically been used to model the inner workings of physical devices to assist engineers in refining their designs. They can reveal problems that would be much more costly to correct after their physical counterparts have been constructed. Once a device has been built and deployed in the field, its digital twin can also receive telemetry that allows engineers to identify issues and make changes in the next version of the device.

The digital twin market is already large, with an estimated size of $3.1 billion in 2020. It is also growing rapidly, at a compound annual growth rate of 58 per cent, and is expected to reach $48.2 billion by 2026. While digital twin adoption is still in its early stages, there are new and exciting use cases waiting to be tapped.

One key use case that extends the boundaries of digital twins is real-time analytics, which tracks how live systems are currently behaving instead of how newly designed devices should behave. Here, digital twins continuously analyse streams of telemetry from data sources in the real world to find problems – or in some cases, hidden opportunities. They serve as the eyes and the ears of personnel managing large, complex systems, sifting through torrents of real-time data faster than any human possibly could. They prioritise suspected issues and send alerts when needed so that immediate actions can be taken. 

Consider, for example, a nationwide trucking fleet with thousands of trucks on the road. Dispatchers are tasked with ensuring that the entire fleet runs smoothly and efficiently all day, every day. However, they have far more telemetry to track than they can examine. Every few seconds, each truck on the road sends messages with engine parameters, cargo condition, fuel, location, speed, acceleration, and more. All this data must be digested immediately to spot problems, such as the urgent need for an engine repair, failing refrigeration that could harm cargo, or a lost or fatigued driver. Multiply this challenge by thousands of trucks, and the role for digital twins becomes clear. 
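To make the example concrete, the following sketch (in Python) shows roughly what one such telemetry message might look like. The field names and values are illustrative assumptions only; a real fleet would define its own message schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TruckTelemetry:
    # One periodic message from a truck; fields are hypothetical examples
    truck_id: str
    timestamp: datetime
    latitude: float
    longitude: float
    speed_kph: float
    fuel_level_pct: float
    oil_temp_c: float
    oil_pressure_kpa: float
    engine_rpm: float
    cargo_temp_c: float  # condition of refrigerated cargo

# A message as it might arrive every few seconds:
message = TruckTelemetry(
    truck_id="TRUCK-1042",
    timestamp=datetime.now(timezone.utc),
    latitude=41.88, longitude=-87.63,
    speed_kph=88.0, fuel_level_pct=42.5,
    oil_temp_c=104.0, oil_pressure_kpa=310.0,
    engine_rpm=1650.0, cargo_temp_c=3.8,
)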

Digital twins can separately monitor the condition of each truck by analysing its telemetry with contextual information about the truck and its driver (cargo, destination, schedule, route, mechanical condition, driver’s history, etc.). They can integrate all this information and determine within milliseconds whether to alert a dispatcher to solve a problem. Together, they work at scale to intelligently track the intricate workings of a massive system like a trucking fleet.  
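One simple way to picture this is a small object per truck that holds the truck's context and examines each incoming message, raising an alert only when a rule is violated. The sketch below continues the hypothetical TruckTelemetry message above; the rules, thresholds and field names are placeholders for illustration and do not represent any particular product's API.

from dataclasses import dataclass, field

@dataclass
class TruckTwin:
    # Digital twin state for one truck (simplified, hypothetical context)
    truck_id: str
    cargo_type: str           # e.g. "refrigerated"
    max_cargo_temp_c: float   # limit appropriate for this truck's cargo
    low_fuel_pct: float = 15.0
    alerts: list = field(default_factory=list)

    def process(self, msg: TruckTelemetry) -> None:
        # Analyse one telemetry message against this truck's context
        if self.cargo_type == "refrigerated" and msg.cargo_temp_c > self.max_cargo_temp_c:
            self.alerts.append((msg.timestamp, "cargo temperature above limit"))
        if msg.fuel_level_pct < self.low_fuel_pct:
            self.alerts.append((msg.timestamp, "fuel level low"))
        # A real system would push alerts to a dispatcher immediately
        # rather than collecting them in a list.

# One twin instance per truck, fed each message as it arrives:
twin = TruckTwin(truck_id="TRUCK-1042", cargo_type="refrigerated", max_cargo_temp_c=5.0)
twin.process(message)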

The applications are countless. Digital twins can analyse telemetry from internet of things devices in cities to monitor traffic sensors, sounds at intersections, and gas leak detectors. They can analyse biometric data from health-tracking devices to look for medical issues, and they can watch entry points in a large factory to maintain security. These are just a few examples. 

A major challenge remains. How can digital twins incorporate real-time analytics algorithms that effectively pick out the signal from the noise in streams of telemetry? How can they be designed to alert when needed but not unnecessarily? For example, when does the telemetry from a truck engine indicate that maintenance will be needed?  

It turns out that machine learning algorithms are especially well suited to tackling problems like these. They can be trained on historical data to look for anomalies in groups of telemetry values, such as engine parameters like oil temperature, oil pressure and revolutions per minute. Once they are trained to recognise what's normal and what's abnormal, they can be turned loose to run within digital twins and process live data. Because machine learning algorithms don't need to know why certain telemetry combinations are abnormal, they can be applied to a wide variety of applications.
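As an illustration only, the sketch below trains an off-the-shelf anomaly detector on synthetic "normal" engine readings and then scores a new reading as it might be scored inside a digital twin. The article does not name a specific algorithm or library; scikit-learn's IsolationForest is used here as one common choice, and the data is made up for the example.

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "normal" engine readings: oil temp (C), oil pressure (kPa), RPM.
# In practice these would come from the fleet's telemetry history.
rng = np.random.default_rng(0)
history = np.column_stack([
    rng.normal(100, 5, 5000),     # oil temperature
    rng.normal(300, 20, 5000),    # oil pressure
    rng.normal(1600, 150, 5000),  # engine RPM
])

# Train on historical data so the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history)

# Inside a digital twin, each new reading is scored as it arrives;
# a prediction of -1 means the combination of parameters looks anomalous.
reading = np.array([[125.0, 220.0, 2400.0]])  # hot oil, low pressure, high RPM
if model.predict(reading)[0] == -1:
    print("Alert dispatcher: engine telemetry looks abnormal")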

Incorporating machine learning into digital twins takes them well beyond their roots in PLM and opens the door for their rapid adoption in live systems where effective real-time analytics is essential. Whether it’s for our supply chains, security or healthcare, digital twins with real-time analytics and machine learning will undoubtedly play a key role.  

William Bain is CEO of ScaleOut Software

This article was originally published in the Winter 21/22 issue of Technology Record.
