From Concept to Reality: Understanding the Power of Digital Twins—Part II.

The defining feature of Machine Learning (ML) models is that they store information acquired from their training data. An ML model is a program that can find patterns in, or make decisions about, previously unseen data. For example, in Natural Language Processing (NLP), ML models can analyze and correctly recognize the intent behind previously unseen sentences or word combinations; this ability is the basis of today’s Large Language Models. In image recognition, an ML model can be taught to recognize objects—for example, cars or dogs. The model acquires such abilities by being “trained” on a large data set: during training, the ML algorithm is optimized to find certain patterns or produce certain outputs, depending on the task. The result of this process—typically a computer program containing specific rules and data structures—is collectively known as a machine learning model.
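The train-then-freeze workflow described above can be illustrated with a deliberately tiny, purely hypothetical sketch: a nearest-centroid classifier whose “training” extracts patterns (per-class mean feature vectors) from labeled data, after which the resulting model is a static artifact whose behavior only changes if it is retrained.

```python
# Toy sketch (invented data, not a production method): training stores
# per-class centroids; prediction classifies unseen points by distance.

def train(samples, labels):
    """Learn one centroid (mean feature vector) per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    # The returned dict IS the "model": rules (nearest centroid)
    # plus learned data structures (the centroids themselves).
    return {y: [s / counts[y] for s in acc] for y, acc in sums.items()}

def predict(model, x):
    """Classify a previously unseen sample by its nearest centroid."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, x))
    return min(model, key=lambda y: dist2(model[y]))

# Two toy classes in a 2-D feature space.
model = train([(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (4.8, 5.0)],
              ["dog", "dog", "car", "car"])
print(predict(model, (0.1, 0.2)))   # a new point near the "dog" cluster
print(predict(model, (5.0, 4.9)))   # a new point near the "car" cluster
```

Note that once `train` has returned, `model` is frozen: feeding the system new observations does not alter it, which is exactly the static nature discussed next.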

A rarely emphasized but essential feature of such models is their static nature. This means that once the model has been trained, its “behavior” cannot be changed except by repeating the training process, possibly relying on modified training data.

In contrast, the term “model” in the context of Digital Twins (DTs) refers to a complex, simulated representation that imitates the behavior, properties, and operating conditions of a physical object or system. A DT model allows for real-time analysis, simulations, and predictions while being continuously updated with real-world data. It can be roughly thought of as a very realistic model that has all the important properties of the original object.

So, the most important features that characterize DT models are:

  • Data-drivenness, already mentioned above: the model is grounded in data collected by sensors about the physical object and its environment.
  • Mathematical, physical basis: the model is often based on mathematical equations and physical laws that describe how an object or system reacts to different environmental influences or internal changes.
  • Computer simulation: DT models are simulated using computer algorithms, which allow users to “experiment” with the object or system without making any changes in reality. This helps to eliminate design errors, plan maintenance, and operate more efficiently.

Besides these, perhaps the most important is the dynamic nature of the model, i.e. that

  • the model is dynamically updated as new data arrives, so it constantly reflects the current state of the physical object. This allows the model to respond to changes in time and provide accurate information about current behavior.
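The three properties above can be condensed into a minimal, purely illustrative sketch (all names, values, and the scenario are invented): a digital twin of a water tank’s temperature whose response to input is determined by a physical law—Newton’s law of cooling—and which re-synchronizes itself whenever a new sensor reading arrives.

```python
# Hedged sketch of a dynamic, physics-based DT model.
# Governing equation: dT/dt = -k * (T - T_ambient), stepped with
# a simple explicit Euler scheme (one step per minute).

class TankTwin:
    def __init__(self, temp_c, ambient_c, cooling_rate):
        self.temp_c = temp_c        # current modeled temperature (deg C)
        self.ambient_c = ambient_c  # ambient temperature (deg C)
        self.k = cooling_rate       # cooling constant (1/min)

    def ingest(self, sensor_temp_c):
        """Dynamic update: sync the twin to the latest sensor reading,
        so the model always reflects the physical object's current state."""
        self.temp_c = sensor_temp_c

    def simulate(self, minutes):
        """'Experiment' on the model without touching the real tank:
        predict the temperature `minutes` from now."""
        t = self.temp_c
        for _ in range(minutes):
            t += -self.k * (t - self.ambient_c)
        return t

twin = TankTwin(temp_c=80.0, ambient_c=20.0, cooling_rate=0.05)
print(twin.simulate(10))  # prediction from the current modeled state
twin.ingest(75.5)         # real-world data arrives, the twin resyncs
print(twin.simulate(10))  # the same question now gives an updated answer
```

Unlike the frozen ML model earlier, this model’s output changes the moment new data arrives—no retraining step is needed—because the physical law, not a learned parameter set, determines its response to each input.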

The mathematical, often statistical, foundations implicit in ML models are therefore only responsible for pattern recognition, whereas in DT models they determine the model’s response to each input.

To achieve this, integrity must be ensured—a fundamental aspect of DT design. It means that the data and modeling underpinning the digital twin correspond accurately, reliably, and credibly to the current state of the physical object or system. This includes, for example, ensuring the accuracy and reliability of the data, but also building in a high level of data protection so that the data is shielded from unauthorized access and manipulation.

The above may already illustrate the untapped potential in the development of DT systems. Let’s look at a few examples, without aiming to be exhaustive.

In engineering, DTs are already key to optimizing the design and operation of systems. In the aerospace industry, for example, engineers use DTs to assess the structural integrity of aircraft under different conditions. These models predict how aircraft components will react to various stresses, helping to prevent failures before they occur and ensuring aircraft safety and reliability during flights.

Similarly, in urban planning and construction, DTs can simulate the behavior of buildings under different environmental conditions, helping engineers arrive at more sustainable and safer designs. They also play a critical role in the maintenance and operation of large infrastructure such as bridges and power plants.

As the accuracy of sensors increases and their cost decreases, DTs can be envisioned as digital representations of entire smart cities. They can be used to simulate traffic situations, optimize the energy consumption of public services, and even increase the efficiency of public transport and contribute to a more livable environment, which can be particularly important in large cities.

Of course, the impact of digital applications can also extend to other social benefits. In healthcare, DTs of organs can help with personalized treatment plans and surgical preparation, which can save lives through precision medicine tailored to individual patients. In environmental science, DTs of entire ecosystems can help manage natural resources more efficiently and fight climate change.

As we continue to develop and refine these digital tools, their role in understanding and managing complex systems cannot be overstated. As today’s systems become more sophisticated and efficient, we could easily see a situation where data-driven decision-making becomes not just a benefit, but a necessity.

Of course, like all areas, the development of DTs is not without its challenges. Creating and maintaining DTs of complex systems is far from easy, despite their huge potential. The accuracy of a DT depends on the quality and quantity of its input data, which can be limited or noisy. In addition, running high-fidelity simulations requires significant processing power, which drives up costs.

Nevertheless, optimism about the future of the technology is hardly overstated: advances in sensor technology, ML, and computing power continue unabated. As these technologies evolve, the scope and accuracy of DTs are likely to improve further, making them an even more integral part of design, maintenance, and operations across a wide range of sectors.

In summary, the development of DTs demonstrates how far technology has come and how far it can take us. The potential of digitalization to reinvent industries—for example, by improving quality of life and protecting the environment—is huge. The journey of DTs from a novel concept to an established practice in engineering and beyond underscores their transformative power, promising a smarter, more efficient, and better-connected world.


István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.
