On the Way to the Quantum Revolution: How Will Quantum Mechanics-Based Computing Shape the Future?
The history of quantum computing began with the discovery of quantum mechanics in the early 20th century. Quantum mechanics began to be applied to computing in the 1980s, when Richard Feynman suggested that computers based on quantum mechanics might be needed to simulate nature. Around the same time, Paul Benioff described the first quantum mechanical model of the Turing machine, which laid part of the theoretical foundation for quantum computing.
In recent decades, and especially since the 2010s, the development and research of quantum computers have accelerated significantly. IBM, Google, Microsoft, and many other technology giants have invested significant resources in quantum technology, and quantum clouds and open-source quantum programming tools are now available. The IBM Quantum Experience, for example, provides open access to quantum computers via the cloud, giving users free access to real quantum hardware for experimentation and research. At the same time, quantum programming languages have developed spectacularly. Microsoft’s quantum programming language Q#, for example, allows the integration of quantum and classical programs, a significant step beyond what was possible a decade earlier.
Quantum computing has both huge potential and serious limitations. To understand why the gap between “classical” and quantum computing is so great, it is useful to start with the most basic building blocks of any computer system.
In computing, essentially all information is stored using bits. In the “classical” approach, each bit is implemented by a single transistor which, depending on the voltage applied to it, stores one of two states: zero or one. Of course, the voltage is not discrete but continuous, i.e. it can take any value. Zero usually corresponds to 0 V, while one may correspond to 3.3 V or 5 V, for instance. The point is that during processing, this voltage is always interpreted as either 0 or 1; no other states can be exploited.
The bits used in quantum computers (quantum bits, or qubits) are fundamentally different, and as a result, quantum computing follows a completely different computational model from classical computing. The implementation of a qubit is much more complex than the above, as it is based on quantum mechanical principles. A qubit can be physically realized in several different ways; the common feature is that each method exploits the properties of quantum mechanics. Without going into too much detail, qubits can be built using superconductors, ion traps, or photonic techniques. Each of these offers different advantages and poses different implementation difficulties. Superconducting qubits, for example, can only operate at extremely low temperatures, while photonic solutions produce qubits that are easy to transmit but difficult to control.
Qubits are characterized by their ability to exist in several states at the same time. A conventional bit is in exactly one state at any given moment: either it is energized or it is not. This is also consistent with our general intuition about how the world works. A qubit, by contrast, behaves roughly as if it were both energized and de-energized at once. Quantum systems can be in several different states at the same time; in our case, this means a qubit can be in state 0 and state 1 simultaneously. These 0 and 1 states are associated with probabilities. Simply put, at a given moment a qubit can be in state 0 with a probability of 40% and in state 1 with a probability of 60%.
Without excessive formalism, but still using the appropriate notation, this can be written as follows:
|ψ⟩=α|0⟩+β|1⟩
From our point of view, it is not important how exactly α and β are calculated (they are so-called complex amplitudes); what matters is that |α|^2 is the probability that the qubit is in state 0, while |β|^2 is the probability that it is in state 1. Notice that with this single qubit, we can represent both the 0 and the 1 state at the same time. With conventional bits, this would require two bits (one in state 0 and one in state 1), or the two states could only be examined in sequence, which would take twice as long (for example, in a real-world calculation).
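To make this concrete, here is a minimal sketch in Python (using NumPy) of such a single-qubit state; the specific amplitudes are purely illustrative, chosen so that they reproduce the 40%/60% example above.

```python
import numpy as np

# Amplitudes for |psi> = alpha|0> + beta|1>.
# These particular values are only illustrative: they give
# P(0) = |alpha|^2 = 0.4 and P(1) = |beta|^2 = 0.6.
alpha = np.sqrt(0.4)
beta = np.sqrt(0.6)

state = np.array([alpha, beta], dtype=complex)

# A valid quantum state must be normalized: |alpha|^2 + |beta|^2 = 1.
assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)

print("P(measuring 0):", abs(alpha) ** 2)  # 0.4
print("P(measuring 1):", abs(beta) ** 2)   # 0.6
```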
Imagine that we want to perform simple multiplications. Suppose we need the multiples of 2, such as 2*0, 2*1, 2*2, etc., say the first 8 of them. Since we need to multiply two by each of the numbers [0, 1, …, 7], let’s say we store these numbers in ordinary bits. We can do this as follows:
Classical bits | Number represented
000 | 0
001 | 1
010 | 2
011 | 3
100 | 4
101 | 5
110 | 6
111 | 7
The first column of the table shows the values of the classical bits, and the second column the number they stand for. This is essentially the binary number system, one of the cornerstones of computing. To do the multiplications above, we need 3 conventional bits (to store the numbers from 0 to 7), and then we can perform each multiplication one by one. If each multiplication takes one unit of time, this gives a total of 8 units of time.
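For comparison, the classical, sequential version of this calculation might look like the short Python sketch below; the point is simply that the eight multiplications happen one after another, one per "time unit".

```python
# Classical approach: the eight inputs are processed one by one.
results = []
for n in range(8):          # the numbers 0..7, i.e. the bit patterns 000..111
    results.append(2 * n)   # one multiplication per "time unit"

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14] after 8 sequential steps
```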
Let’s examine the same question, but this time using qubits to store the numbers. From what we have seen so far, 3 qubits can store all 8 numbers above in parallel. This is because each of the place values (the first column of the table) can hold both 0 and 1 at the same time. As before, more formally:
|ψ⟩=α_0 |000⟩+α_1 |001⟩+⋯+α_7 |111⟩
Here α_0 … α_7 are the complex amplitudes associated with each state (so, as before, |α_i|^2 gives the probability of each state). So, in line with the example above, we can essentially do all the multiplications at once on a quantum computer, without having to do them in sequence. That is, the expected results (0, 2, 4, 6, 8, 10, 12, 14) will all exist simultaneously within our quantum system.
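As a rough illustration (a classical simulation of the state vector with NumPy, not code for a real quantum device), the sketch below builds the equal superposition of the eight basis states |000⟩ … |111⟩; every input value from 0 to 7 is present in the state at once, each with probability |α_i|^2 = 1/8.

```python
import numpy as np

n_qubits = 3
dim = 2 ** n_qubits  # 8 basis states: |000>, |001>, ..., |111>

# Equal superposition: every amplitude alpha_i = 1/sqrt(8),
# so every basis state has probability |alpha_i|^2 = 1/8.
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for i, amplitude in enumerate(state):
    print(f"|{i:03b}>  amplitude = {amplitude.real:.4f}  "
          f"P = {abs(amplitude) ** 2:.4f}  doubled value = {2 * i}")
```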
However, in addition to superposition (essentially what we have been talking about so far), such systems have one more important property. Superposition is only present until we try to measure the state of the system. The very act of measurement makes the quantum superposition disappear; this is called “collapse.” More precisely, until a measurement is made, the state of the particle is described by a probability wave function, which indicates the probability of finding the particle in a certain state. The act of measurement, however, removes the superposition, and the particle ends up in a single specific state. So, for example, when we measure the state of a qubit in the example above, the superposition “collapses” and the qubit is either in state 0 or state 1. Before the measurement, it is not possible to say which state it will be in, only the probability that it will be 0 or 1. This also means that of the numbers expected as the result (0, 2, 4, 6, 8, 10, 12, 14), essentially only one will be read out of the system, and it is essentially a matter of chance which one it will be.
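Continuing the classical simulation above, measurement can be mimicked by drawing one basis state at random according to the probabilities |α_i|^2; as described, only a single, effectively random result is obtained, and the rest of the superposition is lost.

```python
import numpy as np

dim = 8
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # the superposition from before
probabilities = np.abs(state) ** 2                      # |alpha_i|^2 for each basis state

# "Measurement": one outcome is drawn at random, weighted by the probabilities.
outcome = int(np.random.choice(dim, p=probabilities))
print(f"Measured |{outcome:03b}>, i.e. the input {outcome}; its double is {2 * outcome}")

# After the measurement the superposition has collapsed:
# the system is left in a single basis state.
collapsed = np.zeros(dim, dtype=complex)
collapsed[outcome] = 1.0
```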
This also leads to a very important conclusion. If we really want to exploit the potential of quantum systems, we need to find a way to extract, from the information stored in the superposition, exactly what we need at that moment. In other words, merely having an architecture, in the form of quantum computers, with the potential to speed up computation is by itself practically useless. What makes it valuable (and, in some applications, dangerous) is finding a way to turn the information stored in superpositions into something useful for the task at hand; quantum algorithms do this by using interference to boost the probability of the desired outcome and suppress the others. This is therefore one of the biggest challenges of quantum computing today (and today’s solutions are still fundamentally unreliable).
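One illustration of the interference mentioned above, sketched again as a classical NumPy simulation rather than a real algorithm: applying a Hadamard operation once spreads a qubit into an equal superposition, while applying it a second time makes the two computational paths interfere so that the qubit returns to state 0 with certainty. Real quantum algorithms orchestrate this kind of cancellation and reinforcement on a much larger scale.

```python
import numpy as np

# Hadamard gate: creates (or removes) an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1, 0], dtype=complex)  # start in |0>

state = H @ state
print("After one Hadamard: ", np.abs(state) ** 2)   # [0.5 0.5] -> a 50/50 superposition

state = H @ state
print("After two Hadamards:", np.abs(state) ** 2)   # [1. 0.] -> interference restores |0>
```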
Quantum computers therefore hold unprecedented potential, offering computing speed and capacity far beyond the capabilities of today’s classical computers. However, the often counter-intuitive and complex properties of quantum mechanics also pose serious challenges if these capabilities are to be exploited in practice. To truly exploit the potential of quantum computing, we need to find a way to use the information gained from superposition in an efficient and targeted way. Only then will quantum technology become the foundation of future computing, while new opportunities and threats will also emerge.
István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.