
An open quantum system. The basic quantum equation


What happens when the system in question comes into contact with an environment that has no long-term memory but is able to cause measurements of some part of the system? This question is of great practical importance. For example, if an atom emits a photon and this photon flies away while remaining entangled with the atom, a measurement of this photon automatically leaves the atom in a mixed state; that is, it causes decoherence, after which the atom must be described by a density matrix.
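A minimal NumPy sketch of this claim (the example state and the basis labels are illustrative, not from the lecture): tracing the departed photon out of the entangled atom-photon pair leaves the atom maximally mixed.

```python
import numpy as np

# Atom basis |g>, |e>; photon basis |0> (no photon), |1> (one photon).
# Entangled pure state (|g,0> + |e,1>)/sqrt(2) of the atom-photon pair.
psi = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

rho_full = np.outer(psi, psi.conj())               # pure state: tr(rho^2) = 1

# Partial trace over the photon: rho_atom[a,b] = sum_k rho[(a,k),(b,k)].
rho_atom = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_atom, 3))                       # I/2: maximally mixed atom
print("purity tr(rho^2) =", np.trace(rho_atom @ rho_atom).real)   # 0.5 < 1
```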

We may not even know what happens to the emitted photon; perhaps no one is watching it, and it will be reflected from a distant mirror and fly back to us. All the same, while it is not nearby, we must consider the state of our atom as mixed. If the photon, once emitted and measured by no one, arrives again, the atom-photon system will be in a pure state once more; but if another photon arrives instead of ours, which got into someone's detector, then the composite system must be described by the density matrix of a mixed state.

That is, we can determine whether the photon has been measured only when it arrives back to us. If we arrange the experiment so that a photon arrives at each repetition, we can, by changing the measurement basis, determine by tomography whether a measurement took place, that is, whether it is the same photon that once flew out of our atom or another one: in detection, the photon disappears.

However, this method is statistical. With its help, we can only check whether the outgoing photons are systematically measured in experiments of the same type, or whether there are no measurements and all the photons are reflected from the mirror and fly back to us. For an individual run, no such conclusion can be drawn: the conclusions of quantum theory are always only statistical.
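The lecture gives no numerical example here; the following sketch (a two-level atom and illustrative states of our choosing) shows why only statistics can distinguish the two cases: the pure state exhibits basis-dependent interference, while the mixed one gives 1/2 in every basis.

```python
import numpy as np

rho_pure  = np.array([[0.5, 0.5], [0.5, 0.5]])   # (|g>+|e>)/sqrt(2): photon returned unmeasured
rho_mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # photon measured somewhere: coherences lost

def prob_plus(rho, theta):
    """Probability of the '+' outcome in a measurement basis rotated by theta."""
    v = np.array([np.cos(theta), np.sin(theta)])  # rotated basis vector
    return float(v @ rho @ v)

for theta in (0.0, np.pi / 4, np.pi / 2):
    print(f"theta={theta:.2f}: pure={prob_plus(rho_pure, theta):.2f}, "
          f"mixed={prob_plus(rho_mixed, theta):.2f}")
# The pure state shows interference (0.50, 1.00, 0.50 as theta varies);
# the mixed state gives 0.50 in every basis. A single run cannot tell them apart.
```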

The time evolution of the density matrix of a system interacting with a stationary environment that has no long-term memory is described by a generalization of the Schrodinger equation to the density matrix, called the Gorini - Kossakowski - Sudarshan - Lindblad quantum master equation:

$$i\hbar\,\dot\rho = [H, \rho] + iL(\rho), \qquad L(\rho) = \sum_{j=1}^{N^2-1} \gamma_j \Big( A_j \rho A_j^+ - \frac{1}{2}\{A_j^+ A_j, \rho\} \Big) \qquad (97)$$

where the operators $A_j$ are called decoherence factors and must, together with the identity operator, form an orthonormal basis in the $N^2$-dimensional Liouville space of operators of size $N \times N$, in which the scalar product is defined by the formula $\langle A|B\rangle = \mathrm{tr}(A^+ B)$. Here, following tradition, we denote the conjugate operator by a cross, and the non-negative numbers $\gamma_j$ are the intensities of the decoherence factors $A_j$.
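As an illustration of this requirement (the concrete basis is our assumption, not the text's): for $N = 2$, the normalized Pauli matrices together with $I/\sqrt{2}$ form such an orthonormal basis, as the following NumPy check confirms.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# N = 2: four operators, each normalized so that <A|A> = tr(A^+ A) = 1.
basis = [m / np.sqrt(2) for m in (I2, sx, sy, sz)]

gram = np.array([[np.trace(A.conj().T @ B) for B in basis] for A in basis])
print(np.round(gram.real, 10))    # the 4x4 identity: the basis is orthonormal
```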

This equation is a generalization to the quantum case of the main Markov equation $\dot P = AP$ for a probability distribution $P$: where the theory of random processes considers a given dynamics of probability distributions, that is, the dynamics of the main diagonal of the density matrix, quantum physics considers the entire density matrix and investigates the physical causes of such dynamics.
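To see the correspondence concretely (an illustrative choice of decoherence factors, not made in the text): take jump operators $A_{kl} = |k\rangle\langle l|$ with intensities $\gamma_{kl}$ and a diagonal $\rho$, so that the commutator does not contribute to the diagonal. Then, absorbing the factor $1/\hbar$ into the rates, equation (97) gives for $p_k = \rho_{kk}$

$$\dot p_k = \sum_{l \ne k} \gamma_{kl}\, p_l - \Big( \sum_{l \ne k} \gamma_{lk} \Big) p_k,$$

which is exactly $\dot P = AP$ with $A_{kl} = \gamma_{kl}$ for $k \ne l$ and $A_{kk} = -\sum_{l \ne k} \gamma_{lk}$.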

The numerical solution of equation (97) can be carried out by the Euler method.

The point is that the main term $[H, \rho]$ on the right-hand side corresponds to unitary dynamics; this dynamics does not increase the magnitude of the error, so there are no pathological cases of rapid error growth and, as a rule, there is no need for more accurate methods of the Runge-Kutta type. The solution can be represented as a sequence of steps, each of which corresponds to the time $t_j$, begins with the density matrix $\rho(t_j)$, and consists of two actions:

1. The unitary dynamics of the density matrix is calculated:

$$\tilde\rho(t_{j+1}) = \rho(t_j) + \frac{1}{i\hbar}\,[H, \rho(t_j)]\,dt.$$

2. The action of the Lindblad superoperator $L$ on $\tilde\rho(t_{j+1})$ is added:

$$\rho(t_{j+1}) = \tilde\rho(t_{j+1}) + \frac{1}{\hbar}\,L(\tilde\rho(t_{j+1}))\,dt.$$

The density matrix $\rho(t)$ at any given time must be positive semidefinite, Hermitian, and have unit trace. The last two conditions, in the presence of random errors, are easily restored by passing from the slightly corrupted matrix $\rho(t)$ to the corrected matrix $(\rho(t) + \rho^+(t))/\mathrm{tr}(\rho(t) + \rho^+(t))$. To ensure positive semidefiniteness in the presence of random errors, one can compute the eigenvalues from time to time, for example once every 20 steps, and, when a small negative eigenvalue appears, correct it by redistributing the error over all the other eigenvalues.
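A minimal NumPy sketch of this two-step scheme (the system is our assumption, not the lecture's: a single qubit with $H = \sigma_z$, one decoherence factor $A = \sigma_-$ of intensity $\gamma = 0.5$, and $\hbar = 1$):

```python
import numpy as np

# A single qubit with basis |0> (ground), |1> (excited); hbar = 1.
hbar = 1.0
H = np.array([[1, 0], [0, -1]], dtype=complex)      # example Hamiltonian
A = np.array([[0, 1], [0, 0]], dtype=complex)       # decay operator |0><1|
gamma = 0.5                                         # intensity of A

def lindblad(rho):
    """L(rho) = gamma (A rho A^+ - {A^+A, rho}/2), as in (97) with one factor."""
    AdA = A.conj().T @ A
    return gamma * (A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA))

def euler_step(rho, dt):
    # Step 1: unitary part, rho~ = rho + [H, rho] dt / (i hbar).
    rho_t = rho + (H @ rho - rho @ H) * dt / (1j * hbar)
    # Step 2: Lindblad part, rho = rho~ + L(rho~) dt / hbar.
    rho = rho_t + lindblad(rho_t) * dt / hbar
    # Correction: restore Hermiticity and unit trace after round-off errors.
    rho = (rho + rho.conj().T) / np.trace(rho + rho.conj().T).real
    return rho

rho = np.array([[0, 0], [0, 1]], dtype=complex)     # start in the excited state
dt, steps = 1e-3, 5000
for _ in range(steps):
    rho = euler_step(rho, dt)
print(np.round(rho, 3))   # population relaxes toward the ground state |0><0|
```

The periodic eigenvalue check described above (once every 20 steps or so) is omitted for brevity; the Hermiticity and trace corrections are applied at every step.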

9 Lecture 9. The complexity of a quantum system and the accuracy of its description

9.1 Introduction

Our understanding of quantum theory has evolved greatly since its inception. If until roughly the 1980s-90s it was, as a rule, systems simple from the classical point of view that were studied - individual atoms, molecules, or ensembles of identical particles reducible to separate independent simple objects - then in recent decades the focus of research has shifted towards more complex systems. In particular, the relevance of microbiology and virology has aroused physicists' interest in studying objects related to living things, for example the DNA molecule, which can no longer be counted among the simple systems.

Meanwhile, quantum theory, which is the basis of our understanding of the microcosm and, therefore, of an accurate understanding of complex systems, has a very rigid and well-defined mathematical apparatus based on matrix technique. The predictions of quantum mechanics have always proved to coincide very precisely with experiments on the simple systems traditional for physics, but for complex systems this theory meets a fundamental obstacle: the very procedure of obtaining theoretical predictions requires such unimaginable computational resources that we will never have them at our disposal.

If for simple systems the procedure of computing the quantum state had no relation to the physics of its evolution and was only a technical device, then for complex systems the situation is different. Here the computation process is the main part of the definition of the quantum state itself and should therefore be considered a physical process, and the device that implements this computation is an integral part of any experiment with complex systems at the quantum level.

This computing device is an abstract computer that simulates the evolution of the complex system under consideration. Thus, all restrictions on this computer that follow from the theory of algorithms have the status of physical laws, and in the case of complex systems and processes these laws have absolute priority over physical laws in the usual sense.

This is a new situation that did not exist in classical physics, where the procedure for obtaining theoretical predictions was not very complicated. In any case, its complexity has almost always been within the reach of classical supercomputers, which are created mainly to cover processes measured in classical terms - by the number of particles in the system under consideration. In quantum mechanics, the complexity grows exponentially with the number of particles, and the classical way of computing becomes unworkable. This was rigorously established by the discovery of theoretically possible (from the point of view of the standard, Copenhagen, quantum theory) processes that cannot be modeled on any classical supercomputer - the so-called fast quantum algorithms ([18], [12]).

The attempt to circumvent the complexity barrier with the help of the quantum computer proposed by R. Feynman ([36]) has given us much for understanding the microcosm, as well as some interesting applications, for example in cryptography and metrology. However, this attempt did not solve the main problem: the scaling of a fully functional quantum computer is very questionable due to decoherence. Decoherence arises as a result of spontaneous measurements of the states of the simulated system by the environment; it is traditionally treated in the framework of the concept of an open quantum system in contact with its environment (see [37]), so that the influence of the environment reduces to uncontrolled measurements of the state of the original system.

Thus, decoherence is a fundamental factor that cannot be eliminated by mathematical techniques such as error-correcting codes (they begin to really work only for a quantum computer with more than a hundred qubits). If we set the task of modeling complex systems at the quantum level, decoherence should be embedded in the quantum formalism itself, not introduced into it as an extraneous influence. The deviation from the linear unitary law of evolution that results from decoherence must be given a natural mathematical justification.

There is a complexity barrier to the matrix formalism. If we have a quantum state of a system of $c$ two-state particles that cannot be simplified, and $a$ is the total number of binary digits that determine the positions of all these particles, then for the total number of bits $a + c$ the inequality $a + c \le q$ must be satisfied, where the dimensionless constant $q$ can be determined experimentally.

If we move from the strings of qubit values $\alpha_0, \alpha_1, \ldots, \alpha_{n-1}$ to the numbers defined by their binary expansions $\sum_{j=0}^{n-1} \alpha_j 2^j$, then $a = cn$ and we obtain the classical analogues of accuracy and complexity, $A = 2^a$ and $C = 2^c$, for which a relation similar to the coordinate - momentum uncertainty relation is satisfied:

$$AC \le Q, \qquad (98)$$

where $Q = 2^q$.
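As a purely numerical illustration (the value of $q$ here is hypothetical; as stated above, $q$ must be determined experimentally): suppose $q = 100$. Then a state of $c = 60$ unsimplifiable two-state particles leaves at most $a \le q - c = 40$ binary digits of accuracy, that is,

$$C = 2^{60}, \qquad A \le 2^{40}, \qquad AC \le 2^{100} = Q,$$

so that greater complexity directly forces lower accuracy of description, just as a smaller $\Delta x$ forces a larger $\Delta p$.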

We will argue in favor of such a relation between complexity and accuracy in the general case, for complex particles; in particular, we will show that quantization of the amplitude makes it possible to introduce into the quantum formalism a certain determinism whose nature is not reducible to the classical one.
