
## Classical error correction

Error correction relies on encoding the information redundantly to make it more robust to errors. A distinction needs to be drawn between the logical bit, which is the information, and the physical bits, which are its physical support. Hereafter, we describe a simple repetition code that allows for sending a single logical bit more accurately by encoding it into several physical bits.

Suppose that Alice wants to transmit one bit of information to Bob, say a 1 for example. She sends her favourite physical bit in state 1, but there is a small probability p that Bob receives a 0 instead due to noise in the communication channel. Therefore, the success probability of this transmission is 1 − p. To increase this probability, Alice sends three bits to Bob, 111, that altogether encode the logical information 1. Each travelling bit has the same error probability as before but, provided Bob uses the right decoder, the overall success probability becomes 1 − 3p² + 2p³ (see tab. 1.1). This success probability is increased provided p is below a given threshold, 1/2 in this case.
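The probabilities above can be checked with a short enumeration of the majority-vote decoder (an illustrative sketch, not taken from the text):

```python
from itertools import product

def majority_success(p, n=3):
    """Probability that majority-vote decoding recovers the logical bit
    when each of n physical bits flips independently with probability p."""
    total = 0.0
    for flips in product([0, 1], repeat=n):
        if sum(flips) <= n // 2:  # a minority of flips: decoding succeeds
            prob = 1.0
            for f in flips:
                prob *= p if f else 1 - p
            total += prob
    return total

p = 0.1
# matches the 1 - 3p^2 + 2p^3 expression from the text
assert abs(majority_success(p) - (1 - 3*p**2 + 2*p**3)) < 1e-12
# at the threshold p = 1/2, the encoding brings no improvement
assert abs(majority_success(0.5) - 0.5) < 1e-12
```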

The error probability of this protocol goes from p to 3p² at leading order. The form of this probability illustrates a fundamental point of error correction:

• the factor 3 shows that we start by making things worse: errors are now three times more likely to happen;

• but this is balanced by an insensitivity to single errors, as shown by the square, so things get better once the error probability is low enough.

Sending the information five times lowers the error probability to O(p³), seven times to O(p⁴), etc. Hence, provided the noise is below a threshold, it is possible to reach arbitrary reliability by transmitting a sufficient number of noisy bits, i.e. by increasing the redundancy. However, this holds only under some assumptions on the type of errors that may happen. For example, let us imagine there is an eavesdropper on the line during the communication. This eavesdropper is present with a small probability pe but, if present, flips every bit that travels. In this case, whatever the level of redundancy Alice chooses, Bob gets the wrong information and the success probability of the protocol can never exceed 1 − pe due to this source of highly correlated errors. This example emphasizes that a given protocol is only suited for correcting a given set of errors.
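The scaling with the repetition length can be sketched as follows; the binomial tail below is the standard failure probability of majority voting over n independent bits (illustrative code, not from the text):

```python
from math import comb

def logical_error(p, n):
    """Failure probability of n-bit repetition with majority voting:
    more than half of the bits flip."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 1e-3
for n in (3, 5, 7):
    t = (n + 1) // 2  # smallest number of flips that fools the decoder
    # leading-order behaviour O(p^t), as stated in the text
    assert abs(logical_error(p, n) / (comb(n, t) * p**t) - 1) < 0.05
```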

Storing information can be done in a similar way. A physical bit has an error rate κ such that for a small time Δt there is a small probability p = κΔt ≪ 1 that the bit flips. Using the repetition protocol, by stroboscopically detecting and correcting errors for each time interval Δt, one decreases the error probability to 3p² and hence the error rate of the logical bit is reduced to 3κ²Δt. Here, it is best to measure fast, as the error rate decreases with Δt. Including errors in the detection and correction parts leads to an optimal finite measurement rate.
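The trade-off behind the last sentence can be made concrete. Assuming a hypothetical per-cycle detection error probability p_m (a quantity not defined in the text), the logical error rate 3κ²Δt + p_m/Δt is minimized at a finite interval Δt = √(p_m/3)/κ:

```python
from math import sqrt

kappa = 1.0   # physical bit-flip rate (arbitrary units)
p_m = 1e-4    # probability of a faulty detection/correction cycle (assumed)

def logical_rate(dt):
    """Logical error rate with measurement interval dt: 3(kappa*dt)^2
    per cycle from uncorrected double flips, plus p_m per faulty cycle."""
    return (3 * (kappa * dt)**2 + p_m) / dt

# balancing the two terms gives the optimal interval analytically
dt_opt = sqrt(p_m / 3) / kappa
# measuring faster or slower than dt_opt is worse
assert logical_rate(dt_opt) < logical_rate(0.5 * dt_opt)
assert logical_rate(dt_opt) < logical_rate(2.0 * dt_opt)
```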

Directly transposing this repetition protocol into the quantum world is problematic for two reasons. First, one wants to be able to store not only the states |0⟩ and |1⟩ but also unknown superpositions of the two. Yet, copying the state of an unknown quantum system is not possible in quantum mechanics, as shown by the no-cloning theorem. Second, to determine which error occurred, one cannot directly measure the quantum system since this would destroy the superposition information it carries. Fortunately, it is possible to circumvent these issues and below we derive part of the original proposal by Shor [4].

**Bit-flip Correction**

Say one wants to store the state α|0⟩ + β|1⟩ with α and β unknown. Copying this state is forbidden by quantum mechanics. Instead, an entangled state of three qubits is built using the encoding circuit of fig. 1.1b and reads α|000⟩ + β|111⟩. As with classical error correction, this entangled state is susceptible to more errors but is insensitive to single errors, provided detecting and correcting these errors is possible.

In the quantum case, error detection is done indirectly by measuring joint observables. Indeed, probing a qubit by asking « Are you in 0 or 1? » would destroy the quantum information. Instead, one must ask the subtler question « Are you in the same state as your neighbour? ». It is crucial to access certain bits of information while keeping others unknown. This translates into measuring the operators Z₁Z₂ and Z₂Z₃. To do so, one introduces two ancillary qubits as shown in fig. 1.1b, which are then measured (tab. 1.2). Assuming only one error occurred, we get different outcomes for each altered state, so that an appropriate feedback corrects the error. As in the classical case, to correct more than one bit-flip, the redundancy simply needs to be increased.
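The syndrome logic can be sketched as a small lookup: a bit-flip on qubit i flips the outcome of each of Z₁Z₂ and Z₂Z₃ that involves qubit i. The sketch below is classical bookkeeping of the measurement outcomes, not a quantum simulation:

```python
# Syndrome table for the 3-qubit bit-flip code: measuring Z1Z2 and Z2Z3
# yields +1/-1 outcomes without revealing the amplitudes alpha and beta.
def syndrome(flipped):
    """flipped: set of qubit indices (1..3) hit by a bit-flip."""
    z1z2 = -1 if (1 in flipped) != (2 in flipped) else +1
    z2z3 = -1 if (2 in flipped) != (3 in flipped) else +1
    return (z1z2, z2z3)

# Each single-qubit flip gives a distinct outcome pair, so the right
# correction can be applied without measuring the data qubits themselves.
assert syndrome(set()) == (+1, +1)   # no error
assert syndrome({1}) == (-1, +1)     # flip on qubit 1
assert syndrome({2}) == (-1, -1)     # flip on qubit 2
assert syndrome({3}) == (+1, -1)     # flip on qubit 3
```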

A qubit is also susceptible to another error channel, the phase-flip Z. This second error channel can be corrected by increasing the redundancy of the code. The most straightforward way is to encode each of the three qubits of the bit-flip protocol into three qubits, leading to the 9-qubit repetition code proposed by Shor [4]. Interestingly, this encoding is also robust to a bit-flip and a phase-flip occurring together, the Y error, and is hence able to fully protect the logical qubit against single physical errors. Since then, there have been many proposals [5] to improve this scheme and experimentally implement quantum error correction. One of the most promising approaches is the surface code [6], which has many appealing properties. In this code, data qubits and ancillary qubits are physically arranged into two interlocked square lattices, which makes it compatible with a 2D architecture. Also, once the unit cell of this lattice reaches below-threshold error rates (achievable with state-of-the-art superconducting circuits [7]), it may be replicated many times to increase the error protection. However, this encoding requires tremendous resources, of the order of thousands of physical qubits per logical qubit [8], which is far beyond the capabilities of current experiments [9].

In the following, we try to rethink how one can protect a quantum system without relying on physical redundancy, hence making it manageable for experimental implementation.

### Classical computer memories

Among many features, classical computers perform well thanks to fast and reliable memories. Each memory is composed of many unit cells. Below, we describe the working principle of two of those, the dynamic RAM and the static RAM cells. These are only two examples among many types of architectures, but they give a glimpse of how data about to be processed1 is stored in a classical computer.

**Dynamic RAM**

Dynamic RAM is based on a capacitor (see fig. 1.2). A charged capacitor represents a 1, a discharged one a 0. To write on the capacitor, the isolating transistor is turned on and a voltage bias is applied to charge the capacitor. To reset, the same operation is performed with a zero voltage. To read the information, the isolating transistor is turned on again so that the stored charges can flow into line BL. If the resulting voltage is over a given threshold, one assumes the capacitor was storing a 1; below the threshold, a 0.

Over time, due to the finite resistance in parallel with the capacitor and across the transistor, the charge of the capacitor leaks out, leading to bit-flip errors. To prevent this, the memory cell is periodically read out (hence the name « dynamic »). If the capacitor is found to be charged, it is set back to its original level with a bias voltage.

The major advantage of this architecture is that it requires a limited amount of hardware: one capacitor and one transistor per bit of data. The storage density of dynamic RAMs is therefore high. In return, this memory cell needs constant control and feedback, which makes it harder to operate and slower to access.

**Static RAM**

Static RAM is based on a type of flip-flop circuit, the inverter loop (see fig. 1.3). When isolated, this loop stores a 0 when there is a positive voltage on the right and the left is grounded, and conversely. To read the data, the isolating transistors are switched on, the voltage propagates in line BL or its complement, and the resulting signal is amplified. To write a 1 in the memory while a 0 was stored, line BL is first biased and its complementary line grounded. When the isolating transistors turn on, there is a value conflict on each side of the loop and, provided the isolating transistors are bigger than the inverter transistors, the memory cell switches state.

This kind of memory cell is more complex and has a lower storage density than the previous one, but it has several advantages. It does not require constant monitoring of the memory state to prevent losing information (hence the name « static »), so it is easier to control and faster to access. Also, as long as power is supplied (via the voltage source V) for the inverters to work and cooling is provided (via a bath at temperature T) to keep the temperature low enough, once the state is set, it remains for a time exponentially long in eV/kBT (where e is the charge of the electron and kB the Boltzmann constant).
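To put this exponential protection in numbers, take the illustrative values of a 1 V supply at room temperature (not from the text): the switching barrier is already about 39 thermal energies, so thermally activated flips are astronomically rare.

```python
from math import exp

e = 1.602176634e-19   # electron charge (C)
kB = 1.380649e-23     # Boltzmann constant (J/K)
V, T = 1.0, 300.0     # supply voltage and temperature (illustrative values)

# barrier measured in units of the thermal energy kB*T
exponent = e * V / (kB * T)
assert 38 < exponent < 40
# the retention time scales like exp(eV/kBT), an enormous factor
assert exp(exponent) > 1e16
```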

1We do not detail how data is stored in mass-storage devices, because such data is further from the actual processing.

**Conclusions**

The first scheme we presented, the dynamic RAM, fights against dissipation by control and feedback. Reading the system out to protect it is not allowed for a quantum system, since it destroys any superposition. Still, we saw in sec. 1.2 that there are ways around this: one can encode the information redundantly and only read out joint quantities. In this sense, dynamic RAM is similar to the standard strategy for quantum error correction, except that the constraints set by quantum mechanics increase the complexity of the control and feedback.

On the other hand, the static RAM never needs to measure the state of the inverter loop to maintain it. Actually, the power supply, which is always on to fight against the losses in the circuit, is agnostic about the information the loop contains. This feature makes this kind of architecture very appealing for quantum error correction. At the heart of the static RAM is a flip-flop circuit, which is a bistable system. Stable means that if the system is perturbed, it relaxes back to its original state, which we call a steady state. A bistable system has two such states and can flip from one to the other only if the perturbation is big enough. Such a perturbation is used for writing on the memory cell, for instance. To illustrate, imagine that a sudden disturbance sets the left bias to some voltage while bit 0 is being stored. As long as this disturbance is smaller than V, the system relaxes back to 0. Otherwise, the state relaxes to 1, leading to a bit-flip error; hence, the quality of this memory cell is directly related to the ratio between the voltage noise level and the voltage bias. The disturbance dissipates due to the finite resistance of the wires composing the circuit, together with a cold bath that absorbs the resulting heat. In this example and more generally, dissipation has profound consequences on the stability of a system.

In the following, we try to find a simpler bistable system that is able to protect information against bit-flips, in the same spirit as flip-flop circuits store bits of information.

**Bistable systems in classical mechanics**

It is easy to build bistable systems in the classical world. One can think of a power switch for which the on and off positions are stable. However, transposing such a device into the quantum world is challenging. Indeed, each individual atom it is made of adds degrees of freedom that are potential decoherence paths. The ideal bistable candidate is a dynamical system with only one degree of freedom, i.e. a single pair of conjugate variables: position and momentum. This brings us to the 1D oscillator, for which the “steady states” are periodic orbits or limit cycles in phase-space. Below, we quickly explain how to make an oscillator bistable. We refer the reader to Appendix D for a concrete example.

From the consideration of the previous section, several ingredients should be present:

• losses: they are a necessary condition for the system to be stable;

• a power supply: otherwise the losses bring the system into a trivial state.

In the case of our oscillator, the power supply consists of a time-dependent drive, and the inherent losses are modelled via a regular damping term. We will see that losses can also be engineered to provide bistability.

**Driven oscillator**

**Driven-damped harmonic oscillator** We work with normalized position and momentum coordinates, x and p respectively; equivalently, the mass of the oscillator is set to 1. We renormalize time so that the natural frequency of the oscillator is 1. Under these conditions, the equations of motion of a damped harmonic oscillator driven at resonance read

ẋ = p ,  ṗ = −x − Q⁻¹p + ε cos(t) ,

where Q is the quality factor of the oscillator and ε the driving strength. As we expect for a driven-damped harmonic oscillator, it has a single limit cycle, as shown in fig. 1.4.

**Poincaré map** By defining a new variable θ, the system becomes autonomous:

ẋ = p ,  ṗ = −x − Q⁻¹p + ε cos(θ) ,  θ̇ = 1 ,

and since the equations are periodic in θ, this new variable is conveniently defined modulo 2π. A trajectory, i.e. a solution of these equations for a given initial condition, is a curve in the 3D phase-space (x, p, θ). To study the dynamics of the system, and in particular the existence of stable periodic orbits, it is advantageous to consider only the intersections between the trajectories and a section of phase-space. Here, we choose the section θ = 0. On this section, we can define the Poincaré map, the application from the section to itself that maps a point of intersection to the next one in time. An important property is that fixed points of a Poincaré map are periodic orbits of the full dynamics.
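This can be checked numerically. The sketch below (with illustrative values for Q and ε) integrates the flow over one drive period with a hand-rolled RK4 step, then iterates the resulting Poincaré map: successive intersections converge to a fixed point, and different initial conditions end up on the same periodic orbit.

```python
import math

Q, eps = 10.0, 0.3  # quality factor and drive strength (illustrative)

def poincare(x, p, steps=400):
    """Poincare map on the section theta = 0: integrate the flow
    x' = p, p' = -x - p/Q + eps*cos(theta), theta' = 1
    over one drive period 2*pi with an RK4 scheme."""
    dt = 2 * math.pi / steps
    t = 0.0
    f = lambda x, p, t: (p, -x - p / Q + eps * math.cos(t))
    for _ in range(steps):
        k1 = f(x, p, t)
        k2 = f(x + dt/2*k1[0], p + dt/2*k1[1], t + dt/2)
        k3 = f(x + dt/2*k2[0], p + dt/2*k2[1], t + dt/2)
        k4 = f(x + dt*k3[0], p + dt*k3[1], t + dt)
        x += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        p += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return x, p

# iterating the map: the intersections converge to a fixed point,
# i.e. a stable periodic orbit of the full dynamics
pts = [(4.0, 0.0)]
for _ in range(40):
    pts.append(poincare(*pts[-1]))
fx, fp = pts[-1]
dist = [math.hypot(x - fx, p - fp) for x, p in pts[:-1]]
assert dist[20] < dist[10] < dist[0]

# two different initial conditions converge to the same limit cycle
a, b = (4.0, 0.0), (-2.0, 1.0)
for _ in range(60):
    a, b = poincare(*a), poincare(*b)
assert math.hypot(a[0] - b[0], a[1] - b[1]) < 1e-6
```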

**Parametric oscillator**

A parametric oscillator is an oscillator for which the drive modulates a parameter of the system. For a mass-spring system, the modulated parameter may be the mass or the spring constant. A pendulum whose anchoring point is driven up and down is a good example of such an oscillator. This type of oscillator is known to act as an amplifier when the drive frequency is twice the oscillator frequency [12]. We investigate this particular regime.

**Non-linear parametric oscillator** The diverging case is not physical. At some point, non-linearities arise, preventing the system from going to ±∞. If the system neither goes to infinity nor stays at zero, we expect it to settle somewhere in between for a carefully chosen non-linearity, leading to two periodic orbits (one for x > 0, one for x < 0).

Having the frequency change with the oscillation amplitude is a possible non-linearity. This way, the parametric drive gets detuned from the resonance condition as the amplitude increases:

ẋ = p ,  ṗ = −(1 + ω_nl x² + ε cos(2θ)) x − Q⁻¹p ,  θ̇ = 1 .

The system dynamics are represented in fig. 1.4d and lead to two stable limit cycles. However, their stability is only guaranteed by the presence of the losses Q⁻¹p. This is detrimental: improving the quality factor of the oscillator degrades the stability of these orbits.
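A numerical sketch of this frequency-pulling mechanism (with illustrative values for ω_nl, ε and Q, not taken from the text) shows the two mirrored orbits: the equation is symmetric under x → −x, so opposite initial conditions settle on opposite finite-amplitude oscillations.

```python
import math

Q, eps, w_nl = 10.0, 0.6, 0.1  # illustrative parameters

def evolve(x, p, periods=100, steps=400):
    """Integrate x' = p, p' = -(1 + w_nl*x^2 + eps*cos(2*theta))*x - p/Q
    with RK4 and return x sampled over the last drive period."""
    dt = 2 * math.pi / steps
    t, tail = 0.0, []
    f = lambda x, p, t: (p, -(1 + w_nl*x*x + eps*math.cos(2*t))*x - p/Q)
    for i in range(periods * steps):
        k1 = f(x, p, t)
        k2 = f(x + dt/2*k1[0], p + dt/2*k1[1], t + dt/2)
        k3 = f(x + dt/2*k2[0], p + dt/2*k2[1], t + dt/2)
        k4 = f(x + dt*k3[0], p + dt*k3[1], t + dt)
        x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        p += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        if i >= (periods - 1) * steps:
            tail.append(x)
    return tail

up = evolve(0.1, 0.0)
down = evolve(-0.1, 0.0)
# the parametric gain amplifies the motion up to a finite orbit...
assert max(abs(x) for x in up) > 0.5
# ...and the dynamics are symmetric under x -> -x: two mirrored limit cycles
assert all(abs(a + b) < 1e-6 for a, b in zip(up, down))
```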

Another possibility is to have the losses scale with the amplitude of the oscillations:

ẋ = p ,  ṗ = −(1 + ε cos(2θ)) x − Q⁻¹p − κ_nl p³ ,  θ̇ = 1 .  (1.1)

The system dynamics are represented in fig. 1.4e and lead to two stable limit cycles. This stability is provided by the non-linear dissipation, and hence the two limit cycles remain attracting even when Q → ∞. The doubled frequency of the drive is crucial here (fig. 1.4f): it allows the energy source to be agnostic of the state of the oscillator, like in the SRAM memory. We will see in chapter 4 that, crucially, although this mechanism is dissipative, it is not detrimental for storing quantum information.
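The same kind of integration with the non-linear losses of eq. 1.1 and the linear losses removed entirely (Q → ∞, illustrative values for ε and κ_nl) shows that a finite-amplitude orbit survives: the non-linear dissipation alone saturates the parametric gain.

```python
import math

eps, k_nl = 0.6, 0.1  # drive strength and non-linear damping (illustrative)

def evolve(x, p, periods, steps=400):
    """Integrate x' = p, p' = -(1 + eps*cos(2*theta))*x - k_nl*p^3
    (eq. 1.1 with Q -> infinity) and return max |x| over the last period."""
    dt = 2 * math.pi / steps
    t, amp = 0.0, 0.0
    f = lambda x, p, t: (p, -(1 + eps*math.cos(2*t))*x - k_nl*p**3)
    for i in range(periods * steps):
        k1 = f(x, p, t)
        k2 = f(x + dt/2*k1[0], p + dt/2*k1[1], t + dt/2)
        k3 = f(x + dt/2*k2[0], p + dt/2*k2[1], t + dt/2)
        k4 = f(x + dt*k3[0], p + dt*k3[1], t + dt)
        x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        p += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        if i >= (periods - 1) * steps:
            amp = max(amp, abs(x))
    return amp

# even with no linear losses at all, the non-linear dissipation alone
# settles the oscillator on a stable orbit of finite amplitude
a80, a100 = evolve(0.1, 0.0, 80), evolve(0.1, 0.0, 100)
assert a100 > 0.5                      # the orbit has finite amplitude
assert abs(a100 - a80) / a100 < 0.05   # and it has stopped growing
```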

**Superconducting circuits**

Let us dive more into the details of how to physically implement quantum bits. For a physical quantum system to be a good candidate, it must satisfy several criteria [13]. As we mentioned in the beginning, there are natural candidates that natively have discrete quantum states that can be addressed to form a qubit. Examples of such physical entities are electronic or nuclear spins [14], neutral atoms [15], trapped ions [16], photons [17], or even non-abelian particles [18]. Outstanding experimental progress has been made in the past few decades, realizing the thought experiments of the founders of quantum mechanics [1] and increasing the likelihood of building a quantum computer someday.

In this thesis, we work on superconducting circuits [19, 20], which are not microscopic entities but rather consist of pieces of superconducting material arranged into electrical circuits that contain a macroscopic number of atoms. In retrospect, it is not obvious that these circuits have quantized energy levels [21] that can be addressed specifically to be used as qubits. In a normal metal, each atom contributes electrons to the Fermi sea and a single excitation consists of an electron-hole pair formed on both sides of the Fermi surface, a so-called quasiparticle. At finite temperature, there is a macroscopic number of such quasiparticles because any tiny external perturbation can generate them: they are not good qubits. In superconductors, things work differently [22]. At low temperature, a superconductor undergoes a phase transition during which electrons pair together to form Cooper pairs. These pairs have a finite binding energy, equal to the superconducting energy gap (typically 43 GHz in aluminium), such that quasiparticles require a finite perturbation to be generated. Hence, collective motions of the sea of Cooper pairs are able to flow with no dissipation despite the small defects of the system, leading to the so-called supercurrent. The superconductor can be shaped into distributed capacitances and inductances so that this supercurrent oscillates with very small dissipation. Also, as collective excitations, these modes have large coupling capabilities. During the past two decades, great progress has been made in the field to increase the coherence of these modes by improving circuit design, fabrication, shielding and control techniques.

However, these superconducting circuits would be linear, and hence useless for quantum information, if it were not for the existence of the Josephson junction. In this thesis, we work with Josephson tunnel junctions, which are composed of a thin insulating barrier separating two superconductors and allow Cooper pairs to tunnel through. On a macroscopic scale, the junction can be viewed as a non-linear inductor that brings non-trivial dynamics to the circuits without inducing dissipation. In particular, the electromagnetic modes that have some current flowing through the junction inherit its non-linearity. Thus, single transitions can be addressed, so that the circuit behaves as an artificial atom or, more simply, as a qubit. Superconducting circuits have great flexibility in design, which has led to a variety of different qubits and to ingenious ways of increasing coherence times from tens of nanoseconds to almost milliseconds [19]. This non-linearity is also leveraged in many applications such as amplifying signals [23], detecting photons [24–28] or coupling to other quantum devices [29–31].

Superconducting circuits consist of plates and wires in the micrometer range (hundreds of nanometers for Josephson junctions) and are easy to fabricate using the standard lithography techniques of the semiconductor industry. Also, in order to lie below the superconducting gap, the electromagnetic modes of superconducting circuits are typically in the GHz range. This makes them practical to operate with the commercially available microwave components used in radars and telecommunications. To perform experiments in the single-excitation regime, these modes should be in a cold environment so that thermal photons do not randomly excite them. For the black-body radiation to be small in the GHz range, the temperature should be much less than 200 mK (for a mode at 4 GHz). Superconducting circuits are usually clamped to the base plate of a commercially available dilution cryostat at 20 mK.
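The 200 mK figure follows from the Bose-Einstein occupation of a GHz mode; a quick check with standard constants (the 4 GHz mode frequency is the example from the text, the temperatures are illustrative):

```python
from math import exp

h = 6.62607015e-34   # Planck constant (J*s)
kB = 1.380649e-23    # Boltzmann constant (J/K)

def n_thermal(f_hz, t_kelvin):
    """Mean number of thermal photons in a mode at frequency f
    (Bose-Einstein occupation)."""
    return 1.0 / (exp(h * f_hz / (kB * t_kelvin)) - 1.0)

# a 4 GHz mode is essentially empty at 20 mK...
assert n_thermal(4e9, 0.020) < 1e-4
# ...but far from empty at 200 mK, hence the need for a dilution cryostat
assert n_thermal(4e9, 0.200) > 0.5
```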

Superconducting circuits thus seem to have the right ingredients to implement the ideas of the previous section: low inherent losses and strong non-linearity.

**Table of contents :**

**1 Introduction **

1.1 Quantum information

1.2 Quantum error correction

1.2.1 Classical error correction

1.2.2 Bit-flip Correction

1.3 Classical computer memories

1.3.1 Dynamic RAM

1.3.2 Static RAM

1.3.3 Conclusions

1.4 Bistable systems in classical mechanics

1.4.1 Driven oscillator

1.4.2 Parametric oscillator

1.5 Superconducting circuits

1.6 Outline

**2 Hamiltonian and dissipation engineering with superconducting circuits**

2.1 Circuit quantization

2.1.1 Circuit description

2.1.2 Equations of motion

2.1.3 Quantization

2.1.4 AC Biases

2.2 Engineering dynamics

2.2.1 Hamiltonian engineering

2.2.2 Dissipation engineering

**3 Parametric pumping with a transmon **

3.1 The Transmon qubit

3.1.1 Low energy behaviour

3.1.2 Full description

3.2 Quantum dynamics of a driven transmon

3.2.1 Description of the experiment

3.2.2 Observing the transmon escape into unconfined states

3.2.3 Effect of pump-induced quasiparticles

3.3 Additional Material

3.3.1 Sample fabrication

3.3.2 Photon number calibration

3.3.3 Multi-photon qubit drive

3.3.4 Charge noise

3.3.5 Numerical simulation

3.3.6 Quasiparticle generation

**4 Exponential bit-flip suppression in a cat-qubit **

4.1 Protecting quantum information in a resonator

4.1.1 General considerations

4.1.2 The bosonic codes

4.2 Exponential bit-flip suppression in a cat-qubit

4.2.1 The dissipative cat-qubits

4.2.2 Engineering two-photon coupling to a bath

4.2.3 Coherence time measurements

4.3 Additional Material

4.3.1 Full device and wiring

4.3.2 Hamiltonian derivation

4.3.3 Circuit parameters

4.3.4 Semi-classical analysis

4.3.5 Bit-flip time simulation

4.3.6 Tuning the cat-qubit

**5 Itinerant microwave photon detection **

5.1 Dissipation engineering for photo-detection

5.1.1 Engineering the Qubit-Cavity dissipator

5.1.2 Efficiency

5.1.3 Dark Counts

5.1.4 Detector Reset

5.1.5 QNDness

5.2 Additional Material

5.2.1 Circuit parameters

5.2.2 Purcell Filter

5.2.3 System Hamiltonian derivation

5.2.4 Adiabatic elimination of the waste mode

5.2.5 Qubit dynamics and detection efficiency

5.2.6 Reset protocol

5.2.7 Spectroscopic characterization of the detector

5.2.8 Tomography of the itinerant transmitted photon

**Conclusion and perspectives **

A Circuit quantization

A.1 LC-oscillator

A.2 General Circuit quantization

B Native couplings in superconducting circuits

B.1 Capacitively coupled resonators

B.2 Mutual inductances in superconducting loops

B.3 Inductively coupled resonators

C Useful Formulae

C.1 Miscellaneous

C.2 Standard changes of frame

D Classical mechanical analogy to the stabilisation of cat-qubits

E Fabrication Recipes

E.1 Wafer preparation

E.2 Circuit patterning

E.3 Junction fabrication

**Bibliography**