The superconducting circuit companion -- an introduction with worked examples

This tutorial aims at giving an introductory treatment of the circuit analysis of superconducting qubits, i.e., two-level systems in superconducting circuits. It also touches upon couplings between such qubits and how microwave driving and these couplings can be used for single- and two-qubit gates, as well as how to include noise when calculating the dynamics of the system. We also discuss higher-dimensional superconducting qudits. The tutorial is intended for new researchers with limited or no experience with the field but should be accessible to anyone with a bachelor's degree in physics. The tutorial introduces the basic methods used in quantum circuit analysis, starting from a circuit diagram and ending with a quantized Hamiltonian that may be truncated to the lowest levels. We provide examples of all the basic techniques throughout the discussion, while in the last part of the tutorial we discuss several of the most commonly used circuits for quantum-information applications. This includes both worked examples of single qubits and examples of how to analyze the coupling methods that allow multiqubit operations. In several detailed appendices, we provide the interested reader with an introduction to more advanced techniques for handling larger circuit designs.


Since Richard Feynman first proposed to use quantum simulators to simulate physics [1,2], an increasing amount of attention has been given to quantum processors and quantum technology, something which is only expected to increase further in the coming years [3-5]. This increased attention has led to swift progress within the field of quantum mechanics, taking it from basic science research to the engineering of multi-qubit quantum systems capable of performing actual calculations [6-10]. In the midst of this evolution a new discipline has emerged, coined quantum engineering, bridging the basic science of quantum mechanics with areas traditionally considered engineering fields [11]. It is expected that the advent of quantum engineering will lead to computational speed-ups, making it possible to solve classically intractable problems [12-14].
In this tutorial we aim to give an introduction to the field of superconducting circuit analysis for researchers new to the field. With this we aim to provide the tools needed for tailoring macroscopic circuits to a desired quantum mechanical behavior. The present tutorial can be viewed as an introduction to more advanced reviews of the field, such as Refs. [11,50,51,54,56-64], and is by no means a review of current state-of-the-art technology or practices, but rather a detailed introduction to the methods needed to analyze superconducting circuits in a quantum setting. We will not discuss the actual experimental production of superconducting circuits, but limit the tutorial to the theoretical analysis of such circuits. The tutorial assumes knowledge of undergraduate-level quantum mechanics, electrodynamics, and analytical mechanics, meaning that it should be accessible to readers with a bachelor's degree in physics or equivalent.
The tutorial is organized as follows: In Section I we present the basic knowledge needed in order to analyze superconducting circuits. The classical part of the analysis revolves around the method of nodes for finding the Hamiltonian of the circuit, which is then subject to quantization. The methods presented in this section should be enough for analyzing most standard superconducting circuits.
In Section II we present more advanced methods. These are not necessary for most standard circuit analysis, unless mutual induction or dissipation is to be included, but they can be very useful in several cases. A thorough understanding of the basic methods in Section I, or experience in superconducting circuit analysis, is recommended as a prerequisite for this part. The methods presented in Section II are based on a method proposed in Ref. [58], which is in turn based on electrical network graph theory [65]. In Section II we therefore also introduce some fundamental graph theory, and we include both mutual inductance and dissipation in our equations of motion.
In Section III we present examples of common superconducting qubits, couplings, qudits, and more advanced circuits. Most of the examples are accessible after having read Section I.
Finally in Section IV we present an overview of the methods and compare some of the more advanced topics to the basic methods.

I. BASICS
In this section we present the basic tools needed for analyzing superconducting circuits, starting from a lumped circuit model to obtaining the Hamiltonian of the system, quantizing it, removing the least contributing terms, and finally truncating it at a desired number of quantum levels, i.e., a specific number of degrees of freedom. The tools presented in this section should be sufficient for analyzing most superconducting circuit diagrams without including mutual induction and dissipation. For these cases see Section II. Finally in Section I H we give an overview of the basic methods used for analyzing superconducting circuits. The methods presented here can be considered 'need to know' when analyzing electrical circuits, and newcomers should in general read this entire section.

A. Analyzing lumped-element circuit diagrams
In this section we will put forward an algorithmic approach, or 'recipe', to analyzing a lumped-element circuit diagram. The goal is to provide the reader with rigorous methods to analyze and find the Hamiltonian of most designs that may come up in the context of superconducting circuits. We will start by introducing the dynamical variables used when analyzing superconducting circuits and then present the basic elements of the circuits.
Our analysis takes its starting point in the lumped-element model. This model simplifies the description of a spatially distributed system (in our case a superconducting electrical circuit) into a topology of discrete entities. We make the assumption that the attributes of the circuit (capacitance, inductance, and resistance) are idealized into electrical components (capacitors, inductors, and resistors) joined by a network of perfectly conducting wires.
An example of a lumped circuit can be seen in Fig. 1. We discuss the different components in Section I A 2.
We assume all the circuits discussed in this tutorial to be superconducting, meaning that there is no electrical resistance in the circuit and that magnetic fields are expelled from the wires (the Meissner effect). We therefore ignore dissipation in this analysis; however, we discuss how to add dissipative elements in the advanced section using the Caldeira-Leggett model [66], see Section II C. Ignoring dissipative elements effectively means that our circuits become lossless. While this does indeed make the analysis easier, even the best superconducting circuits experience external loss. Thus, in a realistic description noise should always be added. However, as these effects are often small they can be added as corrections in the end, e.g., when simulating the behavior of the circuit using an open quantum system approach.

Circuit variables
Circuit analysis aims to find the equations of motion of an electrical circuit. Typically this means determining the current and voltage through all components of the circuit. For simplicity, we only consider circuit networks containing two-terminal components, i.e., components connected to two wires. Each such component is said to lie on a branch b and is characterized by two variables at any given time t: The voltage V_b(t) across it and the current I_b(t) through it. We define the orientation of the voltage to be opposite the direction of the current. Thus these two are defined by the underlying electromagnetic field as

V_b(t) = \int_{\text{start of } b}^{\text{end of } b} \mathbf{E} \cdot d\boldsymbol{\ell}, \qquad I_b(t) = \frac{1}{\mu_0} \oint_b \mathbf{B} \cdot d\mathbf{s},    (1)

where \mu_0 is the vacuum permeability, and E and B are the electric field inside the wire and the magnetic field outside the wire, respectively. The closed loop in the second integral is done in vacuum encircling the given element. As we use the lumped-element model, the voltage and current are independent of the integration path. Thus we can choose the path such that the magnetic field is zero when integrating over the electric field and vice versa. For more details on the integration of electromagnetic fields see, e.g., Ref. [67]. We define the branch flux and branch charge variables as

\Phi_b(t) = \int_{-\infty}^{t} V_b(t') \, dt',    (2a)
Q_b(t) = \int_{-\infty}^{t} I_b(t') \, dt',    (2b)

where it has been assumed that the system is at rest at t = −∞ with zero voltages and currents. As there are fewer degrees of freedom in the circuit than there are branches, these are, just as the currents and voltages, not completely independent but related through Kirchhoff's laws,

\sum_{\text{all } b \text{ arriving at } n} Q_b = q_n,    (3a)
\sum_{\text{all } b \text{ around } l} \Phi_b = \tilde{\Phi}_l,    (3b)

where q_n is the charge accumulated at node n and \tilde{\Phi}_l is the external magnetic flux through the loop l. A node can be understood as a point in-between elements/branches, see Fig. 1, where we denote nodes with a dot. A more rigorous definition of branches and nodes, using graph theory, is given in Section II A. We have to eliminate the superfluous degrees of freedom.
In standard circuit theory there are two methods for this: The method of loops and the method of nodes. The method of loops (or the mesh current method, as it is commonly known for planar circuits) is usually taught in undergraduate courses on electrodynamics, and we will therefore not discuss this method. Instead we discuss the, for our purpose, more useful method of nodes in Section I B. A more advanced approach suitable for larger and more complex circuits can be found in Section II B.

Circuit components
Before elaborating on the method of nodes we discuss the basic components of the circuit. Besides linear elements we will consider a type of non-linear inductor called the Josephson junction, which only occurs in superconducting circuits. In an ideal superconducting circuit there is of course no resistance, and dissipative elements will therefore play a minor role when analyzing superconducting circuits. To summarize, we consider the following two-terminal elements: Linear capacitors, linear inductors, Josephson junctions, and sources.

a. Capacitors  The first component we consider is the capacitor. For a general capacitor, the charge on the capacitor is determined as a function of the voltage, q(t) = f(V(t)). In this tutorial we will only consider linear capacitors. For a linear capacitor the charge stored on the capacitor plates is proportional to the voltage,

q(t) = C V(t),    (4)

where C is the capacitance of the capacitor. This linear relationship is the defining property of the linear capacitor.
In the real world this is merely an approximation, as there are small non-linear effects which make C a function of q and V. These effects are usually small, and it is therefore standard to neglect them. Equation (4) can be rewritten to the flux-charge relation using Eq. (2a) as

q(t) = C \dot{\Phi}(t),    (5)

where the dot indicates differentiation w.r.t. t. The current \dot{q}(t) is equal to the branch current, and using Eq. (2b) we find the branch current

I(t) = C \ddot{\Phi}(t).    (6)

The energy stored in the capacitor is found by integrating the power P = V(t)I(t) from t = −∞ to t,

E = \int_{-\infty}^{t} V(t') I(t') \, dt' = \frac{1}{2} C \dot{\Phi}^2(t).    (7)
For superconducting circuits, typical values of the capacitances are of the order 10 fF. In lumped-circuit diagrams we denote the capacitor as a pair of parallel lines, see Fig. 1.

b. Inductors  The time-dependent current flowing through a general inductor is a function of the flux through it, I(t) = f(Φ(t)). We consider both linear inductors and a type of non-linear inductor called the Josephson junction. For a linear inductor the current is proportional to the magnetic flux,

I(t) = \frac{\Phi(t)}{L},    (8)

where L is the inductance of the inductor. Integrating over the power again, the energy stored in an inductor is then

E = \frac{\Phi^2(t)}{2L}.    (9)

For superconducting qubits, typical values of linear inductances are of the order 1 nH. In lumped-circuit diagrams we denote the inductor as a helix, see Fig. 1.

[Figure 2. Simple LC-oscillator circuit. A capacitor with capacitance C is connected in a closed circuit with an inductor of inductance L. The voltages over the two components are V_C and V_L, respectively, while the currents are I_C and I_L. The resulting equation of motion is that of a harmonic oscillator.]

As a short clarifying example we consider the classical LC oscillator shown in Fig. 2. From Kirchhoff's current law in Eq. (3a) we know that I_C = I_L, where I_C and I_L are the currents through the capacitor and inductor, respectively, and Kirchhoff's voltage law gives us
V_C = −V_L. Using Eqs. (2a), (2b), (5), and (8) we can set up the equation of motion of the system,

C \ddot{\Phi}(t) = -\frac{\Phi(t)}{L},    (10)

where we have introduced Φ(t) = Φ_C(t) = −Φ_L(t) to get rid of the subscripts. Evidently, the system behaves as a simple harmonic oscillator in the flux. This is analogous to a spring, where the flux is the position and the mass and spring constant are replaced by the capacitance and inverse inductance, respectively.

c. Josephson junctions  So far we have only considered elements with linear current-voltage relations. For reasons that will become clear when we treat the quantization of lumped circuits, constructing a qubit from only linear elements is in no way straightforward. We therefore need a non-linear element, which comes in the form of the Josephson junction. The Josephson junction plays a special role in superconducting circuits as it has no simple analog in a non-superconducting circuit, due to the fact that it is related to charge quantization effects that occur in superconductors. Before elaborating on the Josephson junction we give a short introduction to superconductivity. For a more in-depth discussion see, e.g., Ref. [68].
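As a numerical sanity check of the LC example, the equation of motion Eq. (10) can be integrated directly and compared with the analytic oscillation at ω = 1/√(LC). Below is a minimal sketch, assuming the illustrative component values 10 fF and 1 nH (the typical orders of magnitude quoted in the text):

```python
import numpy as np

# Illustrative component values (typical orders of magnitude from the text).
C = 10e-15   # capacitance, 10 fF
L = 1e-9     # inductance, 1 nH

omega = 1.0 / np.sqrt(L * C)   # analytic oscillation frequency of the LC circuit

# Integrate C * ddot(Phi) = -Phi / L with a leapfrog (kick-drift-kick) scheme.
dt = 1e-15                     # time step, much shorter than the period 2*pi/omega
steps = 2000
phi0 = 1e-15                   # arbitrary initial flux
phi, dphi = phi0, 0.0
trajectory = []
for _ in range(steps):
    dphi += 0.5 * dt * (-phi / (L * C))   # half kick
    phi += dt * dphi                      # drift
    dphi += 0.5 * dt * (-phi / (L * C))   # half kick
    trajectory.append(phi)

# Compare with the analytic solution Phi(t) = Phi(0) * cos(omega * t).
t = dt * np.arange(1, steps + 1)
err = np.max(np.abs(np.array(trajectory) - phi0 * np.cos(omega * t)))
print(err)   # tiny compared to the 1e-15 Wb amplitude
```

The flux indeed oscillates harmonically, just as the position of a mass on a spring would with m → C and k → 1/L.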
In some materials the resistivity suddenly drops to zero at low enough temperatures. This phase of the material is called superconductivity, and it has some exceptional properties. Together with the Meissner effect, i.e., that the material perfectly expels all magnetic fields, the perfect conduction is the defining property of a superconductor.
The phase transition between the non-superconducting phase and the superconducting phase of a material happens because the material condenses into a so-called BCS ground state. A priori it might seem impossible for electrons to condense into a single quantum state due to the Pauli exclusion principle. However, as L. N. Cooper suggested, an attractive force between the electrons leads to the formation of electron pairs [69], which have integer spin and thus behave like bosons. This makes it possible for all these so-called Cooper pairs to condense into a single quantum ground state, and in this state the solid becomes superconducting.
A Josephson junction consists of two superconducting islands separated by a thin insulator, a non-superconducting metal, or a narrow superconducting wire. Cooper pairs can then tunnel through the barrier from one island to the other, a phenomenon known as the Josephson effect [70,71], see Fig. 3 for a sketch. The tunneling rate (i.e., current) and voltage between the two islands depend on the superconducting phase difference, φ, between the islands through [72]

I(t) = I_c \sin\varphi(t),    (11)
V(t) = \frac{\hbar}{2e} \dot{\varphi}(t),    (12)

where I_c is the critical current of the junction, which depends on the geometry of the junction. Equation (12) allows us to relate the junction phase difference to the generalized flux through Φ = ℏφ/2e. The charge and flux are thus related through

\dot{q}(t) = I_c \sin\!\left(\frac{2\pi\Phi(t)}{\Phi_0}\right),    (13)

where we have defined the magnetic flux quantum Φ_0 = h/2e. The Josephson junction thus works as a flux-dependent inductor with inductance given by [57]

L(\Phi) = \left(\frac{\partial I}{\partial \Phi}\right)^{-1} = \frac{L_J}{\cos(2\pi\Phi/\Phi_0)},    (14)

where we have defined the Josephson inductance L_J = Φ_0/2πI_c. For superconducting qubits typical values of Josephson inductances are of the order 100 nH. The energy of a Josephson junction is also non-linear,

E = \int_{-\infty}^{t} I(t') V(t') \, dt' = \frac{\Phi_0 I_c}{2\pi} \left[1 - \cos\!\left(\frac{2\pi\Phi(t)}{\Phi_0}\right)\right],    (15)

where we will often neglect the constant term. We denote the factor in front of the bracket as the Josephson energy of the Josephson junction, E_J = Φ_0²/(2π)²L_J = Φ_0 I_c/2π. In this tutorial we denote Josephson junctions as a box with an x inside in lumped-circuit diagrams, see Fig. 1. Other literature might simply use an x without a box to denote the Josephson junction.
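To get a feel for the numbers, the Josephson inductance and energy can be computed directly from the critical current. A short sketch, assuming a hypothetical junction with I_c = 30 nA (an illustrative value, not from the text):

```python
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
e = 1.602176634e-19     # elementary charge, C
Phi0 = h / (2 * e)      # magnetic flux quantum, ~2.07e-15 Wb

Ic = 30e-9              # assumed critical current of 30 nA (illustrative)

LJ = Phi0 / (2 * np.pi * Ic)   # Josephson inductance L_J = Phi0 / (2*pi*Ic)
EJ = Phi0 * Ic / (2 * np.pi)   # Josephson energy E_J = Phi0 * Ic / (2*pi)

print(f"L_J = {LJ * 1e9:.1f} nH")       # order 10 nH for this Ic
print(f"E_J/h = {EJ / h / 1e9:.1f} GHz")

# Flux-dependent inductance of the junction, L(Phi) = L_J / cos(2*pi*Phi/Phi0):
def L_of_Phi(Phi):
    return LJ / np.cos(2 * np.pi * Phi / Phi0)
```

Smaller critical currents give larger Josephson inductances, which is why junction inductances (order 100 nH in the text) can far exceed typical linear inductances (order 1 nH).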
It is conventional to simplify notation in a way such that charges and fluxes become dimensionless. This is done by using units where ℏ = 2e = 1 and thus

\frac{\Phi_0}{2\pi} = \frac{\hbar}{2e} = 1.    (16)

This means that we get rid of the cumbersome factor of 2π/Φ_0 in the sinusoidal Josephson junction terms. Note that in this notation the units of capacitance and inductance become the inverse of an energy unit. Also, with this choice of units the junction phase difference is equal to the generalized flux, φ = Φ.

d. Voltage and current sources  We can also treat constant voltage and current sources by representing them as capacitors or inductors. Consider a constant voltage source V. This can be represented by a diverging capacitor, with a capacitance C → ∞, in which an initially large charge Q is stored such that V = Q/C.
In a similar manner a constant current source can be represented by a diverging inductor, with inductance L → ∞, in which an initially large flux Φ is stored, such that I = Φ/L. For examples of the use of constant voltage and current sources see Sections III A and III B 1.

B. Method of nodes
In this section we present the method of nodes for removing superfluous degrees of freedom and obtaining the Lagrangian and Hamiltonian. This method solves most practical problems involving Josephson junctions that do not include mutual inductance or dissipation. The discussion follows the method proposed by Devoret [50,73].
First we define an active node as a node connected to more than one type of circuit element. In contrast, a passive node is a node where only one type of element converges. We further consider nodes which lie between an element and ground. These nodes are inactive, since the flux through them is zero and thus they do not contribute to the dynamics of the system; we denote these ground nodes. Considering the example circuit in Fig. 1 we note that the two nodes on either side of the inductor are active nodes, while the bottom node is the ground node.
Consider now an electromagnetic circuit, say the example circuit in Fig. 1. We represent the circuit as a set of nodes, i.e., the example circuit has three nodes, and a set of branches, which are equal to the set of elements in the circuit, i.e., the example circuit has five branches, one for each component.
We now divide the system into its capacitive subnetwork, containing only branches and nodes connected to capacitors, and its inductive subnetwork, which of course contains branches and nodes connected to inductors (i.e., both linear and Josephson junctions). In the example circuit in Fig. 1 the two capacitor branches and all three nodes are in the capacitive subgraph, while the inductor branch and the two Josephson junction branches are in  the inductive subgraph together with the three nodes. Note that nodes can be in both subgraphs at the same time. See Fig. 4.
In a realistic circuit every node connected to an inductance should also be connected to a capacitor, due to the fact that a parasitic capacitance will be present for all inductors. It can, however, be advantageous if this parasitic capacitance can be made as small as possible.
The capacitive subnetwork consists only of linear capacitors, and thus we can express the energy of a capacitive branch in terms of the voltages, i.e., the derivative of the flux, using Eq. (7). By doing this we have in fact now broken the symmetry between the charge and flux, and the flux can now be viewed as the "position". With this treatment the capacitive energy becomes equivalent to the kinetic energy, while the inductive energy becomes equivalent to the potential energy.
Starting from the ground node we now construct a spanning tree, T. This is constructed by connecting every node to the ground node by only one path along the tree (see Definition 3 in Section II A for a more mathematical definition using graph theory). Note that there are often several choices for both ground node and spanning tree. This is not a problem for the analysis and can be seen as analogous to the choice of a particular gauge in electromagnetic field theory or the choice of a coordinate system in classical mechanics. Note that even though we refer to only a single node as the ground node, there can be several nodes connected to ground in the circuit. A choice of spanning tree in the example circuit could be the capacitive subgraph as seen in Fig. 4(a); however, there are several other valid choices.
We have now partitioned the elements into two sets: The set of elements on the spanning tree, E_T, and its complementary set, Ē_T = E \ E_T, i.e., the elements not on the spanning tree. We call the latter set the set of closure branches, due to the fact that its elements close the loops of the spanning tree.
The flux φ_n of node n can be written as a sum of incoming and outgoing branch fluxes, with the correct sign depending on the direction of the flux. With this in mind we can write the branch fluxes in terms of the node fluxes,

\Phi_b = \phi_n - \phi_{n'} + \tilde{\Phi}_b,    (17)

where n and n′ are the nodes at the end and start of the given branch, respectively, and \tilde{\Phi}_b is the external flux through the element on the branch. Note that the external flux only occurs if the branch is a closure branch (this is due to Kirchhoff's laws). Note that as we always consider even powers of the flux it is often irrelevant which node is considered the start node and which is considered the end node. The ground node has zero flux and is therefore often not included in circuit diagrams. Substituting the node fluxes into the expressions for the energy of the different components seen in Eqs. (7), (9), and (15), we can express the energies as functions of the node fluxes. The results can be seen in Table I.

Equations of motion
The equations of motion can, in principle, now be found using Kirchhoff's circuit laws for every node and every loop in the circuit. This is, however, often quite cumbersome and it is usually easier to find the Lagrangian instead. The Lagrangian is obtained by subtracting the potential (inductive) energy term from the kinetic (capacitive) energy term.
Writing out the Lagrangian like this can be rather tedious for larger circuits, since it includes a lot of sums. We therefore seek a more elegant way to write the Lagrangian; this is achieved using matrix notation. First we list all the nodes 1 to N and define a flux column vector φᵀ = (φ_1, . . . , φ_N), where T indicates the transpose of the vector. Note that we do not include the ground node since it equals zero, and, as we will see later, it does not contribute to the true degrees of freedom in any case. In the case of the example circuit in Fig. 1 the flux column vector is φᵀ = (φ_1, φ_2). As the circuit is symmetric it does not matter which node corresponds to 1 and 2. For the sake of concreteness we label the right node 1 and the left node 2.
We are now ready to set up the capacitive matrix C of the system. The non-diagonal matrix elements are simply minus the capacitance, −C_jk, connecting nodes j and k. The diagonal elements consist of the sum of the values in the corresponding row or column, multiplied by minus one. If a node is connected to ground via a capacitor, this capacitance must also be added to the diagonal element. With this N × N matrix we can write the kinetic energy term as

T = \frac{1}{2} \dot{\boldsymbol{\phi}}^T \mathbf{C} \dot{\boldsymbol{\phi}},    (18)

where we have assumed that all capacitors are placed on the spanning tree or that the external flux is time-independent. If this is not the case one must add a term similar to the one in Eq. (21). In the case of the example circuit the capacitive matrix becomes

\mathbf{C} = \begin{pmatrix} C_1 & 0 \\ 0 & C_2 \end{pmatrix}.    (19)

Had the linear inductor in between the two nodes been a capacitor, C_3, the capacitive matrix would have taken the form

\mathbf{C} = \begin{pmatrix} C_1 + C_3 & -C_3 \\ -C_3 & C_2 + C_3 \end{pmatrix}.    (20)

We now consider the contribution from the linear inductors. We set up the inductive matrix L⁻¹ in the same way as the capacitive matrix. The non-diagonal elements are −1/L_jk connecting nodes j and k, while the diagonal elements consist of the sum of the values in the corresponding row or column, multiplied by minus one. If a node is connected to ground via an inductor, this inductance must also be added to the diagonal element. Of course, if no inductor connects two nodes, the corresponding non-diagonal element should be zero. Contrary to capacitors, inductors are rarely placed on the spanning tree, but on closure branches, and we must therefore remember to include the external magnetic flux in the term. Thus the energy due to the linear inductors becomes

U = \frac{1}{2} \boldsymbol{\phi}^T \mathbf{L}^{-1} \boldsymbol{\phi} + \sum_{b} \frac{\tilde{\Phi}_b (\phi_n - \phi_{n'})}{L_b} + \text{const},    (21)

where the sum runs over the inductive closure branches. If we consider the example circuit again, the inductive matrix is simply

\mathbf{L}^{-1} = \frac{1}{L} \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}.    (22)

With this the inductive energy of the example circuit becomes

U = \frac{(\phi_1 - \phi_2)^2}{2L} + \frac{\tilde{\Phi}(\phi_1 - \phi_2)}{L} + \text{const},    (23)

where \tilde{\Phi} is the external flux through the circuit loop.
When there is only one (or a few) inductors, as in the example circuit, it might be more straightforward to write the energy without the matrix notation. For the example circuit it simply becomes

U = \frac{(\phi_1 - \phi_2 + \tilde{\Phi})^2}{2L}.    (24)

The Lagrangian can now be obtained simply by subtracting the inductive energies from the capacitive energies,

\mathcal{L} = \frac{1}{2} \dot{\boldsymbol{\phi}}^T \mathbf{C} \dot{\boldsymbol{\phi}} - \frac{1}{2} \boldsymbol{\phi}^T \mathbf{L}^{-1} \boldsymbol{\phi} - E_{JJ},    (25)

where E_JJ is the sum of energies from the Josephson junctions, which we do not write in matrix form as it includes cosines. The full Lagrangian of the example circuit is thus

\mathcal{L} = \frac{1}{2} C_1 \dot{\phi}_1^2 + \frac{1}{2} C_2 \dot{\phi}_2^2 - \frac{(\phi_1 - \phi_2 + \tilde{\Phi}_1)^2}{2L} + E_{J,1} \cos(\phi_1 + \tilde{\Phi}_2) + E_{J,2} \cos(\phi_2 + \tilde{\Phi}_3),    (26)

where \tilde{\Phi}_i are the external fluxes through the different loops. With the Lagrangian, one can obtain the equations of motion from Lagrange's equations,

\frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{\phi}_n} - \frac{\partial \mathcal{L}}{\partial \phi_n} = 0.    (27)

Applying this to the example circuit we find the equations of motion

C_n \ddot{\phi}_n = \mp \frac{\phi_1 - \phi_2 + \tilde{\Phi}_1}{L} - E_{J,n} \sin(\phi_n + \tilde{\Phi}_{n+1}),    (28)

where the minus is for n = 1 and the plus is for n = 2.
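The row/column rules for building the capacitive and inductive matrices are easy to automate. Below is a sketch using a hypothetical helper `node_matrix` (not from the text), applied to a two-node circuit in the spirit of the example circuit, with assumed values C_1 = C_2 = 60 fF and L = 1 nH:

```python
import numpy as np

def node_matrix(n_nodes, branches):
    """Build the N x N node matrix from two-terminal branches.

    branches: list of (j, k, value) with 1-based node labels and 0 denoting
    the ground node.  For the capacitive matrix the values are capacitances;
    for the inductive matrix, inverse inductances 1/L.
    """
    M = np.zeros((n_nodes, n_nodes))
    for j, k, val in branches:
        if j > 0 and k > 0:        # branch between two non-ground nodes
            M[j - 1, k - 1] -= val
            M[k - 1, j - 1] -= val
            M[j - 1, j - 1] += val
            M[k - 1, k - 1] += val
        else:                      # branch to ground: only a diagonal entry
            n = max(j, k)
            M[n - 1, n - 1] += val
    return M

# Two-node example: C1 and C2 to ground, a linear inductor L between nodes 1 and 2.
C1, C2, L = 60e-15, 60e-15, 1e-9
Cmat = node_matrix(2, [(1, 0, C1), (2, 0, C2)])
Linv = node_matrix(2, [(1, 2, 1.0 / L)])
print(Cmat)   # diagonal matrix diag(C1, C2)
print(Linv)   # (1/L) * [[1, -1], [-1, 1]]
```

The same helper reproduces the modified capacitive matrix when a capacitor C_3 is added between the two nodes.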

Obtaining the Hamiltonian
The Hamiltonian of the circuit can be found by a simple Legendre transformation of the Lagrangian. First we define the conjugate momentum to the node flux,

q_n = \frac{\partial \mathcal{L}}{\partial \dot{\phi}_n},    (29)

which in vector form becomes q = Cφ̇. If the capacitance matrix is invertible we can express φ̇ as a function of q. We denote the conjugate momenta node charges, due to the fact that q_n corresponds to the algebraic sum of the charges on the capacitors connected to node n. The Hamiltonian can now be expressed in terms of the node charges q_n for the kinetic energy and the node fluxes for the potential energy,

H = \mathbf{q}^T \dot{\boldsymbol{\phi}} - \mathcal{L} = \frac{1}{2} \mathbf{q}^T \mathbf{C}^{-1} \mathbf{q} + U(\boldsymbol{\phi}),    (30)

where the potential energy U is a non-linear function of the node fluxes. The independent variables correspond to degrees of freedom, while the non-independent ones correspond to offset charges. Note that the functional form of the Hamiltonian may differ depending on the choice of spanning tree; however, it always returns the same total energy. With the Hamiltonian it is possible to find the equations of motion using Hamilton's equations,

\dot{\phi}_n = \frac{\partial H}{\partial q_n}, \qquad \dot{q}_n = -\frac{\partial H}{\partial \phi_n},    (31)

which are of course equivalent to Lagrange's equations, Eq. (27).
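The Legendre transformation is easy to verify numerically: for an invertible capacitance matrix, the kinetic energy expressed in velocities, ½φ̇ᵀCφ̇, must agree with its expression in conjugate momenta, ½qᵀC⁻¹q. A minimal check with assumed example values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed example values: a two-node circuit with a diagonal capacitance matrix.
C = np.diag([60e-15, 60e-15])
phidot = rng.normal(size=2)            # arbitrary node-flux velocities

q = C @ phidot                         # node charges, q = C * phidot
phidot_back = np.linalg.solve(C, q)    # invertible C lets us recover phidot

T_lagrangian = 0.5 * phidot @ C @ phidot          # (1/2) phidot^T C phidot
T_hamiltonian = 0.5 * q @ np.linalg.solve(C, q)   # (1/2) q^T C^{-1} q

print(np.allclose(phidot_back, phidot))           # True
print(np.isclose(T_lagrangian, T_hamiltonian))    # True
```

If the capacitance matrix were singular (as happens for an ungrounded circuit, discussed in the normal-mode section), this inversion would fail, signaling a redundant degree of freedom.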

Normal modes
Lagrange's equations tell us that q̇_n = 0 for all passive nodes. This means that the circuit has at most the same number of true degrees of freedom as the number of active nodes, except the ground node. The number of true degrees of freedom is identical to the number of normal modes of the system. If all inductors can be approximated as linear inductors (and external fluxes are ignored), the Lagrangian takes the form

\mathcal{L} = \frac{1}{2} \dot{\boldsymbol{\phi}}^T \mathbf{C} \dot{\boldsymbol{\phi}} - \frac{1}{2} \boldsymbol{\phi}^T \mathbf{L}^{-1} \boldsymbol{\phi}.    (32)

This particularly nice form of the Lagrangian means that the equations of motion take the form

\mathbf{C} \ddot{\boldsymbol{\phi}} = -\mathbf{L}^{-1} \boldsymbol{\phi},    (33)

which is essentially Hooke's law in matrix form, where the capacitances play the role of the masses and the inductive matrix plays the role of the spring constants [74]. The normal modes of the full system can be found as the eigenvectors of the matrix product Ω² = C⁻¹L⁻¹ associated with non-zero eigenvalues. These non-zero eigenvalues correspond to the squared normal-mode frequencies of the circuit. Note that if C and L⁻¹ can be diagonalized simultaneously, their eigenmodes become the normal modes of the system. It can be advantageous to find these eigenmodes and change basis into them (see Section I B 4), as this can be used to control unwanted couplings between modes. This is useful for implementing a system with direct and thus strong multi-body interactions [75]. For example, three-body couplings are necessary in gauge theories, and four-body couplings are necessary for quantum annealing. There are as such many proposals for their implementation, though these are usually indirect, i.e., the interaction is an effective interaction and does not actually occur in the Hamiltonian. Indirect interactions are generally slower and noisier, as they consist of multiple subprocesses.
Indirect interactions can be implemented using perturbation theory [76-79], mutual inductive couplings to a common resonator [80-83], and the decomposition of the multi-body interaction into many two-body interactions involving ancilla qubits [80,84-88], among other approaches.
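The normal-mode computation described above is a small eigenvalue problem. A sketch for a two-node circuit (the Josephson junctions linearized away; assumed values C_1 = C_2 = 60 fF and L = 1 nH):

```python
import numpy as np

# Two capacitively grounded nodes coupled by a linear inductor (assumed values).
C1 = C2 = 60e-15          # 60 fF to ground at each node
L = 1e-9                  # 1 nH coupling inductor
Cmat = np.diag([C1, C2])
Linv = (1.0 / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])

Omega2 = np.linalg.solve(Cmat, Linv)   # Omega^2 = C^{-1} L^{-1}
evals, evecs = np.linalg.eig(Omega2)

# Non-zero eigenvalues are the squared normal-mode frequencies.
freqs_GHz = np.sqrt(np.abs(np.sort(evals.real))) / (2 * np.pi) / 1e9
print(freqs_GHz)   # one zero-frequency mode and one mode near 29 GHz
```

For these symmetric values the eigenvectors are (1, 1) and (1, −1): the common mode has zero frequency (no restoring force, since only flux differences enter the inductive energy), while the differential mode oscillates at √(2/LC)/2π.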

Change of basis
Here we present a method for changing into the normal-mode basis of a circuit. Given a circuit with N nodes and the matrix product Ω² = C⁻¹L⁻¹, let v_1, v_2, . . . , v_N be the orthogonal eigenvectors of Ω², with eigenvalues λ_1, λ_2, . . . , λ_N. Let φ be the usual vector of the node fluxes of the circuit. We can then introduce the normal modes ψ via

\boldsymbol{\psi} = \mathbf{V}^{-1} \boldsymbol{\phi},    (34)

where

\mathbf{V} = (\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N)    (35)

is a matrix whose columns are the eigenvectors of Ω².
Notice that ψ_i = v_i · φ/|v_i|², such that the eigenvectors v_i are exactly the normal modes in terms of the node fluxes, up to normalization. The kinetic energy term in Eq. (18) can now be written

T = \frac{1}{2} \dot{\boldsymbol{\psi}}^T \mathbf{K} \dot{\boldsymbol{\psi}},    (36)

where we have introduced the capacitance matrix in the transformed coordinates, K = VᵀCV. Note that we can perform a similar transformation for any V representing an orthonormal transformation of the basis. If C and L⁻¹ can be simultaneously diagonalized, then K is a diagonal matrix with entries λ_i|v_i|². In terms of the canonical momenta p conjugate to ψ, the kinetic energy takes the usual form

T = \frac{1}{2} \mathbf{p}^T \mathbf{K}^{-1} \mathbf{p},    (37)

where the inverse of K is trivial to find, as it is also a diagonal matrix, whose entries are 1/(λ_i|v_i|²) (below we consider the case λ_i = 0, but we ignore it for now). We must also consider how the higher-order terms of the inductors behave under this coordinate transformation.
Even though we have approximated all inductors as linear in order to find the normal modes, higher-order terms from the Josephson junctions still contribute as corrections, often leading to couplings between the modes. Such terms transform in the following way,

(\phi_n - \phi_{n'})^4 = \Big( \sum_i \left[ (\mathbf{v}_i)_n - (\mathbf{v}_i)_{n'} \right] \psi_i \Big)^4,    (38)

where (v_i)_k is the kth entry of v_i. Such fourth-order terms can result in both two-body interactions as well as more-than-two-body interactions. These multi-body interactions can complicate the equations of motion more than the change of basis simplifies them. Coordinate transformations are therefore often most useful when the capacitors are symmetrically distributed, which results in simple normal modes.

The center-of-mass (CM) mode plays a special role in analytical mechanics, as it often decouples from the dynamics of the system. The same is the case for electrical circuits. The center-of-mass mode has v_CM = (1, 1, . . . , 1)ᵀ, which yields ψ_CM = (φ_1 + φ_2 + · · · + φ_N)/N. This mode is always present, and it corresponds to charge flowing equally into every node of the circuit from the ground and oscillating back and forth between ground and the nodes. Furthermore, since all its entries are identical, it always disappears in the linear combination of Eq. (38). Hence, this mode completely decouples from the dynamics, justifying calling it the center-of-mass mode.
The decoupling of this mode is related to how we can arbitrarily choose a node in our circuit as the ground node, whose node flux does not enter our equations, or rather is identically set to zero. This redundancy of the node fluxes, allowing one degree of freedom to be removed, is what surfaces here again. If the circuit is not grounded, i.e., if a ground node has not been chosen, then λ_CM = 0, making K singular. We therefore always assume the circuit is grounded such that λ_CM ≠ 0.
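The zero-frequency CM mode and the resulting singular K can be seen explicitly in a small sketch. Below, an ungrounded two-node circuit (a capacitor and an inductor, both between nodes 1 and 2; illustrative values) is transformed with V built from the CM and differential modes:

```python
import numpy as np

# Ungrounded two-node circuit: capacitor and inductor between nodes 1 and 2,
# with no element to ground (illustrative values).
C3, L = 60e-15, 1e-9
Cmat = C3 * np.array([[1.0, -1.0], [-1.0, 1.0]])
Linv = (1.0 / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])

v_cm = np.array([1.0, 1.0])            # center-of-mass mode
print(Cmat @ v_cm)                     # zero vector: lambda_CM = 0
print(Linv @ v_cm)                     # zero vector as well

# In the transformed basis the capacitance matrix K = V^T C V is singular,
# so the CM momentum cannot be inverted; grounding the circuit removes it.
V = np.array([[1.0, 1.0], [1.0, -1.0]])   # columns: v_cm and the differential mode
K = V.T @ Cmat @ V
print(np.linalg.det(K))                # 0 (up to floating point): K is singular
```

Dropping the CM row and column of K (equivalently, grounding one node) leaves an invertible matrix for the remaining differential mode, which carries all the dynamics.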
For an example of multi-body interactions see Section III F. For other examples of change of basis see Sections III C 4 and III D 2.
C. Quantization and effective energies

Operators and commutators
After the Hamiltonian has been obtained classically, we are ready to transform to the quantum mechanical description of the circuit. This is done through canonical quantization, replacing the variables and the Hamiltonian by operators, φ → φ̂ and q → q̂. The node flux operators, φ̂_n, corresponding to position coordinates, all commute. However, the conjugate momentum operator, q̂_n, and the flux operator do not commute if they correspond to the same node. Instead they obey the canonical commutator relation [φ̂_n, q̂_m] = iℏδ_nm, where δ_nm is the Kronecker delta. This relation is only valid if the electric state of the nth node is a true degree of freedom, meaning that neither φ̂_n nor q̂_n is a constant of motion, i.e., Hamilton's equations in Eq. (31) do not yield zero. We could also have used the same approach as Dirac, using the fact that the value of the classical Poisson bracket determines the value of the corresponding commutator up to a factor of i [89]. Using this approach for the branch flux operator and the charge operator, we find that the Poisson bracket is ±1, where the sign is plus for a capacitive branch and minus for an inductive branch. Following Dirac's approach, we then arrive at the corresponding commutator relation for the branch operators. Note that in general these branch operators are not conjugate in the Hamiltonian. One must still find the true degrees of freedom before quantization is applied.

Effective energies
Consider the generalized momentum q = Cφ̇. The time derivative of the generalized momentum is exactly the current through the capacitors, I = Cφ̈. Note that in the limiting case of one node, this reduces to the current through a single parallel-plate capacitor, as it should. This makes sense since we can think of the conjugate momentum as the sum of all charges on the capacitors. We therefore define n̂_n as the net number of Cooper pairs stored on the nth node. If we consider the kinetic energy of a circuit, we find that a diagonal element takes the form 4E_C,n n̂_n², where we have defined the effective capacitive energy of the nth node, which is equivalent to the energy required to store a single charge on the capacitor. Note that in our dimensionless notation from Eq. (16) we have n̂_n = −q̂_n, while the effective energy becomes E_C,n = (C⁻¹)_(n,n)/8. In a similar manner we introduce the effective energies of the linear and Josephson inductances, E_L,n and E_J,n, of each node. The effective linear inductive energy is simply the corresponding diagonal element of the inverse inductance matrix, which is equivalent to the sum of the inverse inductances of the inductors connected to the given node. The effective Josephson energy is found the same way: it is the sum of the Josephson energies of the junctions connected to the given node.
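The effective capacitive energy can be computed directly from the capacitance matrix. The sketch below uses the standard SI convention E_C,n = e²(C⁻¹)_(n,n)/2, which corresponds to the dimensionless (C⁻¹)_(n,n)/8 when charge is measured in Cooper pairs; all capacitance values are illustrative, not taken from the text.

```python
import numpy as np

# Sketch: effective charging energies from the capacitance matrix.
# Illustrative values: two nodes with 80 fF to ground each and a small
# 4 fF coupling capacitor between them (all hypothetical).
e = 1.602176634e-19   # elementary charge (C)
h = 6.62607015e-34    # Planck constant (J s)

C1, C2, C12 = 80e-15, 80e-15, 4e-15
C = np.array([[C1 + C12, -C12],
              [-C12, C2 + C12]])

Cinv = np.linalg.inv(C)
# SI form of the effective charging energy of node n: E_C,n = e^2 (C^-1)_(n,n) / 2.
EC = e**2 * np.diag(Cinv) / 2
print(EC / h / 1e9)  # E_C/h in GHz, roughly 0.23 GHz per node here

# The off-diagonal element of C^-1 is what generates the n1*n2 charge
# coupling mentioned in the text for circuits with coupling capacitors.
print(Cinv[0, 1] != 0.0)
```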
Returning to our example circuit in Fig. 1, we can now write its Hamiltonian using operators and effective energies, where the coupling energy of the linear inductor is E_L,12 = 1/2L_12 and we have set the external flux Φ̂ = 0 for simplicity. The effective energy of each Josephson junction is simply its Josephson energy. Note that since our example does not include any coupling capacitors, we do not obtain any coupling term (C⁻¹)_(1,2) n̂_1 n̂_2, since C is diagonal. In reality this is rarely the case.

D. Recasting to interacting harmonic oscillators
If we require the effective capacitive (charging) energy, E_C, to be much smaller than the effective Josephson energy, E_J, the flux will be well localized near the bottom of the potential (we omit the node subscript in this section for simplicity). This is equivalent to a heavy particle moving near its equilibrium position. If we further require the anharmonicity of the effective potential (the flux-dependent part of the Hamiltonian) of the superconducting circuit to be sufficiently large, we are in what is called the transmon regime, named after the transmon qubit [90]. In this case we can Taylor-expand the full potential part of the Hamiltonian up to fourth order in φ. Throwing away the irrelevant constant term, we are left with a Hamiltonian consisting of pure second- and fourth-order terms. If we require the couplings between different parts of the superconducting circuit to be small, we can treat each part individually as a harmonic oscillator perturbed by quadratic and quartic interactions (each part will later become a qubit). For each part of our system we then have a simple harmonic oscillator. The simple harmonic oscillator is well understood quantum mechanically, and using the algebraic approach [91] we define the step operators b̂ and b̂†, where we have also defined the impedance ζ. The step operators fulfill the usual commutator relation [b̂, b̂†] = 1. Expressing the flux and conjugate momentum operators in terms of the step operators, we can rewrite the oscillator part of the Hamiltonian in terms of the usual number operator N̂ = b̂†b̂. Doing this, we must of course also rewrite all quadratic and quartic interaction terms. An overview of how to rewrite these into step operators can be seen in Table II. Returning to our example circuit in Fig. 1, we can then write the Hamiltonian using step operators, where we have omitted all constant terms, which are irrelevant for the dynamics.
We have further defined ω_n, α_n, and g_12, where we refer to ω_n as the frequency, α_n as the anharmonicity, and g_12 as the coupling of our (an)harmonic oscillators. The reason for not including the factor of 1/12 in α will become apparent in Section I E. Note that if the effective inductive energy is zero, E_L,n = 0, the anharmonicity in Eq. (55b) becomes α_n = −E_C,n, a result which holds generally in this regime.
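The formulas ω = √(8E_C E_J) and α ≈ −E_C (for E_L = 0) can be checked against an exact diagonalization of the single-node Hamiltonian in the charge basis. The sketch below uses illustrative energies in GHz and a charge-basis representation, which is a standard numerical technique rather than a method taken from the text.

```python
import numpy as np

# Numerical check of omega ~ sqrt(8 E_C E_J) and alpha ~ -E_C for a
# single transmon-like node (illustrative energies in GHz, E_L = 0).
EC, EJ = 0.25, 12.5   # E_J/E_C = 50, well inside the transmon regime

# Exact Hamiltonian H = 4 E_C n^2 - E_J cos(phi) in the charge basis,
# where n runs over -N..N and cos(phi) shifts the charge by +-1.
N = 20
n = np.arange(-N, N + 1)
H = np.diag(4 * EC * n**2) - (EJ / 2) * (np.eye(2*N + 1, k=1) + np.eye(2*N + 1, k=-1))

E = np.linalg.eigvalsh(H)      # ascending eigenvalues
w01 = E[1] - E[0]              # qubit frequency
alpha = (E[2] - E[1]) - w01    # anharmonicity

print(w01)    # close to sqrt(8*EC*EJ) - EC = 4.75
print(alpha)  # close to -EC = -0.25 (slightly larger in magnitude)
```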

E. Time averaged dynamics
When analyzing the Hamiltonian of the circuit, it is often advantageous to consider which terms dominate the time evolution and which terms only give rise to minor corrections. The latter can often be neglected without changing the overall behavior of the system. It can be difficult to determine which terms dominate, as different scales influence the dynamics of the system: the modes of the oscillators are usually of the order of GHz, while the interactions between the different oscillators are usually much smaller, of the order of MHz. We therefore employ a separation of scales, removing the large energy differences of the modes from the Hamiltonian and making it possible to see the finer details of the interactions. In order to do this, we first introduce the interaction picture, where the interacting part of the Hamiltonian is in focus.

1. Interaction picture

Consider the state |ψ, t⟩_S at time t, which satisfies the Schrödinger equation i d/dt |ψ, t⟩_S = Ĥ |ψ, t⟩_S, where Ĥ is the Hamiltonian and the subscript S refers to the Schrödinger picture. In the Schrödinger picture operators are time independent and states are time dependent. We wish to change into the interaction picture, where both the operators and the states are time dependent. We therefore split the Hamiltonian into a non-interacting (or free) part and an interacting part, Ĥ = Ĥ_0 + Ĥ_I,S. States in the interaction picture are defined as |ψ, t⟩_I = e^{iĤ_0 t} |ψ, t⟩_S, where the subscript I refers to the interaction picture. The operators in the interaction picture are defined as Ô_I(t) = e^{iĤ_0 t} Ô_S e^{−iĤ_0 t}, where Ô_S is an operator in the Schrödinger picture. It is then possible to show that the Schrödinger equation in the interaction picture becomes i d/dt |ψ, t⟩_I = Ĥ_I |ψ, t⟩_I, where Ĥ_I = e^{iĤ_0 t} Ĥ_I,S e^{−iĤ_0 t} is the interaction part of the Hamiltonian in the interaction picture. In general, one can transform to any rotating frame using the transformation rule Ĥ → Û†(t)ĤÛ(t) + i [dÛ†(t)/dt] Û(t), where Û(t) = exp(−iĤ_0 t) is the unitary transformation of the rotating frame Ĥ_0 one wishes to transform into.
This transformation rule holds for any unitary transformation, which is quite useful to keep in mind. The advantage of rotating into a specific frame is that we can highlight some desired physics while ignoring other parts that are well understood. This is analogous to choosing a reference frame rotating with the Earth when doing classical physics on its surface. Note that Eq. (60) is equivalent to transforming the Hamiltonian into the interaction picture, Ĥ → Ĥ_I, when Ĥ_0 is the non-interacting part of the Hamiltonian, as the second term simply removes the non-interacting part of the Hamiltonian. One can also show that the time evolution of operators in the interaction picture is governed by a Heisenberg equation of motion, where we have assumed no explicit time dependence in Ô_S. Note that this implies that the voltage of the bth branch can be calculated as the time derivative of the branch flux operator. For more information about the interaction picture see, e.g., Ref. [91].

Rotating wave approximation
Consider now the weakly anharmonic oscillator seen in Fig. 5(c), which has the quantized Hamiltonian Ĥ = ωb̂†b̂ + (α/12)(b̂† + b̂)⁴, where we have removed all constant terms. The frequency is ω = √(8E_C E_J) and the anharmonicity is α = −E_C, where E_C and E_J are the effective capacitive energy and Josephson energy, respectively. We choose the first term as the non-interacting Hamiltonian, Ĥ_0 = ωb̂†b̂. We now want to figure out how the step operators behave in the interaction picture, i.e., we want to calculate Eq. (58) for the step operators. First we notice that Ĥ_0ⁿ b̂† = b̂†(Ĥ_0 + ω)ⁿ. Using this and expanding the exponential functions, we can prove that e^{iĤ_0 t} b̂† e^{−iĤ_0 t} = b̂† e^{iωt}. By taking the Hermitian conjugate, we find a similar expression for b̂, but with a minus sign in the exponential factor. We now wish to consider how different products of the step operators transform into the interaction picture. Starting with the number operator N̂ = b̂†b̂, we see that it does not change, as the exponential factor from b̂† cancels the exponential factor from b̂. This is not surprising, as the non-interacting Hamiltonian is proportional to the number operator. However, if we consider terms like Jb̂†b̂†, we find that in the interaction picture they take the form Jb̂†b̂†e^{2iωt}. If ω is sufficiently large compared to the prefactor J (which is often the case in superconducting circuit Hamiltonians, where ω ∼ GHz while other terms are usually of the size J ∼ MHz), these terms will oscillate very rapidly on the timescale set by J. The time average over such terms is zero, and we can therefore neglect them, as they only give rise to minor corrections. This is essentially the rotating wave approximation (RWA), which is widely used in atomic physics [92,93]. The story is the same for b̂b̂ terms. In fact, all terms which do not conserve the energy of the system, i.e., terms where the number of b̂†-operators is not equal to the number of b̂-operators, rotate rapidly and can therefore safely be neglected.
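The transformation rule for the step operators can be verified numerically in a truncated Fock space. The following sketch (with an illustrative frequency) checks that e^{iĤ_0 t} b̂ e^{−iĤ_0 t} = b̂ e^{−iωt} for Ĥ_0 = ωb̂†b̂.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check that e^{i H0 t} b e^{-i H0 t} = b e^{-i w t} for
# H0 = w b^dag b, in a truncated Fock space (values illustrative).
d = 10
b = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator
w = 2 * np.pi * 5.0                          # mode frequency (rad/ns, i.e., 5 GHz)
H0 = w * b.conj().T @ b

t = 0.137                                    # arbitrary time (ns)
U = expm(1j * H0 * t)
b_I = U @ b @ U.conj().T                     # b in the interaction picture

print(np.allclose(b_I, b * np.exp(-1j * w * t)))  # -> True
```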
Note that in spite of the fact that we refer to these as energy-conserving and non-conserving terms, we could just as well talk about conservation of quanta, as created and destroyed by the step operators, as is implied by the discussion above. It is important to point out that, in spite of the naming, the 'conservation' is not related to a conservation law resulting from a symmetry, as in Noether's theorem. Rather, the statement here is that the energy-conserving terms are much more important than the non-conserving terms, as long as the conditions for the rotating wave approximation are satisfied. Now consider the anharmonicity term of Eq. (62). When only including energy-conserving terms and removing irrelevant constants, the anharmonicity term takes the form (α/12)(b̂† + b̂)⁴ = α(½ b̂†b̂†b̂b̂ + b̂†b̂) + non-conserving terms.
The last term, b̂†b̂, is simply the number operator, and we can therefore consider it a correction to the frequency, such that the dressed frequency becomes ω̃ = ω + α = √(8E_C E_J) − E_C. The remaining b̂†b̂†b̂b̂-term makes the oscillator anharmonic. For this reason we call α the anharmonicity of the anharmonic oscillator. Thus, if we remove the terms that do not conserve energy, the Hamiltonian takes the form Ĥ = ω̃b̂†b̂ + (α/2)b̂†b̂†b̂b̂ (in the Schrödinger picture). Consider now an interaction term like the last term of Eq. (54). Changing into the interaction picture, we realize that the two last terms obtain a phase of exp[±i(ω_i + ω_j)t], which can be considered fast oscillating if the frequency ω_i + ω_j is much larger than the interaction strength, which is usually the case. We can therefore safely neglect these non-conserving terms. The two first terms, on the other hand, obtain a phase of exp(±iδt), where δ = ω_i − ω_j is the detuning of the two oscillators. It is therefore tempting to say that these terms only contribute if ω_i ≈ ω_j. This is, however, not the whole story; detuned terms can still yield corrections, which can be useful in some situations, e.g., when driving qubits, see Section I G. However, as a general rule of thumb one can neglect these terms unless ω_i ≈ ω_j. For a more general discussion of the validity of time-averaged dynamics see Ref. [94].
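The separation of scales behind the RWA can be illustrated numerically: the windowed average of a GHz-scale phase factor is suppressed by roughly 1/(fT), while a near-resonant MHz-scale phase factor survives. The frequencies and window length below are illustrative.

```python
import numpy as np

# Sketch of the RWA intuition: the average of exp(i 2 pi f t) over a
# pulse of length T is suppressed by ~1/(f T) for fast phases, while a
# slow (near-resonant) phase factor survives. Values illustrative.
T = 100e-9                            # 100 ns window
t = np.linspace(0.0, T, 200001)

def window_average(f):
    return abs(np.mean(np.exp(2j * np.pi * f * t)))

fast = window_average(10e9)   # ~ (w_i + w_j)/2pi, GHz scale
slow = window_average(5e6)    # ~ detuning delta/2pi, MHz scale

print(fast)  # essentially zero: such a term can be dropped under the RWA
print(slow)  # order one: such a term must be kept
```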
If we consider the example circuit in Fig. 1 under the assumption that |α_n| ≪ ω_n, we can time average its Hamiltonian in Eq. (54). We choose the non-interacting Hamiltonian as Ĥ_0 = ω_1 b̂†_1 b̂_1 + ω_2 b̂†_2 b̂_2, such that the interacting part of the Hamiltonian in Eq. (54) transforms into the interaction picture with phase factors set by the detuning δ = ω_1 − ω_2 and the sum frequency ω_1 + ω_2. Assuming ω_1 ≈ ω_2, we have δ ≈ 0, and if we further assume ω_n ≫ g_12, the last two terms are fast oscillating and can thus be neglected. We are usually interested in the Hamiltonian in the Schrödinger picture, and we therefore write the Hamiltonian in this frame. Here the fast rotating terms are denoted non-conserving terms, as they do not conserve the energy (in the sense discussed above), and using the interaction picture we have shown that they only contribute small corrections. This means that we can write the Hamiltonian in the Schrödinger picture without

non-conserving terms, where we have introduced the dressed frequencies ω̃_n = ω_n + α_n.

F. Truncation
A harmonic oscillator, as one gets from a regular LC circuit, has an infinite number of equally spaced states [see Fig. 5]. This is not desirable, as we wish to consider only the lowest states of the system. However, when we introduce a Josephson junction instead of a linear inductor, we introduce an anharmonicity, see Fig. 5(a) and (c). The anharmonicity stems from the b̂†b̂†b̂b̂-terms [see Eq. (65)] and can be viewed as a perturbation of the harmonic oscillator Hamiltonian if |α| ≪ ω. This anharmonicity changes the spacing between the energy levels of the harmonic oscillator, making it an anharmonic oscillator [see Fig. 5(d)]. For this reason we require the anharmonicity to be of a certain size. The anharmonicity is defined as the difference between the first and second energy gaps, while the relative anharmonicity is the anharmonicity divided by the first energy gap. Note that this anharmonicity is the same anharmonicity appearing in front of the b̂†b̂†b̂b̂-terms in the previous sections. As a rule of thumb, the relative anharmonicity should be at least a couple of percent in order to make an effective qubit. It does not matter whether the anharmonicity is positive or negative. For transmon-type qubits it will be negative, while it can be either positive or negative for flux-type qubits. The relative anharmonicity grows with the ratio E_C/E_J, which means that this ratio must be of a certain size for the anharmonicity to have an effect. This is in contrast to the beginning of Section I D, where we argued that this ratio should be as low as possible in order to allow for the expansion of the cosines. Thus we need to find a suitable regime for the ratio E_C/E_J. This regime is usually called the transmon regime. When the anharmonicity is sufficiently large, we can distinguish between the energy gaps.
This means that a transition between the ground state and the first excited state does not require the same amount of energy as a transition between the first and second excited states, or any other pair of states for that matter. This is due to the fact that the anharmonicity term varies in size for different energy levels, as we will see in the following sections. In the following we assume that we have a sufficiently large anharmonicity and truncate the system into a two-level system; however, there is nothing stopping us from keeping more levels.

Two-level model
In a two-level system we can represent our states as |0⟩ = (1, 0)^T and |1⟩ = (0, 1)^T. In this reduced Hilbert space all operators can be expressed in terms of the Pauli matrices and the identity. If we view unitary operations as rotations in the Hilbert space, we can parameterize the superposition of the two states using a complex phase, φ, and a mixing angle, θ, as |ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩, where 0 ≤ θ ≤ π and 0 ≤ φ < 2π. With this we can illustrate the qubit as a unit vector on the Bloch sphere, see Fig. 6. Unitary operations can then be seen as rotations on the sphere, and the Pauli matrices are thus the generators of rotations. Linear operators are then represented by 2 × 2 matrices. In general, we denote the n × n matrix representation of an operator Ô by M_n[Ô].

Figure 6. The Bloch sphere. Each point on the Bloch sphere corresponds to a quantum state. Rotations around the sphere correspond to transformations of the state.
In order to apply this mapping to the Hamiltonian, we must map each operator in each term. As an example, consider the truncation of the (b̂† + b̂)³-term from Table III. Using the orthonormality of the states, we obtain the matrix representation of the operator. Truncation of the remaining terms is presented in Table III. If we consider the example circuit in Fig. 1, after we have removed non-conserving terms as in Eq. (69), and assume an anharmonicity large enough for truncation to a two-level system, we obtain a Hamiltonian in which the qubits couple through the operators σ±_n = (σx_n ∓ iσy_n)/2. This is essentially two qubits which can interact by swapping an excitation between them, i.e., interacting via a swap coupling.
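The truncation of such operator products can be checked numerically. The sketch below evaluates (b̂† + b̂)³ in a large Fock space and then keeps the 2 × 2 block of the two lowest levels, recovering 3σx; the dimension is an illustrative choice.

```python
import numpy as np

# Sketch of two-level truncation: evaluate (b^dag + b)^3 in a large Fock
# space first, then keep the 2x2 block of the two lowest levels.
d = 12
b = np.diag(np.sqrt(np.arange(1, d)), k=1)
X = b + b.conj().T

X3 = np.linalg.matrix_power(X, 3)
M2 = X3[:2, :2]                 # matrix representation M_2[(b^dag+b)^3]

print(M2)  # [[0, 3], [3, 0]], i.e., 3 * sigma_x
```

Note that the order of operations matters: truncating b̂† + b̂ to 2 × 2 first and then cubing would give σx³ = σx, not 3σx, because the intermediate matrix elements through the higher levels would be lost.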

Three-level model
It can be desirable to truncate to the three lowest levels of the anharmonic oscillator, i.e., the three lowest states of Fig. 5(d). This can, e.g., be useful if one wants to study qutrit systems [95][96][97], or the leakage from the qubit states to higher states [98,99]. In this case the operators are represented as 3 × 3 matrices; the step operators, the number operator, and the powers of b̂† + b̂ all have corresponding 3 × 3 matrix representations. From the diagonal of the matrix representation of (b̂† + b̂)⁴ in Eq. (76e) it is clear to see the varying size of the anharmonicity, as the differences 15 − 3 = 12 and 39 − 15 = 24 between the levels change. This pattern continues for higher levels and means that we can distinguish between the levels.
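The diagonal entries 3, 15, 39 quoted above, and the growing differences 12 and 24 between them, can be reproduced numerically from ⟨n|(b̂† + b̂)⁴|n⟩ = 6n² + 6n + 3; the Fock-space cutoff below is an illustrative choice.

```python
import numpy as np

# Check of the varying level spacing quoted in the text: the diagonal of
# (b^dag + b)^4 is 6n^2 + 6n + 3, giving 3, 15, 39 for the lowest levels.
d = 15
b = np.diag(np.sqrt(np.arange(1, d)), k=1)
X = b + b.conj().T

diag4 = np.diag(np.linalg.matrix_power(X, 4))[:3]
print(diag4)              # -> [ 3. 15. 39.]
print(np.diff(diag4))     # -> [12. 24.]: the differences 15-3 and 39-15
```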
As we are dealing with 3 × 3 matrices, we can no longer use the Pauli spin-1/2 matrices as a basis for the operators. In this case one could use the Gell-Mann matrices as a basis; however, it is often more convenient simply to leave the step operators as they are. We are not limited to three levels, and it is possible to truncate the system to an arbitrary number of levels, thus creating a so-called qudit.

Exact truncation
In the above truncations to two and three levels, the cosine contribution from the Josephson junction is expanded to fourth order, which introduces some error and limits the qubits to operate in a given regime. The behavior of the exact Hamiltonian should, however, be nearly the same, with only negligible differences. It is therefore of interest to find the truncated version of the entire cosine operator. This can be done numerically by writing the node fluxes in terms of creation and annihilation operators, which would be represented by finite matrices in numerical calculations, and calculating the matrix cosine function of these. However, that requires much harder computations than if we were able to write the truncated cosine directly in terms of creation and annihilation operators. We therefore present a method for doing so here. The method employs the displacement operator, which is often used in quantum optics studies of optical phase space [100].
Consider the standard cosine term of a Josephson junction bridging two nodes with node fluxes φ_1 and φ_2. Written in terms of the creation and annihilation operators, the argument of the cosine involves √(ζ_i/2)(b̂†_i + b̂_i), where ζ_i is the impedance (Eq. (50)) of the two nodes, respectively. We want to find the matrix elements of the cosine operator in Eq. (77) with respect to the lowest levels of the two anharmonic oscillator modes. The trick is to rewrite the cosine in terms of exponentials that only contain one mode each. We write the cosine in terms of two complex exponentials as usual and note that each exponential can be written as a product, because the operators of different (an)harmonic oscillators commute. From this expression it is clear that we need to find the matrix representation of the general operator exp[ik(b̂† + b̂)] for the real number k = √(ζ/2). To find the desired matrix representation, we consider the displacement operator D̂(ξ) = exp(ξb̂† − ξ*b̂), where ξ is a complex number. This operator is unitary and satisfies D̂(ξ)† = D̂(−ξ), and it obeys a simple commutation relation with the step operators. The operator creates coherent states by 'displacing' the vacuum state, D̂(ξ)|0⟩ = |ξ⟩, where |ξ⟩ is the coherent state defined by b̂|ξ⟩ = ξ|ξ⟩. A coherent state can be written in terms of Fock states as |ξ⟩ = e^{−|ξ|²/2} Σ_n (ξⁿ/√(n!)) |n⟩. Clearly, we have exp[ik(b̂† + b̂)] = D̂(ik). Using the above commutation relation, we can derive the effect of the displacement operator on any other Fock state. With that we can calculate its matrix elements, as desired. For ease, let us just write the commutation relation and coherent state for ξ = ik. For truncation to the two lowest levels we need only calculate three matrix elements, ⟨0|D̂(ik)|0⟩, ⟨1|D̂(ik)|1⟩, and ⟨0|D̂(ik)|1⟩, which is exactly what we need to truncate cosine operators.
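The required matrix elements of D̂(ik) are standard coherent-state results: ⟨0|D̂(ik)|0⟩ = e^{−k²/2}, ⟨1|D̂(ik)|1⟩ = (1 − k²)e^{−k²/2}, and ⟨0|D̂(ik)|1⟩ = ik e^{−k²/2}. The sketch below checks them against a brute-force matrix exponential in a truncated Fock space (the value of k is illustrative).

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the displacement-operator matrix elements used for
# exact truncation: for xi = i*k, <0|D|0> = e^{-k^2/2},
# <1|D|1> = (1 - k^2) e^{-k^2/2}, and <0|D|1> = i*k e^{-k^2/2}.
d = 40                                   # Fock cutoff, large enough for small k
b = np.diag(np.sqrt(np.arange(1, d)), k=1)
k = 0.4                                  # k = sqrt(zeta/2), illustrative

D = expm(1j * k * (b + b.conj().T))      # D(ik) = exp[i k (b^dag + b)]
g = np.exp(-k**2 / 2)

print(np.isclose(D[0, 0], g))                 # -> True
print(np.isclose(D[1, 1], (1 - k**2) * g))    # -> True
print(np.isclose(D[0, 1], 1j * k * g))        # -> True
```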
Consider again the standard Josephson junction term from Eq. (78). We can now perform an exact truncation of it to its lowest two levels using the above identity. We find the usual contribution to the qubit energies, as well as XX- and ZZ-couplings, where we have ignored constant terms. The difference is that the coefficients of these terms are more accurate, as the calculation has not involved any Taylor expansions. In particular, each coefficient has a factor of exp[−(ζ_1 + ζ_2)/4], which contains contributions corresponding to the infinitely many possible virtual processes of exciting the anharmonic oscillator modes to any higher-lying level and de-exciting again, which affect the dynamics of the two-level subspace.

Exact four-level model
Using the method of exact truncation, the cosine term and the other operators that appear when recasting a superconducting circuit Hamiltonian in terms of harmonic oscillator modes can also be truncated exactly to the four lowest levels. This can be useful for numerical studies of the effect of the higher levels on the two-level dynamics. It is advantageous to perform the truncation analytically, in particular the truncation of the cosine term, to avoid having to do it numerically. First we define some new matrices Z, A, and B, and finally X_ij and Y_ij for i, j = 0, 1, 2, 3 with i < j, defined through their (a, b)th entries. Together with the identity, Z, A, and B describe contributions to the energy levels. In particular, Z can be seen as the 4 × 4 expansion of σz, while A describes the anharmonicity, and B describes a similar anharmonic energy shift beyond the regular anharmonicity, relevant only for the third excited state and higher. In terms of the usual bosonic number operator n̂ = b̂†b̂ = diag(0, 1, 2, 3), we may say that Z corresponds to n̂, A corresponds to n̂², and B to n̂³. Alternatively, we may say that A is proportional to b̂†b̂†b̂b̂ and B to b̂†b̂†b̂†b̂b̂b̂, which shows how A does not affect levels below the second excited one, while B does not matter for levels below the third excited one. The X_ij and Y_ij describe rotation or flipping between the ith and jth energy levels, and are thus the expansions of σx and σy. With the above method of truncating to four levels, we can find the exact anharmonicities as the coefficients of the standalone A-operators. Just as there are ZZ-couplings among spins, which change energy levels depending on the state of the system, there will be couplings involving A that change the anharmonicities. But just as we only look at standalone Z-operators when determining the basic energy levels, we do not include the interaction contributions to the anharmonicity when calculating it.
These contributions will in general also be smaller, as they originate from terms involving more node fluxes and therefore more ζ's. If one wishes to find the anharmonicity without finding and reducing the complete four-level Hamiltonian, one can simply replace the exponentials e^{i√(ζ/2)(b̂†+b̂)} with only (1 − ζ/4 + (ζ²/8)A)e^{−ζ/4}, and then find the standalone A matrices. We do not need to include the other terms from Eq. (90e), as they only contribute to interactions, which are not of interest in this case.

G. Driving the qubit
Single-qubit rotations in superconducting circuits can be achieved by capacitive microwave control. In this section we go through the steps of analyzing a microwave-controlled transmon-like qubit and then generalize to a d-level qudit. To this end we consider the superconducting qubit seen in Fig. 7, which is capacitively coupled to a microwave source. Using the approach presented in Section I B, we obtain the Lagrangian of this circuit, where φ is the node flux. Expanding the last term, we find that the first term in the parenthesis is an irrelevant offset, the second term is simply a change of the capacitance of the node, while the last term is our driving term.
With this in mind we rewrite the Lagrangian and find the conjugate momentum of the node flux, φ. Doing the usual Legendre transformation, our Hamiltonian splits into an anharmonic oscillator part, Ĥ_ano, and an external driving part, Ĥ_ext, where we have removed an irrelevant offset term. We are now ready to perform the quantization, after which the driving term is expressed in terms of the step operators. Assuming a large enough anharmonicity, we can truncate the Hamiltonian to the two lowest levels, where ω is the qubit frequency and Ω = C_ext/(√(2ζ)(C + C_ext)).
We now rotate into the frame rotating with the frequency of the qubit, also known as the interaction frame, as discussed in Section I E. In particular, we use Ĥ_0 = −ωσz/2 for the transformation in Eq. (60). The resulting Hamiltonian is equivalent to the external driving part of the Hamiltonian in the interaction picture, i.e., Ĥ_R = Ĥ^I_ext. We now assume that the voltage is sinusoidal, of the form V(t) = V_0 η(t) sin(ω_ext t + ϕ), where V_0 is the amplitude of the voltage, η(t) is a dimensionless envelope function, ω_ext is the external driving frequency, and ϕ is the phase of the driving. One usually defines the in-phase component I = cos(ϕ) and the out-of-phase component Q = sin(ϕ) [11]. Inserting the voltage of Eq. (99) into the Hamiltonian in Eq. (98), we obtain the rotating-frame Hamiltonian, where δ = ω − ω_ext is the difference between the qubit frequency and the driving frequency, and we have neglected fast oscillating terms, i.e., terms with ω + ω_ext, in accordance with the rotating wave approximation. This Hamiltonian can be written very simply in matrix form. From this we conclude that if we apply a pulse at the qubit frequency, i.e., ω_ext = ω, we can rotate the state of the qubit around the Bloch sphere, see Fig. 6: setting ϕ = 0, i.e., using only the I-component, we rotate about the x-axis, and setting ϕ = π/2, i.e., using only the Q-component, we rotate about the y-axis.
We consider now the unitary time-evolution operator of the driving Hamiltonian. At the qubit frequency, i.e., δ = 0, it takes a simple exponential form, since the Hamiltonian commutes with itself at different times. This is known as Rabi driving and can be used for engineering efficient single-qubit gate operations. Note that this holds only for δ = 0; for non-zero δ, one needs in principle to solve the full Dyson series [91]. The angle of rotation, Θ(t), is given by the time integral of the drive amplitude and depends on the macroscopic design parameters of the circuit, via Ω, the envelope of the pulse, η(t), and the amplitude of the pulse, V_0. The latter two can be controlled using arbitrary waveform generators. In case one wishes to implement a π-pulse, one must adjust these parameters such that Θ(t) = π. Note that a single π-pulse corresponds to implementing a NOT-gate, which inverts the qubit [101][102][103][104]. A series of single-qubit gates can then be performed on the qubit by applying a sequence of pulses [105].
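A resonant π-pulse can be sketched numerically: with a square envelope and δ = 0, evolving under (Ω_R/2)σx for a duration π/Ω_R flips |0⟩ to |1⟩ up to a phase. The Rabi frequency below is an illustrative value, not one derived from the circuit in Fig. 7.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of a resonant (delta = 0) pi-pulse in the rotating frame: with
# H_R = (Omega_R/2) sigma_x and a square envelope, a pulse of duration
# t = pi/Omega_R maps |0> to |1> up to a phase. Values illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Omega_R = 2 * np.pi * 20e6          # 20 MHz Rabi frequency (rad/s)

t_pi = np.pi / Omega_R              # pulse duration such that Theta = pi
U = expm(-1j * (Omega_R / 2) * sx * t_pi)

psi0 = np.array([1, 0], dtype=complex)
psi = U @ psi0
print(abs(psi[1])**2)               # -> 1.0: the qubit is flipped (NOT gate)
```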

Generalization to qudit driving
Now let us generalize this to a d-dimensional qudit. Quantizing and truncating the anharmonic oscillator part of the Hamiltonian in Eq. (95) to d levels, the qudit Hamiltonian becomes a sum over the level energies, where ω_n is the energy of qudit state |n⟩. This is a rewriting of the ωb̂†b̂-term and the anharmonicity term, where the anharmonicity has been absorbed into the set of ω_n. Now, starting from Eq. (96) and for simplicity setting the phase in Eq. (99) to π/2, such that V(t) = V_0 cos(ω_ext t), we can move to the rotating frame, as was also done above for the qubit, using Eq. (60). We choose the frame rotating with the external driving frequency, contrary to what we did for the qubit, where we rotated into a frame at the qubit frequency. We see that for a qubit (d = 2) we get Ĥ_0 = −ω_ext σz/2 up to a global constant, which we could also have chosen to use above, instead of the qubit frame. Applying Eq. (60) to the qudit Hamiltonian in Eq. (104), and performing the same transformation on Ĥ_ext using the standard expansion of the bosonic operators, we can find, by expanding the cosine in the voltage drive using Euler's formula, the total Hamiltonian in the rotating frame, where δ_n = ω_n − nω_ext is the detuning of the nth state relative to the ground state driven by the external field, and Ω_k is the Rabi frequency of the kth transition. Of course, we can also include a phase in the drive here, as discussed above. Thus, by just using a single drive, we achieve great control over a specific qudit transition. Transitions between other neighboring qudit states can be driven simultaneously by using a multi-mode driving field. The external field thus introduces exchange terms between the qudit states, enabling transitions between two states k and k + 1 if the effective detuning, Δ_k,k+1 = δ_k+1 − δ_k = ω_k+1 − ω_k − ω_ext, is small enough (on the order of the Rabi frequency, Ω_k).
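To lowest order, the relative strengths of the neighboring-level transitions follow from the charge matrix elements ⟨k+1|(b̂† + b̂)|k⟩ = √(k+1), so the Rabi frequency of the kth transition scales roughly as √(k+1) times that of the 0-1 transition. The sketch below checks this scaling, neglecting anharmonic corrections to the eigenstates (an assumption, valid only approximately for a transmon).

```python
import numpy as np

# Sketch: in a harmonic-oscillator-like qudit, the matrix elements
# <k+1|(b^dag + b)|k> grow as sqrt(k+1), so (to lowest order) the Rabi
# frequency of the k-th transition scales as sqrt(k+1) relative to the
# 0-1 transition. Anharmonic corrections are neglected here.
d = 5
b = np.diag(np.sqrt(np.arange(1, d)), k=1)
X = b + b.conj().T

elements = np.array([X[k + 1, k] for k in range(d - 1)])
print(elements)                                          # -> [1. 1.414.. 1.732.. 2.]
print(np.allclose(elements, np.sqrt(np.arange(1, d))))   # -> True
```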
One thing to note is the small amount of leakage to other levels, caused by the limited anharmonicity of the transmon qubit and thus the limited detuning of unwanted transitions. Luckily, this can largely be eliminated by the DRAG (Derivative Removal by Adiabatic Gate) method and its improvements [103,106].

H. Overview of the basic method
Here we present an overview of the simplest approach to analyzing superconducting circuits. Note that the order of some of the steps can be interchanged when analyzing specific circuits.
1. Draw the circuit and label all components and external fluxes.
2. Label all active nodes and determine the ground node(s).
3. Choose a spanning tree. If no external fluxes are present, this step can be skipped, see Section I B.
4. Add the energy of each element, following Table I, in order to construct the Lagrangian.
• The energy of components on the spanning tree must be added without external fluxes.
• The energy of components on the closure branches, i.e., not on the spanning tree, must be added with external fluxes.
5. Change into Hamiltonian formalism by doing a Legendre transformation, see Section I B 2.
6. Quantize the Hamiltonian as in Section I C 1.
7. Introduce the effective energies of the components, see Section I C 2.
8. Assuming that the effective capacitive energy of each node is sufficiently smaller than the Josephson energy of the node, recast the Hamiltonian into interacting harmonic oscillators, following Section I D. This is done in the following steps:
(a) Expand the cosine terms of the Josephson junctions in the potential.
(b) Collect all second order terms.
(c) Change into step operators following Table II.
(d) Define frequencies, anharmonicities, and coupling strengths for simplicity.
9. If needed, eliminate terms which only give rise to small corrections, using the time-averaging dynamics presented in Section I E. This is done either by removing non-energy-conserving terms or by rotating to a desired frame using Eq. (60).
10. Assuming that the anharmonicity is sufficiently large for each node, truncate the anharmonic oscillators into qudits as done in Section I F. For the case of qubits follow Table III.
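Steps 7-8 can be illustrated numerically for a single transmon node. The sketch below uses the standard transmon estimates obtained from expanding the cosine to fourth order and introducing step operators, namely a qubit frequency ω ≈ √(8 E_J E_C) − E_C and an anharmonicity ≈ −E_C (ħ = 1); the energy values are assumed for illustration:

```python
import numpy as np

# Assumed effective energies for a single transmon node (units of 2π·GHz, ħ = 1)
EC = 0.25    # effective capacitive (charging) energy
EJ = 12.5    # Josephson energy

# Step 8 presupposes the transmon regime EJ >> EC
assert EJ / EC > 20

# Standard transmon estimates from the fourth-order cosine expansion:
omega = np.sqrt(8 * EJ * EC) - EC    # qubit frequency (step 8d)
anharm = -EC                         # anharmonicity (step 8d)

print(f"omega/2π ≈ {omega:.3f} GHz, anharmonicity/2π ≈ {anharm:.3f} GHz")
```

Step 10 then requires |anharm| to be large compared to the drive and coupling strengths so that the truncation to the lowest levels is justified.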

II. ADVANCED
In this section we present some of the more advanced methods for analyzing superconducting electrical circuits. These methods are not essential for such an analysis unless mutual inductance is to be included, and as such they may be considered 'nice to know'. A thorough understanding of the 'need to know' methods presented in Section I is preferable before approaching this section.
We first present some fundamental graph theory followed by a method for obtaining the Lagrangian using said graph theory, which includes mutual inductance. We then present a method for obtaining the Lagrangian of a system with dissipation. With this it is possible to find a master equation for the system, which we present together with equations for calculating the decoherence times for two-level systems.

A. Fundamental graph theory of electrical networks
In this section we present some fundamental definitions from graph theory. The reason for this is that graph theory is the natural language of electromagnetic circuits, where each circuit element can be represented as an edge on a graph. By introducing the following definitions we obtain a more rigorous framework for analyzing circuits. We will start by defining the fundamental concepts of graph theory. We will also describe the quantities important to circuit analysis using the example circuit shown in Fig. 8(a). The example circuit consists of a transmon qubit capacitively coupled to a resonator, which is a very common setup [90,107]. For more material on graph theory see, e.g., Ref. [108].

Definition 1 (Graph) A graph G = (N, B) consists of a set of nodes N and a set of branches B, where each branch connects a pair of nodes.

Note that we will allow multiple branches to connect the same pair of nodes. Sometimes this is called a multigraph in order to distinguish it from simple graphs, where only one branch can connect a given pair of nodes.
With this definition we can consider each circuit as a graph, with each element being a branch. The first step of any circuit analysis is to label every branch of the graph. Branches can be labeled in different ways, usually by the element itself or by the flux through the branch. These are of course equivalent, and often both are used, as they complement each other.
Using the elements as the labels, the set of branches in Fig. 8(a) becomes B = {L_r, C_s, C_r, C_c, L_J}. The order (number of nodes) of the graph is N = 3 and the nodes can be labeled arbitrarily. The number of branches is B = 5 and we can thus write the fluxes over all branches as a vector with five elements,

Φ = (Φ_1, Φ_2, Φ_3, Φ_4, Φ_5)^T,

where the order of the fluxes corresponds to the order of the branches in B. Note that we have indicated the direction of every branch in Fig. 8(a) using arrows. We define positive branch current, I_b > 0, as current flowing through a branch in the direction of the arrow. Using the passive sign convention, the voltage over a branch is then given by V_b = Φ̇_b, which ensures that the power P_b = I_b V_b is positive if energy is being stored or dissipated in the branch element. Strictly speaking, this makes our graph a directed graph, but since all electrical network graphs are directed graphs, we will simply call them graphs. We will also assume that our graph is connected, meaning that there exists a path between every pair of nodes.
In the electrical circuit setting, the notion of a subgraph is often used to describe the capacitive and inductive subgraphs of the circuit.

Definition 2 (Subgraph) A subgraph of a graph G = (N, B) is a graph G_S = (N_S, B_S) whose nodes and branches satisfy N_S ⊆ N and B_S ⊆ B.

In the case of the example in Fig. 8(a), the capacitive subgraph is defined by the set of branches B_C = {C_s, C_r, C_c}, while the inductive subgraph is defined by B_L = {L_r, L_J}. Note that the set of nodes is identical for the capacitive subgraph, the inductive subgraph, and the full (super)graph, i.e., N_C = N_L = N. Also, even though we have assumed our graph to be connected, its subgraphs are not necessarily connected.
The next step in the analysis is to specify a subgraph called a spanning tree for our graph.
Definition 3 (Spanning tree) A spanning tree of a graph G is a connected subgraph T that contains the same nodes as G (i.e., N T = N G ) and contains no loops.
The branches of the spanning tree are called twigs and branches of the complement of the spanning tree are called links (or chords). Note that there are B G − (N G − 1) links.
The spanning tree connects every pair of nodes through exactly one path. For our example we have chosen branches 1 and 2 as our spanning tree, shown in red in Fig. 8. The linear inductor together with the shunting capacitor C_s constitute the twigs of the tree, while the remaining capacitors (C_r, C_c) and the Josephson junction (L_J) are links. Note that we are free to choose our spanning tree differently as long as it obeys the definition. We could, for example, have chosen the inductive subgraph, as mentioned above; however, we could not have chosen the capacitive subgraph, as it includes a loop. This freedom in choosing the spanning tree corresponds to a gauge freedom in the equations of motion.
Choosing a spanning tree also allows us to define fundamental cutsets and fundamental loops, which are important when deriving the equations of motion for a circuit. We will start with the fundamental cutsets.
Definition 4 (Cut) Given a graph G = (N , B) a cut is a partitioning of nodes N into two disjoint sets N A and N B . With every cut we can associate a cutset, which is the set of branches that have endpoints in both N A and N B .
Note that removing a single twig cuts the spanning tree, T, into two disjoint subgraphs with nodes N_A and N_B. Such a cut is called a fundamental cut, and the branches that must be removed in order to complete the same cut on the full graph constitute a fundamental cutset. More formally: Definition 5 (Fundamental cut) Given a graph G and a spanning tree T we define a fundamental cut, or f-cut, as a cut whose cutset contains only one twig.
In practice the fundamental cutsets can be found by removing one twig from the spanning tree. This creates two disjoint subgraphs of the spanning tree with nodes N_A and N_B. Now remove the links of the full graph with endpoints in both partitions. The cutset is then the set of all the removed links together with the single twig. We thus end up with a unique cutset containing one twig and any number of links. This can be done for every twig, and the number of fundamental cutsets is thus equal to the number of twigs, |T| = N − 1. The fundamental cutsets of our example graph can be seen in Fig. 8(b).
We now turn our attention to the loops. By taking the spanning tree and adding a single link from the full graph we form a unique loop. Such a loop contains exactly one link and one or more twigs. We call these loops the fundamental loops of the graph G with respect to the spanning tree T.
Definition 6 (Fundamental loop) Given a graph G and a spanning tree T we define a fundamental loop or f-loop as a loop consisting of exactly one link and one or more twigs.
The number of fundamental loops that can be formed is equal to the number of links. The fundamental loops of our example graph can be seen in Fig. 8(c).
As we shall see in the following section, the fundamental loops and cuts allow us to write Kirchhoff's laws in a compact and useful way.

Circuit matrices
Using the notion of f-loops and f-cuts, we define two characteristic matrices for the network graphs.
For every loop we can define the orientation, i.e., clockwise or anti-clockwise. For an f-loop we will let the orientation be determined by the orientation of its link. We can then define the fundamental loop matrix.
Definition 7 (Fundamental loop matrix) Given a graph G = (N, B) with spanning tree T, we define the fundamental loop matrix, or f-loop matrix, F^(L) as

F^(L)_ij = +1 if branch j is in f-loop i with the same orientation,
F^(L)_ij = −1 if branch j is in f-loop i with the opposite orientation,
F^(L)_ij = 0 if branch j is not in f-loop i.

In other words, we iterate through the branches and the set of f-loops. If the given branch is in the given f-loop, the matrix entry becomes ±1, with a plus if the branch has the same orientation as the f-loop (which is determined by the link of the f-loop). If the branch is not in the given f-loop, the matrix entry is 0.
Consider our example circuit and its fundamental loops from Fig. 8(c). The first fundamental loop consists of the link Φ_3 and the twig Φ_1. The orientation of the loop (determined by Φ_3) is clockwise, which means that F^(L)_11 = −1, since the twig Φ_1 points in the anti-clockwise direction. The only other non-zero entry in the first row is F^(L)_13 = 1, corresponding to the link Φ_3 oriented in the clockwise direction. Following the same method for the other two f-loops yields the full f-loop matrix in Eq. (111). As with the loops, we can also choose an orientation for the cutsets. If a cutset is oriented from N_A to N_B, we say that a branch in the cutset has positive orientation if it begins in N_A and ends in N_B. We choose to orient every f-cutset such that its twig has positive orientation. We can then define the fundamental cutset matrix.
Definition 8 (Fundamental cut matrix) Given a connected graph G = (N, B) with spanning tree T, we define the fundamental cut matrix, or f-cut matrix, F^(C) as

F^(C)_ij = +1 if branch j is in f-cutset i with the same orientation,
F^(C)_ij = −1 if branch j is in f-cutset i with the opposite orientation,
F^(C)_ij = 0 if branch j is not in f-cutset i.

In other words, we iterate through the branches and the set of f-cutsets. If the given branch is in the given cutset, the matrix entry becomes ±1, with a plus if the branch has the same orientation as the cutset (which is determined by the orientation of the twig of the cutset). If the branch is not in the given f-cutset, the matrix entry is 0.
As an example take the first cutset from Fig. 8(b). The twig Φ_1 and the link Φ_3 both point towards the same node and thus have positive orientation. The final link, Φ_4, points away from the node and has negative orientation. Thus the first row of the cutset matrix becomes [1, 0, 1, −1, 0]. Analyzing the other cutset in the same manner gives the full fundamental cutset matrix in Eq. (113).

All branches of the graph are either twigs or links. Every f-cutset contains exactly one twig, and every f-loop contains exactly one link. Additionally, for every partition of nodes defined by an f-cut, every f-loop must begin and end in the same partition. Thus every f-cutset and f-loop share either 0 or exactly 2 branches. Now consider the elements of the matrix product

(F^(L) (F^(C))^T)_ij = Σ_b F^(L)_ib F^(C)_jb.

Evidently, the (i, j)th element depends only on the ith f-loop and the jth f-cut. If the f-cutset and f-loop share no branches, all the terms are zero; in the case where they share exactly two branches, we get two non-zero terms with opposite sign. We thus have

F^(L) (F^(C))^T = 0.

Multiplying Eqs. (111) and (113), we see that this is exactly the case for the example graph, as it should be.
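The construction of both fundamental matrices can be automated. The sketch below encodes the example circuit as directed branches over three nodes; the branch orientations are our assumption, chosen so that the rows quoted in the text are reproduced (the actual figure fixes them). The f-cut rows follow from partitioning the tree, the f-loop rows from closing each link through the unique tree path:

```python
import numpy as np

# Branch order B = {Lr, Cs, Cr, Cc, LJ}; nodes 0 (ground), 1 (resonator),
# 2 (transmon). Directions (tail, head) are assumed to match the quoted rows.
branches = [("Lr", 0, 1), ("Cs", 0, 2), ("Cr", 0, 1), ("Cc", 1, 2), ("LJ", 0, 2)]
names = [b[0] for b in branches]
twigs = [b for b in branches if b[0] in {"Lr", "Cs"}]   # chosen spanning tree
links = [b for b in branches if b[0] not in {"Lr", "Cs"}]

def component(start, edges):
    """Connected component of `start` in the undirected graph of `edges`."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for _, a, b in edges:
            if u == a and b not in seen:
                seen.add(b); stack.append(b)
            elif u == b and a not in seen:
                seen.add(a); stack.append(a)
    return seen

# f-cut matrix: one row per twig; remove the twig, partition the tree, and
# mark every branch crossing the partition with its orientation sign.
FC = np.zeros((len(twigs), len(branches)), dtype=int)
for i, (tname, ttail, thead) in enumerate(twigs):
    NA = component(ttail, [t for t in twigs if t[0] != tname])
    for j, (_, u, v) in enumerate(branches):
        if (u in NA) != (v in NA):
            FC[i, j] = 1 if u in NA else -1

def tree_path(u, v, visited=None):
    """Signed twig sequence along the unique tree path from u to v."""
    if visited is None:
        visited = {u}
    if u == v:
        return []
    for name, a, b in twigs:
        if a == u and b not in visited:
            sub = tree_path(b, v, visited | {b})
            if sub is not None:
                return [(name, +1)] + sub
        if b == u and a not in visited:
            sub = tree_path(a, v, visited | {a})
            if sub is not None:
                return [(name, -1)] + sub
    return None

# f-loop matrix: one row per link; the loop is oriented by its link and
# closed by the tree path from the link's head back to its tail.
FL = np.zeros((len(links), len(branches)), dtype=int)
for i, (lname, ltail, lhead) in enumerate(links):
    FL[i, names.index(lname)] = 1
    for tname, sgn in tree_path(lhead, ltail):
        FL[i, names.index(tname)] = sgn

assert FC.tolist()[0] == [1, 0, 1, -1, 0]   # first cutset row quoted in the text
assert FL.tolist()[0] == [-1, 0, 1, 0, 0]   # F^(L)_11 = -1, F^(L)_13 = +1
assert FL.shape[0] == 5 - (3 - 1)           # number of links = B - (N - 1)
assert np.all(FL @ FC.T == 0)               # f-loop / f-cut orthogonality
```

The final assertion is the orthogonality relation F^(L) (F^(C))^T = 0 derived above.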

B. Method of electrical network graph theory
In this section we present a more mathematical method for obtaining the Hamiltonian of an electrical superconducting circuit. This method is based upon Ref. [58] and uses electrical network graph theory [65]. This method can be advantageous over the method presented in Section I B for larger and more complex circuits.
The first step is to label and order all the circuit elements (branches) of the network graph and choose a spanning tree for the graph. Without loss of generality, we order the elements such that the first |T| branches are the twigs, and we can write the fluxes and currents over all elements as the vectors

Φ = (Φ_t, Φ_l)^T,  I = (I_t, I_l)^T,

where Φ_t (I_t) are the fluxes (currents) of all the twigs and Φ_l (I_l) are the fluxes (currents) of all the links. After all elements have been labeled and a tree has been selected, we construct the fundamental matrices of the graph, F^(L) and F^(C), following Definitions 7 and 8, respectively. In the following we show how these matrices may be used to set up the equations of motion and reduce the number of free coordinates.
a. Kirchhoff's current law states that no charge may accumulate at a node. Mathematically we may write this as

Σ_{b incident on n} s_{n,b} I_b = 0

for every node n, where s_{n,b} = +1 if the branch b ends at node n and s_{n,b} = −1 if b begins at n. This is equivalent to the definition in Eq. (3a). Recall that a cutset is the set of branches between two partitions of nodes. Thus, if no charge is to accumulate at any single node, the total current from one partition of nodes to another must be zero. We can write this using the f-cut matrix as

F^(C) I = 0.

b. Kirchhoff's voltage law states that if we choose some oriented loop of branches L, the algebraic sum of voltages around the loop must equal the electromotive force induced by the external magnetic flux Φ̃_L through the face enclosed by the loop, i.e.,

Σ_{b∈L} V_b = dΦ̃_L/dt,
where Φ̃^T = (Φ̃_1, . . . , Φ̃_{B−N+1}) is the vector of external fluxes through the fundamental loops. c. Reducing the number of coordinates Using Kirchhoff's voltage law, we are able to reduce the number of free coordinates. In fact, we only need to specify the fluxes of the spanning tree in order to calculate the remaining fluxes. To do so, we write our f-cut matrix as

F^(C) = (1  F),   (121)

where F is a |T| × |G\T| = (N − 1) × (B − N + 1) matrix and the identity is an (N − 1) × (N − 1) matrix. Note that our specific ordering of the circuit elements (twigs first, then links) allows for the simple block structure of Eq. (121). This structure is clearly seen in the example in Eq. (113). Reordering the elements simply shuffles the rows and columns of the fundamental cut matrix, and the following derivations are easily generalized. From Eq. (115) and Definition 7 we find that we can write the f-loop matrix in a similar manner,

F^(L) = (−F^T  1),

where F is the same matrix as in Eq. (121), meaning that the identity is now (B − N + 1) × (B − N + 1). This structure is again seen in the example in Eq. (111). We can then rewrite Kirchhoff's voltage law in Eq. (120) in terms of the f-loop matrix, integrate it over time, F^(L) Φ = Φ̃, and isolate the fluxes of the links,

Φ_l = F^T Φ_t + Φ̃,

and use this to write our flux vector in Eq. (116) in terms of the twig and external fluxes,

Φ = (Φ_t, F^T Φ_t + Φ̃)^T,

meaning that we have eliminated the fluxes of the links.
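The block structure and the elimination of the link fluxes can be checked numerically. The sketch below uses the f-cut matrix of the example circuit with twigs ordered first; the first row is quoted in the text, the remaining signs are our assumption about the figure's orientations, and the flux values are arbitrary:

```python
import numpy as np

# f-cut matrix of the example circuit (branch order Lr, Cs | Cr, Cc, LJ).
# Row 1 is quoted in the text; row 2 follows from our assumed orientations.
FC = np.array([[1, 0, 1, -1, 0],
               [0, 1, 0,  1, 1]])
n_twigs = 2
F = FC[:, n_twigs:]                            # block structure F^(C) = (1 | F)
FL = np.hstack([-F.T, np.eye(3, dtype=int)])   # F^(L) = (-F^T | 1)

# Kirchhoff's voltage law per f-loop, F^(L) Φ = Φ̃, eliminates the link
# fluxes: Φ_l = F^T Φ_t + Φ̃.
phi_t = np.array([0.3, -0.1])        # twig fluxes (free coordinates, arbitrary)
phi_ext = np.array([0.0, 0.5, 0.0])  # external fluxes through the three f-loops
phi_l = F.T @ phi_t + phi_ext
phi = np.concatenate([phi_t, phi_l])

assert np.allclose(FL @ phi, phi_ext)   # full flux vector obeys the loop law
assert np.all(FL @ FC.T == 0)           # block forms are automatically orthogonal
```

This makes explicit that only the N − 1 twig fluxes remain as free coordinates; the B − N + 1 link fluxes are fixed by them and by the external fluxes.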

Equations of motion (without dissipation)
In this section we will use Kirchhoff's current law to set up the equations of motion for the system. For this purpose it is convenient to introduce the species-specific vectors I_S and Φ_S, where the subscript S indicates the element species, i.e., capacitor, inductor, etc. These can be understood as the current and flux vectors with everything but species S removed. We use C for capacitors, L for linear inductors, and J for Josephson junctions. The first step is to express the current of every branch in terms of the tree fluxes Φ_t. The current flowing through a capacitor with capacitance C is given by I = C V̇ = C Φ̈, and we can thus write the current flowing through all capacitors as

I_C = D_C Φ̈,

where D_C is a diagonal matrix with the circuit capacitances on the diagonal. In this context all other circuit elements are counted as having zero capacitance. The flux stored in the linear inductors is related to the currents through

Φ = L I,   (127)

where L is a symmetric matrix with diagonal elements L_ii = L_i, where L_i is the inductance of the ith element. For all elements other than linear inductors we set L_i = 0. The off-diagonal elements are the mutual inductances L_ij = M_ij = k_ij √(L_i L_j) between the ith and jth inductors, with −1 < k_ij < 1 being the coupling coefficient. If a positive current in one inductor results in a positive magnetic flux contribution through another, we have k_ij > 0; if the contribution is negative, we instead have k_ij < 0. The numerical value of k_ij depends on the placement of the inductors relative to each other. Note that all the rows and columns belonging to elements not on the inductive subgraph are zero. By removing these zero rows and columns we get an N_L × N_L matrix L̃, where N_L is the number of inductors. We can then rewrite Eq. (127) as

Φ_L = L̃ I_L,

where I_L and Φ_L are the corresponding vectors found by removing all the non-inductor entries of the full-size vectors I and Φ.
The magnetic field energy stored in the inductors is

E = ½ I_L^T L̃ I_L,

which means that L̃ must be positive semi-definite. We will further assume that L̃ is positive definite, meaning that 0 < I_L^T L̃ I_L for I_L ≠ 0. This assumption is also physically sensible, since any current through the inductors must store at least some magnetic field energy in a realistic configuration. It also ensures that the symmetric matrix L̃ is invertible, and we can write

I_L = L̃^{-1} Φ_L.

We can expand the matrix L̃^{-1} to work on the full flux vector by inserting zeros in the non-inductor columns and rows. Similarly, we also build the corresponding full inductor current vector I_L. The resulting equation can be written

I_L = L^+ Φ,

where L^+ is the matrix found by expanding L̃^{-1} with the zero columns and rows of the non-inductor elements. Formally, L^+ is the Moore-Penrose pseudo-inverse [109] of the original full inductance matrix L. Now we only need to include the current through the Josephson junctions, which follows from the Josephson relation,

I_J = D_J sin Φ,

where D_J is a diagonal matrix with the Josephson critical currents on the diagonal, see Eq. (11) for the case of a single Josephson junction. As with L and C, all elements other than Josephson junctions are counted as having zero critical current. The vector sin Φ = (sin Φ_1, . . . , sin Φ_B)^T is understood as the vector of sines of the branch fluxes. Thus, the current through each branch can be written as a function of the branch flux and its derivatives, as seen in Eqs. (126), (131), and (132), and Kirchhoff's current law thus gives a set of coupled second-order differential equations,

M Φ̈_t + F^(C) D_J sin Φ + K Φ_t = I_0,   (133)

where we have defined the 'mass' and 'spring constant' matrices (analogous to those in Section I B 3),

M = F^(C) D_C (F^(C))^T,  K = F^(C) L^+ (F^(C))^T,

and the offset charges and flux-induced currents collected in the drive vector I_0, which contains the terms involving the external fluxes Φ̃ and their time derivatives. Note that these matrices are different from the capacitive and inductive matrices presented in Section I B.
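The construction of L^+ can be verified numerically. The sketch below assumes a five-branch circuit in which two branches are inductors with a mutual coupling k; reducing, inverting, and re-expanding the inductance matrix indeed reproduces the Moore-Penrose pseudo-inverse of the full singular matrix:

```python
import numpy as np

# Assumed example: 5 branches, of which branches 0 and 4 are inductors
# with mutual coupling coefficient k (values hypothetical).
B = 5
L1, L2, k = 2e-9, 5e-9, 0.2
Mmut = k * np.sqrt(L1 * L2)

L = np.zeros((B, B))
L[0, 0], L[4, 4] = L1, L2
L[0, 4] = L[4, 0] = Mmut                 # symmetric mutual inductance

ind = [0, 4]                             # indices of the inductive branches
Lred = L[np.ix_(ind, ind)]               # reduced N_L x N_L matrix L̃
assert np.all(np.linalg.eigvalsh(Lred) > 0)   # positive definite for |k| < 1

# Expand L̃^{-1} back to full size with zero rows/columns: this equals the
# Moore-Penrose pseudo-inverse of the full (singular) L.
Lplus = np.zeros((B, B))
Lplus[np.ix_(ind, ind)] = np.linalg.inv(Lred)
assert np.allclose(Lplus, np.linalg.pinv(L))
```

The positive-definiteness check fails exactly when |k| ≥ 1, matching the physical constraint −1 < k_ij < 1 on the coupling coefficient.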

Voltage and current sources
Until now we have assumed that external fluxes are our only control parameters, but we can also add current and voltage sources. A voltage source can be added in series with existing elements without introducing new constraints on the branch fluxes. This effectively just transforms the external flux vector,

Φ̃ → Φ̃ + ∫^t V_V(t') dt',

where (V_V)_i is the voltage generated by the source on the ith branch, or 0 if the ith branch contains no voltage source, i.e., defined analogously to Eq. (125). Similarly, we can add a current source in parallel with an existing element without introducing additional constraints on the free currents. This simply modifies I_0 according to

I_0 → I_0 + F^(C) I_B,

where I_B is the bias current vector with zeros in all entries except those belonging to a branch with a current source, where it instead holds the applied current, i.e., as in Eq. (125a).

Lagrangian and Hamiltonian
One can show, using Eq. (27), that a Lagrangian fulfilling the equations of motion in Eq. (133) is

L = ½ Φ̇_t^T M Φ̇_t − ½ Φ_t^T K Φ_t + I_c^T cos Φ + Φ_t^T I_0,

where we have defined the critical current vector I_c containing the diagonal elements of D_J. The conjugate momenta of the twig branches are then given by

Q_t = ∂L/∂Φ̇_t = M Φ̇_t,

and the Hamiltonian can be found using the Legendre transformation,

H = Q_t^T Φ̇_t − L = ½ Q_t^T M^{-1} Q_t + ½ Φ_t^T K Φ_t − I_c^T cos Φ − Φ_t^T I_0.

This Hamiltonian can easily be quantized using the approach presented in Section I C 1, where this time the canonical variables are the fluxes Φ_b and charges Q_b of the twig branches, with the commutator relation in Eq. (42).

C. Dissipation
In this section we introduce dissipation to the circuits. We continue from the dissipation-free Hamiltonian derived in Section II B 2, however, dissipation can also be added to the formalism used in Section I B.
Dissipative elements, like resistors, behave irreversibly. This is a problem, as we wish to formulate a Hamiltonian description of the circuits, and Hamilton's equations of motion [Eq. (31)] are invariant under time reversal. However, it is possible to extend the Hamiltonian formalism in order to fix this reversibility problem. One way to do so is the Caldeira-Leggett model [66], which applies to systems with linear dissipation. The Caldeira-Leggett model is a general semi-empirical model describing the dynamics of a system coupled to an environment. In this model a bath (reservoir) of harmonic oscillators is introduced to describe the environment, which in our electrical circuit setting means the degrees of freedom of the two-terminal linear dissipative element.

Dissipative elements
A linear dissipative two-terminal element can be characterized by a frequency-dependent admittance Y(ω). Admittance is a measure of how easily a device allows a current to flow. It is defined as the reciprocal of the impedance, Z(ω) = 1/Y(ω), analogous to the relation between conductance and resistance. The electrical impedance of a circuit is defined as

Z(ω) = R + jX,

where R is the resistance of the circuit and the imaginary part X is the reactance. We have used the symbol j = −i, which is standard in electrical engineering. This means that we can treat shunt resistors in a circuit as external impedances.
The idea of the Caldeira-Leggett model, in the context of electrical circuits, is to replace the linear dissipative element, characterized by admittance Y(ω), with an infinite series of LC oscillators in parallel, see Fig. 9. The internal degrees of freedom of the admittance can be seen as the fluxes of the intermediate nodes in the LC oscillators. This collection of harmonic oscillators constitutes the bath, and it is indeed the passage from a finite to an infinite number of degrees of freedom that reconciles the irreversible behavior on physical time scales with the formal reversibility of Hamilton's equations of motion.
The admittance of each oscillator k in the series is given by

Y_k(ω) = [jωL_k + 1/(jωC_k)]^{-1}

and is purely imaginary. Only when we take the infinite limit do we obtain both an imaginary and a real part of the admittance, with the real part being responsible for the resistance of the circuit. Following the Caldeira-Leggett model we then replace the real part of the admittance function Y(ω) with an infinitely dense comb of δ functions. The relevant quantity for a good description of the dissipation mechanism is the bath spectral function

J(ω) = (π/2) Σ_k m_k ω_k³ κ_k² δ(ω − ω_k),   (144)

where m_k and ω_k are the analogous mass and frequency of the kth bath harmonic oscillator, while κ_k is the coupling parameter between the system and the kth harmonic oscillator. The bath spectral function puts a constraint on the choice of κ_k, depending on the desired form of J. Consider the form J ∝ ω^α. When α = 1 the dissipation corresponds to classical ohmic dissipation. When α > 1 the dissipation is called super-ohmic, and when α < 1 it is called sub-ohmic.

Figure 9. Caldeira-Leggett model of an admittance Y(ω). The two-terminal dissipative element can be represented as an infinite number of LC oscillators in parallel. Each inductor in the series has inductance L_k, while the capacitors have capacitance C_k. In between is a node φ_k.
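That a single series-LC branch cannot dissipate is easy to confirm numerically: its admittance is purely imaginary at every real frequency. The sketch below uses assumed component values and a frequency range chosen below the branch resonance:

```python
import numpy as np

# One bath branch: inductor L_k in series with capacitor C_k (assumed values),
# admittance Y_k(ω) = 1 / (jωL_k + 1/(jωC_k)).
Lk, Ck = 1e-9, 1e-12
omega = 2 * np.pi * np.linspace(0.1e9, 3e9, 201)   # below the LC resonance

Z = 1j * omega * Lk + 1 / (1j * omega * Ck)   # series LC impedance
Y = 1 / Z

# Purely imaginary: a single lossless LC branch stores energy but cannot
# dissipate it; a real (resistive) part only emerges in the infinite limit.
assert np.allclose(Y.real, 0.0)
```

This is exactly why the passage to an infinitely dense comb of oscillators is needed before the real part of Y(ω), and hence resistance, can appear.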

Adding impedances to the equations of motion
Both the admittance and the impedance can be defined in the time domain instead of the frequency domain. The two domains are related by a Fourier transformation,

Y(ω) = ∫_0^∞ dt Y(t) e^{iωt},

where we use the notation that f(ω) is the Fourier transform of f(t). We have also used that both Y(t) and Z(t) are causal (or one-sided) functions, meaning that they are zero for t < 0. In order to ensure convergence of the Fourier transform one must replace ω → ω + iε with ε → 0^+. In the frequency domain the voltage over the external impedances in a circuit can be described as

V_Z(ω) = Z(ω) I_Z(ω),

where I_Z(ω) is defined as in Eq. (125), with Z indicating impedances, and V_Z(ω) is defined similarly, while Z(ω) is the impedance matrix with the external impedances on the diagonal and zeros on the diagonal elements corresponding to branches with no external impedance. The external impedances can also be described in the time domain,

V_Z(t) = (Z ∗ I_Z)(t),

where the asterisk denotes convolution,

(f ∗ g)(t) = ∫_0^t dt' f(t − t') g(t').

Using the admittance we can also express the current of the impedances as a function of their branch fluxes by rewriting Eq. (146),

I_Z(ω) = Y(ω) V_Z(ω) = iω Y(ω) Φ_Z(ω),

where Y is the matrix of admittances, corresponding to Z^{-1} but taking the inverse only of the non-zero diagonal elements. Formally this is equivalent to taking the Moore-Penrose pseudo-inverse, as done with the inductive matrix in Eq. (131); however, it is much simpler here, as there are no off-diagonal elements. Changing into the time domain we find

I_Z(t) = (Y ∗ Φ̇_Z)(t),

where the factor iω comes from the fact that the Fourier transform of Φ̇_Z(t) is iωΦ_Z(ω), which can be seen by Fourier transforming Φ̇_Z and performing integration by parts. In order to add external impedances to the model we must thus add a term F^(C) I_Z to Eq. (133), such that it becomes

M Φ̈_t + F^(C) D_J sin Φ + K Φ_t + (M_d ∗ Φ_t)(t) = I_0,   (151)

where we have defined the dissipation matrix

M_d(ω) = iω F^(C) Y(ω) (F^(C))^T,   (152)

and we have assumed no external flux on the impedance branches. This matrix is in fact real and symmetric, since the admittance matrix Y is purely imaginary and diagonal, which means that we can apply the Caldeira-Leggett system-bath model [66].

Single impedance
In lowest-order perturbation theory of the equations of motion in the dissipative parameter Y_i = Z_i^{-1} (which we assume to be small), the contributions to M_d from different external impedances are additive. This means that one can calculate M_d for each impedance Z_i separately, letting the inverse of the remaining impedances vanish, Z_{j≠i}^{-1} → 0, and then add the contributions to obtain the full coupling Hamiltonian. Similarly, decoherence rates due to different impedances are additive in lowest-order perturbation theory. If Y can be written as a sum where each term contains only one impedance Z_i, the contributions to M_d from different external impedances are additive even in the exact case.
Due to the above we will only consider the case of a single impedance, as we can use lowest-order perturbation theory to describe the dynamics of a system of several external impedances, simply by adding the impedances and modelling them as a single external impedance.
In the case of a single external impedance, only a single entry of Y(ω) is non-zero. This means that we can define the scalar function K(ω) as iω times this non-zero entry of Y(ω). The single non-zero entry of Y(ω) picks out a single column of the F^(C) matrix in Eq. (152); defining this column as m̃, we can further define the normalized vector m = m̃/√µ parallel to m̃, where µ = |m̃|². We can then write the dissipation matrix as

M_d(ω) = µ K(ω) m m^T.   (153)

Note that µ is the non-zero eigenvalue of the matrix m̃ m̃^T.
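The rank-one structure of the single-impedance dissipation matrix is quickly verified. The column m̃ and the scalar K below are assumed example values, not taken from a specific circuit:

```python
import numpy as np

# Assumed f-cut matrix column picked out by the single impedance branch
m_tilde = np.array([1.0, -1.0, 0.0])
mu = np.dot(m_tilde, m_tilde)            # µ = |m̃|²
m = m_tilde / np.sqrt(mu)                # normalized direction, |m| = 1

outer = np.outer(m_tilde, m_tilde)       # m̃ m̃^T is rank one
evals = np.linalg.eigvalsh(outer)
assert np.isclose(evals.max(), mu)       # µ is its single non-zero eigenvalue
assert np.linalg.matrix_rank(outer) == 1

# Dissipation matrix at one frequency, M_d(ω) = µ K(ω) m m^T, for an
# assumed scalar value of K(ω)
K = -0.7
Md = mu * K * np.outer(m, m)
assert np.allclose(Md, K * outer)        # the two rank-one forms coincide
```

The last line simply restates that µ m m^T = m̃ m̃^T, so M_d acts only on the flux combination m·Φ that couples to the impedance.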

The Caldeira-Leggett model
As mentioned in Section II C 1, the Caldeira-Leggett model treats dissipative elements as an infinite collection of LC oscillators. A single impedance modelled as an infinite series of LC oscillators, see Fig. 9, has the Hamiltonian

Ĥ_Z = Σ_k [ p̂_k²/(2C_k) + (φ̂_k − κ_k m·Φ̂)²/(2L_k) ],

where C_k and L_k are the capacitance and inductance of the kth harmonic oscillator, respectively, and φ̂_k and p̂_k are the (fictitious) flux variable and conjugate momentum of the kth harmonic oscillator, obeying the usual commutator relation in Eq. (40). Finally, m·Φ̂ represents the branch fluxes which couple to the impedance.
Here m picks out the branches in the flux vector Φ̂ which couple to the impedance via the coupling parameters κ_k, which depend on the details of the coupling. The Ĥ_Z term must be added to the dissipation-free Hamiltonian in Eq. (141). We now rewrite it, isolating the term describing the dynamics of the bath and the term describing the interaction between the bath and the rest of the system. The dynamics of the bath is given by

Ĥ_B = Σ_k [ p̂_k²/(2m_k) + ½ m_k ω_k² φ̂_k² ],

where we have redefined m_k = C_k as the mass of the kth fictitious harmonic oscillator and ω_k = 1/√(L_k C_k) as its frequency. As this term only describes the dynamics of the external impedance, it is often of little interest. Much more interesting is the coupling term between the bath and the system,

Ĥ_SB = (m·Φ̂) Σ_k κ̃_k φ̂_k + (m·Φ̂)² Σ_k κ_k²/(2L_k),

where we have redefined the coupling constant κ̃_k = −m_k ω_k² κ_k. The first term in Ĥ_SB is the actual coupling, while the second term is a counter term that compensates the energy renormalization caused by the first term. This counter term ensures that the dissipation is homogeneous over all space, but it does not play a role in the two-level approximation for qubits.
We now turn to the equations of motion of the full Hamiltonian, i.e., Ĥ = Ĥ_B + Ĥ_S + Ĥ_SB. Using Eq. (31), the equations of motion for the bath variables are

m_k d²φ̂_k/dt² = −m_k ω_k² φ̂_k − κ̃_k (m·Φ̂).

Changing into the frequency domain, where d²φ̂_k/dt² transforms to −ω² φ̂_k(ω), we can solve for the bath variables,

φ̂_k(ω) = −κ̃_k (m·Φ̂(ω)) / [m_k (ω_k² − ω²)].   (158)

Similarly we find the equations of motion for the flux variables of the system,

M Φ̈̂_t + F^(C) D_J sin Φ̂ + K Φ̂_t + m [ Σ_k κ̃_k φ̂_k + (m·Φ̂) Σ_k κ_k²/L_k ] = I_0.   (159)

Inserting Eq. (158) into the above equation and rearranging, we obtain

M Φ̈̂_t + F^(C) D_J sin Φ̂ + K Φ̂_t − m (m·Φ̂) ω² Σ_k m_k ω_k² κ_k²/(ω_k² − ω²) = I_0,   (160)

where we have used the identity

1 − ω_k²/(ω_k² − ω²) = −ω²/(ω_k² − ω²)

to show that the last term of Eq. (159) compensates part of the coupling term. Comparing Eq. (160) to the Fourier transform of Eq. (151) and using the single-impedance assumption in Eq. (153), we find that

K(ω) = −(ω²/µ) Σ_k m_k ω_k² κ_k²/(ω_k² − ω²).

Using the spectral function defined in Eq. (144) we can rewrite this as

K(ω) = −(2ω²/πµ) ∫_0^∞ dω' J(ω') / [ω' (ω'² − ω²)].

Due to the fact that K(ω) is a function of the external impedance Z(ω), and the fact that we have already made the replacement ω → ω + iε in order to ensure the convergence of Eq. (145), we can now apply the Sokhotski-Plemelj theorem from complex function theory. Doing so we find

K(ω) = −(2ω²/πµ) P.V. ∫_0^∞ dω' J(ω') / [ω' (ω'² − ω²)] − (i/µ) J(ω),

where P.V. indicates the principal part (or principal value) of the integral. Comparing K(ω) to the spectral function in Eq. (144), we see that the imaginary part of K(ω) is equal to the spectral function up to a factor, J(ω) = −µ Im{K(ω)}. We can also relate the real and imaginary parts of K(ω) using the Kramers-Kronig relation,

Re K(ω) = (2ω²/π) P.V. ∫_0^∞ dω' Im K(ω') / [ω' (ω'² − ω²)].

This means that if we experimentally measure the spectral function of some external impedance, we can determine both the real and the imaginary part of K(ω).
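The Kramers-Kronig machinery can be illustrated on a simple test function analytic in the upper half-plane; the Lorentzian below is an assumed example, not the response of any circuit in the text. Sampling the principal-value integral symmetrically about the singularity realizes the P.V. prescription:

```python
import numpy as np

# Assumed test response f(ω) = 1/(ω0 - ω - iγ): analytic in the upper
# half-plane, so Re f and Im f are a Kramers-Kronig pair.
omega0, gamma = 0.0, 1.0
im_f = lambda w: gamma / ((omega0 - w) ** 2 + gamma ** 2)
re_f = lambda w: (omega0 - w) / ((omega0 - w) ** 2 + gamma ** 2)

def kk_real(w, h=1e-3, cutoff=200.0):
    """Re f(ω) = (1/π) P.V. ∫ Im f(ω') / (ω' - ω) dω'.

    Midpoints placed symmetrically about ω' = ω implement the principal value."""
    s = np.arange(h / 2, cutoff, h)               # distances from the singularity
    integrand = (im_f(w + s) - im_f(w - s)) / s   # symmetric pairing
    return integrand.sum() * h / np.pi

w = 0.5
assert abs(kk_real(w) - re_f(w)) < 1e-3   # reconstructed Re f matches the exact one
```

The same numerical recipe applies to Im K(ω) extracted from a measured spectral function via J(ω) = −µ Im{K(ω)}.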

Master equation
In the above section we found the equations of motion describing the dynamics of inductors, capacitors, Josephson junctions, and the dissipative elements. However, we are not interested in the dynamics of the dissipative elements themselves, only in their effect on the dynamics of the circuit elements. We therefore derive a master equation for the dynamics of the circuit variables only. A master equation describes the time evolution of a system (in our case an electrical circuit) where we model the system as a probabilistic combination of states at any given time, and where the transitions between the states are determined by a transition rate matrix [110]. However, before finding such a master equation we briefly introduce the density matrix formalism. Consider a quantum system described by a set of states {|ψ_i⟩}. We define the density matrix of the system as

ρ = Σ_i w_i |ψ_i⟩⟨ψ_i|,

where the w_i are weights satisfying Σ_i w_i = 1. The density matrix has the following properties:

• It is Hermitian, ρ† = ρ.
• It is normalized, Tr ρ = 1.
• Tr ρ² ≤ Tr ρ, with equality if and only if ρ = |ψ⟩⟨ψ| for some unit state vector |ψ⟩.
• Its eigenvalues are non-negative, ⟨λ|ρ|λ⟩ ≥ 0 when |λ⟩ is an eigenstate of ρ.
• The expectation value of an operator Ô can be found as ⟨Ô⟩ = ⟨ψ|Ô|ψ⟩ = Tr(Ôρ), where ρ = |ψ⟩⟨ψ|.

An important point of the density matrix formalism is that it includes off-diagonal elements. Consider, e.g., |+⟩ = (|0⟩ + |1⟩)/√2. In the computational basis its density matrix becomes

ρ_+ = ½ [ 1 1 ; 1 1 ].

This should be compared to a statistical mixture Σ_i p_i |φ_i⟩⟨φ_i|, for an orthonormal basis {|φ_i⟩}, which has no off-diagonal elements.
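The properties listed above are straightforward to check numerically for the |+⟩ example:

```python
import numpy as np

# Density matrix of |+> = (|0> + |1>)/sqrt(2) in the computational basis
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

assert np.allclose(rho, [[0.5, 0.5], [0.5, 0.5]])   # off-diagonal coherences
assert np.allclose(rho, rho.conj().T)               # Hermitian
assert np.isclose(np.trace(rho).real, 1.0)          # normalized
assert np.isclose(np.trace(rho @ rho).real, 1.0)    # pure state: Tr ρ² = Tr ρ
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)    # non-negative eigenvalues

# Contrast: the statistical mixture (|0><0| + |1><1|)/2 has no off-diagonal
# elements and is not pure
mix = np.diag([0.5, 0.5])
assert np.isclose(np.trace(mix @ mix), 0.5)         # Tr ρ² < Tr ρ: mixed

# Expectation values via <O> = Tr(Oρ): σ_x distinguishes the two states
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.isclose(np.trace(sigma_x @ rho).real, 1.0)
assert np.isclose(np.trace(sigma_x @ mix).real, 0.0)
```

Note that ρ_+ and the mixture have identical diagonals; only the off-diagonal coherences, probed here by σ_x, distinguish the superposition from the classical mixture.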
Just as the Schrödinger equation describes the time evolution of the pure states of a quantum system, we can find an equation for the time evolution of the density matrix of the system. This equation is the Liouville-von Neumann equation,

ρ̇ = −i[Ĥ, ρ] = L̂ρ,   (168)

where Ĥ is the Hamiltonian of the system. Note how the first equality resembles Heisenberg's equation of motion in Eq. (61), but with a different sign, due to the fact that the density matrix is a dynamical variable. In the second equality we have used the Liouville operator L̂, which is defined through the condition that L̂ρ is equal to −i times the commutator between the Hamiltonian of the system and the density matrix. The Liouville operator is sometimes called a superoperator, due to the fact that it operates on an operator and yields another operator. Writing the von Neumann equation using the Liouville operator makes it analogous to the classical Liouville equation, where the Liouville operator is defined through the Poisson bracket, see Section I C 1. Analogous to how the exponential of the Hamiltonian governs the time evolution of states, it is possible to show that the Liouville operator governs the time evolution of density matrices,

ρ(t) = e^{L̂t} ρ(0).

Now consider the Hamiltonian of the system and the bath, Ĥ = Ĥ_S + Ĥ_B + Ĥ_SB, as derived in Section II C 4. In this case the von Neumann equation, Eq. (168), describes the dynamics of the whole system, i.e., both the superconducting phases and the bath modes of the external impedances. In order to simplify the calculations we change to the interaction picture, see Eq. (61), meaning that we only need to consider the interaction part of the Hamiltonian, Ĥ_SB. When we take the integral form of Eq.
(168) in the interaction picture we obtain where the interaction Hamiltonian in the interaction picture is defined aŝ As we are not interested in the dynamics of the dissipative elements, we take the partial trace over the bath modes, which yields the density matrix of the superconducting phases only Inserting Eq. (170) into Eq. (168) and taking the partial trace we finḋ where we have assumed that Tr B [Ĥ SB (t), ρ(0)] = 0, meaning that the interaction between the system and the bath does not influence the initial state of the whole system. Equation (173) can in principle be solved, which yields an exact result describing the time evolution of the system ρ S . However it is difficult to calculate, as it involves the diagonalization of the complete problem. Therefore, in order to simplify the problem, we assume the weak coupling limit, i.e., the limit where the coupling between the system and the bath is small. Weak coupling is satisfied if holds for all transition energies ω ij = ω i −ω j of the system. Here J(ω) is the spectral function in Eq. (144), k B is the Boltzmann constant and T is the temperature. The requirements in Eq. (174) can be satisfied even for a strong coupling to the external impedances, however in that case we must require that the impedances and resistances are large compared to the quantum of resistance In the weak coupling regime we can justify that the bath modes do not change significantly, meaning that ρ B is time-independent, and that the full system remains separable as ρ(t) = ρ S (t) ⊗ ρ B . This is sometimes known as the Born approximation, which yieldṡ (176) In order to further simplify this we make a Markov approximation. First the integrand ρ S (t ) is replaced with ρ S (t), meaning that the time development in the equations of motion for the reduced system density matrix now only depends on the present state ρ S (t). 
However, this replacement is not enough to create a master equation, since the time evolution of the reduced density matrix still depends on an explicit choice of the initial preparation at time t = 0. In order to fix this we substitute t′ with t − t′ in the integral and extend the upper limit of the integral to infinity. This is allowed if the integrand disappears sufficiently fast for t′ much larger than the timescale of the bath, which is equivalent to assuming that the timescale of the bath is much shorter than the smallest timescale of the system. This can be interpreted as the current dynamics of the system not being correlated with its past dynamics, and it typically leads to exponential decay of coherence and population. With the Markovian approximation we find ρ̇_S(t) = −∫₀^∞ dt′ Tr_B[Ĥ_SB(t), [Ĥ_SB(t − t′), ρ_S(t) ⊗ ρ_B]]. (177) This equation is called the Markovian master equation or Redfield equation [111], and the combined approximations above are usually termed the Born-Markov approximation. Note that the Markov approximation is not always appropriate [112,113].
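For a single qubit, the Born-Markov treatment with one pure-dephasing channel reduces, in the secular limit, to a Lindblad-form equation that can be integrated directly. A sketch in plain NumPy (Runge-Kutta integration; the rate γ, the frequency, and all names are illustrative choices, not values from the text):

```python
import numpy as np

# Qubit Hamiltonian H = (w/2) sigma_z and a single pure-dephasing channel
# L = sqrt(gamma/2) sigma_z, giving d rho/dt = -i[H, rho]
#                                   + L rho L^+ - {L^+ L, rho}/2.
sz = np.diag([1.0, -1.0]).astype(complex)
w, gamma = 1.0, 0.2
H = 0.5 * w * sz
L = np.sqrt(gamma / 2) * sz

def drho(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (
        L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

# 4th-order Runge-Kutta integration starting from the pure state |+><+|
rho = 0.5 * np.ones((2, 2), dtype=complex)
dt, steps = 0.001, 5000            # evolve to t = 5
for _ in range(steps):
    k1 = drho(rho); k2 = drho(rho + dt / 2 * k1)
    k3 = drho(rho + dt / 2 * k2); k4 = drho(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Populations are untouched while coherences decay as exp(-gamma t)
print(abs(rho[0, 1]), 0.5 * np.exp(-gamma * 5.0))
```

The exponential decay of the off-diagonal element is exactly the memoryless behavior the Markov approximation predicts.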
Consider now the eigenbasis {|n⟩} of the system not including the bath, i.e., Ĥ_S|n⟩ = ω_n|n⟩. By taking the matrix elements of Eq. (177) we obtain ρ̇_nm = Σ_kl R_nmkl ρ_kl, where ρ_nm = ⟨n|ρ|m⟩ and we have defined the Redfield tensor R_nmkl, in which we have inserted the completeness relation and the time-dependent states of the system are taken in the interaction picture. If we evaluate the commutators in the Redfield tensor we find expressions in terms of the rates Γ⁺ and Γ⁻, where ω_nm = ω_n − ω_m and (Ĥ_SB(t))_nm = ⟨n|e^{iĤ_B t} Ĥ_SB e^{−iĤ_B t}|m⟩.

Two-level approximation
The main objective of this tutorial is to create a description of two-level systems in superconducting circuits, i.e., superconducting qubits. We therefore prepare the system in the two lowest energy states, |0⟩ and |1⟩, and assume that all rates R_nmkl with an index outside {0, 1} are negligible compared to the rates with n, m, k, l ∈ {0, 1}. In other words, we neglect all transitions other than those between the two lowest states. This assumption can be justified at low temperature, βω₁₂ ≫ 1. With this assumption we can restrict our description of the system to the two lowest levels. This is equivalent to the truncation done in Section I F 1. In this case the density matrix becomes a 2 × 2 matrix which can be described by the Bloch vector p through ρ = (1 + p · σ)/2, where σ = (σ_x, σ_y, σ_z)ᵀ is the vector of Pauli matrices. Using the Bloch vector to write the Redfield equation in Eq. (178) we obtain the Bloch equation (in the interaction picture), ṗ = Rp + p₀, with the relaxation matrix R, where R^R_nmkl is the real part of R_nmkl and R^I_nmkl is the imaginary part. In the above we have used facts such as Γ⁺_nklm = (Γ⁻_mlkn)* and R_nmkl = R*_mnlk. If the qubit frequency is much larger than the rates, i.e., ω₀₁ ≫ R_nmkl, we can apply the rotating-wave approximation as discussed in Section I E. This approximation is often called the secular approximation in this setting. It means that we only retain rates with n − m = k − l, such that the first two entries of p₀ vanish, leaving only p₀ = (0, 0, R^R_1111 − R^R_0000)ᵀ, while the relaxation matrix simplifies accordingly. The off-diagonal terms can be considered a correction to the qubit frequency, yielding the renormalized qubit frequency ω̃₀₁ = ω₀₁ − R^I_0101. This leaves the relaxation matrix R̃, where we have denoted the diagonal elements as the inverse of the longitudinal decoherence time T₁ and the transverse decoherence time T₂, which satisfy 1/T₂ = 1/(2T₁) + 1/T_φ, where we have defined the pure dephasing time T_φ.
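The map between a 2 × 2 density matrix and the Bloch vector, ρ = (1 + p · σ)/2 with p_i = Tr(σ_i ρ), can be sketched in a few lines (names ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def bloch_vector(rho):
    """p_i = Tr(sigma_i rho), so that rho = (1 + p . sigma)/2."""
    return np.real([np.trace(s @ rho) for s in (sx, sy, sz)])

# |+><+| lies on the equator along x; the maximally mixed state sits at the origin
rho_plus = 0.5 * np.ones((2, 2), dtype=complex)
rho_mixed = 0.5 * np.eye(2, dtype=complex)
print(bloch_vector(rho_plus))   # [1, 0, 0]
print(bloch_vector(rho_mixed))  # [0, 0, 0]
```

Pure states sit on the surface of the Bloch sphere (|p| = 1), mixed states in its interior.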
The longitudinal decoherence time, T₁, describes depolarization along the qubit quantization axis, often referred to as "energy decay" or "energy relaxation", which is why it is also called the relaxation time. Longitudinal relaxation is caused by transverse noise, via the x- or y-axis on the Bloch sphere, see Fig. 6. Depolarization of the superconducting circuit occurs due to exchange of energy with the environment, leading both to excitation and relaxation of the qubits, meaning that one can write 1/T₁ = 1/T₊ + 1/T₋, where 1/T₊ is the excitation rate and 1/T₋ the relaxation rate. Due to Boltzmann statistics and the fact that superconducting qubits are operated at low temperatures (T ≲ 20 mK) with frequencies in the GHz regime, the qubits generally lose energy to the environment, meaning that the excitation rate 1/T₊ is suppressed exponentially. The pure dephasing time T_φ describes depolarization in the x-y plane of the Bloch sphere. It is referred to as "pure dephasing" to distinguish it from other phase-breaking processes such as energy excitation or relaxation. Pure dephasing is caused by longitudinal noise that couples to the qubit via the z-axis. This longitudinal noise causes the qubit frequency, ω₀₁, to fluctuate such that it is no longer equal to the interaction-frame frequency, causing the Bloch vector, p, to precess forward or backward in the interaction frame.
The transverse relaxation time T₂ describes the loss of coherence of a superposition state pointed along the x-axis on the equator of the Bloch sphere. It is caused both by longitudinal noise, which makes the qubit frequency fluctuate and leads to pure dephasing, and by transverse noise, which leads to energy relaxation of the excited-state component of the superposition state. The transverse relaxation rate is therefore given as the sum of half the longitudinal decoherence rate and the pure dephasing rate, see Eq. (191b). For more on decoherence see Ref. [11].
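The relation 1/T₂ = 1/(2T₁) + 1/T_φ is easy to sanity-check numerically; the times below are illustrative values, not measurements:

```python
def t2_from(t1, t_phi):
    """Transverse decoherence time from 1/T2 = 1/(2 T1) + 1/T_phi."""
    return 1.0 / (0.5 / t1 + 1.0 / t_phi)

t1 = 50e-6                                # 50 us, an illustrative T1
print(t2_from(t1, t_phi=100e-6))          # finite pure dephasing: T2 < 2 T1
print(t2_from(t1, t_phi=float("inf")))    # no pure dephasing: T2 = 2 T1 = 100 us
```

In the limit of vanishing pure dephasing the transverse time saturates at its upper bound T₂ = 2T₁, as noted below.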
Using Eq. (183) we find expressions for the longitudinal decoherence rate and the pure dephasing rate, from which the transverse decoherence rate can be found using Eq. (191b). Note that the transverse decoherence time can be as long as twice the longitudinal decoherence time if the pure dephasing rate vanishes. From Eq. (193b) we see that the pure dephasing rate vanishes, in our theory, when ⟨0|m · Φ̂|0⟩ = ⟨1|m · Φ̂|1⟩, which can be obtained by tuning the external fluxes. However, in reality the pure dephasing will never disappear entirely due to effects beyond our theory, such as higher-order corrections, other noise sources, or non-Markovian effects.

III. EXAMPLES
In this section we present examples of the analysis of some more or less well-known superconducting qubits. We start with some simple early single-qubit designs, then move on to the transmon qubit, flux qubits, and finally couplings between qubits. In Fig. 10 we present an overview of the qubits mentioned in the following examples.
There are four fundamental types of qubits: phase qubits, charge qubits, flux qubits, and quasicharge qubits. These qubits can be ordered in pairs according to the behavior of quantum fluctuations in the Cooper pair condensate. The charge qubit, with its single-charge tunneling, is dual to the flux qubit with its single-flux tunneling, while the phase qubit, with its phase oscillations, is dual to the quasicharge qubit with its quasicharge oscillations. These fundamental qubit types can be seen in Fig. 10.

A. Phase qubits
The simplest superconducting qubits are the so-called phase qubits; a phase qubit is a current-biased Josephson junction, operated in the zero-voltage state with a non-zero current bias. They operate in the so-called phase regime, where E_C ≪ E_J, i.e., where the effective energy of the capacitor is much less than the effective energy of the Josephson junction. In this regime the Josephson tunneling dominates over the charging of the capacitor, which makes the anharmonicity very small for the lowest-lying states. As the capacitor is analogous to the kinetic energy and the Josephson junction to the potential energy, the phase regime corresponds to low kinetic energy compared to potential energy, which means that the lowest eigenstates are close to the bottom of the potential. A consequence of this is that the wave function in node-flux space is well localized, and we can approximate the Josephson term to just second order, removing the anharmonicity.
Figure 10. The markers indicate the type of qubit, with yellow squares indicating phase qubits, red dots indicating charge qubits, green triangles indicating flux qubits, and a blue star for the quasicharge qubit. The placement of the qubits is only approximate, as the effective energies are not definitive. Note that the 0-π qubit is plotted twice, once for each of its modes; the ϕ-mode works similarly to a fluxonium qubit, while the θ-mode works similarly to the transmon qubit.
The simplest realization of a phase qubit is the single Josephson junction qubit which, as the name suggests, consists of a single Josephson junction with a current applied over it [73,114]. The circuit can be seen in Fig. 11(a), where the parasitic capacitance, C_J, of the Josephson junction with energy E_J is drawn explicitly. A bias current is applied to the element, as indicated by I, which fixes the anharmonicity.
The source of the bias current can be thought of as a very large inductor L, which is precharged with a very large flux Φ such that I = Φ/L. Setting one of the plates of the capacitor to ground, we are left with one node, yielding the Lagrangian, where we take the limit L → ∞ in the last line, killing the quadratic term in φ and leaving Φ/L → I (the constant, irrelevant quadratic term in Φ is removed). The Hamiltonian follows by a Legendre transformation. As the last two terms of the Hamiltonian are the potential terms, we obtain a tilted washboard potential, as seen in Fig. 11(b). Due to the slope of the potential it is possible to drive the current I close to the critical current I_c of the Josephson junction (see Eq. (13)) and thus introduce an anharmonicity. One adjusts the bias current I such that a suitable anharmonicity is introduced while minimizing macroscopic tunneling out of the potential's local minimum through its right side. The fact that we can tune the anharmonicity and the spacing of the energy levels dynamically is a strength of this type of qubit. However, these qubits suffer from rather strong decoherence compared to the non-tunable charge qubits introduced below.
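In dimensionless form (phase variable φ, energies in units of E_J) the tilted washboard is U(φ) = −cos φ − (I/I_c)φ, and its local minimum can be located numerically and compared with the analytic result φ_min = arcsin(I/I_c); a sketch under these conventions (our normalization, not the tutorial's node-flux units):

```python
import numpy as np

# Tilted washboard U(phi) = -(cos(phi) + (I/I_c) phi) in units of E_J.
# For I < I_c a local minimum sits at phi_min = arcsin(I/I_c), with curvature
# U''(phi_min) = sqrt(1 - (I/I_c)^2): the well softens as I approaches I_c.
def local_minimum(i_ratio, grid=np.linspace(-np.pi / 2, np.pi / 2, 200001)):
    u = -(np.cos(grid) + i_ratio * grid)
    return grid[np.argmin(u)]

for i_ratio in (0.0, 0.5, 0.9):
    num = local_minimum(i_ratio)
    ana = np.arcsin(i_ratio)
    curv = np.sqrt(1.0 - i_ratio**2)
    print(i_ratio, num, ana, curv)  # numeric vs analytic minimum, and curvature
```

The shrinking curvature near I = I_c is what makes the level spacing, and hence the anharmonicity, tunable by the bias current.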

B. Charge qubits
Another kind of qubit is the so-called charge qubit. These have their name from the fact that the basis states of the qubit are charge states, i.e., they depend only on the number of excess Cooper pairs on a disconnected superconducting island and are mostly independent of the node fluxes. We will start from the single Cooper pair box and move on to the transmon qubit. Figure 12. Circuit diagram of the single Cooper pair box, consisting of a Josephson junction, with energy E_J and parasitic capacitance C_J, in series with a capacitor with capacitance C_g. The gate voltage is denoted V_g and the system is connected to ground in the right corner. There is only one active node, denoted with a dot.

Single Cooper pair box
A significant improvement in coherence times in superconducting qubits was found in 1997 with the discovery of the first charge qubit, known as the single Cooper pair box [115]. The circuit consists of a Josephson junction with energy E J and a capacitor with capacitance C g in series, with a superconducting island in between them. A parasitic capacitance C J is included in the Josephson junction. This is a lumped circuit element representation of the natural capacitance that the junction will have by way of construction. The circuit is biased with a gate voltage V g over the capacitor, which makes it possible to transfer electrons from the reservoir to the superconducting island via the gate capacitance C g . The circuit is connected to the ground and thus there is only one active node with flux φ through it. The corresponding circuit diagram can be seen in Fig. 12.
We follow the method presented in Section I B. In order to write the Lagrangian, we must consider the fixed gate voltage. We model this as an external node with a well-defined flux φ_V = V_g t, meaning φ̇_V = V_g. Setting φᵀ = (φ, φ_V) we write the Lagrangian with the corresponding capacitance matrix. Since φ̇_V = V_g is a classical, externally controlled variable, it should not be quantized; therefore we only calculate the conjugate momentum of φ. Solving for φ̇ we perform the Legendre transformation and find the Hamiltonian. We now change into conventional notation and define the effective capacitive energy E_C = e²/2(C_J + C_g), which means that we can write the Hamiltonian as Ĥ = 4E_C(n̂ − n_g)² − E_J cos φ̂, where we have quantized the dynamical variables and removed constant terms. We have further defined the offset charge n_g = C_g V_g/2e.
We can now discuss the operational regime of the Cooper pair box. When the Josephson junction is absent from the circuit (E_J/E_C = 0), and thus absent from the Hamiltonian, the energy spectrum of the system is simply a set of parabolas when plotted against n_g, one for each eigenvalue of n̂. The parabolas cross at n_g = n + 1/2, where n ∈ Z, see Fig. 13(a). If we consider the eigenstates of n̂ we find that the states |n⟩ and |n + 1⟩ are degenerate at n_g = n + 1/2. These states are essentially charge states of the capacitor. In this picture, the Hamiltonian of the capacitor becomes Ĥ_C = 4E_C(n̂ − n_g)², which in matrix representation is just a diagonal matrix with 4E_C(n − n_g)² on the diagonal. Introducing the Josephson junction lifts the degeneracy, thus making an avoided crossing at n_g = n + 1/2. The matrix representation now becomes a tridiagonal matrix with −E_J/2 on the diagonals below and above the main diagonal, added to the entries from the capacitor discussed above.
Figure 14. Circuit diagram of the transmon qubit, consisting of a Josephson junction, with energy E_J and parasitic capacitance C_J, in series with a capacitor with capacitance C_g. The system is shunted by a large capacitance, C_B. The gate voltage is denoted V_g and the system is connected to ground in the right corner. There is only one active node.
The addition of the Josephson junction
leads to the following term in the Hamiltonian: Ĥ_J = −(E_J/2) Σ_n (|n⟩⟨n + 1| + |n + 1⟩⟨n|). Solving the full system, Ĥ = Ĥ_C + Ĥ_J, (using either Mathieu functions [116] or numerically) yields the avoided crossings seen in Fig. 13(b). The distance between these avoided crossings is approximately equal to the Josephson energy E_J for the lowest states in the spectrum.
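The charge-basis spectrum described above can be reproduced with a few lines of numerics; a sketch (the truncation ncut and the parameter values are arbitrary choices of ours):

```python
import numpy as np

def cpb_hamiltonian(ec, ej, ng, ncut=10):
    """Single Cooper pair box in the truncated charge basis n = -ncut..ncut:
    H = 4 E_C (n - n_g)^2 on the diagonal, -E_J/2 on the off-diagonals."""
    n = np.arange(-ncut, ncut + 1)
    h = np.diag(4.0 * ec * (n - ng) ** 2)
    h += -0.5 * ej * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))
    return h

# At the degeneracy point n_g = 1/2 the lowest splitting is approximately E_J
ec, ej = 1.0, 0.2
evals = np.linalg.eigvalsh(cpb_hamiltonian(ec, ej, ng=0.5))
print(evals[1] - evals[0])   # close to E_J = 0.2
```

Without the junction (ej = 0) the two lowest levels are exactly degenerate at n_g = 1/2; switching on ej opens the avoided crossing of width ≈ E_J.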
In order to realize a qubit we set n_g equal to some half-integer, which yields two states close to each other but with a large gap to the higher states [112]. This qubit is, however, quite sensitive to small fluctuations of the gate voltage V_g, since these change n_g, and the energy dispersion is quite steep around the working point n_g = n + 1/2, as seen in Fig. 13(b) for E_J/E_C = 1.0. The qubit therefore only works at this sweet spot and remains sensitive to charge noise, which reduces the decoherence time of the system. The transmon qubit attempts to fix this problem.

Transmon qubit
The transmission-line shunted plasma oscillation qubit (or just transmon qubit) was proposed in 2007 as an attempt to increase the coherence time of charge qubits [90,117]. It exploits the fact that the charge dispersion reduces exponentially in E_J/E_C, while the anharmonicity only decreases algebraically, following a power law in E_J/E_C. The setup resembles that of the single Cooper pair box, the difference being a large shunting capacitance, C_B, between the two superconducting islands, accompanied by a similar increase in the gate capacitance C_g. The circuit diagram is seen in Fig. 14.
Capacitors in parallel simply add to one effective capacitor; hence the effective capacitance is the sum of the capacitances of the three capacitors, C_Σ = C_J + C_B + C_g. With this elementary knowledge the Hamiltonian of the transmon becomes identical to that of the single Cooper pair box from Eq. (200), with the exception that the effective capacitance in Eq. (199) is replaced by C_Σ. This gives much more freedom in choosing the ratio E_J/E_C, and we can thus solve the Hamiltonian for the energy dispersion at larger E_J/E_C. The result is seen in Fig. 13(c) and (d).
From these results we observe that the energy dispersion becomes flatter for larger ratios of E_J to E_C, which means that the qubit becomes quite insensitive to charge noise; in fact, a completely flat dispersion would lead to no charge-noise sensitivity at all. However, we also notice that the anharmonicity decreases for larger ratios. This is a result of the aforementioned fact that the charge dispersion decreases exponentially in E_J/E_C while the anharmonicity has a slower rate of change given by a power law. Therefore we cannot just increase the shunting capacitance until all charge noise disappears, as we still need a working qubit. We are thus left with some effective values for the transmon, which usually lie somewhere in the range E_J/E_C ∈ [50, 100].
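The competition between exponentially vanishing charge dispersion and slowly shrinking anharmonicity can be checked directly by diagonalizing the charge-basis Hamiltonian at two gate charges; a self-contained sketch (parameter values and truncation are our illustrative choices):

```python
import numpy as np

def levels(ec, ej, ng, ncut=35):
    """Charge-basis Hamiltonian: 4 E_C (n - n_g)^2 diagonal, -E_J/2 off-diagonal."""
    n = np.arange(-ncut, ncut + 1)
    h = np.diag(4.0 * ec * (n - ng) ** 2)
    h += -0.5 * ej * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))
    return np.linalg.eigvalsh(h)

def dispersion_and_anharmonicity(ec, ej):
    e_sweet = levels(ec, ej, ng=0.5)
    e_top = levels(ec, ej, ng=0.0)
    disp = abs((e_top[1] - e_top[0]) - (e_sweet[1] - e_sweet[0]))  # E_01 variation
    alpha = (e_top[2] - e_top[1]) - (e_top[1] - e_top[0])          # E_12 - E_01
    return disp, alpha

d5, a5 = dispersion_and_anharmonicity(1.0, 5.0)
d50, a50 = dispersion_and_anharmonicity(1.0, 50.0)
print(d5, a5)    # sizeable charge dispersion at E_J/E_C = 5
print(d50, a50)  # dispersion almost gone at E_J/E_C = 50; anharmonicity survives
```

Going from E_J/E_C = 5 to 50 suppresses the dispersion by orders of magnitude, while the (negative) anharmonicity remains of order E_C, which is exactly the transmon trade-off.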
Even though the transmon has a ratio E J /E C close to that of the phase regime (E C E J ), it is still classified as a charge qubit due to the structural similarity to the single Cooper pair box qubit. Due to that and the fact that capacitors in parallel add, we will often just put a Josephson junction and a parasitic capacitance in place of the transmon in larger circuits for simplicity. We further notice that if the ratio is large enough, the bias voltage becomes irrelevant and can be omitted as well.
When implementing the transmon qubit on an actual chip, an Xmon qubit is often used [44]. The Xmon is essentially the same as the transmon; in fact, its electrical-circuit model is equivalent to that of a transmon. However, when implemented on a chip, the Xmon provides more possibilities for couplings due to its design with cross-shaped arms (hence the name).
Recently a dual to the transmon qubit, called a quasicharge qubit, or blochnium, has been proposed, where the shunting capacitance is replaced by a large shunting inductance. This large inductance makes the qubit very robust against flux noise, which could open up the exploration of high-impedance circuits [118].

C. Flux qubits
Flux qubits are implemented in a looped superconducting circuit interrupted by one or more Josephson junctions. A current is driven through these circuits using the fact that fluxoid quantization allows only an integer number of magnetic flux quanta to penetrate the loop. In response to an external flux, screening currents arise in the superconducting material to enhance or diminish the total flux such that an integer number of flux quanta is achieved.
A superposition of clockwise and counterclockwise currents is obtained by setting the external magnetic flux to half a magnetic flux quantum. Changing to node-flux space, this can be seen as a superposition of the ground states of a double-well potential. In the double-well potential a small tunneling occurs between the two wells, which couples the two wave functions, creating an avoided crossing and thus two levels close to each other but with a very large gap to the remaining states.

DC-SQUID
A superconducting quantum interference device, or SQUID, consists of two Josephson junctions in a ring, with an external flux, Φ̃, going through the ring. A circuit diagram of a SQUID can be seen in Fig. 15(a). If we add an external bias (DC) current over the SQUID, from node 1 to node 2, we obtain a DC-SQUID [119]. We consider symmetric junctions in the SQUID, but it is a neat exercise to extend the analysis to asymmetric junctions.
Choosing the spanning tree over the capacitors [not included in Fig. 15(a)], the potential part of the Hamiltonian becomes U = −E_J cos(φ + Φ̃/2) − E_J cos(φ − Φ̃/2), where we have divided the flux equally between the two arms of the SQUID and defined φ = φ₁ − φ₂. Rewriting this using the trigonometric identity 2 cos α cos β = cos(α − β) + cos(α + β) we find U = −2E_J cos(Φ̃/2) cos φ. Note that the fluxoid quantization condition gives us a constraint on the fluxes involving an integer k, meaning that we can remove one degree of freedom. In other words, we have obtained an effective Josephson energy of E_J(Φ̃) = 2E_J|cos(Φ̃/2)|, which means that it is possible to tune the Josephson energy dynamically. This idea is often implemented in superconducting circuits instead of a single Josephson junction, so that the spacing of the energy levels can be tuned dynamically by changing E_J via Φ̃. However, we will usually just place a single Josephson junction in a circuit diagram.
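The tunability of the effective Josephson energy, E_J(Φ̃) = 2E_J|cos(Φ̃/2)|, can be sketched directly (names ours):

```python
import numpy as np

def squid_ej(ej_single, phi_ext):
    """Effective Josephson energy of a symmetric SQUID: 2 E_J |cos(Phi/2)|."""
    return 2.0 * ej_single * abs(np.cos(phi_ext / 2.0))

for phi in (0.0, np.pi / 2, np.pi):
    print(phi, squid_ej(1.0, phi))  # 2.0, sqrt(2), 0.0 in units of E_J
```

At zero flux the two junctions add constructively; at half a flux quantum (Φ̃ = π) the effective junction switches off entirely.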

C-shunted flux qubit
The idea behind the C-shunted flux qubit (CSFQ) is the same as for the transmon; here, however, the capacitive shunt is placed over a flux qubit (sometimes called a persistent-current qubit, PCFQ) [120,121]. As with the transmon qubit, the C-shunting improves the coherence of the qubit [43,122]. We therefore consider the flux qubit without going into the details of the shunting, see Section III B 2.
The flux qubit consists of two Josephson junctions in series, each with energy γE_J, which are placed in parallel with a third Josephson junction with Josephson energy E_J, where γ is the ratio of the geometric sizes of the junctions. The qubit can be seen in Fig. 15(b), where all parasitic and shunting capacitances are collected in a single capacitor. Naively it looks like there are three degrees of freedom in this circuit (or at least two if the ground node is removed), one for each node. However, to a good approximation it turns out that there is only one true degree of freedom when we assume γ > 1. This is due to the fact that the node between the two Josephson junctions is a passive node. Using the same trigonometric tricks as for the SQUID, we can write the potential energy of the three Josephson junctions in terms of the coordinates ψ± = φ₁ ± φ₃ and ψ₂ = φ₂, which is the middle coordinate between the two junctions. This coordinate transformation turns out to diagonalize the capacitance matrix as well, leaving ψ₋ as the only coordinate with a non-zero eigenvalue; the two remaining node fluxes must thus be superfluous, and from the constraints obtained from the Euler-Lagrange equations we find that ψ₂ = ψ₊/2, which leaves the potential U(ψ₋) = −2γE_J cos(ψ₋/2) − E_J cos(ψ₋ + Φ̃). This potential no longer has the usual sinusoidal form, and its final shape depends on the external flux Φ̃ and the junction ratio γ. The most common configuration is an external flux Φ̃ = (1 + 2l)π, where l ∈ Z. These points are often called the flux degeneracy points and correspond to one half superconducting flux quantum threading the qubit loop (remember that there is a factor of Φ₀/2π on the external flux). In this configuration the qubit frequency is most robust against flux noise, leaving the qubit with optimal coherence times.
As mentioned above, we have assumed γ > 1, which allowed us to eliminate a degree of freedom. This can be understood as approximating the particle as moving in one dimension of a two-dimensional potential and is sometimes called the quasi-1D approximation; it fails if γ < 1. If 1 < γ < 2, the potential takes the form of a double well, which has been investigated as the PCFQ [120,121]. If, on the other hand, γ > 2, the potential takes the form of a single well, much like in the transmon qubit, which is why the CSFQ has been investigated in this regime [43,122]. In both cases, if the anharmonicity is sufficiently large, the quantized system can be truncated to its lowest levels.
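The single- versus double-well behavior as a function of γ can be checked numerically. The sketch below assumes the reduced quasi-1D potential U(ψ) = −2γE_J cos(ψ/2) − E_J cos(ψ + Φ̃) evaluated at the degeneracy point Φ̃ = π, in units of E_J (this explicit form is our reconstruction from the circuit described above):

```python
import numpy as np

def count_minima(gamma, n=20001):
    """Count local minima of U(psi)/E_J = cos(psi) - 2 gamma cos(psi/2)
    on the open interval (-2 pi, 2 pi), i.e. at external flux tilde-Phi = pi."""
    psi = np.linspace(-2 * np.pi, 2 * np.pi, n)
    u = np.cos(psi) - 2 * gamma * np.cos(psi / 2)
    interior = (u[1:-1] < u[:-2]) & (u[1:-1] < u[2:])
    return int(np.sum(interior))

print(count_minima(1.5))  # 1 < gamma < 2: double well, 2 minima
print(count_minima(2.5))  # gamma > 2: single well, 1 minimum
```

Analytically, U′(ψ) ∝ sin(ψ/2)[γ − 2cos(ψ/2)], so ψ = 0 turns from a maximum into a minimum exactly at γ = 2, which is the crossover the count reproduces.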

Fluxonium
The fluxonium qubit is a natural extension of the flux qubit. Instead of two Josephson junctions in parallel with a third, the fluxonium features an array of up to N = 100 Josephson junctions [123-125], sometimes referred to as a kinetic inductance or superinductance. The circuit diagram can be seen in Fig. 15(c). Applying the same quasi-1D approximation as in Section III C 2 repeatedly, we arrive at a potential in the coordinate ψ, the sum of all node fluxes between the junctions of the array on the left side of Fig. 15(c). When the number of Josephson junctions N becomes large, the argument of the first cosine, ψ/N, becomes small, such that the cosine can be approximated to second order, which leaves U(ψ) ≈ (E_L/2)ψ² − E_J cos(ψ + Φ̃), where E_L = E_J γ/N. This has the same effective form as an rf-SQUID [126]; however, the kinetic inductance of the fluxonium qubit is much larger than the geometric inductance of the rf-SQUID. When the external flux bias is Φ̃ = 0 the potential has a minimum at ψ = 0. For small fluctuations of ψ the potential is approximately harmonic, and the lowest-lying states are close to simple harmonic oscillator states. At higher energies the anharmonic cosine term of the potential comes into play, as seen in Fig. 16. This ensures the anharmonicity necessary for using the two lowest-lying states as the qubit subspace. However, the fluxonium qubit is most often operated at Φ̃ = π, similarly to the flux qubit. In this regime the potential exhibits a double-well structure, and it is possible to achieve a much larger anharmonicity than in the Φ̃ = 0 case.
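The fluxonium spectrum can be sketched with a finite-difference diagonalization on a real-space grid; the parameters are illustrative, and the potential sign convention U(ψ) = (E_L/2)ψ² − E_J cos(ψ + Φ̃) is an assumption of this sketch:

```python
import numpy as np

def fluxonium_levels(ec, el, ej, phi_ext, n=801, span=12.0):
    """Diagonalize H = 4 E_C n^2 + (E_L/2) psi^2 - E_J cos(psi + phi_ext)
    with n = -i d/dpsi discretized by central finite differences."""
    psi = np.linspace(-span, span, n)
    d = psi[1] - psi[0]
    kinetic = 4.0 * ec * (
        2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / d**2
    potential = np.diag(0.5 * el * psi**2 - ej * np.cos(psi + phi_ext))
    return np.linalg.eigvalsh(kinetic + potential)

e0 = fluxonium_levels(1.0, 1.0, 4.0, phi_ext=0.0)
epi = fluxonium_levels(1.0, 1.0, 4.0, phi_ext=np.pi)
print(e0[1] - e0[0], epi[1] - epi[0])  # qubit frequency drops at half flux
```

At Φ̃ = π the lowest doublet is the tunnel-split pair of the double well, so the 0-1 transition frequency is much smaller than in the single-well case at Φ̃ = 0, while the gap to the next level stays large.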
In recent experiments, fluxonium qubits have reached impressive lifetimes of 100-400 µs [125], while maintaining an anharmonicity large enough for fast gate operation. This puts fluxonium among the top qubit candidates for near-future quantum computing applications. In addition, the success of the fluxonium qubit proves that long coherence times can be achieved even in more complicated systems with a large number of spurious modes. This should encourage quantum engineers to further explore circuit designs utilizing large kinetic inductances.
0-π qubit

The 0-π qubit consists of four nodes which are all connected to each other by two large superinductors, two Josephson junctions, and two large shunting capacitors, as shown in Fig. 17. We denote the shunting capacitances by C and the Josephson energies by E_J, and assume the junctions have identical parasitic capacitances C_J. The node fluxes of the circuit are denoted (φ₁, φ₂, φ₃, φ₄), and the normal modes (Σ, ϕ, ζ, θ) of the circuit can be written using a linear transformation of the node fluxes. Here Σ is the center-of-mass coordinate, which has no influence on the dynamics of the system and can be discarded. This basis transformation diagonalizes the capacitance matrix C = 2 diag(C_J, C_J + C, C). The Hamiltonian then takes a form where n_ϕ, n_ζ, n_θ are the canonical momenta, E⁻¹_Cϕ = 16C_J, E⁻¹_Cθ = 16(C + C_J), and E⁻¹_Cζ = 16C give the charging energies of each mode, while E_L = 2/L is the effective inductive energy. Note that the ζ-mode completely decouples from the rest of the system and can thus be ignored. By transforming the θ variable, θ → θ + Φ̃/2, we can rewrite the Hamiltonian in a simpler form. The circuit is engineered such that C ≫ C_J, and we can thus think of the system as a heavy particle moving along the θ-axis and a lighter particle moving along the ϕ-axis. In the basis of the computational states |0⟩ and |1⟩ (chosen as the ground and first excited states, respectively) the θ variable is well localized around either 0 or π, as shown in Fig. 18, which is also the reason for the qubit's name. Setting θ = 0 or π in Eq. (213), we see that the potential along the ϕ-axis is similar to that of a fluxonium biased by a flux of either 0 or π. As a result the two states have vanishing matrix elements, ⟨0|θ̂ⁿ|1⟩, ⟨0|ϕ̂ⁿ|1⟩ ≈ 0. This makes the qubit highly resistant to noise-induced relaxation, as described in Section II C.
In recent experiments [132] with the 0-π qubit, relaxation times above 1 ms have been achieved, making it an exciting candidate for future research. As with fluxonium, the 0-π qubit proves that it is not only the simplest qubits, such as the charge- and flux-qubit families, that can achieve long coherence times. Researchers should take note of this when developing new circuit designs in order to tap into the potential of more complicated components, such as the kinetic inductances used in fluxonium and the 0-π qubit.

D. Coupled qubits
In the above examples we presented a few different qubit designs; the next step is naturally to couple them. The simplest form of coupling is to connect two qubits using a capacitor or inductor, as in the examples in Figs. 1 and 8. This yields a simple linear coupling as in Eq. (75) [134]. We therefore focus on couplings beyond static linear couplings. Below we present two tunable couplers [107,135], but there are other coupling schemes in which couplings tuned by external fluxes are introduced [136-138].
Coupling of qubits naturally increases the complexity of the system, which is why the following examples are considered more difficult than the above.

Gmon coupler
The gmon coupler introduces tunable swapping couplings between two transmon-like qubits by exploiting an effective mutual inductance [135]. Two transmon qubits are coupled via a Josephson junction of energy E_c = 1/L_c, where L_c is the Josephson inductance as in Eq. (14), and in parallel with each qubit is a linear inductor of inductance L_g. The circuit diagram can be seen in Fig. 19(a). The crucial point of the gmon coupler is that the coupling junction makes the two linear inductors behave as if they had a mutual inductance between them.
An excitation current I_q over, say, the right qubit (both the Josephson junction and the capacitor) flows through both L_g and L_c, and Kirchhoff's current law gives us I_q = I_L1 + I_Jc. Using Kirchhoff's voltage law around the inner loop of the circuit (in the direction indicated by the arrow) over the inductors, we find L_c İ_Jc + L_g İ_L2 = L_g İ_L1. Note that this assumes no direct mutual inductance between the inductors. Assuming that the current over the second qubit is negligible, we can integrate this from minus infinity to t and insert our result from Kirchhoff's current law to find I_Jc = I_q L_g/(L_c + 2L_g), i.e., the fraction of the current going through the coupler. This current generates a flux in the second qubit, Φ₂ = L_g I_Jc = M I_q, where we have expressed the flux using an effective mutual inductance M. Solving for M and inserting Eq. (214), we find that this mutual inductance is M = L_g²/(L_c + 2L_g). Now the inductance of the coupling Josephson junction actually depends on the Josephson phase, Φ_c, of the coupler, which is why we must exchange L_c → L_c/cos Φ_c, meaning that we can tune the mutual inductance by changing the Josephson phase. The next step is to find the Hamiltonian. As seen above, the effect of the L_c junction can be modelled as a mutual inductance. We will therefore use the method presented in Section II B, ignoring dissipation to begin with. We label all seven branches in Fig. 19 corresponding to the circuit graph and choose our spanning tree as the capacitive subgraph of the circuit (see Section II A). This means that all inductive elements lie in the set of links. We then write the current vectors of the tree and link branches [see Eq. (116)], noting that in this notation I_q = I_J1 + I_C1. Using the definitions in Section II A 1 we find the F-matrix. As a sanity check, we can use Eq. (118) combined with Eq. (121) to recover the same equations as Kirchhoff's current law in Section II B 1 would yield.
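Putting numbers on the tunable mutual inductance: the sketch below uses the current-division result M = L_g²/(L_c + 2L_g) with the replacement L_c → L_c/cos φ_c (this closed form is our reconstruction from the Kirchhoff argument above; the inductance values are purely illustrative):

```python
import numpy as np

def gmon_mutual(lg, lc, phi_c):
    """Effective mutual inductance M = L_g^2 / (L_c / cos(phi_c) + 2 L_g)."""
    return lg**2 / (lc / np.cos(phi_c) + 2.0 * lg)

lg, lc = 0.2, 0.5   # illustrative inductances, arbitrary units
print(gmon_mutual(lg, lc, 0.0))                 # maximal coupling L_g^2/(L_c + 2 L_g)
print(gmon_mutual(lg, lc, np.pi / 2 * 0.999))   # coupler nearly "off": M -> 0
print(gmon_mutual(lg, lc, 3 * np.pi / 4))       # cos < 0: coupling changes sign
```

The phase φ_c thus switches the coupling off at φ_c = π/2 and even flips its sign beyond that point, which is what makes the gmon a tunable coupler.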
The reduced linear inductance matrix, with all non-inductive entries removed, is used in Eq. (128); we can expand it to the full linear inductance matrix by adding a row and a column of zeros. Combining this with the Josephson matrix, we find the equations of motion from Eq. (133), where C = diag(C, C, 0), the flux vector is Φ^T = (Φ_1, Φ_2, Φ_c), and the non-zero part of the inductance matrix is the inverse of Eq. (218).
Using Eq. (141), we finally find the Hamiltonian of the circuit, where we have removed the φ_c term as it is effectively included in the mutual inductance, such that the branch fluxes and charges are given as Φ^T = (Φ_1, Φ_2) and Q^T = (Q_1, Q_2), respectively. We now consider only the interacting part of the Hamiltonian, which resides solely in the inductive term. Quantizing the Hamiltonian and changing into step operators [Eq. (52)], and employing the rotating wave approximation, we obtain the interaction term; the impedance, ζ, is given by Eq. (50).
Changing to a qubit model we can write Ĥ_int = g(σ_1^+ σ_2^- + σ_1^- σ_2^+) (see Section I F 1), with a coupling constant g for which we have assumed L_g ≫ M. We now want to include dissipation as in Section II C. We do this by adding a single impedance, which we place across the coupling Josephson junction such that it affects both degrees of freedom. This means that we must add a corresponding matrix to Eq. (217), which is exactly the form needed to model dissipation via the Caldeira-Leggett model of Section II C 4. We must now determine the dissipation matrix in Eq. (153). We find that µ = 2 and K(ω) = iω/Z(ω), where the impedance Z(ω) should be determined experimentally and then modelled using the spectral function in Eq. (144). Progressing further requires actual experimental values of the components, so we leave the example here.
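The swapping interaction Ĥ_int = g(σ_1^+ σ_2^- + σ_1^- σ_2^+) transfers an excitation between the qubits; evolving for a time t = π/2g completes one full swap. The sketch below verifies this in the rotating frame with a dimensionless, hypothetical g.

```python
import numpy as np
from scipy.linalg import expm

# Ladder operators in the basis |0> = (1, 0)^T, |1> = (0, 1)^T
sp = np.array([[0., 0.], [1., 0.]])   # sigma^+ = |1><0|
sm = sp.T                             # sigma^- = |0><1|

g = 1.0  # coupling strength in arbitrary units (hypothetical value)
H = g * (np.kron(sp, sm) + np.kron(sm, sp))   # swap interaction

# Prepare the excitation on qubit 1, i.e. the state |10>
psi0 = np.kron([0., 1.], [1., 0.])
psi = expm(-1j * H * np.pi / (2 * g)) @ psi0  # evolve for the swap time

p_01 = abs(psi[1])**2   # population of |01>: the excitation has moved
```

At t = π/2g the excitation sits entirely on qubit 2 (up to a phase of −i, the familiar iSWAP phase).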

D. Delft coupler
The Delft coupler [107] introduces tunable non-linear couplings between two qubits in a center-of-mass basis. The following example is a simplification of the supplementary material of Ref. [107]. Consider the circuit diagram in Fig. 20. Following the approach in Section I B, we find the capacitance matrix, where we have defined the flux vector as φ = (φ_1, φ_2, φ_3, φ_4)^T. This yields the circuit Lagrangian. We now change into a CM basis (see Section I B 3) of the capacitive subsystem. This decouples the center-of-mass coordinate, ψ_CM, from the remaining coordinates (note that this is due to the identical grounding capacitances C_g), as seen from the transformed capacitance matrix, where we have chosen the basis such that ψ = (ψ_CM, ψ_1, ψ_2, ψ_S)^T. Performing a Legendre transformation and quantizing, we find the Hamiltonian, where we have defined the coupling J. Lastly, we have also defined the impedance, as in Eq. (50), as ζ = 4E_C/E_J. We do not include the center-of-mass coordinate as it does not influence the dynamics of the system. Note how the two modes are affected by both their 'own' Josephson junction and the coupling Josephson junction.
Assuming that the sloshing mode, ψ_S, is detuned from the remaining two modes, we can remove couplings to the sloshing mode using the rotating wave approximation from Section I E. After this approximation, the interaction part of the Hamiltonian takes a form where we have used the assumption that the modes are resonant; it contains a swapping coupling of strength J and a non-linear coupling factor. The first non-linear term in Eq. (233) is sometimes called the cross-Kerr coupling term, with coupling strength V, while the second non-linear term tunnels a pair of excitations from one mode to the other with coupling strength V/4. The latter term therefore does not contribute if the Hamiltonian is truncated to a two-level model, but it may result in corrections to the model (using, for instance, the method described in Section I F 3). We thus truncate to a two-level model to obtain the qubit Hamiltonian.

Figure 21. Circuit diagram of two coupled qutrits. The three circuit nodes, φ_a, φ_b, φ_c, are indicated by dots.
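The claim that the pair-tunnelling term drops out of a two-level truncation while the cross-Kerr term survives can be seen operator by operator. The sketch below builds truncated bosonic operators with numpy: on a two-level space a² (and (a†)²) vanish identically, so (a†)²b² is zero, whereas the cross-Kerr product n_a n_b truncates to |11⟩⟨11|.

```python
import numpy as np

def destroy(dim):
    """Bosonic annihilation operator truncated to `dim` levels."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

a = destroy(2)            # two-level (qubit) truncation of a mode
n = a.conj().T @ a        # number operator on the truncated space

# Pair tunnelling (a^dag)^2 b^2 moves two quanta at once, so it vanishes
# identically on the qubit subspace: (a^dag)^2 = a^2 = 0 for dim = 2.
pair = np.kron(a.conj().T @ a.conj().T, a @ a)

# The cross-Kerr term n_a n_b survives truncation as |11><11|.
cross_kerr = np.kron(n, n)
```

The same construction with `destroy(3)` shows both terms surviving a qutrit truncation, which is why the pair-tunnelling term reappears as a correction in higher-level models.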

E. Coupled qutrits
In this example, we present a system of two qutrits coupled via an effective Heisenberg XXZ coupling. The proposed implementation is shown as an effective lumped-circuit-element model in Fig. 21. The idea is to mix the nodes φ_a, φ_b, and φ_c such that we obtain two low-dimensional degrees of freedom (after truncation) with the desired coupling, and a decoupled third degree of freedom which can be seen as a center-of-mass coordinate and can be ignored. Due to the left-right and top-bottom symmetry, there is no resulting exchange interaction from the capacitances and the Josephson junctions. By picking an asymmetry in the inductances, we can engineer the system to obtain the desired exchange interaction; it can be completely eliminated by making L = L′. On the other hand, the longitudinal ZZ coupling does not cancel in the qutrits, giving an energy shift depending on the state of the other qutrit, thus removing all degeneracies in the two-qutrit system. This interaction can furthermore be tuned by adding external fluxes. We also include driving lines to each of the three nodes, which enables us to control the qutrit energies dynamically via the AC Stark shift arising from detuned driving, as explained in Section I G. We note that the inductances in Fig. 21 may be physically arranged in a manner that allows for a mutual inductance as an additional coupling mechanism. This could be analyzed as above in the gmon case, but we neglect it here for simplicity.
We start from the circuit in Fig. 21, which yields a Hamiltonian using the method of nodes as presented in Section I B, with a capacitance matrix where we have defined q^T = (q_a, q_b, q_c). We now follow Section I B 3 and transform into a CM system of the capacitive subsystem. From this transformation, the transformation matrix V can be constructed such that Eq. (34) is satisfied. With this transformation, the capacitance matrix takes the form K, where V is the transformation defined by Eq. (239) and the basis is chosen such that p^T = (p_1, p_2, p_CM), as in Eq. (37). Assuming that K is invertible, we notice that the diagonal terms of its inverse for ψ_1 and ψ_2 are unequal, which becomes important when we later introduce bosonic step operators related to the harmonic part of the full Hamiltonian.
Returning to the potential part of the Hamiltonian in Eq. (237), we rewrite it in the CM basis of Eq. (239) and apply the standard procedure of rewriting using trigonometric identities as in Section III C 1, requiring Φ = Φ_1 = −Φ_2. Finally, we expand the cosine terms to fourth order, assuming that we are in the transmon regime. From here on, we discard fast-rotating terms, as they will vanish anyway when we later employ a rotating wave approximation (see Section I E). After the dust settles, we arrive at the potential part of the Lagrangian in the transformed variables, where we have neglected all energy and coupling terms involving ψ_CM, as the ψ_CM degree of freedom will typically have an energy spectrum far from the rest, and we have removed all non-energy-conserving terms. We have defined effective energies and couplings in terms of the effective inductive energies E_L = 1/4L and E_L′ = 1/4L′. Note the asymmetry between the 1 and 2 modes.

Ignoring the CM mode, we quantize the Hamiltonian and change into step operators using Eq. (52), with the impedances ζ_i = 4E_C,i/E_i, where E_C,i is the usual effective capacitive energy. The resulting Hamiltonian contains the anharmonicities. We now wish to truncate to a three-level system, also known as a qutrit, following the procedure presented in Section I F 2. We choose the zero-point energy to coincide with the |0⟩ states. This is contrary to the qubit case, where we usually choose the zero-point energy to lie in between the two states. The diagonal part of the Hamiltonian then contains energies in which we clearly see the effect of the anharmonicity. The transversal interaction part, which swaps excitations between the two qutrits, is Ĥ_X = J_X(|01⟩⟨10| + 2|12⟩⟨21|) + H.c., with coupling constant J_X. Note that there is also a |11⟩⟨20| term; however, it is suppressed by the anharmonicity, and thus we can remove it using the RWA from Section I E.
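The structure of the transversal coupling is easy to verify numerically. The sketch below builds Ĥ_X = J_X(|01⟩⟨10| + 2|12⟩⟨21|) + H.c. on the 9-dimensional two-qutrit space (J_X set to 1 as a hypothetical value) and checks that it swaps |10⟩ ↔ |01⟩ with amplitude J_X and |21⟩ ↔ |12⟩ with amplitude 2J_X.

```python
import numpy as np

d = 3  # qutrit dimension

def ket(i):
    v = np.zeros(d)
    v[i] = 1.0
    return v

def two_body(b1, b2, k1, k2):
    """Two-qutrit operator |b1 b2><k1 k2|."""
    return np.outer(np.kron(ket(b1), ket(b2)), np.kron(ket(k1), ket(k2)))

J_X = 1.0  # transversal coupling strength (hypothetical value)
H_X = J_X * (two_body(0, 1, 1, 0) + 2 * two_body(1, 2, 2, 1))
H_X = H_X + H_X.conj().T  # add the Hermitian conjugate

# H_X connects |10> <-> |01> with amplitude J_X and |21> <-> |12>
# with amplitude 2 J_X; all other states are left untouched.
```

The suppressed |11⟩⟨20| term discussed above would be added in the same way before invoking the RWA to drop it.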
Finally, the longitudinal interaction part of the Hamiltonian involves S_z = |0⟩⟨0| + 3|1⟩⟨1| + 5|2⟩⟨2|, which is a generalization of the qubit σ_z^1 σ_z^2 longitudinal coupling.

External driving of the modes
We wish to control the two modes using external microwave driving, following the procedure presented in Section I G 1. We drive the three original modes, φ_a, φ_b, and φ_c, of the system, which gives three additional terms in the Lagrangian, where V_i(t) is the external microwave driving. We want to effectively couple the ψ_1 and ψ_2 modes to (independent) external fields. Because these are specific linear combinations of φ_a, φ_b, and φ_c, the couplings can be obtained by coupling to these node fluxes in the correct way and rewriting the system slightly. After rewriting the terms in Eq. (251) using the transformation in Eq. (239), we can express this as a set of requirements, where ω_i^ext is the external driving frequency of the ith qubit. Note that for simplicity we do not include a phase in the driving. From Eq. (252b) we quickly realize that C_c = 2C_a, which leaves three equations with three unknowns. These equations can be solved by

V_a(t) = A_1 cos(ω_1^ext t) + A_2 cos(ω_2^ext t), (253a)
V_c(t) = −A_2 cos(ω_2^ext t), (253b)

and inserted back into L_ext. Expanding and collecting the terms, and ignoring irrelevant offset terms independent of the phases, this leads to a total kinetic Lagrangian with adjusted capacitance matrix K̃ = K + K_ext in the CM frame, where K_ext = V^T C_ext V is the contribution from the coupling to the external nodes, with C_ext = diag(C_a, C_a, 2C_a). Performing a Legendre transformation and changing into step operators, the driving term takes the form of Eq. (255). We now define the Rabi frequency Ω_i = C_a Σ_{j=1}^{3} (K^{-1})_{(j,i)} / 2√(2ζ_i). Truncating to the lowest three states, the driving term takes the form of Eq. (256). Depending on which transition we want to drive, we must match the driving frequency with the transition energy, e.g., ω_1^ext = ω_{1,2} if we want to drive the |1⟩ ↔ |2⟩ transition of the first qutrit; see Fig. 22.
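Frequency matching can be illustrated with a single resonantly driven transition. The sketch below evolves a qutrit under the rotating-frame, RWA Hamiltonian of a drive resonant with the |1⟩ ↔ |2⟩ transition (ℏ = 1); the Rabi frequency Ω is a hypothetical value, and a pulse of duration π/Ω transfers all population from |1⟩ to |2⟩.

```python
import numpy as np
from scipy.linalg import expm

# Rotating-frame Hamiltonian for a drive resonant with |1> <-> |2>
# of a single qutrit (hbar = 1, RWA, zero detuning).
Omega = 2 * np.pi * 10e6          # Rabi frequency, 10 MHz (illustrative)
e1, e2 = np.eye(3)[1], np.eye(3)[2]
H = 0.5 * Omega * (np.outer(e1, e2) + np.outer(e2, e1))

# A pi-pulse starting from |1>: evolve for t = pi / Omega
psi = expm(-1j * H * np.pi / Omega) @ e1
p2 = abs(psi[2])**2               # population transferred to |2>
```

An off-resonant drive instead produces the detuned Rabi oscillations whose AC Stark shift is used above for dynamical control of the qutrit energies.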
Besides arbitrary one-qutrit gates and generalized controlled-NOT gates, such a system can implement both single-qubit and two-qubit non-adiabatic holonomic gates [139,140].

F. Multi-body interactions
The smallest example of a multi-body interaction must consist of four nodes, as we can always decouple the CM mode, leaving three true degrees of freedom. Consider the circuit in Fig. 23. If we approximate the Josephson junctions as linear inductors, we quickly realize that the capacitive and inductive matrices can be diagonalized simultaneously. We therefore first consider the capacitive subnetwork, which, following the method in Section I B, yields a capacitance matrix. To begin with, we set the diagonal capacitance C_d = C, which yields the modes

v_CM = (1, 1, 1, 1)^T/2, v_1 = (1, 0, −1, 0)^T/√2, v_2 = (0, 1, 0, −1)^T/√2, v_3 = (1, −1, 1, −1)^T/2,

with eigenvalues λ_CM = 0 (as we have not grounded any node), λ_1 = λ_3 = 4C, and λ_2 = 2C. Note how the choice of identical capacitances ensures that the v_i are independent of the capacitances. These modes correspond to charge oscillating across the diagonal between nodes 1 and 3 (v_1), oscillation across the sides of the square between nodes 2 and 4 (v_2), and finally an oscillation involving the entire circuit between the nodes 1 and 3 and the nodes 2 and 4 (v_3). We can see that changing the capacitance of the diagonal branch does not disturb the eigenmodes. That this is the only branch whose capacitance can be changed freely is an intuitive result when we consider the modes in terms of oscillating charge: v_1 is the only mode involving the diagonal branch.
Changing the diagonal capacitance to C_d ≠ C, we only change the eigenvalue of the first mode, and the capacitance matrix remains diagonal in the mode basis. From this point on, we remove the center-of-mass coordinate. Consider now the inductive subnetwork of the circuit in Fig. 23; it yields a potential energy which, after changing to the diagonal basis, can be reduced with some algebra. From this we see that the diagonal Josephson junction, with energy E_d, becomes non-interacting, because there is a mode corresponding exactly to this branch. We also see that the remaining four Josephson junctions, with energy E, create a three-body interaction. When expanding the cosines in the interaction, we get both corrections to the frequencies and two- and three-body couplings, which are the only interactions in the system. Note that the diagonal branch can be removed without changing the dynamics of the system. By introducing external fluxes, we can tune the triple-cosine term to involve odd terms as well. The cosine terms themselves only result in products of even powers of the ψ_i's, but with flux threading the circuit a cosine can be tuned into a sine, making the contributions from the corresponding mode completely odd. This opens up the possibility of further multi-body couplings achieved through the normal modes.
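The eigenmode structure of the capacitive subnetwork can be checked directly. The sketch below assembles the Maxwell capacitance matrix of the four-node square (node labeling and branch placement are assumed from the description above; units chosen so C = 1) and confirms the eigenvalues 0, 2C, 4C, 4C for C_d = C, as well as the claim that changing C_d only rescales the diagonal mode v_1.

```python
import numpy as np

def cap_matrix(C, Cd):
    """Maxwell capacitance matrix of the square circuit: capacitors C on
    the edges 1-2, 2-3, 3-4, 4-1 and Cd across the 1-3 diagonal."""
    return np.array([[2*C + Cd, -C,       -Cd,      -C ],
                     [-C,        2*C,     -C,        0.],
                     [-Cd,      -C,        2*C + Cd, -C],
                     [-C,        0.,      -C,        2*C]])

C = 1.0
vals = np.linalg.eigvalsh(cap_matrix(C, Cd=C))  # ascending eigenvalues

# Changing Cd only moves the diagonal mode v1 = (1, 0, -1, 0)/sqrt(2);
# it remains an eigenvector, now with eigenvalue 2C + 2Cd.
v1 = np.array([1., 0., -1., 0.]) / np.sqrt(2)
K2 = cap_matrix(C, Cd=2.0)
```

The zero eigenvalue is the ungrounded CM mode; grounding any node (or adding the external coupling capacitors discussed below) lifts it.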

External coupling to eigenmodes
If we wish to couple an eigenmode circuit into a larger configuration, we need to couple the eigenmodes to external degrees of freedom. Such external degrees of freedom could also be used to control or measure the system. While a non-transformed node flux can be controlled by coupling a single control line to the corresponding node (see Section I G), we must employ several control lines in order to couple to the eigenmodes. For concreteness, we consider the circuit in Fig. 23 transformed to its eigenmodes. We now want to capacitively couple the ψ_1 degree of freedom to an external control line, without coupling to the two remaining degrees of freedom.
We therefore couple each node of the (non-transformed) circuit via identical capacitors of capacitance C_ext to external microwave drives V_i(t). This results in additional terms in the Lagrangian. Writing these in terms of the electrical modes, expanding the parentheses, and discarding constant terms, we want to couple only to ψ_1, so we choose V_1 = −V_3 and V_2 = V_4 = 0. This yields

L_ext = (C_ext/2)(ψ̇_CM² + ψ̇_1² + ψ̇_2² + ψ̇_3² − 2√2 ψ̇_1 V_1). (264)

The first four terms simply contribute to the diagonal of K and can be viewed as corrections to the energies of the modes, while the last term is exactly an interaction term between ψ_1 and the external drive V_1(t). Note that we could simply have coupled to nodes 1 and 3 in order to get the same result, which we could have guessed from v_1 in Eq. (258). Also note that the center-of-mass mode now has a non-zero eigenvalue, because all the nodes are now coupled to ground.
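That the drive pattern V_1 = −V_3, V_2 = V_4 = 0 addresses only ψ_1 follows from projecting the node drives onto the eigenmodes. The sketch below uses the mode vectors quoted above for the four-node circuit (assembled here as the rows of a transformation matrix) and shows that the drive vector has weight on ψ_1 alone.

```python
import numpy as np

# Rows: the normal modes v_CM, v_1, v_2, v_3 of the four-node circuit
s = 1 / np.sqrt(2)
T = np.array([[0.5, 0.5, 0.5, 0.5],
              [s,   0., -s,   0. ],
              [0.,  s,   0., -s  ],
              [0.5, -0.5, 0.5, -0.5]])

# Drive pattern V_1 = -V_3 = v, V_2 = V_4 = 0 on the physical nodes
v = 1.0
V = np.array([v, 0., -v, 0.])

coupling = T @ V   # drive amplitude seen by each eigenmode
```

Only the ψ_1 component is non-zero (with the factor √2 appearing in Eq. (264)); the CM, ψ_2, and ψ_3 modes are untouched by construction.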

IV. SUMMARY AND OUTLOOK
In this tutorial we have presented various methods used when analyzing superconducting electrical circuits. We have summarized the methods in Fig. 24, which includes both the basic method and more advanced alternative methods. In general, the basic method of Section I B can be applied to find the Lagrangian of any circuit that includes neither dissipation nor mutual inductance, while the method in Section II B presents an alternative to the basic method. If mutual inductance is to be included, the method in Section II B must be used in order to obtain the Lagrangian.

Figure 24. Overview of the methods presented in this tutorial. Blue blocks indicate basic methods presented in Section I. Yellow blocks indicate optional steps, while red blocks indicate advanced methods presented in Section II. Round blocks are assertions that must be satisfied before advancing in the flowchart.

Having found the Lagrangian, one can change the node flux basis as in Section I B 3 or keep the original basis. The Hamiltonian can be found using a Legendre transformation and then quantized as in Section I C 1. After asserting that the system is only weakly anharmonic, it can be rewritten into interacting harmonic oscillators perturbed by the anharmonicity, as in Section I D. Alternatively, one can decide on exact truncation as in Section I F 3. After changing to step operators, the rotating wave approximation can be applied if needed, as in Section I E. If the anharmonicity is large enough, the system can finally be truncated into qubits or qudits using the methods in Section I F or Section I F 3. If dissipation is included in the analysis as in Section II C and the system is truncated to two levels, the decoherence time can be found using the results in Section II C 6.
We note that the methods presented here are by no means exhaustive with regard to circuit analysis. Classical electrical circuit analysis has been performed for decades by both physicists and engineers, and much more information can be found in the existing literature. The methods presented here should therefore not be seen as a limit to what can be done with superconducting circuits, but merely as a starting point for researchers new to the field of superconducting electrical circuit analysis. So where to go from here? If you want to explore controlling and measuring superconducting circuits, we recommend Ref. [61], which discusses the coupling to microwave resonators. For more information on the method in Section I B and the advanced method in Section II B, see Refs. [50] and [58], respectively; both also discuss how to analyze dissipation. For each of the examples we reference the original research, and finally, for a recent overview of the field, see Ref. [11], which reviews state-of-the-art concepts.

ACKNOWLEDGMENTS
We wish to thank our numerous experimental and theoretical collaborators and colleagues for discussions and collaborations on superconducting circuits over the past few years, in particular D. Petrosyan, M. Kjaergaard, W. Oliver, S. Gustavsson, and C. K. Andersen. Thank you also to all the students who have challenged us to provide clear and introductory explanations in this exciting field. They are the real reason that we undertook the task of writing this tutorial. This work is supported by the Danish Council for Independent Research and the Carlsberg Foundation.