Measurement-Induced Phase Transitions in the Dynamics of Entanglement

We define dynamical universality classes for many-body systems whose unitary evolution is punctuated by projective measurements. In cases where such measurements occur randomly at a finite rate $p$ for each degree of freedom, we show that the system has two dynamical phases: `entangling' and `disentangling'. The former occurs for $p$ smaller than a critical rate $p_c$, and is characterized by volume-law entanglement in the steady state and `ballistic' entanglement growth after a quench. By contrast, for $p>p_c$ the system can sustain only area-law entanglement. At $p = p_c$ the steady state is scale-invariant and, in $1+1$D, the entanglement grows logarithmically after a quench. To obtain a simple heuristic picture for the entangling-disentangling transition, we first construct a toy model that describes the zeroth Rényi entropy in discrete time. We solve this model exactly by mapping it to an optimization problem in classical percolation. The generic entangling-disentangling transition can be diagnosed using the von Neumann entropy and higher Rényi entropies, and it shares many qualitative features with the toy problem. We study the generic transition numerically in quantum spin chains, and show that the phenomenology of the two phases is similar to that of the toy model, but with distinct `quantum' critical exponents, which we calculate numerically in $1+1$D. We examine two different cases for the unitary dynamics: Floquet dynamics for a nonintegrable Ising model, and random circuit dynamics. We obtain compatible universal properties in each case, indicating that the entangling-disentangling phase transition is generic for projectively measured many-body systems. We discuss the significance of this transition for numerical calculations of quantum observables in many-body systems.


I. INTRODUCTION
When left unobserved, quantum systems tend to evolve toward states of higher entanglement [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. Unitary evolution of a many-body wave function, with a Hamiltonian or with quantum gates, typically drives it towards a state with volume-law scaling for the entanglement entropies of subsystems. By contrast, local measurements can reduce the entanglement in a quantum system by collapsing degrees of freedom. A measurement of a single spin-1/2, say, leaves that spin in a definite spin state and disentangles it from the rest of the system.
What happens to the entanglement in a quantum system when measurements occur repeatedly during the evolution at a fixed rate? For simplicity, let us model the local measurements as occurring at random times and locations, at a nonzero rate p per degree of freedom. Do such measurements collapse the many-body wave function into something close to a product state with area-law entanglement, or can volume-law entanglement survive?
Here, we answer this question for the simplest type of measurement, which is a projective measurement of a discrete degree of freedom. We show that both types of dynamics can occur, leading to volume-law or area-law entanglement, depending on the value of the measurement rate $p$. These two distinct dynamical phases, "entangling" and "disentangling," are separated by a continuous phase transition at a finite value $p = p_c$ (Fig. 1). (We call the latter phase disentangling because if a finite system is initiated in a volume-law entangled state, the dynamics will eventually reduce the entanglement to area law.) In this paper, we introduce two versions of this transition: the "generic" entanglement phase transition and a toy model for it that is amenable to exact analysis. We show in later parts of the paper that the generic transition occurs in models that are not fine-tuned, including physically sensible ones, and that it affects the dynamics of the von Neumann entanglement entropy and of certain equal-time correlation functions. First, however, we address the toy model for the transition in depth. While the toy model is in a different universality class from the generic transition, it captures many qualitative features of the phase diagram and the transition remarkably well.
The toy model is an exact description of the dynamics of the zeroth Rényi entanglement entropy ($S_0$) in a system with discrete-time dynamics that has a circuit representation [19].
For circuit dynamics without measurement, the well-known "minimal-cut" formula gives $S_0$ exactly as a function of time, so long as the dynamics is not fine-tuned (see Ref. [9] for a rigorous proof in one setting). We show that the minimal-cut formula still holds exactly when there are projective measurements; our key insight is that it should be applied to a network in which some bonds are "broken" by the presence of a measurement.
The minimal-cut representation of $S_0$ yields an effective classical optimization problem: finding the optimal cut through a bond percolation configuration. We use this mapping to characterize the scaling behavior of the entanglement ($S_0$) and mutual information ($I_0$) at and on both sides of the critical point of the toy model, which lies at the bond percolation threshold $p_c$. The growth of entanglement in the three regimes is illustrated schematically in Fig. 2 for a $1+1$D system initialized in an area-law state. The logarithmic entanglement growth at $p_c$ is a consequence of scale invariance, which also leads to power-law correlations of a certain type between distant spins. We show that the three types of growth in Fig. 2 also characterize the regimes of the generic problem.
The existence of a transition in the toy model has a very simple interpretation, which applies in any spatial dimension $d$. Namely, when the measurement rate exceeds $p_c$, we effectively break enough bonds for the circuit to fall apart into disconnected pieces. Such pieces are disentangled from each other, and the circuit no longer mediates long-range correlations.
As we show below, the generic transition occurs at a value of $p_c$ that is smaller than the value suggested by the dynamics of $S_0$. In other words, as $p$ is increased, entanglement production ceases well before the circuit falls apart in the above sense. We diagnose the generic transition using the von Neumann entanglement entropy (and higher Rényi entropies). We focus on $1+1$D spin chains, where quantum simulations are feasible up to at least $L = 24$ using matrix product states [20]. The results from the toy model guide our analysis of the data from these systems. Strikingly, many qualitative features of the toy model continue to hold, and we show clear evidence for a transition at a finite $p_c$. The universality class of the transition, however, is distinct from classical percolation, as we show by computing the correlation length exponent close to the transition. In particular, as the transition is approached, the characteristic length scale and timescale diverge as $\xi \sim |p - p_c|^{-\nu}$ and $\tau \sim |p - p_c|^{-\nu z}$, respectively, with $\nu = 2.03(5)$ and a dynamical exponent $z$ that is consistent with $z = 1$. We obtain consistent exponent estimates for two different models: a deterministic Floquet circuit and a random unitary circuit (each with random measurements).
The specific models we study all have discrete-time dynamics. While this discretization is important in order for the dynamics of $S_0$ to be well defined [21] (i.e., for the construction of the toy model), we do not expect it to affect the existence or universality class of the generic transition, which is manifest in physically meaningful quantities (such as the von Neumann entanglement entropy $S_1$ or the mutual information between separated spins). It is also possible to consider a continuous quantum measurement process, which is obtained as a limiting case of very frequent "weak" measurements [22,23]. The production of entanglement in this setting has been considered for free fermions in Ref. [24], where an arbitrarily weak measurement was found to lead to an area-law state. This result was explained in terms of a quasiparticle picture, making use of integrability [24]. We conjecture that nonintegrable models subjected to continuous measurement behave similarly to the models we study here.
We briefly discuss the outlook for an analytical description of the dynamical transition that we find, making connections with recent results for random unitary circuits [9,18,25,26] and random tensor network states [27,28]. Recently, Ref. [28] discussed an entanglement phase transition in a random tensor network state [27] as the bond dimension was varied. We discuss the possibility that the conformal field theory discussed there also describes the dynamical transition.

FIG. 2. Schematic illustration of entanglement production after a quench from a product state in $1+1$D. The growth of bipartite entanglement entropy between the two semi-infinite halves of an infinite chain is shown. In the entangling phase ($p < p_c$; upper curve), the entanglement grows "ballistically" with time. At the critical point ($p = p_c$; middle curve), the entanglement grows logarithmically. In the disentangling phase ($p > p_c$; lower curve), the entanglement saturates to a finite value. (Random fluctuations are averaged over.)
The entanglement structure of a quantum state has a direct bearing on how difficult it is to simulate the dynamics of that state using a classical computer [29]. Thus, the entanglement transition may have important implications for simulating quantum evolution with measurement, as we discuss in Sec. VI B.
Finally, we mention earlier work by Aharonov [30], who considered the possibility of a transition in which the state of a quantum computer is affected by decoherence. Aharonov used the percolation connectivity of the circuit to demonstrate that when decoherence events are sufficiently frequent, the mixed state of the qubits must be associated with a finite entanglement length scale. On the other hand, in the presence of active quantum error correction, a very low rate of decoherence can allow for nontrivial entanglement of qubits implementing a certain algorithm. We emphasize, however, that the transition envisaged in Ref. [30] is in the density matrix of a mixed state (with the mixing arising from environmental decoherence). In the problem we study here, on the other hand, there is no transition of this type: The mixed state obtained by averaging over measurements is trivial throughout the phase diagram, as we emphasize in Sec. II. The transition we introduce is in the entanglement structure of pure states.

II. MODELS AND SETTING
The dynamics we study consists of unitary evolution of a spin-1/2 chain interspersed with single-spin measurements. We use quantum dynamics in discrete time, where each time step involves the application of unitary gates to pairs of adjacent spins. This discretization allows us to define a nontrivial solvable toy model (the discrete-time dynamics of $S_0$), and it simplifies the numerical study of the generic problem. The dynamical protocol we describe could be modified in various ways, but we expect the universality class of our generic transition to be robust (see the discussion at the end of this section).
Our $1+1$D quantum circuits are arranged with a "running bond" configuration of unitaries, as in Fig. 3. We define units of time such that one time step involves applying one layer of unitaries (the time period of the circuit is two layers).
Measurement events take place randomly: After each layer of unitaries, each spin has a probability $p$ of having its $z$ component ($S^z$) measured. A detail regarding the boundary conditions is that Fig. 3 shows a layout in which the two boundary spins have two opportunities to be measured for each unitary that is applied to them: We use this layout in Sec. III as it is natural for our classical mapping, but for the quantum simulations in Sec. IV, the boundary spins have only one chance to be measured per unitary applied to them.
In addition to the randomness in the times and locations of measurements, which we have put in by hand, there is intrinsic randomness in the measurement outcomes, which occur with the usual probabilities $|\langle \uparrow | \Psi(t) \rangle|^2$ and $|\langle \downarrow | \Psi(t) \rangle|^2$, where $|\Psi(t)\rangle$ is the state prior to the measurement. After measuring, we project onto the value of $S^z$ obtained. The state must be renormalized after each projective measurement, so its dynamics are both stochastic and nonlinear. Nonetheless, the state remains pure. This pure state determines the probabilities for measurements, conditioned on the outcomes of previous measurements [31].
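The measurement step just described is simple to sketch numerically. The following is a minimal state-vector illustration (our sketch, not the authors' code; the function name `measure_sz` and the qubit ordering are our own choices): an outcome is drawn with the Born probabilities, and the state is projected onto that outcome and renormalized.

```python
import numpy as np

def measure_sz(psi, site, n_sites, rng):
    """Projectively measure S^z on one site of an n_sites-qubit state vector.

    Returns the outcome (0 for up, 1 for down) and the renormalized
    post-measurement state; outcomes are drawn with Born's-rule weights.
    """
    # Reshape so the measured site's index is explicit.
    psi = psi.reshape((2,) * n_sites)
    # Born probability of outcome "up" (index 0) on the chosen site.
    p_up = np.sum(np.abs(np.take(psi, 0, axis=site)) ** 2)
    outcome = 0 if rng.random() < p_up else 1
    # Project: zero out the discarded branch, then renormalize.
    proj = np.zeros_like(psi)
    idx = [slice(None)] * n_sites
    idx[site] = outcome
    proj[tuple(idx)] = psi[tuple(idx)]
    proj = proj.reshape(-1)
    return outcome, proj / np.linalg.norm(proj)
```

Applied to one spin of a Bell pair, either outcome leaves a product state: the measurement disentangles the measured spin from its partner, as described above.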
We simulate two types of quantum dynamics: (i) Random unitary dynamics, in which every "brick" in Fig. 3 is chosen randomly and independently from the unitary group (with the Haar measure). Studying $1+1$D random unitary circuits and related models has elucidated the coarse-grained dynamics of entanglement and quantum operators in more realistic systems [9,13,18,25,32-43]. Here, we extend the model to dynamics with measurement. (ii) "Floquet" dynamics, in which the unitary circuit is deterministic and periodic in time (in the absence of measurements). We study this case in order to check that our results do not depend on the randomness of the unitaries in case (i). The fixed unitary we use (the elementary brick for Fig. 3) is a product of two noncommuting unitaries. For spins at positions $x$ and $x+1$, the unitary is ($\hat S_i = \hat\sigma_i / 2$)

$U = e^{i [\phi_{zz} \hat S^z_x \hat S^z_{x+1} + \theta_z (\hat S^z_x + \hat S^z_{x+1})]} \, e^{i [\phi_{xx} \hat S^x_x \hat S^x_{x+1} + \theta_x (\hat S^x_x + \hat S^x_{x+1})]},$

where $\phi_{xx} = 0.3$, $\phi_{zz} = 0.6$, $\theta_x = 0.2$, and $\theta_z = 0.4$. This unitary operation defines a version of the Floquet Ising model with longitudinal and transverse fields. Our main tools for characterizing the dynamics are the Rényi entanglement entropies

$S_n(A) = \frac{1}{1 - n} \log_2 \mathrm{Tr}\, \rho_A^n$

for subsystems $A$ of the spin system, where $\rho_A = \mathrm{Tr}_{\bar A}\, |\Psi\rangle\langle\Psi|$ is the reduced density matrix of $A$. As $n \to 1$, this definition reproduces the von Neumann entropy $S_1(A) = -\mathrm{Tr}\, \rho_A \log_2 \rho_A$. We measure all entropies in bits.
It is worthwhile to note that the entanglement entropy $S_n$ for $n > 1$ is constrained by the inequalities [44]

$S_\infty \le S_n \le \frac{n}{n-1}\, S_\infty,$

where $S_\infty = \lim_{n \to \infty} S_n = \log_2(1/\lambda_{\max})$ and $\lambda_{\max}$ is the largest eigenvalue of $\rho_A$. These inequalities imply that any entanglement entropy $S_n$ with $n > 1$ can differ from $S_\infty$ by at most a constant factor $n/(n-1)$, and consequently, all $S_n$ with $n > 1$ must have the same scaling with system size.
As we show below, this is borne out in our numerical results, which indicate a single transition for all S n with n ≥ 1.
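These definitions and bounds are easy to check numerically. Below is a short sketch (our illustration; `renyi_entropy` is a hypothetical helper) that computes $S_n$ from the spectrum of a reduced density matrix and can be used to verify $S_\infty \le S_n \le \frac{n}{n-1} S_\infty$ for a random mixed state.

```python
import numpy as np

def renyi_entropy(rho_A, n):
    """Renyi entropy S_n(A) in bits; n=1 gives von Neumann, n=inf gives -log2(lambda_max)."""
    lam = np.linalg.eigvalsh(rho_A)
    lam = lam[lam > 1e-12]            # drop numerically zero eigenvalues
    if n == 1:
        return float(-np.sum(lam * np.log2(lam)))
    if n == np.inf:
        return float(-np.log2(lam.max()))
    return float(np.log2(np.sum(lam ** n)) / (1 - n))
```

For any valid $\rho_A$, the values $S_2, S_3, \ldots$ all lie between $S_\infty$ and $\frac{n}{n-1} S_\infty$, which is why they must share a single scaling with system size.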
We also study the Rényi mutual information $I_n(a,b)$ between two separated spins (labeled here $a$ and $b$): $I_n(a,b) \equiv S_n(a) + S_n(b) - S_n(a \cup b)$. In a random system, the $I_n$ are useful measures of the strength of quantum correlations between spins $a$ and $b$ because they are independent of the choice of local bases for these spins [45]. We show below that, in the present case, they reveal a remarkable scale-invariant entanglement structure at the transition.
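As a concrete example of this diagnostic, the sketch below (our illustration; helper names and qubit ordering are our own choices) computes $I_1(a,b)$ for a three-qubit state vector: a Bell pair shared between two distant qubits gives $I_1 = 2$ bits, while unentangled qubits give $I_1 = 0$.

```python
import numpy as np

def reduced_density(psi, keep, n):
    """Reduced density matrix of the qubits listed in `keep` (n-qubit state vector psi)."""
    psi = psi.reshape((2,) * n)
    drop = [i for i in range(n) if i not in keep]
    mat = np.transpose(psi, list(keep) + drop).reshape(2 ** len(keep), -1)
    return mat @ mat.conj().T

def von_neumann(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def mutual_info(psi, a, b, n):
    """I_1(a, b) = S_1(a) + S_1(b) - S_1(a U b), in bits."""
    S = lambda keep: von_neumann(reduced_density(psi, keep, n))
    return S([a]) + S([b]) - S([a, b])
```

Because $I_1$ is built from entropies of reduced density matrices, applying independent local unitaries to $a$ and $b$ leaves it unchanged, which is the basis-independence property noted above.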
Let us briefly clarify what the dynamical transition does and does not imply. For simplicity, assume for a moment that the dynamics is deterministic except for the intrinsic randomness in the measurement outcomes. As noted above, the evolving quantum state remains pure. This pure state, which we temporarily denote $|\Psi^{(o_1,\ldots,o_N)}\rangle$, depends on the previous measurement outcomes $o_1, \ldots, o_N$, and it determines the probabilities for measurement outcomes at a particular time given these previous outcomes. This pure state must be contrasted with the mixed state that is obtained if we average over measurement outcomes. Let us denote the average over measurement outcomes by $\mathbb{E}_{o_1,\ldots}$, and let $\langle \cdots \rangle_{o_1,\ldots}$ be the expectation value of an observable in the state $|\Psi^{(o_1,\ldots,o_N)}\rangle$. The averaged density matrix is

$\rho_{\mathrm{av}} = \mathbb{E}_{o_1,\ldots}\, |\Psi^{(o_1,\ldots,o_N)}\rangle \langle \Psi^{(o_1,\ldots,o_N)}|.$

This mixed state $\rho_{\mathrm{av}}$ evolves linearly, in a standard way. Generically, we expect it simply to evolve to a trivial infinite-temperature Gibbs state, for any nonzero rate of measurement. The dynamical phase transition we discuss is therefore not apparent in this object, and thus, it is not apparent in correlation functions such as $\mathbb{E}_{o_1,\ldots}\, \langle \hat S^z_a \hat S^z_b \rangle_{o_1,\ldots}$ that are straightforwardly averaged over previous measurement outcomes. As noted above, the transition is detected by more complex local correlation functions, for example,

$\mathbb{E}_{o_1,\ldots}\, \big[ \langle \hat S^z_a \hat S^z_b \rangle_{o_1,\ldots}^{\,2} \big].$

The squares inside the average imply that experimental measurement of such a quantity would be a severe statistical challenge, as a result of the need for extensive postselection. Therefore, a more likely application of our results is to questions of computational difficulty (see Sec. VI B).
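The distinction can be made vivid with a single-qubit caricature (a toy illustration of the averaging, not the many-body correlator of the text): after one $S^z$ measurement of a qubit prepared in $(|\!\uparrow\rangle + |\!\downarrow\rangle)/\sqrt{2}$, the outcome-averaged density matrix is maximally mixed and carries no memory of the projection, while the average of the *squared* trajectory expectation value does.

```python
import numpy as np

# A qubit in |+> = (|up> + |down>)/sqrt(2) has its S^z measured once.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
born = np.abs(psi) ** 2                      # Born probabilities (1/2, 1/2)
traj = [np.array([1.0, 0.0]),                # post-measurement state for "up"
        np.array([0.0, 1.0])]                # post-measurement state for "down"

# Linear average over outcomes: the trivial (maximally mixed) state.
rho_av = sum(p * np.outer(s, s) for p, s in zip(born, traj))

# <S^z> averaged linearly over outcomes vanishes, but averaging the square of
# the per-trajectory expectation value remembers that each outcome was definite.
sz = np.diag([0.5, -0.5])
mean_sz = sum(p * (s @ sz @ s) for p, s in zip(born, traj))          # 0
mean_sz_sq = sum(p * (s @ sz @ s) ** 2 for p, s in zip(born, traj))  # 1/4
```

Here `rho_av` equals the identity over 2, `mean_sz` vanishes, and `mean_sz_sq` is 1/4: only the nonlinear average of trajectory expectation values retains information about the measurement, which is the kind of "square inside the average" needed to detect the transition.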
To end this section, let us discuss the issue of robustness of our results to the choice of protocol. Recall that we study both a toy model that describes the transition of S 0 and the more physical (generic) transition that is captured by the higher Rényi entropies. The question of robustness is mostly of interest for the latter, but let us first comment on the former.
The dynamics of S 0 (specifically) is very sensitive to the fact that the dynamics is of circuit type. Note that S 0 is not typically a physically significant quantity [46] because of its extreme sensitivity to weak perturbations of the state. Nevertheless, within the setting of circuit dynamics, the universal results we find for S 0 are robust for that quantity. Our classical percolation mapping is exact for any circuit of the form shown in Fig. 3, irrespective of the choice of unitaries in the circuit, so long as they are not fine-tuned.
While a full presentation of results is left for later sections, we note here that, in terms of the physical transition, our results suggest that protocols (i) and (ii) listed above are in the same universality class. In Table I, we show the location of the critical point (which, of course, depends on the nature of the unitary operator being applied and is therefore nonuniversal) and the correlation length exponent [48] obtained for each type of dynamics by studying the different Rényi entropies. For a given type of dynamics, different Rényi entropies with n ≥ 1 all give compatible estimates of p c , consistent with the idea that there is just a single physical transition in the entanglement structure.
The protocols we defined above could be varied in many ways. For example, one could replace circuit dynamics with Hamiltonian dynamics in continuous time (with measurements remaining discrete events). We expect this difference to be unimportant for the long-length-scale physics of the transition. Note, for example, that there is generically no difference in symmetry between the two situations. In the absence of measurements, continuous-time Hamiltonian dynamics (with a constant Hamiltonian) have time translation symmetry, but this symmetry is destroyed by measurements, which inevitably induce randomness.
The intrinsic randomness of measurement outcomes also leads us to conjecture that the universality class of the transition can persist for dynamics in which the randomness in the times and locations of measurements is removed, since the dynamics remains random anyway. For example, for continuous-time dynamics, one could make measurements in some periodic fashion, with a variable period $t_0$, and we expect a transition as a function of the dimensionless rate $\hbar / (t_0 J)$, where $J$ is the energy scale of the Hamiltonian. "Weak" measurements can also be substituted for projective measurements [24].
For the rest of the paper, we focus on the circuit dynamics (i) and (ii) and higher-dimensional generalizations.

III. ZEROTH RÉNYI ENTROPY AND CLASSICAL OPTIMIZATION
The zeroth Rényi entropy of entanglement between a subsystem and its exterior, $S_0$, counts the number $N$ of nonzero eigenvalues of the reduced density matrix: $S_0 = \log_2 N$. Since arbitrarily small eigenvalues are counted, $S_0$ can change discontinuously in response to a small perturbation, and as a result, we do not usually think of it as a physically significant quantity. Still, in an idealized system, it can be a useful conceptual starting point for thinking about the von Neumann and higher-order entanglement entropies [49].
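A two-qubit example (our sketch; `hartley_entropy` and the cutoff `tol` are our own choices) makes this sensitivity concrete: a state that is nearly a product state has the same $S_0$ as a Bell pair, because the tiny Schmidt eigenvalue is counted at full weight.

```python
import numpy as np

def hartley_entropy(rho, tol=1e-10):
    """S_0 = log2 of the number of eigenvalues of rho above the cutoff tol."""
    lam = np.linalg.eigvalsh(rho)
    return float(np.log2(max(int(np.sum(lam > tol)), 1)))

def reduced_rho(psi2q):
    """Reduced density matrix of the first qubit of a two-qubit state vector."""
    mat = psi2q.reshape(2, 2)
    return mat @ mat.conj().T

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # maximally entangled
product = np.array([1.0, 0.0, 0.0, 0.0])             # unentangled
nearly_product = np.array([1.0, 0.0, 0.0, 1e-3])     # very weakly entangled
nearly_product /= np.linalg.norm(nearly_product)
```

Here $S_0(\text{bell}) = S_0(\text{nearly\_product}) = 1$ bit while $S_0(\text{product}) = 0$: an arbitrarily weak perturbation of a product state changes $S_0$ discontinuously, even though the von Neumann entropy barely moves.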
The tool for calculating $S_0$ is the "minimal cut," a geometrical upper bound on entanglement in a tensor network, which is a useful starting point for thinking about the scaling of entanglement in various situations ranging from holography [27,50,51] to unitary dynamics [6,9,13]. The minimal cut gives $S_0$ exactly for our dynamics (for both choices of unitaries in Sec. II), as would be expected from parameter counting and as we confirm numerically below [52].
We expect the mapping below for $S_0$ to be exact for the present circuit geometry for any non-fine-tuned choice of the unitaries in the network, whether random or deterministic. This mapping is also likely to describe higher Rényi entropies in special limiting cases, as discussed in Sec. VI C, though not generically.
In the absence of projective measurements, $S_0(A)$ is given exactly by the number of links in the circuit that must be cut in order to separate the physical "legs" (external bonds at the top boundary) for the spins in $A$ from those for the spins outside $A$. In the presence of projective measurements, there is a simple generalization of this picture, which is illustrated in Fig. 4.
When a projective measurement is made, the state of the measured spin at that point in its history is fixed, for example, to ↑. To express the wave function (for a given outcome of the measurement), it is then no longer necessary to sum over the spin index on this site. One can therefore think of a measurement as "breaking a bond" in the classical network.
If the broken bonds are sufficiently dense that one subsystem is completely isolated from the other, i.e., if one can make a cut that separates the two subsystems without passing through any unbroken bonds, then the two subsystems can have no entanglement. If no such cut is possible, then the zeroth Rényi entropy is equal to the number of unbroken bonds that must be cut in order to separate the two subsystems. The above picture applies for each set of measurement outcomes, which we should, in principle, average over using Born's rule. However, this picture shows that $S_0$ is independent of the measurement outcomes and depends only on their times and locations.

FIG. 4. Mapping between circuit dynamics with measurement (left panel) and bond percolation on the square lattice (right panel). For the purposes of the minimal-cut picture, a projective measurement breaks a bond. The minimal cut may pass through broken bonds at zero cost. For the figure on the right, which is topologically equivalent to the one on the left, we represent each unitary as the vertex of a square lattice. Broken bonds are dashed (unoccupied), and unbroken bonds are solid (occupied). This configuration corresponds to bond percolation on the square lattice, with a probability $1 - p$ for a bond to be occupied. The minimal cut lives on the dual square lattice.
In this way, the problem of calculating S 0 for some subsystem becomes equivalent to an optimization problem in a classical percolation configuration. Some proportion p of bonds in a network are broken, and one must determine the minimal number of additional bonds that should be cut in order to separate the subsystem from the rest of the network. This mapping is illustrated in Fig. 4.
One can generalize this mapping to any number of dimensions. If the number of spatial dimensions is $d$, the minimal cut is a surface of $d - 1$ dimensions (see Sec. V). For the rest of this section, we restrict our attention to $1+1$D, where the minimal cut is a path. Problems of finding a minimal-cost path in a disordered medium are well studied in the mathematical literature under the name "first-passage percolation" [53]. Reference [54] contains rigorous results for the present case, where steps of the path cost either zero or unity. Below, we give a heuristic discussion that is consistent with Ref. [54].
The classical percolation problem can be approached computationally using standard numerical algorithms, which we describe in Appendix B. Briefly, we simulate a rotated square lattice (as shown on the right-hand side of Fig. 4) with width $L$ and depth $t$, in which each bond has a cost of either zero (with probability $p$) or unity (with probability $1 - p$). Here, $L$ is the number of spins in the spin chain, which is equal to the number of steps on the dual lattice required to traverse the network from left to right. The time $t$ is equal to the number of layers of unitary operations applied, which is the number of steps required to traverse the network from top to bottom. (For example, Fig. 4 shows $L = 6$ and $t = 5$.) For a given random realization of the network, and for a given choice of subsystem, we search deterministically for the minimal-cost pathway that separates the subsystem from the rest of the network. (Such a pathway may, in general, extend outside the network and may not be unique.) Note that $S_0$ is defined as the total cost of this minimal-cost pathway, which corresponds to the number of unbroken bonds that must be cut in the circuit in order to separate the subsystem from the rest of the network. All data presented in this section are averaged over many random realizations of the network for each choice of parameters.
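The deterministic search can be implemented as a shortest-path computation with 0/1 edge weights ("0-1 BFS," a deque-based variant of Dijkstra's algorithm). The sketch below is our illustration, not the authors' code: it ignores the rotated brickwork geometry and boundary details of Fig. 4 and uses a plain square grid of dual-lattice nodes, with hypothetical arrays `h_cost` and `v_cost` holding the 0/1 cost of each step.

```python
from collections import deque

def min_cut_cost(h_cost, v_cost, start, targets):
    """0-1 BFS on a grid of dual-lattice nodes.

    h_cost[r][c]: cost of stepping between dual nodes (r, c) and (r, c+1).
    v_cost[r][c]: cost of stepping between dual nodes (r, c) and (r+1, c).
    A step costs 0 if it crosses a broken (measured) bond, 1 if it cuts an
    unbroken bond.  Returns the cheapest total cost from `start` to any node
    in `targets`, i.e. the minimal-cut value S_0 in this simplified geometry.
    """
    R = len(v_cost) + 1          # rows of dual nodes
    C = len(h_cost[0]) + 1       # columns of dual nodes
    INF = float("inf")
    dist = [[INF] * C for _ in range(R)]
    dist[start[0]][start[1]] = 0
    dq = deque([start])
    while dq:
        r, c = dq.popleft()
        d = dist[r][c]
        steps = []
        if c + 1 < C: steps.append(((r, c + 1), h_cost[r][c]))
        if c - 1 >= 0: steps.append(((r, c - 1), h_cost[r][c - 1]))
        if r + 1 < R: steps.append(((r + 1, c), v_cost[r][c]))
        if r - 1 >= 0: steps.append(((r - 1, c), v_cost[r - 1][c]))
        for (nr, nc), w in steps:
            if d + w < dist[nr][nc]:
                dist[nr][nc] = d + w
                # zero-cost steps go to the front, unit-cost steps to the back
                (dq.appendleft if w == 0 else dq.append)((nr, nc))
    return min(dist[r][c] for (r, c) in targets)
```

For a bipartite cut, `start` would be the top-center dual node and `targets` the bottom row: with every bond unbroken the cost equals the depth of the grid, and a single zero-cost column of broken bonds connecting top to bottom drops the cost to zero.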
In Fig. 5, we check the results of this approach against data from a full matrix product state simulation of the random unitary circuit (we modify the boundary conditions of the percolation problem to match those used in the quantum simulation, Sec. II). One can see that the two sets of data are very close. A small systematic error appears in the quantum estimate of $S_0$ due to spurious small eigenvalues of the reduced density matrix that arise from numerical truncation error. This issue is discussed further in Appendix B 2.
In the remainder of this section, all results are taken from the classical percolation network simulation, which is significantly faster computationally.

A. Universal dynamics of $S_0(t)$
Given the percolation picture, one can understand the dynamics of the Hartley entropy $S_0$ as a function of time $t$ using the following scaling arguments. Let the initial state at time zero be unentangled. We first study the growth of entanglement, between two halves of the system, in the limit where the system size $L \to \infty$. In this case, the minimal cut meanders from the top to the bottom of the circuit, as illustrated in Fig. 4 [55].
One can understand the existence of two phases for S 0 by considering the properties of this cut deep in either of the two phases, where p is either close to zero or close to unity.
At small $p$, broken bonds exist only in small, isolated clusters. The minimal cut is therefore forced to pass through a number of unbroken bonds that is proportional to $t$, with an $O(1)$ coefficient. This situation is shown in Fig. 6. In this phase, randomness in the location of measurements has a subleading effect on the entanglement, giving subleading Kardar-Parisi-Zhang fluctuations [56,57], as for purely unitary dynamics with randomness [9,18,42].
At $p$ close to unity, on the other hand, broken bonds predominate over unbroken bonds, and there is a connected cluster of broken bonds that spans the entire system. Thus, at $p > p_c$, the only contribution to the entanglement comes from passing through unbroken bonds located near the starting point of the cut. (For the bipartite entanglement, this starting point is at the top center of the network.) Once this small set of unbroken bonds has been traversed, the minimal cut can proceed arbitrarily far in the time direction using only broken bonds, and consequently, the average $S_0$ becomes independent of $t$ at large $t$. This case is illustrated in Fig. 7.
Let us now consider scaling at and near the critical point. The percolation threshold for the square lattice is $p_c = 1/2$.
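The threshold itself is easy to reproduce with a standard union-find Monte Carlo (an illustrative sketch of textbook percolation numerics, not the authors' method; `DSU` and `crosses` are our own names): sample each bond as occupied with probability `p_occ` and test for a spanning cluster. As the lattice grows, the crossing probability jumps from near 0 to near 1 across `p_occ` = 1/2, up to finite-size rounding.

```python
import random

class DSU:
    """Minimal union-find (disjoint-set) structure with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def crosses(L, p_occ, rng):
    """One bond-percolation sample on an L x L square lattice: does a cluster
    of occupied bonds span from the top row of sites to the bottom row?"""
    dsu = DSU(L * L + 2)
    TOP, BOT = L * L, L * L + 1
    for c in range(L):                       # virtual nodes for the two edges
        dsu.union(c, TOP)
        dsu.union((L - 1) * L + c, BOT)
    for r in range(L):
        for c in range(L):
            if c + 1 < L and rng.random() < p_occ:
                dsu.union(r * L + c, r * L + c + 1)
            if r + 1 < L and rng.random() < p_occ:
                dsu.union(r * L + c, (r + 1) * L + c)
    return dsu.find(TOP) == dsu.find(BOT)
```

In the language of the mapping above, occupied bonds are the unbroken ones (probability $1 - p$), so the circuit falls apart into disconnected pieces precisely when the occupied bonds fail to percolate.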
At $p = p_c$, the clusters of broken bonds have a scale-invariant fractal structure. A key point is that adjacent clusters of some large size, of order $l$, approach each other to a distance of one lattice spacing (in fact, clusters that approach each other to unit separation do so at a large number of different places, of order $l^{3/4}$ [58]). Consequently, the minimal cut can pass from one large "empty chamber" to an adjacent one, of similar scale, for unit cost.
At the top of the network, where the cut is anchored, the minimal path typically passes into an empty chamber of size $O(1)$. It then passes through a sequence of larger chambers until it reaches one of size $O(t)$ and exits the system. Typically, at each step, the path can find a chamber that is larger by an $O(1)$ factor than the previous one. This progression is illustrated schematically in Fig. 8. Therefore, the total number of chambers in the sequence is only of order $\ln t$, and

$S_0(t) \simeq A \ln t$.  (8)

This logarithmic scaling for a critical first passage is proved in Ref. [54]. The coefficient $A$, which can be thought of as an entanglement per scale, is universal as a result of the scale invariance of the process. Below, we estimate $A = 0.27(1)$. The typical size $\xi$ of the empty chambers is finite for $p \lesssim p_c$, but it diverges with the correlation length exponent $\nu$ as $p_c$ is approached:

$\xi \sim |p - p_c|^{-\nu}$.  (9)

FIG. 7. The minimal cut at $p > p_c$ (compare Fig. 6). The clusters of broken bonds now percolate, forming the infinite white region. The cost of the minimal cut remains finite as $t \to \infty$: Only a finite number of unbroken bonds need to be traversed to reach the infinite white cluster. Therefore, $S_0(t) \sim t^0$.

FIG. 8. The minimal cut at $p = p_c$ (compare Figs. 6 and 7). The minimal cut passes through the sequence of white domains shown in blue and white. Writing the linear sizes of consecutive domains in this sequence as $R_1, R_2, \ldots$, the ratio $R_{i+1}/R_i$ is typically larger than 1 (see text), so for an $i$ of order $\log t$, the minimal cut reaches a cluster of $O(t)$ size that borders the boundary and the sequence ends. Domains $i$ and $i+1$ typically approach each other to within one lattice spacing, so the cost scales as the number of domains in the sequence.
For classical percolation in two dimensions, the critical exponent $\nu = 4/3$. The part of the minimal cut within $\xi$ of the top of the sample costs $A \ln \xi$, as above. At farther distances from the starting point, the path travels through chambers with size of order $\xi$. Each chamber-to-chamber crossing involves passing through an order-unity number of unbroken bonds, and consequently, the minimal cut passes through a total number $S_0 \sim t/\xi$ of unbroken bonds; thus,

$S_0(t) \sim t/\xi$ for $p < p_c$.  (10)

In other words, the entanglement $S_0$ at $p < p_c$ grows without bound as a function of time, with a growth rate that vanishes as $p \to p_c$. At $p \gtrsim p_c$, the minimal cut escapes to the infinite cluster of broken bonds after a section of length $O(\xi)$ at the top of the lattice. On scales smaller than $\xi$, the critical scaling applies, giving a cost $A \ln \xi$, so that

$S_0(t \to \infty) \simeq A \ln \xi$ for $p > p_c$.  (11)

The above results are all limiting cases of the general scaling form

$S_0(t, p) = A \ln t + F(t/\xi)$,  (12)

with a different branch of $F$ on each side of the transition. This form can be obtained by imagining rescaling both $t$ and $\xi$ by a constant factor and repeating the considerations above. The asymptotics of $F$ for large and small arguments are determined by Eqs. (10) and (11). Generalizations of Eq. (12) will be useful to us below in extracting exponents numerically. This sort of scaling form (with length in place of time) has also been derived for the scaling of higher Rényi entropies in critical random tensor network states [28], by a different kind of reasoning. The scaling results of Eqs. (8)-(12) can be easily generalized to the case where the system size is finite and the time $t \gg L$. In this case, the minimal cut travels a total horizontal distance $L/2$, moving from the top center of the lattice to one of the lateral edges. The above results hold if we replace $t$ with $L$ in Eqs. (8)-(12), except that the scaling function $F$ in Eq. (12) is different. If $t$ and $L$ are both finite, the scaling form has an additional dependence on $t/L$.
In general, a nonuniversal speed v would enter, but here it is fixed to unity by the symmetry of the square lattice between timelike and spacelike directions. We comment further on the symmetry between t and L in the following section.
The scaling expectations above can be tested using our numerical simulations. First, Fig. 9(a) shows S_0(t) for a range of values of p (in a system of fixed aspect ratio, L = 4t). As expected, S_0 grows linearly in t when p < 1/2 and remains constant in the limit of large t at p > 1/2. The inset shows that our data are also consistent with the logarithmic dependence S_0 ∝ ln t at p = 1/2.
In the entangling phase, we may define an asymptotic entanglement growth rate (for a quench from an area-law state), as well as an entropy density s_0 associated with the volume-law entanglement after saturation. The latter is given by

s_0 = lim_{t→∞} S_0(t)/(L/2). (13)

The growth rate is given by

dS_0/dt = v_0 s_0, (14)

where (by definition) v_0 is the "entanglement speed" for the zeroth Rényi entropy. The quantities v_0 s_0 and s_0 are plotted as functions of p in Figs. 9(b) and 9(c). Both vanish as p → p_c, consistent with Eq. (10) (and the analogous equation for the opposite regime of aspect ratio), from which we expect

s_0 ∼ ξ^{-1} ∼ (p_c − p)^ν (15)

for the entropy density at p ≲ p_c. The equivalence between v_0 s_0 and s_0 as functions of p [seen in Figs. 9(b) and 9(c)] suggests an entanglement velocity v_0 ∼ 1, which is implied by the symmetry of the lattice. More generally, in the entangling phase, one can understand the coarse-grained dynamics of the entanglement using a coarse-grained minimal membrane picture similar to that in the unitary case [9,26,59]. This picture applies to more general initial states, more general choices of subsystem, and general dimensionality.
Let us now consider how to make use of the scaling form, Eq. (12), to extract p_c and ν in numerics. In a system of fixed aspect ratio (we now take t = 4L), we have the scaling form S_0(t, p) = A ln L + G(L/ξ), as noted above. This means that if we subtract S_0(t, p_c) from S_0(t, p), we obtain a pure scaling function, without the logarithmic term:

S_0(t, p) − S_0(t, p_c) = G̃[(p − p_c) L^{1/ν}]. (16)

We should therefore see a scaling collapse if we plot the left-hand side as a function of (p − p_c)L^{1/ν}. Such a collapse is demonstrated in Fig. 10.
In the present case, p_c and ν are known analytically, but this kind of scaling collapse will be useful in the next section, where the analogous quantities must be determined empirically. To get an idea of the precision of this process, one can also attempt an unbiased search for the values of p_c and ν that produce the best scaling collapse of the data. Our algorithm for this search is described in Appendix A. Briefly, it seeks to minimize an objective function equal to the sum of the squared residuals of all curves S_0(p, L) − S_0(p_c, L) from their common mean at a given value of (p − p_c)L^{1/ν}, summed over all unique values of (p − p_c)L^{1/ν} present in the data set. A simple gradient descent search returns the values of p_c and ν that minimize this objective function.
Performing such a search using the data in Fig. 10(a) yields p_c = 0.51 ± 0.01 and ν = 1.24 ± 0.13, as listed in Table I. Closely matching results are obtained if one analyzes data for S_0 obtained from the "quantum" simulations of the random unitary circuit or Floquet dynamics, which are restricted to smaller sizes. These scaling collapses are shown below in Fig. 14 and Table I, and all are consistent with the exact values p_c = 1/2 and ν = 4/3 for two-dimensional percolation. In Sec. IV, we apply our algorithm for data collapse to the higher-order Rényi entropies S_1, S_2, and S_∞ near the generic transition, where no such exact values are available.

B. Spatial correlations and long-range entanglement
The critical stationary state at p_c has power-law spatial correlations that reflect its unusual scale-invariant entanglement structure. The simplest set of measures of these correlations is the set of Rényi mutual informations I_n between a pair of distant spins a and b (Sec. II):

I_n(a, b) = S_n(a) + S_n(b) − S_n(a ∪ b). (17)

In this section, we discuss I_0, which is critical at the percolation critical point. The quantities I_n with higher n are instead critical at the generic entanglement transition, which occurs at smaller p. Note that I_0 can be computed numerically using minimal-cut configurations with three different boundary conditions, corresponding to the three terms in Eq. (17).
Let I_0(x) be the mutual information for a pair of spins separated by a distance x, averaged over realizations. For simplicity, consider an infinite spin chain that has been evolved for infinite time and is thus in the steady state corresponding to a particular p. Note first that I_0(x) decays exponentially with x on both sides of the critical point. The disentangled state is "close" to a product state, and connected correlations fall off rapidly with distance. In the entangling phase, subsystems are strongly entangled with their exterior, but the mutual information shared between any two small subsystems is negligible: The average of S_0(a ∪ b) is exponentially close to that of S_0(a) + S_0(b). For example, consider p = 0: The reduced density matrices thermalize to "infinite temperature," and all local correlations vanish. However, at the critical point, neither of these mechanisms destroys correlations.
The mutual information for a pair of spins separated by distance x, I_0(x), is plotted in Fig. 11. (We use a chain of length L = 3x that has been evolved for a time t = L, rather than an infinite system, but this does not change the exponent below.) As expected, there is exponential decay for p > p_c and p < p_c. But at p = p_c, the data at large distances are consistent with

I_0(x) ∼ x^{-4}. (18)

This exponent is exact and follows from known results for percolation [60][61][62], as discussed next. Our percolation mapping also gives an intuitive picture for the scale-invariant entanglement structure underlying Eq. (18). In more detail, there are three possible values for the mutual information in a given realization: I_0(x) = 0, 1, 2. At large x, almost all configurations give I_0(x) = 0. The probability of either of the nonzero values is O(x^{-4}). Situations that give finite I_0 are like those shown in Fig. 12. It is simplest to consider cases where I_0 = 2. These occur when the topology of the percolation configuration is as shown in Fig. 12 (upper panel): There is a percolating cluster of unbroken bonds connecting the two spins, and this cluster does not touch the boundary elsewhere. The probability of such a configuration scales as x^{-2Δ}, with Δ = 2 [60][61][62][63]. A configuration with I_0 = 1 can be obtained by adding a narrow bridge of occupied bonds, as in Fig. 12 (lower panel): One can argue [64] that the probability of such a configuration also scales as x^{-2Δ}. This idea is borne out by our numerics: The ratio of the probabilities of the two values of I_0 tends to an O(1) constant as x → ∞.

IV. GENERIC DYNAMICAL TRANSITION
For the entropies S_n with n > 0, there is no mapping to classical percolation, at least in a generic model. Nonetheless, we find that many qualitative features of the toy model carry over to the physical entanglement dynamics, including the existence of a finite threshold p_c separating entangling and disentangling phases, and a nontrivial scale-invariant state at p_c. We show below that the threshold p_c is lower than that in the toy model: For example, in the random unitary circuit, we find p_c = 0.26 ± 0.08, compared to p_c = 1/2 in the toy model. Therefore, there is a regime of measurement rate p_c < p < 1/2 in which S_0 grows linearly with time and the minimal-cut picture suggests volume-law entanglement, but the von Neumann and higher-order entropies, in fact, obey area-law scaling [65]. We also find an exponent ν that is different from 4/3, implying that the critical behavior for S_n with n ≥ 1 is in a universality class distinct from two-dimensional percolation.

[Captions to Figs. 11 and 12: In the former case, the cost of the minimal cut is S_0(a ∪ b) = 0, and in the latter case it is S_0(a ∪ b) = 1; i.e., the obstacle marked in the upper panel is of minimal width. In both cases, S_0(a) = S_0(b) = 1. The inset shows the same data plotted on a semilogarithmic scale; the dashed straight lines indicate exponential decay, which describes the curves at p ≠ p_c. All data correspond to system size L = 3x, with the two spins located at positions L/3 and 2L/3. Data are averaged over 3 × 10^6 random realizations, and the evolution time is t = L. Error bars indicate one standard deviation divided by the square root of the number of realizations.]

A. Dynamics of S n at the generic transition
Our main tool for studying the dynamics of the entanglement is a numerical simulation of the unitary circuit with measurements (we use the ITensor package to manipulate matrix product states [20]). Details of these simulations are provided in Ref. [9] for the case without measurements. We modify these simulations only by the stochastic application of projective measurements after each unitary operation. The outcome of each measurement is chosen randomly with a probability determined by the Born rule. For each value of p and each type of simulation dynamics (either random unitary or Floquet), results for the entanglement are averaged over many random realizations.
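As a concrete illustration of this protocol, the following is a minimal statevector sketch in Python with numpy (this is not the matrix-product-state code used for our actual simulations, and all function names are ours): brickwork Haar-random two-site unitaries interleaved with probability-p projective measurements, with the half-chain von Neumann entropy as the output.

```python
# Illustrative statevector simulation of a hybrid circuit:
# random two-site unitaries + stochastic Z-basis measurements.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim):
    """Haar-random unitary via QR decomposition of a Ginibre matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def apply_two_site(psi, u, i, L):
    """Apply a 4x4 unitary u to qubits (i, i+1) of an L-qubit state."""
    psi = psi.reshape((2,) * L)
    psi = np.moveaxis(psi, (i, i + 1), (0, 1)).reshape(4, -1)
    psi = u @ psi
    psi = psi.reshape((2, 2) + (2,) * (L - 2))
    return np.moveaxis(psi, (0, 1), (i, i + 1)).reshape(-1)

def measure(psi, i, L):
    """Projectively measure qubit i in the Z basis, outcome via the Born rule."""
    psi = np.moveaxis(psi.reshape((2,) * L), i, 0).reshape(2, -1)
    p0 = np.sum(np.abs(psi[0]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    proj = np.zeros_like(psi)
    proj[outcome] = psi[outcome]
    proj /= np.linalg.norm(proj)
    return np.moveaxis(proj.reshape((2,) * L), 0, i).reshape(-1)

def half_chain_entropy(psi, L):
    """Von Neumann entanglement entropy of the left half of the chain."""
    m = psi.reshape(2 ** (L // 2), -1)
    s = np.linalg.svd(m, compute_uv=False)
    lam = s[s > 1e-12] ** 2
    return -np.sum(lam * np.log(lam))

def run(L=8, p=0.1, steps=20):
    """Evolve a product state; each qubit is measured with probability p per layer."""
    psi = np.zeros(2 ** L, dtype=complex); psi[0] = 1.0  # product state
    for t in range(steps):
        for i in range(t % 2, L - 1, 2):  # brickwork layer of unitaries
            psi = apply_two_site(psi, haar_unitary(4), i, L)
        for i in range(L):  # stochastic projective measurements
            if rng.random() < p:
                psi = measure(psi, i, L)
    return half_chain_entropy(psi, L)
```

For p = 0 this reproduces purely unitary entanglement growth, while for p = 1 every layer collapses the chain to a product state; system sizes here are necessarily far smaller than those reachable with matrix product states.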
An example of our simulation results is plotted in Fig. 13(a), which shows the von Neumann entropy S_1 for the case of random unitary dynamics, as a function of p, for different system sizes. At small p, the value of S_1 depends strongly on system size, suggesting volume-law entanglement. At large p, however, data from different system sizes collapse onto a single value, suggesting area-law behavior.
We demonstrate that there is indeed a transition between two phases using a scaling collapse of the type described at the very end of Sec. III A. We showed in Sec. III that at the transition for the zeroth Rényi entropy, we have the scaling form S_0 = A ln t + F(ut/ξ, ut/L) (here, u is a nonuniversal speed, which in the lattice model of Sec. III is fixed to u = 1 by symmetry). Our results for the generic transition are consistent with the same scaling form for the higher entropies, S_n = A_n ln t + F_n(ut/ξ, ut/L). The value of p_c is different for the generic transition, as are universal constants such as the correlation length exponent ν; we also allow for dependence of A_n and F_n on the Rényi index n. We give evidence below that the dynamical critical exponent at the transition, z, is unity; thus, x and t behave the same way under rescaling, and our assumption that t/L is the appropriate scaling variable is justified.
This scaling form implies that, for a system of fixed aspect ratio, the difference S_n(L, p) − S_n(L, p_c) depends on L and p only through the combination (p − p_c)L^{1/ν}, which enables us to perform a numerical scaling collapse. The values of p_c and ν are estimated using the algorithm described in Sec. III A and detailed in Appendix A.
The resulting scaling collapse is shown in Fig. 13(b) for the von Neumann entropy in the case of random unitary dynamics. As noted in Table I, the values obtained are p_c = 0.26 ± 0.08 and ν = 2.01 ± 0.10.
The full set of scaling collapses, for both types of dynamics and for Rényi indices n = 1, 2, ∞ (and also for n = 0, which is in the different universality class discussed in the previous section), is shown in Fig. 14. In all cases, the scaling collapse is of a similar quality to that in Fig. 13(b). The corresponding values of p_c and ν are listed in Table I. For all the "physical" entropies (n = 1, 2, ∞), we find consistent p_c estimates for a given type of dynamics, and we find consistent ν estimates for both types of dynamics. An uncertainty-weighted average of all six measured values at n > 0 gives our best estimate of the correlation length exponent for the generic transition. The functional forms of the scaling functions at large positive and negative values of the scaling variable also appear compatible with what is implied by matching to linear growth of entanglement in the entangling phase and to saturation in the disentangling phase. For a given n = 1, 2, or ∞, the shape of the scaling function looks slightly different for the two dynamical protocols (Floquet and random unitary). However, note that the scaling function depends on the aspect ratio, which differs between the two cases (the aspect ratio should be thought of not as t/L but as ut/L, where u is a nonuniversal speed that can differ between the two protocols).
Our scaling analysis above assumes that the characteristic length scale ξ and the characteristic timescale τ diverge with the same power of |p − p_c|, rather than behaving as ξ ∼ |p − p_c|^{−ν_L} and τ ∼ |p − p_c|^{−ν_t} with distinct ν_L and ν_t, and hence a dynamical exponent z = ν_t/ν_L different from 1. Evidence for z = 1 comes from comparing results for the entanglement in the limits t → ∞ and L → ∞. We approximate these limits by extrapolating the data for random unitaries at fixed L as a function of t and then separately extrapolating the data at fixed t as a function of L. A separate scaling analysis is performed for each set of extrapolated data, and the resulting estimates for the critical exponents are ν_L = 1.99 ± 0.20 and ν_t = 2.00 ± 0.37. These are consistent with each other and with the other results in Table I.

B. Two-point spatial correlations
Equal-time correlations at the critical point can also be probed using the Rényi mutual information, as discussed in Sec. III B [66]. As in the classical case, we consider the quantity I_n(a, b), defined by Eq. (5), where the sets a and b each constitute a single spin, the two spins being separated by a spatial distance x.
[The quantity I_2(x) is directly related to the spin-spin correlation function, in a way described in Ref. [45].] In Fig. 15, we plot I_n(x) close to the critical point of the random unitary dynamics for n = 1, 2, and n → ∞. In order to minimize finite-size effects, we choose the system size to scale with x at large x, setting L = 2(x + 1) and t = L. The data correspond to system sizes L = 8, 16, 24, 32, and 40, and the corresponding separations are x = 3, 7, 11, 15, and 19. The data are consistent with power-law scaling at the critical point. For comparison, the black dashed line in Fig. 15 shows I_n ∝ x^{-4}, as in classical percolation. Away from the critical point, I_n(x) shows exponential decay: See the Fig. 15 inset, which shows data for p = 0.4, and the discussion in Sec. III B.

V. HIGHER DIMENSIONS
The mapping between S_0 and classical percolation implies that there is a transition in S_0 in any number of dimensions. In this section, we discuss the transition in higher dimensions, focusing, for simplicity, on S_0 only. In other words, we discuss the higher-dimensional version of our toy model.
The mapping of S_0 to the optimal cost of a cut generalizes directly from the 1+1D case. For example, Fig. 16 (left diagram) shows one choice of circuit geometry in 2+1D. This leads to a bond percolation problem on the lattice shown in the right panel. In such higher-dimensional situations, the minimal cut is a membrane whose cost is equal to the number of unbroken bonds that pierce it. To compute the zeroth entanglement entropy S_0(A) of a region A, one must find the minimal membrane separating the legs at the top in A from those in Ā.
As in the 1+1D case, there is a phase transition between an entangling phase at p < p_c and a disentangling phase at p > p_c. The disentangling phase has only area-law entanglement in the steady state, while the entangling phase has linear-in-time entanglement growth and volume-law entanglement in the steady state. A membrane picture for the case without measurements was introduced in Ref. [9] and applies similarly here. The coefficient of the volume law vanishes as p → p_c from below, as in 1+1D.
The transition is at the bond percolation threshold for the lattice shown [67]. Recall that, in our notation, p is the probability that a bond is broken. When p > p_c, the unbroken bonds do not percolate. It is then easy to see that the cost per unit area vanishes for a large minimal membrane, leading to area-law entanglement in the steady state. When p < p_c, the unbroken bonds percolate. As a result, the membrane must cross a number of unbroken bonds that scales with its surface area. This scaling leads to the properties of the entangling phase mentioned above.
One difference from 1+1D is that this mapping indicates that the critical state exactly at p_c is likely to show an area law for S_0 in spatial dimensions d > 1 [72]. This situation is reminiscent of ground states of higher-dimensional critical systems described by conformal field theories, which show area-law entanglement, in contrast to the 1+1D case, where they show logarithmic entanglement [74]. Here, the dynamical transition is between area-law and volume-law phases, so it is unlike a ground-state phase transition, which is between area-law phases.

VI. DISCUSSION

A. Summary
This paper has presented a new dynamical phase transition in the structure of evolving quantum wave functions. The generic phase diagram as a function of the measurement rate p per degree of freedom is proposed in Fig. 1, with an "entangling" phase at low p and a "disentangling" phase at large p. The existence of a critical measurement rate p_c means that one can induce a scale-invariant entanglement structure simply by tuning the frequency of measurements, without any fine-tuning of the Hamiltonian.
It should be emphasized, however, that in order to observe the distinction between the two phases, it is essential to consider histories of the system with particular outcomes of the individual measurements, rather than simply averaging the density matrix over all possible measurement outcomes. The transition is not apparent in this averaged density matrix, which, for any measurement rate p > 0, will generically just tend to the infinite-temperature density matrix. We comment on possible practical implications of our results in Sec. VI B below.
FIG. 16. A 2+1D analogue of the correspondence in Fig. 4. Left diagram: A simple choice of circuit geometry in 2+1D. Some unitaries (bricks) are hidden in the figure, so as to expose just six in each layer. Right diagram: Each unitary corresponds to a node of the four-coordinated lattice shown. Measurements break bonds of this lattice. The minimal cut (not shown) is a two-dimensional sheet whose cost is the number of unbroken bonds it bisects.

It is perhaps instructive to comment on the failure of a naive argument that would seemingly imply area-law entanglement at all p > 0. Consider the 1+1D case. In the absence of measurements, entanglement is produced at a nonzero rate. It is tempting to imagine that when the entanglement S between two (say, semi-infinite) subsystems is large, there is an associated length scale of order l ∼ S, such that measuring a spin within a distance of order l of the division between the subsystems eliminates an O(1) amount of entanglement, while measuring spins outside this region has a negligible effect. This reasoning would suggest that the steady-state entanglement is determined by balancing the O(1) production rate against a decay rate of order pl ∼ pS given by the total rate for measurements in this region. This balance would yield area-law entanglement (of order 1/p) even for small p.
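Written out, the naive balance just described amounts to a linear rate equation (b and c are nonuniversal O(1) constants introduced here only for illustration):

```latex
\frac{dS}{dt} \;\approx\; c - b\,p\,S
\qquad\Longrightarrow\qquad
S_{\text{steady}} \;=\; \frac{c}{b\,p} \;\sim\; \frac{1}{p} ,
```

i.e., a steady-state entanglement independent of subsystem size. This is why the argument, if it were correct, would forbid a volume-law phase at any p > 0.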
The failure of this argument can be seen by thinking about the entanglement using the minimal-cut picture. Consider, for example, the effect of a single measurement within the l-sized region. When the system size L → ∞ and the time t is finite, this measurement typically has a very small effect on the entanglement, because it results in a broken bond (at the final time) that is far from the path of the optimal cut, which is the shortest path to the initial-time boundary. The situation is different at asymptotically late times in a finite system; the minimal cut then goes sideways to the spatial boundary and may take advantage of the broken bond. In this regime, measurements (in the smaller subsystem) have an O(1) effect on the entanglement.

B. Implications for simulations of quantum systems
Our results may have useful implications for numerical simulations of many-body quantum systems. In particular, the entangling-disentangling transition is likely to be relevant to situations where we need to simulate evolutions (of pure quantum states) that involve measurements or effective measurements.
First, imagine that we wish to find the state at time t, denoted |Ψ^(o_1, …, o_N)⟩, given knowledge of the outcomes o_1, o_2, …, o_N of the sequence of earlier measurements. We assume that the Hamiltonian and initial state are also known and that the number of measurements is extensive in both space and time. In 1+1D, matrix product methods may be used for the evolution (in principle, similar considerations arise in higher dimensions). The computational difficulty of these methods depends on the amount of entanglement across cuts that divide the system into two parts [29,75]. Therefore, which side of the entangling-disentangling transition the system is on becomes crucial. In the entangling phase (fewer measurements), a matrix product representation requires a bond dimension that scales exponentially with the system size, and simulations quickly become infeasible with growing system size. On the other hand, in the disentangling phase, where the von Neumann entropy saturates to a finite value, we can expect rapid convergence as a function of bond dimension, so the dynamics is computationally tractable.
A natural context for such problems is the simulation of open quantum systems [76]. The dynamics of open systems is effectively nonunitary due to interaction with the environment. For example, systems of cold atoms or molecules may exchange photons with the environment or may be subject to environmental noise that must be averaged over. Formally, one can think of the environment as measuring the internal quantum states of the particles, except that the outcomes of these measurements are not known and should be averaged over, yielding a mixed state. A direct calculation of the time dependence of quantum expectation values would require one to numerically evolve the full density matrix, which can be prohibitively difficult since the number of elements in the density matrix scales as the square of the Hilbert space dimension.
Because of this limitation, an important tool for calculations in open systems is the method of "quantum trajectories," or "quantum jumps" [77][78][79][80][81][82][83][84][85][86][87] (see Refs. [88,89] for reviews). In this method, one calculates the evolution of one single pure state at a time, choosing at each instance of "measurement" a single random outcome. The time dependence of an observable's expectation value may then be found by computing the expectation value in each simulated pure state and then averaging over many such random trajectories. In the conceptually simplest case, the effective dynamics of the pure state involves only unitary evolution and measurements. [More generally, additional non-Hermitian terms are required in the Hamiltonian (see, e.g., Refs. [88,89]); the effect of such terms on the entanglement structure deserves further study.] As noted above, the computational difficulty of such calculations is determined by the amount of entanglement in individual trajectories [90]. In this paper, we have shown that there is a sharp phase transition between different regimes, implying a well-defined easy-to-hard transition for such numerical simulations as a function of the rate of dissipation to the environment. Interestingly, the logarithmic growth of the entanglement at the critical point implies a power-law scaling of computational difficulty; this power-law scaling is also advantageous for numerical investigations of the entanglement phase transition.
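For concreteness, here is a minimal sketch (in Python with numpy; the model and function names are ours, chosen purely for illustration) of the quantum-jump method in the simplest possible setting, a single qubit decaying at rate γ: each trajectory interleaves non-Hermitian no-jump evolution with stochastic collapses, and the trajectory average of the excited-state population reproduces the master-equation result e^{−γt}.

```python
# Quantum-jump (quantum trajectory) unraveling of single-qubit amplitude damping.
import numpy as np

rng = np.random.default_rng(1)

def trajectory(gamma, dt, steps):
    """One quantum-jump trajectory of a decaying qubit, starting in |1>."""
    c0, c1 = 0.0 + 0j, 1.0 + 0j
    pops = []
    for _ in range(steps):
        p_jump = gamma * dt * abs(c1) ** 2  # jump probability in this time step
        if rng.random() < p_jump:
            c0, c1 = 1.0 + 0j, 0.0 + 0j     # "photon emitted": collapse to |0>
        else:
            c1 *= 1.0 - 0.5 * gamma * dt    # no-jump non-Hermitian evolution
            norm = np.sqrt(abs(c0) ** 2 + abs(c1) ** 2)
            c0, c1 = c0 / norm, c1 / norm   # renormalize the pure state
        pops.append(abs(c1) ** 2)
    return np.array(pops)

def averaged_population(gamma=1.0, dt=0.01, steps=300, n_traj=4000):
    """Trajectory average; approaches the master-equation result exp(-gamma t)."""
    return np.mean([trajectory(gamma, dt, steps) for _ in range(n_traj)], axis=0)
```

The many-body calculations discussed in this section have the same structure, with the two-component pure state replaced by a matrix product state and the jumps by local measurements.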
This phenomenology of different regimes of entanglement and corresponding computational difficulty is consistent with numerical simulations presented in Ref. [97]. The authors of Ref. [97] studied the time evolution of the entanglement in quantum trajectories of the Bose-Hubbard model at different rates of dissipation to the environment. When the dissipation was low, the entanglement was seen to grow as a function of time before saturating at a system-size-dependent value. At high dissipation, on the other hand, the entanglement quickly saturated to a small, finite value. The authors also found that the quantum trajectory method becomes more efficient computationally when the dissipation is high.
A scaling analysis of how the entangling-disentangling transition affects computational hardness might be interesting. This analysis may involve studying the critical scaling of the Rényi entropies S_n with 0 < n < 1, which are important in the analysis of convergence of matrix-product-state algorithms [29].

C. Universality classes
In this paper, we have studied a toy model for the entangling-disentangling transition as well as what we propose is the generic version of this transition. While similar in many respects, these problems are in different universality classes. In certain limits, it should be possible to study a crossover between these universality classes.
In principle, studying the entropies S n as a function of n with 0 < n < 1 might reveal such a crossover. However, this does not give an obvious starting point for an analytical treatment.
A more promising direction may be to attempt to expand around a limit of large local Hilbert space dimension. We have focused our discussion on chains or lattices of spin-1/2. However, the random-unitary-dynamics-plus-measurement protocol we have discussed may be generalized to "spins" with local Hilbert space dimension q ≥ 2. If the minimal-cut formula holds exactly in the limit q → ∞ for random unitary dynamics plus measurements, as it does for random unitary dynamics at infinite q [9,18] and for tensor networks with infinite bond dimension [28], then taking q → ∞ is an alternative way to obtain the effective classical optimization problem of Sec. III, this time for all S_n and not just S_0 [98]. Therefore, in the regime 1 ≪ q < ∞, we expect to be able to probe a crossover between the toy-model universality class of Sec. III and the generic universality class of Sec. IV.
When q is large but finite, there will then be a large crossover length scale ξ*, with the universal properties of the classical optimization problem visible at smaller scales and those of the generic universality class visible at larger scales [99]. Following the flow to scales greater than or of order ξ* is likely to be hard, but it should be possible to understand the initial instability. Previous results for unitary circuits and tensor networks [18,25,27,28,32] are suggestive of a possible mechanism. In simple limits, those mappings involve an effective statistical mechanics of domain walls that have both "energy" and configurational "entropy" (not to be confused with the physical entanglement entropy). The crossover is likely to occur when configurational entropy becomes comparable with energy for these domain walls. For q ≫ 1, energy dominates and is given by ln q times the length of the minimal cut. However, entropy becomes significant at large scales l. Note that a segment of domain wall of minimal cost (ln q) connecting two adjacent white clusters of scale l (Fig. 8) has O(l^{3/4}) choices for the connecting bond. This scaling suggests that we should compare ln q with ln l^{3/4}, suggesting the crossover length scale ξ* ∼ q^{4/3}. This conjecture, however, demands a proper calculation.
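The crossover estimate above amounts to equating the domain-wall energy per segment with the configurational entropy at the scale ξ*:

```latex
\underbrace{\ln q}_{\text{energy per segment}}
\;\sim\;
\underbrace{\tfrac{3}{4}\,\ln \xi^{*}}_{\text{entropy, } \ln O(l^{3/4})}
\qquad\Longrightarrow\qquad
\xi^{*} \;\sim\; q^{4/3} .
```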
A natural question, which we have not addressed here, is whether there is a field theory description of the renormalization-group fixed point governing the entangling-disentangling transition. A starting point may be the replica trick, which has been used in unitary circuits and random tensor networks [18,28] to construct mappings to (replica limits of) effective classical spin models in which the degrees of freedom are permutations [27,100].
Notably, Ref. [28] constructed a replica description of a "holographic" random tensor network state (projected entangled pair state) and used it to argue that there is a phase transition between area-law and volume-law phases of the tensor network state. While the transition itself is hard to address as a result of the replica limit, the assumption of a single transition described by a conformal field theory leads to logarithmic scaling of the entanglement with subsystem size and scaling forms analogous to Eq. (12). The replica description also gives a formal explanation for why all the Rényi entropies become simultaneously critical in the tensor network: They correspond to distinct correlation functions in the same conformal field theory [28].
Above, we found that the dynamical exponent for our dynamical transition is consistent with z ¼ 1, so it may be that the dynamical transition is governed by the same conformal field theory as the tensor network state [28], with one coordinate interpreted as time. This possibility is an interesting subject for future numerical simulations.

D. Outlook
The results we have presented here suggest a number of further directions that are worth exploring. Most directly, it would be useful to make a more detailed simulation study of the quantum problem. While we have pointed out that the quantum and classical problems are similar in the sense of having a scale-invariant critical point at a nonzero measurement rate, with qualitatively similar scaling forms, there are universal distinctions between them that are worth exploring. One starting point is the more detailed behavior of the mutual information I_n(x) as a function of separation x.
It would also be enlightening to test whether, as suggested in Sec. II, the universality class of Sec. IV applies even for models without any randomness in the choice of when and where to measure. We could obtain a convenient circuit model by fixing the locations of measurements and using the strength of the interaction in a Floquet unitary to drive the transition.
The classical problem may also be interesting to explore further, particularly in higher dimensions. In 2+1D, for example, finding the entanglement S_0 amounts to searching for a minimal surface, which is an interesting statistical mechanics problem. A numerical study might give insight into the entangling-disentangling transition in higher dimensions, where quantum numerics are challenging.
Finally, it will be interesting in the future to examine more subtle features of the evolving states. For example, in what respects is the entangling phase similar to unitary entangling dynamics, and in what respects is it dissimilar? (Note, for example, that while unitary dynamics does not decrease entropy, the entangling dynamics at 0 < p < p_c can reduce the entropy of a volume-law state of a finite system if its initial entropy density is higher than the steady-state value.) How is the membrane picture for entanglement growth [9,14,26] modified by weak breaking of unitarity? Finally, one could also ask how the complexity of the evolving state (the minimal number of local operations required to generate it from a product state [101][102][103][104][105][106][107][108][109][110][111][112]) behaves as the entangling-disentangling transition is traversed.

Note added. We would like to draw the reader's attention to two related parallel works, by Chan, Nandkishore, Pretko, and Smith [113], and by Li, Chen, and Fisher [114], which appeared recently.

APPENDIX A: SCALING ANALYSIS
In Sec. IV, we found estimates for the critical measurement rate p_c and the correlation length exponent ν by searching for a scaling collapse among the curves S_n(p, L) − S_n(p_c, L) as a function of the single variable (p − p_c)L^{1/ν}. Our algorithm for performing this search is as follows.
For given values of p_c and ν, one can define an objective function R(p_c, ν), which is to be minimized by the search procedure, as follows. First, we estimate the value S(p_c, L) for each system size L by linear interpolation between the closest points on either side of p_c. We then calculate the scaling variable x = (p − p_c)L^{1/ν} for each value of p and L in the data set. The result is a family of curves y_L(x), one for each system size, where y_L = S(p, L) − S(p_c, L). The objective function R is then defined as the sum of the squared deviations of each curve from their common mean, summed over all unique points x_i in the data set. In other words,

R(p_c, ν) = Σ_i Σ_L [y_L(x_i) − ȳ(x_i)]², (A1)

where y_L(x_i) indicates the value of y_L at the point x_i; if this value has not been calculated explicitly, it is estimated by linear interpolation. Here, ȳ(x) is the average of y_L(x) over all system sizes L. If the point x_i lies outside the range of values of x that have been simulated for some curve y_L(x), then that term is not included in the sum; in other words, when calculating the dispersion between curves at some value of x, only those curves for which there are data at the given x are taken into account. Given a set of simulation data and an estimate of p_c and ν, one can evaluate the objective function R(p_c, ν) numerically. We then search numerically for the values of p_c and ν that minimize it. This search is done using simple gradient descent and is implemented in MATLAB. Care was taken to begin the search at different initial guesses for p_c ∈ (0, 1) and ν ∈ (0, 10) to ensure that the solution found for each data set was globally optimal.
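A compact implementation of this objective function (our own is in MATLAB; the version below is an illustrative numpy transcription, to be minimized by whatever optimizer one prefers) is:

```python
# Collapse-quality objective R(p_c, nu): summed squared deviation of the
# shifted, rescaled curves y_L(x) from their pointwise common mean.
import numpy as np

def objective(p, curves, p_c, nu):
    """`p` is the sorted array of measurement rates; `curves` maps each
    system size L to the array S(p, L)."""
    # subtract the (interpolated) critical value S(p_c, L) from each curve
    shifted = {L: S - np.interp(p_c, p, S) for L, S in curves.items()}
    # scaling variable x = (p - p_c) L^{1/nu}, one array per system size
    xs = {L: (p - p_c) * L ** (1.0 / nu) for L in curves}
    x_all = np.unique(np.concatenate(list(xs.values())))
    R = 0.0
    for x in x_all:
        # include only curves whose simulated x range contains this point
        vals = [np.interp(x, xs[L], shifted[L])
                for L in curves if xs[L][0] <= x <= xs[L][-1]]
        if len(vals) > 1:
            vals = np.array(vals)
            R += np.sum((vals - vals.mean()) ** 2)
    return R
```

As a sanity check, synthetic curves generated from a known scaling function collapse to R ≈ 0 at the true (p_c, ν) and give a much larger R away from it.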
In order to estimate the uncertainty in our results for $p_c$ and $\nu$, we examine how the results change when the amount of data included in the scaling analysis is intentionally reduced. In particular, we iteratively run the scaling analysis for a variety of reduced data sets, in which all data corresponding to system sizes larger than some value $L_0 \leq L_{\max}$ have been removed. (Here, $L_{\max}$ denotes the maximum system size in the data.) The value of $L_0$ is varied from $L_{\max}/2$ to $L_{\max}$. A rough estimate for $p_c$ or $\nu$ in the limit $L_0 \to \infty$ can be made by fitting $p_c$ or $\nu$ linearly as a function of $1/L_0$ and taking the $y$-intercept. The uncertainty in $p_c$ or $\nu$ is estimated as the difference between its value at $L_0 = L_{\max}/2$ and the extrapolated value at $L_0 \to \infty$.
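The extrapolation and uncertainty estimate can be sketched as follows (an illustrative helper; the function name is ours):

```python
import numpy as np

def extrapolate_estimate(L0_vals, estimates):
    """Extrapolate p_c (or nu) to L0 -> infinity by a linear fit in 1/L0.
    The uncertainty is the difference between the smallest-L0 estimate
    (L0 = L_max/2 in the text) and the extrapolated y-intercept."""
    inv_L0 = 1.0 / np.asarray(L0_vals, dtype=float)
    estimates = np.asarray(estimates, dtype=float)
    slope, intercept = np.polyfit(inv_L0, estimates, 1)
    value = intercept                              # the 1/L0 -> 0 limit
    uncertainty = abs(estimates[np.argmax(inv_L0)] - value)
    return value, uncertainty
```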
The results of our search algorithm and the corresponding uncertainties are listed in Table I.

Classical percolation simulation
We use the following deterministic algorithm to find the minimal cut $S_0$ through a classical network, namely, the dual square lattice illustrated on the right-hand side of Fig. 4, starting at some origin site $o$ on the dual lattice. First, we define the adjacency matrix $A$ for sites on the dual lattice, whose elements are such that $A_{ij} = 1$ if site $i$ can be reached from site $j$ by a single step (regardless of whether that step traverses a broken or an unbroken bond), and $A_{ij} = 0$ otherwise. The matrix $A$ is entirely defined by the topology of the lattice and has no randomness. We also define the connectivity matrix $C$, which depends on the times and locations of the measurements, and is defined so that $C_{ij} = 1$ when the bond between $i$ and $j$ is broken and $C_{ij} = 0$ otherwise. Whether a site $j$ can be reached from a site $i$ without passing across an unbroken bond is determined by the "wetting matrix" $W$, defined by

$$W_{ij} = \min\left\{ \left[ (\mathbb{1} + C)^N \right]_{ij},\; 1 \right\},$$

where $N \sim Lt$ is the number of bonds in the network. In practice, one need only calculate the $N$th matrix power of $\mathbb{1} + C$.
Let $v$ be a vector indicating which sites are "wetted" by the percolation process. The algorithm to find the minimal cut begins with $v_o = 1$, where $o$ indicates the index of the origin site, and $v_i = 0$ for all other sites $i \neq o$. Let $g$ indicate the index of the goal site; when calculating $S_0$ for a group of spins that includes the beginning or end of the chain, $g$ represents either the lateral or the bottom boundary of the lattice and is adjacent to all sites along that boundary. If $(Wv)_g > 0$, then the goal site is wetted by the origin site without the need to cross any unbroken bonds, and $S_0 = 0$. Otherwise, one can use the following iterative process to find $S_0$: (1) Replace $v$ with a vector indicating the set of all wetted sites, $v \to Wv$. (2) Add to the set of wetted sites those adjacent sites that are not yet wetted, $v \to Av$. (3) If $v_g > 0$, then the goal site has been reached; if not, return to step 1. The minimal-cut cost $S_0$ is equal to the number of times that steps 1 and 2 must be repeated before $v_g > 0$.
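The procedure amounts to a breadth-first search in which broken bonds cost zero and unbroken bonds cost one. A minimal self-contained sketch for a generic graph is given below. One detail is our own choice: we also check the goal right after the wetting step, so that a path whose final hop crosses a broken bond is not charged an extra step.

```python
import numpy as np

def minimal_cut(A, C, origin, goal):
    """Minimal-cut cost S_0 via the alternating wet/step process.
    A: adjacency matrix of the lattice; C: connectivity matrix of
    broken bonds; origin, goal: site indices."""
    n = A.shape[0]
    # Wetting matrix W: threshold of (1 + C)^N, built by repeated squaring.
    W = np.minimum((C != 0).astype(int) + np.eye(n, dtype=int), 1)
    k = 1
    while k < n:
        W = np.minimum(W @ W, 1)
        k *= 2
    v = np.zeros(n, dtype=int)
    v[origin] = 1
    cost = 0
    for _ in range(n):                      # bound in case goal is unreachable
        v = np.minimum(W @ v, 1)            # step 1: wet all free-reachable sites
        if v[goal]:
            return cost
        v = np.minimum(A @ v + v, 1)        # step 2: one step across any bond
        cost += 1
        if v[goal]:
            return cost
    return None
```

On a four-site chain with the two end bonds broken, for example, the only unbroken bond in the middle must be crossed once, so the cut cost is 1.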

Quantum simulation
To simulate the quantum-state evolution, we use exact time evolution of a matrix product state. The matrix product state is manipulated using the ITensor library [20]. The unitary circuit is divided into two-site unitary operations acting on pairs of adjacent spins, as shown in Fig. 3 and described in Ref. [9]. The two-site gates are applied, first over all odd-numbered bonds (where the first bond on the left is numbered 1) and then over the even bonds.
As described in Sec. II, we consider two evolution protocols. The first, random unitary dynamics, uses two-site random Haar gates. Each gate is produced by drawing four vectors whose components are Gaussian-distributed complex random numbers; orthonormalizing these vectors with the Gram-Schmidt procedure yields the desired Haar-random unitary.
The Floquet dynamics, on the other hand, has no inherent randomness except for the locations and outcomes of the measurements. Here, we apply the two-site gate, Eq. (1), in the same order as in the random dynamics.
After a unitary is applied to a pair of spins, each of the two spins may be measured, each with probability $p$. If a spin is measured, the outcome probabilities $p_\uparrow = |\langle \uparrow | \Psi(t) \rangle|^2$ and $p_\downarrow = 1 - p_\uparrow$ are determined. The state is then projected into a well-defined spin state by applying the operator $(1 \pm \sigma^z)/2$, with probability $p_\uparrow$ for the up state and $p_\downarrow$ for the down state, and then renormalized.
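The projective measurement step can be sketched on a plain state vector (the paper itself works with a matrix product state; this simplified version is only meant to show the Born-rule sampling, projection, and renormalization):

```python
import numpy as np

def measure_spin(psi, site, L, rng):
    """Projectively measure sigma^z on one spin of a normalized L-spin
    state vector: sample the outcome with Born probabilities, apply the
    projector (1 +/- sigma^z)/2, and renormalize."""
    psi = psi.reshape([2] * L)
    up = np.take(psi, 0, axis=site)         # amplitudes with this spin up
    p_up = np.vdot(up, up).real             # p_up = ||P_up psi||^2
    outcome = 0 if rng.random() < p_up else 1
    proj = np.zeros_like(psi)
    idx = [slice(None)] * L
    idx[site] = outcome
    proj[tuple(idx)] = psi[tuple(idx)]      # apply the projector
    proj /= np.linalg.norm(proj)            # renormalize
    return proj.reshape(-1), outcome
```

Measuring one spin of a Bell pair, for instance, collapses the full state to a product state.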
For systems with an even number of bonds, the entropy is computed each time a full double layer of unitaries has been applied (i.e., after the unitaries are applied to the even-numbered bonds). For systems with an odd number of bonds, we instead compute the entropy after each single layer of unitaries is applied to the odd-numbered bonds.
To compute $S_0$, we use the fact that ITensor naturally minimizes the number of eigenvalues by discarding any redundant bond dimension. No truncation threshold is set for the number of eigenvalues or their size (thus, they are cut off only by the numerical precision). For that reason, a certain number of spurious eigenvalues, with values on the order of the numerical precision, are, in general, retained, producing a systematic (positive) error in the calculation of $S_0$. In practice, however, we find that this systematic shift of the quantum simulation relative to the classical simulation is never larger than a few percent at any value of the simulation parameters that we examined. Extending the simulation to much longer times or system sizes may increase the numerical inaccuracy of $S_0$. We note, however, that the higher-order Rényi entropies do not suffer from this source of inaccuracy, since small eigenvalues contribute very little to them.
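In a plain state-vector setting, the same quantity is the logarithm of the number of nonzero Schmidt values across the bipartition. The sketch below counts singular values above a small cutoff; the cutoff value `eps` is an illustrative stand-in for the numerical-precision floor discussed above.

```python
import numpy as np

def renyi_zero(psi, L, cut, eps=1e-12):
    """S_0 = log(number of Schmidt values above eps) for the bipartition
    of an L-spin state vector at bond `cut`."""
    M = psi.reshape(2 ** cut, 2 ** (L - cut))
    s = np.linalg.svd(M, compute_uv=False)
    return np.log(np.count_nonzero(s > eps))
```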
The typical number of realizations used to compute the entropy is 10 000. However, because of run-time constraints, we often used fewer realizations for longer system sizes and small values of p. For example, for t ¼ 24 and p ¼ 0.1, we used only 500 realizations, and this is reflected in the larger error bars. Also note that, throughout the paper, error bars indicate 1 standard deviation of the result, as determined by the distribution of the result over all realizations, divided by the square root of the number of realizations.
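The error-bar convention above is the standard error of the mean over realizations. A one-line helper makes the convention explicit (the sample-variance normalization `ddof=1` is our choice; the text does not specify it):

```python
import numpy as np

def error_bar(samples):
    """One standard deviation of the distribution over realizations,
    divided by the square root of the number of realizations."""
    s = np.asarray(samples, dtype=float)
    return s.std(ddof=1) / np.sqrt(s.size)
```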
To compute the mutual information $I_n(a, b)$ for spins $a$ and $b$, we need the entanglement entropy of two disconnected regions, which is not straightforwardly implemented in the ITensor library. We therefore decompose the reduced density matrix of the two spins into a sum of Pauli operators,

$$\rho_{a,b} = \frac{1}{4} \sum_{\alpha, \beta = 1, \ldots, 4} c_{\alpha\beta}\, \sigma^\alpha_a \sigma^\beta_b,$$

where $\sigma^{1,2,3}$ are the Pauli matrices and $\sigma^4$ is the identity, the coefficients are given by

$$c_{\alpha\beta} = \mathrm{Tr}\left[ \rho\, \sigma^\alpha_a \sigma^\beta_b \right],$$

and $\rho$ is the total density matrix. We note that here a much greater number of realizations is required (about 100 000), because the mean value of the mutual information is controlled by large values, which occur only rarely.
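The Pauli decomposition and the resulting mutual information can be checked on a small state vector. The sketch below is a plain state-vector illustration of the same decomposition (in the paper the coefficients are evaluated on a matrix product state); function names are ours, and $I_1 = S_a + S_b - S_{ab}$ is the von Neumann ($n = 1$) case.

```python
import numpy as np

# sigma^1..3 = Pauli matrices, sigma^4 = identity, as in the text.
PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex),
         np.eye(2, dtype=complex)]

def two_spin_rdm(psi, a, b, L):
    """rho_ab = (1/4) sum_{alpha,beta} c_{alpha,beta} sigma^alpha_a
    sigma^beta_b, with c_{alpha,beta} = Tr[rho sigma^alpha_a sigma^beta_b]
    evaluated as an expectation value in the pure state |psi>."""
    psi = psi.reshape([2] * L)
    rho_ab = np.zeros((4, 4), complex)
    for Sa in PAULI:
        for Sb in PAULI:
            # Apply sigma^alpha on spin a and sigma^beta on spin b.
            phi = np.moveaxis(np.tensordot(Sa, psi, axes=([1], [a])), 0, a)
            phi = np.moveaxis(np.tensordot(Sb, phi, axes=([1], [b])), 0, b)
            c = np.vdot(psi, phi)                 # Tr[rho sigma_a sigma_b]
            rho_ab += 0.25 * c * np.kron(Sa, Sb)
    return rho_ab

def mutual_info(rho_ab):
    """I_1(a,b) = S(rho_a) + S(rho_b) - S(rho_ab), von Neumann entropies."""
    def vn(r):
        w = np.linalg.eigvalsh(r)
        w = w[w > 1e-12]
        return -np.sum(w * np.log(w))
    r4 = rho_ab.reshape(2, 2, 2, 2)               # indices (a, b, a', b')
    rho_a = np.trace(r4, axis1=1, axis2=3)
    rho_b = np.trace(r4, axis1=0, axis2=2)
    return vn(rho_a) + vn(rho_b) - vn(rho_ab)
```

For a two-spin Bell state the decomposition reproduces the full density matrix, and the mutual information takes its maximal value $2\ln 2$.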