Complexity of Magnetization and Magnetic Simplification

We use the Complexity=Volume (CV) prescription to study the effect of a magnetic field on the computational complexity of states in the gauge theories dual to two different gravitational models. In one of these theories the complexity increases with the intensity of the magnetic field, while in the other a more interesting behavior is discovered, resulting in a phenomenon that we term magnetic simplification. The relevant difference between the two theories is that the content of the second includes a scalar operator with a non-vanishing vacuum expectation value. This leads us to conclude that the direct impact of the magnetic field is to increase the complexity of a state, but that it can indirectly lower it by diminishing the complexity associated to additional degrees of freedom when these do not vanish across the space. We additionally compare the results obtained working in the full ten-dimensional backgrounds and in their effective five-dimensional truncations, showing that the question of whether the uplift of the 5D extremal hypersurface or the extremal surface in 10D should be used in the CV prescription remains open.


I. INTRODUCTION
Recent studies on the emergence of spacetime, in the context of the AdS/CFT correspondence [1], have relied on the geometrization of quantum information quantities. Examples of this include the entanglement entropy, whose holographic dual is the area of an extremal surface in the bulk [2][3][4][5], the entanglement of purification, which is dual to the area of the minimal cross-section of the entanglement wedge [6], and the computational complexity [7][8][9][10][11] (or quantum circuit complexity), which is the main focus of this manuscript. Roughly speaking, the complexity C of a given state |ψ⟩ is the minimum number of quantum gates required to produce said state from a particular reference state |R⟩.
There are two main holographic candidates to be dual to the computational complexity of the gauge theory state (although recently it has been argued that there are many other possible candidates [12,13]). The first one is the "Complexity=Action" (CA) conjecture [9][10][11], which relates the complexity to the on-shell action of the gravitational theory evaluated in a bulk region known as the Wheeler-DeWitt patch. The second one is the "Complexity=Volume" (CV) conjecture [7,8], in which the complexity of the state is related to the volume of a certain extremal region in the bulk. More precisely, if we are interested in the computational complexity C of a given gauge theory state |ψ(τ)⟩ at a time τ, we can obtain it from the expression C(|ψ(τ)⟩) = max_Σ Vol(Σ)/(G_N L), where Σ is a codimension-1 hypersurface in the bulk that intersects the boundary at the timeslice τ, G_N is Newton's constant, and L is an arbitrary length scale which we will take to be the AdS radius. Much progress has been made to better understand both recipes for holographic complexity, including studies of the time evolution of the complexity [14][15][16][17][18][19] and its relation to the so-called Lloyd's bound [20], inspections of the UV divergences that arise in the bulk computation [21,22], the inclusion of quantum bulk effects [23], noncommutative gauge theories [24], the effects of the presence of a conformal anomaly [25], and many other works.
However, several aspects of the holographic complexity remain ambiguous. One of said aspects is the choice of the reference state |R⟩ in the definition of the CV conjecture (1). It would be natural to consider the vacuum |0⟩ as the reference state, but direct bulk computations show that in general C(|0⟩) is non-vanishing, so this is not such an obvious candidate. Nonetheless, it is possible to get a measure of the complexity of creating a given state |ψ⟩ from the vacuum |0⟩ by computing the so-called complexity of formation, which is a vacuum-subtracted version of the complexity when both states are defined at τ = 0. The holographic complexity of formation was first studied in [26]. In that work the authors investigated the additional complexity involved in forming an entangled thermofield double (TFD) state compared to preparing each of the two individual CFTs in their vacuum state. According to the holographic dictionary [27], the bulk dual to a TFD state is a two-sided eternal black hole. The authors of [26] computed the holographic complexity of AdS black holes in different dimensions and with different horizon topologies using both the CV and CA prescriptions. In all the cases studied there, C_F turned out to be positive or zero, never negative. In fact, one could argue that the complexity of formation needs to satisfy C_F ≥ 0, as the vacuum should be the "simplest" state in any theory, with the equality occurring only if |ψ⟩ = |0⟩. The positivity of C_F was investigated in particular spacetimes [28,29] and perturbations thereof [30,31], and it was proven to be true in [32] for the CV conjecture if the bulk geometry meets certain conditions.
More specifically, if the state |ψ⟩ is dual to an asymptotically AdS_{d+1} spacetime and the latter satisfies the Weak Curvature Condition (WCC), which in Einstein gravity is equivalent to the Weak Energy Condition (WEC), T_{μν} t^μ t^ν ≥ 0 for any timelike vector t^μ, with T_{μν} the stress-energy tensor of the theory, then the vacuum is the least CV-complex state. The WCC requirement is essential, as the examples found in [29,31] that give C_F < 0 also violate it. However, it was later shown in [33] that any solution to type IIB and eleven-dimensional SUGRA satisfies the WCC. In other words, even if the dimensional reduction of a specific asymptotically AdS_{d+1} × K spacetime over the compact manifold K violates the WCC, the inclusion of these directions restores it. While at first glance this may suggest that the positivity of C_F should always hold from the higher-dimensional point of view, this is not the case. The first point to consider is that there is no obvious and unique way to extend the CV prescription to include the compact directions. We have at the very least two natural generalizations:
1. The volume of the maximal hypersurface Σ_full in the full AdS_{d+1} × K spacetime.
2. The volume of the hypersurface Σ_up, defined in the full AdS_{d+1} × K spacetime as the uplift of the hypersurface Σ which has maximal volume in the asymptotically AdS_{d+1} part of the bulk.
Note that in general Σ_full ≠ Σ_up, as the uplifted hypersurface that was maximal in AdS_{d+1} does not need to still be maximal in AdS_{d+1} × K. Hence these two generalizations can in general yield different results. The positivity of the complexity of formation coming from these two candidates was studied in [33] in a special scenario where they coincide, with the conclusion that neither can reliably avoid negative values of C_F even when the WCC is satisfied from the higher-dimensional point of view. The reason is that the inclusion of the compact directions violates the assumption of asymptotically AdS_{d+1} boundary conditions used in the proof given in [32]. The gravitational asymptotically AdS_4 × S^7 backgrounds considered in [33] are all part of a consistent truncation of eleven-dimensional SUGRA over the S^7. The truncated theory features a tachyonic scalar field with mass above the Breitenlohner-Freedman (BF) bound, which causes the violation of the WCC (equivalent to the WEC in this case) from the lower-dimensional point of view. The complexity of formation coming from both the truncated theory and the full eleven-dimensional one turns out to be negative. Motivated by the findings in [33], here we study the positivity of the complexity of formation from both the lower- and higher-dimensional points of view in two particular gravitational models. We do this by analysing the vacuum-subtracted complexity when the states are defined not only at τ = 0, but at any finite time, a quantity that we term 'evolving complexity'. Both gravitational models come from the consistent truncation ansatz of SUGRA IIB solutions given in [34], making both optimal candidates to include the compact directions in the computation of the holographic complexity. The first of these is the D'Hoker and Kraus [35] model, which we will refer to as DK for short, dual to finite temperature SYM N = 4 in the presence of an external magnetic field.
The second one is the Ávila and Patiño [36] model, which we will refer to as AP for short, also dual to finite temperature SYM N = 4 in the presence of an external magnetic field, but with the addition of a nonzero vacuum expectation value (VEV) for a single trace scalar operator of scaling dimension equal to 2. An important feature of the AP model is that, at any given value for the source of the scalar operator, there exists a maximum magnetic field intensity B_c that the plasma can tolerate, becoming unstable for higher values. Below B_c there are two branches of solutions for any given magnetic field intensity, with one of them thermodynamically preferred over the other.
We will show below that in the DK model the uplift of the extremal hypersurface of the five-dimensional truncation is also extremal in the full ten-dimensional background, and therefore the two manners described above to include the compact directions are equivalent; furthermore, the result of applying the CV prescription in either the five- or ten-dimensional theories is the same up to a constant factor of no consequence. On the contrary, in the AP model the uplift of the extremal hypersurface in five dimensions is not only not extremal in ten dimensions, giving rise to the discussion about how to incorporate the compact space, but in addition the complexity obtained using the uplift in ten dimensions is not simply proportional to the one extracted from the five-dimensional truncation, providing no argument to prefer one strategy over the other.
Our results show that while in the case of the DK model the complexity of the state increases with the magnetic field, which is consistent with the intuition that it is harder to create a state with a finite B compared to one with a vanishing B, in the case of the AP model the story is not so simple. As could arguably be expected, the states in the thermodynamically unstable branch are less complex than the state without a magnetic field (but identical in every other aspect to any state of the branch). Surprisingly though, there are states in the stable branch that are less complex than the B = 0 state, occurring at a range of magnetic field intensities B_s < B < B_c close to the maximum that the background can bear. We call this phenomenon 'magnetic simplification', and in order to study it we introduce a vanishing-magnetic-field-subtracted version of the complexity, which we call 'complexity of magnetization', defined as the complexity associated to magnetizing a given state. Given that the main difference between the DK and AP models is the inclusion of the scalar operator with a non-vanishing VEV in the latter, this leads us to conclude that the direct impact of the magnetic field is to increase the complexity of a state, but that it can indirectly lower it by diminishing the complexity associated to additional degrees of freedom when these do not vanish across the space.
The manuscript is organized as follows. In Sec. II we review the construction of the DK and AP models, explaining how both are part of the same general truncation ansatz. In Sec. III we show how to compute the complexity by means of the CV prescription for both models from the five-dimensional perspective and present the numerical results, while in Sec. IV we do the same from the ten-dimensional point of view. We close by discussing our results in Sec. V. Some of the more technical details of our computations are contained in a series of appendices.

A. General truncation ansatz
The family of solutions that we consider in this work are part of the general truncation ansatz given in [34]. We consider solutions to ten-dimensional SUGRA IIB in which the metric and the self-dual five-form are the only fields that are turned on. Upon reduction, the five-dimensional fields are the metric g_μν, three Maxwell fields A^i and two scalar fields φ_j. The explicit form of the self-dual five-form is irrelevant for the present work, while the ten-dimensional line element is given by

ds²_10 = √Δ ds²_5 + (L²/√Δ) Σ_i X_i^{-1} [dμ_i² + μ_i² (dφ_i + A^i/L)²],

where ds²_5 is the line element of the truncated theory, L is a parameter with units of length that corresponds to the AdS_5 radius, the μ_i coefficients are given by

μ_1 = sin θ, μ_2 = cos θ sin ψ, μ_3 = cos θ cos ψ, (9)

the wrapping factor Δ by

Δ = Σ_i X_i μ_i², with X_i = e^{-(1/2) a_i · φ},

and the a_i must satisfy a_i · a_j = 4δ_ij − 4/3, which in particular implies X_1 X_2 X_3 = 1. We are using Hopf (toroidal) coordinates [37,38] on the compact directions, which means that 0 ≤ θ ≤ π/2, 0 ≤ ψ ≤ π/2 and 0 ≤ φ_i < 2π. Substitution of the reduction ansatz in the SUGRA IIB equations of motion gives five-dimensional equations of motion that can be derived from the five-dimensional effective action (14), consisting of an Einstein-Hilbert term, kinetic terms for the scalars and Maxwell fields, a scalar potential, and a Chern-Simons term. Next we list the solutions to (14) that we will study in this paper. In what follows we will set L = 1 without loss of generality.
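As a quick consistency check of this parametrization, the μ_i quoted above are direction cosines (they square-sum to one), and the wrapping factor, assuming the standard form Δ = Σ_i X_i μ_i² used in this class of truncations, reduces to 1 when the scalars are turned off. A minimal sympy sketch:

```python
import sympy as sp

theta, psi = sp.symbols('theta psi')
X1, X2, X3 = sp.symbols('X1 X2 X3', positive=True)

# direction cosines in Hopf (toroidal) coordinates, as quoted in the text
mu = [sp.sin(theta), sp.cos(theta) * sp.sin(psi), sp.cos(theta) * sp.cos(psi)]

# the mu_i square-sum to one, as embedding coordinates for S^5 must
constraint = sp.simplify(sum(m**2 for m in mu))
print(constraint)  # -> 1

# assumed standard form of the wrapping factor, Delta = sum_i X_i mu_i^2;
# it reduces to 1 when the scalars are turned off (X_1 = X_2 = X_3 = 1)
Delta = sum(X * m**2 for X, m in zip([X1, X2, X3], mu))
print(sp.simplify(Delta.subs({X1: 1, X2: 1, X3: 1})))  # -> 1
```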

B. Vacuum
The first solution is AdS_5 × S^5, which is dual to the vacuum state of SYM N = 4. As such, we will refer to it as the 'vacuum solution'. In this background all the scalar and Maxwell fields are turned off, while the ten-dimensional line element is taken to be ds²_10 = ds²_5 + dθ² + sin²θ dφ_1² + cos²θ dΩ_3².
In Hopf coordinates the S³ line element is written as dΩ_3² = dψ² + sin²ψ dφ_2² + cos²ψ dφ_3², while the Poincaré AdS_5 line element is ds²_5 = r²(−dt² + dx² + dy² + dz²) + dr²/r². We have chosen coordinates such that (t, x, y, z) are the SYM theory directions and r is the AdS_5 radial coordinate, with the boundary located at r → ∞.
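As a cross-check of these coordinates, one can verify symbolically that the θ-fibration over the S³ reassembles into the round S⁵ metric Σ_i (dμ_i² + μ_i² dφ_i²); the sketch below treats the differentials as formal symbols and assumes the Hopf form dΩ_3² = dψ² + sin²ψ dφ_2² + cos²ψ dφ_3²:

```python
import sympy as sp

theta, psi = sp.symbols('theta psi')
dtheta, dpsi, dp1, dp2, dp3 = sp.symbols('dtheta dpsi dphi1 dphi2 dphi3')

# direction cosines mu_i and their formal differentials in (theta, psi)
mu = [sp.sin(theta), sp.cos(theta) * sp.sin(psi), sp.cos(theta) * sp.cos(psi)]
dphi = [dp1, dp2, dp3]

def d(f):
    # formal exterior derivative restricted to the (theta, psi) directions
    return sp.diff(f, theta) * dtheta + sp.diff(f, psi) * dpsi

# round S^5 written as sum_i (dmu_i^2 + mu_i^2 dphi_i^2)
round_S5 = sum(d(m)**2 + m**2 * dp**2 for m, dp in zip(mu, dphi))

# S^5 as a theta-fibration over the Hopf S^3 (assumed form of dOmega_3^2)
S3 = dpsi**2 + sp.sin(psi)**2 * dp2**2 + sp.cos(psi)**2 * dp3**2
fibered = dtheta**2 + sp.sin(theta)**2 * dp1**2 + sp.cos(theta)**2 * S3

diff = sp.simplify(sp.expand(round_S5 - fibered))
print(diff)  # -> 0
```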

C. Finite temperature
Another important solution is the black D3-brane geometry, which is dual to SYM N = 4 at finite temperature T. Once again the scalar and Maxwell fields are turned off and the ten-dimensional line element is given by (16) and (17). The difference is that the five-dimensional line element is now that of the planar black D3-brane. This geometry features a black hole whose event horizon is located at r = r_h, and its temperature T dictates that of the quantum state as well.
We are not using the standard r̄ radial coordinate; instead we employ a scaled and translated version r defined by the relation r̄ = r + r_h/2. The reason is that the next two families of solutions are naturally written in this coordinate. Of course, (19) reduces to the vacuum solution (18) when we take T = 0.

D. DK model
Next is the family of solutions constructed by D'Hoker and Kraus [35], which we will refer to as the DK model for short. This model is dual to SYM N = 4 at a finite temperature T in the presence of a magnetic field B. It can be recovered from the general truncation ansatz by turning off both scalars, φ_1 = φ_2 = 0 (so that X_i = 1 and Δ = 1), and identifying the three Maxwell fields, A^1 = A^2 = A^3 ≡ A, so that the ten-dimensional line element takes the form ds²_10 = ds²_5 + dθ² + cos²θ dψ² + dΣ_3²(A). The presence of the Maxwell field A deforms the 3-sphere in such a way that a 3-cycle with line element dΣ_3²(A) = Σ_i μ_i² (dφ_i + A)² is obtained. On the other hand, the ansatz for the line element of the non-compact part of the spacetime is ds²_5 = −U(r) dt² + dr²/U(r) + e^{2V(r)} (dx² + dy²) + e^{2W(r)} dz², while the only Maxwell field of the truncation is taken so that its field strength is proportional to B dx ∧ dy. We are using the same coordinates as in the vacuum and black D3 solutions, and the construction is done so that, as in those cases, every element of this latter family of backgrounds features a horizon located at r_h, where the metric function U(r) vanishes. The metric asymptotes precisely to AdS_5 at the boundary r → ∞ for any B and T, with the former matching the magnetic field intensity in the dual gauge theory. Thus every member of the family is characterized by the values of its magnetic field intensity B and temperature T, which suggests labelling each solution by the dimensionless ratio B/T². However, the DK model features a conformal anomaly for any non-vanishing magnetic field intensity, which introduces another length scale at the quantum level on the gauge theory side. As a consequence, not all dimensionless physical observables are functions of B/T² alone. We have previously shown in [39] that, when computed by means of the CA conjecture, the holographic complexity is insensitive to the conformal anomaly in the DK model. We will investigate if this is also the case for the CV conjecture in the following sections. The only known analytical members of the DK model are the black D3-brane solution for B/T² = 0 and BTZ × R² for B/T² = ∞.
For any intermediate value of B/T² it is necessary to resort to numerical methods to solve the equations of motion. The explicit integration procedure that we follow is explained in detail in [40] for the solutions outside the event horizon and in [39,41] for the solutions inside the horizon.

E. AP model
Finally, we have the family of solutions constructed by Ávila and Patiño [36], which we will refer to as the AP model for short. This background is also dual to SYM N = 4 at finite temperature T in the presence of a magnetic field B, but with a non-vanishing VEV (which is a function of B and T) for a single trace scalar operator. The model is obtained from the general truncation ansatz by keeping a single Maxwell field A and a single scalar field ϕ, in terms of which the X_i, and hence the wrapping factor Δ, are determined. The ten-dimensional line element is then the warped product of ds²_5 with the compact directions, where the 3-cycle line element dΣ_3²(A) depends on the one Maxwell field of the truncation. On the other hand, the ansatz for the line element of the non-compact part of the spacetime is once again ds²_5 = −U(r) dt² + dr²/U(r) + e^{2V(r)} (dx² + dy²) + e^{2W(r)} dz², while the Maxwell field is taken so that its field strength is proportional to B dx ∧ dy, and the only scalar field of the truncation depends solely on the radial coordinate, ϕ = ϕ(r). Every element of the family features a black hole, with a horizon located at r = r_h where the metric function U(r) vanishes, and asymptotes to AdS_5 at the boundary r → ∞. Under these circumstances the magnetic field intensity B coincides with the one in the dual gauge theory. Given that the equations of motion coming from (14) are highly non-linear, their solution must be obtained numerically for any non-vanishing intensity of the magnetic field. The general integration procedure in the region outside the horizon is described in detail in [36], while for the inner region we describe it in App. A. Notably, the equations of motion require a non-constant scalar field ϕ(r) for any non-vanishing magnetic field, which means that the DK model cannot be recovered from the AP model for B other than zero, in which case both reduce to the black D3-brane.
The near-boundary behavior of the scalar field ϕ is ϕ(r) = [ψ_0 ln(r) + ϕ_0]/r² + ..., which means that it saturates the BF bound [42,43] and is dual to a single trace scalar operator O_ϕ of scaling dimension equal to 2. According to the holographic dictionary, ψ_0 is dual to the source of the operator and ϕ_0 to its vacuum expectation value ⟨O_ϕ⟩ [43]. From the gauge theory perspective, it makes sense to specify the source of the operator and then compute the vacuum expectation value that it generates in response to such a source. It was shown in [36] that for any given source ψ_0 there exists a critical magnetic field intensity B_c that the plasma can tolerate, becoming unstable for higher values. From the dual gravitational perspective, beyond this critical value B_c the geometries develop a naked singularity. Below B_c there exist two branches of solutions for any given B/T² that differ in the value that ⟨O_ϕ⟩/T² takes. One of these branches was shown to be thermodynamically preferred over the other [36]: the one with the higher value of ⟨O_ϕ⟩/T² corresponds to a state with negative specific heat, higher free energy and lower entropy than the other, showing that the solutions with smaller ⟨O_ϕ⟩/T² are thermodynamically preferred. Throughout this manuscript we will fix the source ψ_0 to zero, which means that the maximum magnetic field intensity that the background can bear is given by B_c/T² ≈ 11.24.
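The statement that a dimension-2 operator saturates the BF bound follows from the standard AdS mass-dimension relation; a quick symbolic check, assuming the textbook relation Δ(Δ − d) = m²L² with d = 4 for AdS_5:

```python
import sympy as sp

Delta = sp.symbols('Delta')
d = 4  # boundary dimension for AdS_5 (L = 1)

# textbook AdS/CFT relation between mass and scaling dimension:
# Delta (Delta - d) = m^2 L^2; at the BF bound m^2 = -d^2/4 = -4
roots = sp.solve(sp.Eq(Delta * (Delta - d), -4), Delta)
print(roots)  # -> [2], a double root: both falloffs scale as r^{-2},
              # which is why a logarithm appears in the expansion
```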
The original motivation for the AP model was to find a feasible way to easily add fundamental degrees of freedom by means of the embedding of D7-branes in the probe limit. This objective was achieved in [25,44], where it was shown that the interplay between the magnetic and scalar fields leads to a very interesting thermodynamic behavior for the fundamental matter. The two properties of the metric associated to (31) that permit an easy embedding of a D7-brane on it are that its components do not depend on the angular coordinate φ, and that the direction that the latter coordinate represents remains orthogonal to the rest of the spacetime. The inclusion of the scalar field ϕ was crucial for this to happen.
Finally, another important thing to note is that the AP model, just like the DK model, possesses a conformal anomaly for any B ≠ 0. We will investigate if this has any effect on the CV computation in the following sections.

A. CV computation
In this section we discuss how to compute the computational complexity for the two models described above when studied from the perspective of the truncated 5-dimensional theories. First we explain how to compute the complexity by means of the CV prescription in this class of bulk geometries. According to the CV conjecture, the computational complexity C of a given gauge theory state |ψ(τ)⟩ at time τ is given by the volume of the maximal codimension-one hypersurface Σ anchored at the time slice defined by t = τ at both the left and right boundaries. The concrete expression is C(|ψ(τ)⟩) = max_Σ Vol(Σ)/(G_N L), where G_N is Newton's constant and L is an arbitrary length scale which we will take to be the AdS radius.
In FIG. 1 we show an example of one of these hypersurfaces in the Penrose diagram for the class of geometries that we consider. The details of how to construct said Penrose diagram can be consulted in App. B. The line element of every geometry in both the DK and AP models can be written as ds² = −U(r) dt² + dr²/U(r) + e^{2V(r)} (dx² + dy²) + e^{2W(r)} dz². (38) In this coordinate system the desired codimension-one hypersurface Σ can be parameterized as x^μ(ξ^a), where μ runs across all five directions of the bulk and a runs across the four coordinates on the hypersurface. While this describes the most general embedding, the metric (38), being diagonal with elements that depend on the radial coordinate alone, allows us to choose the parametrization ξ^a = (r, x, y, z) and x^μ(ξ^a) = (t(r), r, x, y, z). With this choice the line element of the induced metric on the hypersurface Σ is given by ds²_Σ = [1/U(r) − U(r) t′(r)²] dr² + e^{2V(r)} (dx² + dy²) + e^{2W(r)} dz², where the prime denotes the derivative with respect to r, and the volume of Σ can be computed as Vol(Σ) = 2 V_x ∫_{r_m}^{r_∞} dr e^{2V+W} √(1/U − U t′²), (40) where we have factorized the volume V_x coming from the gauge theory spatial directions and the integration over r runs from the minimal radius r_m in the middle of the Penrose diagram (at t = 0) to a regulator at r_∞ near the boundary (hence the overall factor of 2). To obtain the precise result we will take the limit r_∞ → ∞ at the end of the calculation. According to the CV prescription we need to maximize the volume (40). Extremization yields the following equation of motion for t(r): d/dr [e^{2V+W} U t′/√(1/U − U t′²)] = 0, (41) which can be integrated to give t′(r) = E/(U √(E² + U e^{4V+2W})), (42) or equivalently E = −e^{2V+W} U t′/√(1/U − U t′²), (43) where E is a conserved quantity. The hypersurface Σ needs to connect the boundary on the left with the one on the right without developing a conical singularity in the middle of the Penrose diagram at r = r_m (see FIG. 1).
This is achieved by demanding that the derivative of t(r) diverge at r_m, which by means of (42) fixes the value of the constant E to E = √(−U(r_m)) e^{2V(r_m)+W(r_m)}. (44) For any given r_m there is only one solution with the constant E set by (44) that satisfies t(r_∞) = τ on both sides of the geometry, hence effectively we have E = E(τ) and r_m = r_m(τ). After substituting (42) in (40) we obtain the volume of the maximal hypersurface Σ as a function of τ, Vol(Σ) = 2 V_x ∫_{r_m}^{r_∞} dr e^{4V+2W}/√(E² + U e^{4V+2W}), (45) where the limit r_∞ → ∞ is meant to be taken.
In order to obtain the explicit dependence of Vol(Σ) on τ we need to solve (42) for t(r). Given that, as explained in Sec. II, in general the backgrounds that are part of either the DK or AP models are constructed numerically, the solution for t(r) for τ ≠ 0 also needs to be computed by numerical methods (with t(r) = 0 for all r being the only analytical solution). The integration procedure that we followed in practice began by solving (41) as a Frobenius expansion around r_m. Given that we look for solutions that satisfy t(r_m) = 0 and t′(r_m) = ∞, the series turns out to be t(r) = Σ_i t_i^(m) (r − r_m)^{i−1/2}, (46) where any coefficient t_i^(m) can be determined using the equation of motion up to the necessary order. Of particular importance is the explicit expression for t_1^(m) because from it, and given that U(r_m) < 0, we can conclude that obtaining a real-valued solution restricts r_m to the interval r_min < r_m ≤ r_h, where the minimal possible radius r_min is given by the solution to the equation d/dr [U(r) e^{4V(r)+2W(r)}]|_{r_min} = 0. (48) In the case of the DK and AP models, this minimal radius is a function of both the magnetic field intensity B and the temperature T. Once the coefficients t_i^(m) are known to the desired order, Eq. (46) can be used to provide initial conditions for the numerical integration of (41) starting at r = r_m + ε, with ε ≪ r_m, and only up to r = r_h − ε, as the horizon is another singular point of the equation of motion. A series expansion of (41) near r_h reveals that t(r) goes like t(r) = t_1^(h) ln|r − r_h| + Σ_j t_j^(h) (r − r_h)^j, (49) where any t_j^(h) for j ≥ 2 can be written in terms of t_0^(h). We read off t_0^(h) and t_1^(h) from the behavior of the numerical interior solution t(r) near r_h, substituted these values in (49), and used the resulting series to provide initial conditions for the exterior numerical integration starting at r = r_h + ε and up to r = r_∞. After mirroring this result to the left side of the Penrose diagram, this procedure allows us to piecewise construct the solution t(r) for any r.
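To illustrate the interior leg of this procedure in a case where the metric functions are known analytically, the sketch below integrates the first-order equation for t(r) in the B = 0 black D3-brane, assuming the hypothetical forms U = r² − r_h⁴/r² and e^{2V} = e^{2W} = r² with L = 1, and with E fixed by the turning-point condition:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical B = 0 black D3-brane data (L = 1, assumed normalizations):
r_h = 1.0

def U(r):                       # blackening factor, negative inside horizon
    return r**2 - r_h**4 / r**2

def g(r):                       # e^{4V + 2W} = r^6 when e^{2V} = e^{2W} = r^2
    return r**6

r_m = 0.95 * r_h                # turning point, r_min < r_m <= r_h
E = np.sqrt(-U(r_m) * g(r_m))   # fixed by demanding t'(r_m) -> infinity

def rhs(r, t):
    # first integral of the extremality condition (branch with E > 0)
    return [E / (U(r) * np.sqrt(E**2 + U(r) * g(r)))]

eps = 1e-8
sol = solve_ivp(rhs, [r_m + eps, r_h - 1e-4], [0.0],
                rtol=1e-10, atol=1e-12, max_step=1e-3)
print(sol.y[0][-1])  # finite just inside the horizon; t(r) diverges only
                     # logarithmically as r -> r_h, where expansions are matched
```

The integration stops a small distance ε before the horizon, mirroring the matching step described above; for the genuinely numerical DK and AP backgrounds, U, V and W would themselves be interpolated numerical solutions.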
Finally, we extracted τ from the numerical solution as τ = t(r_∞) and then obtained the relations r_m(τ) and E(τ).
The computation for the vacuum state requires its own discussion, as the integration procedure we just described does not apply even if (42) does. Obtaining the complexity of preparing both the left and right gauge theories in their vacuum state requires working with two separate copies of the Poincaré AdS_5 bulk geometry, where the maximal hypersurfaces Σ_0 are those with constant time, given by t(r) = τ, which is equivalent to setting E = 0 for any τ in (42). This in turn implies that the maximal volume (45) for the vacuum state is given by Vol(Σ_0) = (2/3) V_x r_∞³, (51) showing that the volume is independent of the boundary time τ. In FIG. 2 we present one copy of the Poincaré AdS_5 bulk geometry with an example of a maximal hypersurface Σ_0. It is important to note that for any of the geometries in both the DK and AP models, Vol(Σ) is divergent when the boundary regulator r_∞ is removed. Substitution of the near-boundary expansions of the metric fields given in App. C for either the DK or AP models in (45) gives Vol(Σ) = (2/3) V_x (r_∞ + U_1/2)³ + (terms that remain finite as r_∞ → ∞), (52) with U_1 a coefficient of the near-boundary expansion of U, so Vol(Σ) diverges in the limit r_∞ → ∞. Note however that the previous expression can be rewritten as Vol(Σ) = (2/3) V_x r̄_∞³ + finite terms, with r̄_∞ = r_∞ + U_1/2. Given that r̄_∞/r_∞ → 1 when r_∞ → ∞, formally using either regulator at the boundary will give the same result for Vol(Σ) once the limit has been taken. Using r̄_∞ in Vol(Σ) and r_∞ in Vol(Σ_0) explicitly shows that the vacuum-subtracted volume Vol(Σ) − Vol(Σ_0) is finite in the limit r_∞ → ∞, that is, when both regulators are removed. This mathematical trick is necessary because of the choice of radial coordinate for both the DK and AP models.
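The finiteness of the vacuum subtraction can be checked numerically in the analytic B = 0 case, again assuming the hypothetical forms U = r² − r_h⁴/r² and e^{2V} = e^{2W} = r²: the difference of volume densities between the t = 0 slice of the black brane and the Poincaré AdS_5 slice is integrable, so the subtracted volume approaches a finite limit as the regulator grows:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical B = 0 data (L = 1): on the t = 0 slice the black-brane volume
# density is e^{2V+W}/sqrt(U) = r^4/sqrt(r^4 - r_h^4), while the Poincare
# AdS_5 slice has density r^2; their difference is integrable at both ends.
r_h = 1.0

def density_diff(r):
    return r**4 / np.sqrt(r**4 - r_h**4) - r**2

def subtracted_volume(r_inf):
    # Vol(Sigma) - Vol(Sigma_0), per unit 2 V_x, at matched regulators
    tail, _ = quad(density_diff, r_h, r_inf)
    return tail - r_h**3 / 3.0  # vacuum slice piece below r = r_h

vals = [subtracted_volume(R) for R in (10.0, 50.0, 200.0)]
print(vals)  # approaches a finite limit as the regulator is pushed out
```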

B. Results
The numerical procedure detailed above allows us to use (45) and (2) to find the computational complexity C of any state in the gauge theory dual to either the DK or AP models as a function of the three independent gauge theory parameters B, T and τ. However, our numerical results show that the dimensionless ratio C/T³ only depends on the dimensionless quantities B/T² and Tτ, in terms of which the results ahead will be reported. Although at first sight this might seem trivial, it is not, because, as explained in Sec. I, both models feature a conformal anomaly that introduces an arbitrary energy scale μ at the quantum level.
In other words, our results explicitly show that the complexity computed by means of the CV prescription is insensitive to the conformal anomaly, at the very least for these specific models. We have previously shown in [39] that this is also the case when using the CA prescription for the DK and the Mateos-Trancanelli anisotropic models.
As previously explained, we are interested in the vacuum-subtracted version of the complexity defined at any given τ, a quantity that we call the evolving complexity C_E, which is given by C_E := [Vol(Σ) − Vol(Σ_0)]/(G_N L). Note that this quantity remains finite in the r_∞ → ∞ limit by virtue of (52), and that it reduces to the well known complexity of formation for τ = 0. In FIG. 3 we show C_E for the DK model as a function of B/T² for two different fixed values of Tτ. It can be seen that C_E is a monotonically increasing function of B/T² and that it is always positive, C_E > 0, for the two values of Tτ that we display. The interpretation of this result is that, at least intuitively, it becomes harder to create a state with a magnetic field starting from the vacuum as B/T² increases. From FIG. 3 we can also see that the evolving complexity increases as the boundary time passes, as C_E is larger for Tτ = 1 than it is for Tτ = 0. This effect can be better appreciated in FIG. 4, where we show C_E as a function of Tτ for various values of B/T². It can be seen that C_E monotonically increases as Tτ grows, in such a way that at late times it does so at a constant rate. With higher magnetic fields the complexity grows even more, but it keeps the same behavior, always increasing at a constant rate when Tτ goes to infinity, which is the expected late time behavior of the computational complexity [7,8,[14][15][16][17][18][19]39]. While we arrived at this conclusion by inspecting the full time dependence of C_V, as a confirmation of our numerical procedure we present an alternative derivation of the τ → ∞ limit of dC_V/dτ explicitly in App. D. In FIG. 5 we show C_E as a function of B/T² at fixed values of Tτ for the AP model. As explained in Sec. I, in the AP model two branches of solutions exist for any 0 < B < B_c, with one being thermodynamically preferred over the other.
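The constant late-time rate can also be checked in the illustrative B = 0 background: at late times the turning point r_m settles at the interior maximum of −U e^{4V+2W}, and the growth rate of the volume is proportional to the square root of that maximum (assumed metric functions as in the sketches above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical B = 0 data (L = 1): h(r) = -U e^{4V+2W} = r_h^4 r^4 - r^8
r_h = 1.0

def h(r):
    return r_h**4 * r**4 - r**8

# late times: the turning point settles at the interior maximum of h,
# and dVol/dtau becomes proportional to sqrt(h) there (constant rate)
res = minimize_scalar(lambda r: -h(r), bounds=(0.5 * r_h, r_h), method='bounded')
r_star = res.x
rate = np.sqrt(h(r_star))

print(r_star)  # ~ r_h / 2**0.25 ~ 0.841
print(rate)    # ~ r_h**4 / 2 = 0.5
```

This is consistent with the statement that the late-time growth rate is finite and set entirely by horizon-scale data, which is why higher B (which modifies U, V and W) changes the rate but not the linear behavior.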
In the following plots we will denote the thermodynamically stable branch of states with a continuous line, while we will use a dashed line to indicate the unstable one. From FIG. 5 it can immediately be seen that C_E is not a monotonic function of the magnetic field. As we increase the dimensionless quantity B/T², both the complexity of formation (Tτ = 0) and its evolution at Tτ = 1 grow until they reach a maximum value. Interestingly, this maximum occurs for a magnetic field intensity lower than the critical one B_c/T² ≈ 11.24 for the two values of Tτ displayed. Further increasing the magnetic field intensity causes C_E to decrease in such a way that there are some states still within the stable branch that satisfy C_E(B, T, τ) < C_E(0, T, τ), which can be stated as the system undergoing a 'simplification' of sorts. In contrast to the DK model, in the AP model it is easier to create a state with a very intense magnetic field starting from the vacuum than it is to create one with a less intense magnetic field. In FIG. 6 we show C_E as a function of Tτ for various B/T². First of all it can be seen that, just like in the case of the DK model, the evolving complexity monotonically increases with Tτ, in such a way that at late times it does so at a constant rate for all the intensities of the magnetic field used in the plots, which is the expected behavior. Second, we confirm that indeed some of the states in the stable branch are such that for some Tτ they satisfy C_E(B/T², Tτ) < C_E(0, Tτ); for example, the continuous orange curve corresponding to the stable state with B/T² = 11.18 is below the black curve corresponding to B/T² = 0 for late Tτ. This puzzling behavior raises the following question: is this simplification effect caused by the presence of the magnetic field alone, or can it be attributed to the interplay that it has with the scalar field?
In order to answer this we would like to subtract the contribution coming from the temperature from the complexity. We call this quantity the 'complexity of magnetization' C M of the state |B, T, τ ⟩, defined as C M (|B, T, τ ⟩) := C(|B, T, τ ⟩) − C(|0, T, τ ⟩). (55) Intuitively, C M measures how difficult it is to prepare a state with a certain magnetic field and temperature at time τ starting from a state with the same temperature and at the same time, but with no magnetic field. In terms of the CV prescription, C M is given by the difference between the volume of the extremal hypersurface in the magnetized background and that of Σ T , the maximal hypersurface anchored at the same fixed boundary time T τ in the black D3-brane background, which corresponds to the B/T 2 = 0 solution for both the DK and AP models. Note that, just like the evolving complexity, C M is finite in the limit r ∞ → ∞ by virtue of (52). We show the complexity of magnetization for the DK model as a function of B/T 2 at fixed T τ in FIG. 7, and as a function of T τ at fixed B/T 2 in FIG. 8. From the former we can see that C M is a monotonically increasing function of the magnetic field intensity for fixed T τ and that it is positive for all the explored values of B/T 2 . From the latter we can see a similar behavior, meaning that the complexity of magnetization always increases as T τ grows. Also note that, for the magnetic field intensities displayed in FIG. 8, C M remains positive as T τ grows and that it increases at a constant rate as T τ goes to infinity.
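In terms of volumes, definition (55) translates schematically into the expression below; the overall normalization by G N and ℓ follows the standard CV conventions, so this is a hedged sketch of the structure rather than the paper's exact formula:

```latex
% Schematic CV form of the complexity of magnetization
C_M\big(|B,T,\tau\rangle\big) \;=\; \frac{1}{G_N\,\ell}
\Big[\operatorname{Vol}\big(\Sigma(B,T,\tau)\big)
\;-\; \operatorname{Vol}\big(\Sigma_T\big)\Big],
```

with Σ T the maximal hypersurface anchored at the same boundary time in the black D3-brane background.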
In the case of the AP model, the complexity of magnetization reveals a more interesting behavior. In FIG. 9, every state in the unstable branch is less complex than the thermal B = 0 state, as for these we have C M < 0. However, as anticipated from the previous analysis of the evolving complexity, some of the states on the thermodynamically preferred branch also satisfy C M < 0. While this behavior is shown explicitly for the two values of T τ considered in FIG. 9, it is shared by other boundary times as well. In FIG. 10 we show the complexity of magnetization as a function of T τ for various values of the magnetic field. From this it can be seen that the states on the unstable branch have negative C M for any T τ , and that the complexity of magnetization grows at a constant rate as T τ increases. Notably, the same is true for some of the states in the stable branch. For example, the continuous orange (bottom) curve in FIG. 10, corresponding to the stable state at B/T 2 = 11.18, satisfies C M < 0 for T τ > 0.6.
From the previous discussion we can conclude that indeed the interplay between the magnetic field and the scalar field leads to a negative complexity of magnetization C M , a phenomenon that we call 'magnetic simplification'. This occurs for states with a magnetic field intensity such that B s /T 2 < B/T 2 < B c /T 2 , where the simplification intensity B s depends on the time T τ at which we are defining the state.

IV. COMPLEXITY IN 10D
As we previously mentioned, there is no obvious generalization of the CV prescription that allows the inclusion of the compact directions in ten dimensions. However, two natural options are: (1) uplift the maximal-volume hypersurface Σ in 5D to Σ up in 10D, and (2) find the maximal-volume slice Σ full in the full 10D geometry.
The volume of the hypersurface Σ 10 can be computed in a similar way to the five dimensional case. Now the coordinate system for the codimension-one hypersurface will be parameterized as x µ (ξ a ) where µ runs across the full ten dimensions of the bulk and a across the nine directions of the hypersurface. We will again use the symmetries of the system to simplify the embedding.
It is because of this factorization of the compact directions that we can integrate them immediately, leading to a constant factor of π 3 that does not affect the extremization of the hypersurface. This factor accounts for the complexity associated to the internal degrees of freedom encoded in the compact part of the space, not included in the effective five-dimensional treatment, which in this particular case turn out to be independent of the energy scale given by the radial coordinate. In the explicit expression we obtain, (60), we have again factorized the volume V x coming from the gauge theory spatial directions, exhibiting that this volume is equal to the one obtained in (40) times π 3 . This is an interesting result, as it shows that in this particular case the CV conjecture yields the same behavior using either the 5D truncation or the full ten-dimensional background. This is explicitly seen by computing both Σ up and Σ full . We find Σ up by substituting in (60) the t(r) obtained in the five-dimensional case (43). Since this expression is proportional to (40), we find the same behavior as in five dimensions. On the other hand, we compute Σ full by looking for the t(r) which extremizes (60). However, we already know that the solution found in section III A is the one that takes (60) to its extremal value. We conclude that for the DK model Σ full = Σ up and, given that the volume of both is just the volume of Σ times π 3 , the results for the complexity in ten dimensions can be trivially read from the ones in five dimensions presented in Sec. III B. We omit the corresponding plots as they provide no new information.
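The constant π 3 quoted above is simply the volume of the unit five-sphere along which the DK uplift factorizes, so the ten-dimensional volume is the five-dimensional one rescaled (in units where the S 5 radius is one):

```latex
\operatorname{Vol}\big(S^5\big) \;=\; \int_{S^5} d\Omega_5 \;=\; \pi^3
\qquad\Longrightarrow\qquad
\operatorname{Vol}\big(\Sigma_{10}\big) \;=\; \pi^3\,\operatorname{Vol}\big(\Sigma_5\big).
```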
It is worth noticing that this is not a general behavior for the complexity, as exemplified by the AP model, where Σ full ≠ Σ up . This is because in the line element (31) the compact and non-compact directions mix in a non-trivial manner, preventing the dependence on θ in particular from being integrated out, and making an exclusively r-dependent embedding not general enough to reach a true extremal value for the volume of Σ 10 . Thus, the parametrization cannot be the same as in the DK model. In its place, we choose ξ a = (r, x, y, z, θ, ψ, φ 1 , φ 2 , φ 3 ) and x µ (ξ a ) = (t(r, θ), r, x, y, z, θ, ψ, φ 1 , φ 2 , φ 3 ), noticing that now t is a function of r and θ. With this selection we obtain (61), where t r and t θ respectively denote the derivatives of t with respect to r and θ. We see that the expression for L 10 in the AP model is significantly different from the one in the DK model (40). As anticipated, there is now an explicit dependence on the compact direction θ inside the integral, which in general cannot be integrated on its own as in (60). In view of the above, we cannot expect the ten-dimensional behavior to be the same as the five-dimensional one, and computing both Σ up and Σ full will exhibit the details of the discrepancy. In order to compute Σ up , we uplift the hypersurface obtained in the consistent truncation, substituting our five-dimensional solution for t(r), given by (43), into the resulting volume. Determining Σ full requires the extremization of Vol(Σ 10 ); however, as can be seen in Appendix E, the partial differential equation for the embedding that appears as part of the process is non-linear, second order, and non-separable, and it has eluded all our efforts, whether analytic, numeric, or hybrid, to solve it. Nonetheless, the physical conclusion that matters the most is that the current background is such that Σ full differs from Σ up .
To prove this it suffices to postulate t(r, θ) as a function t(r) of r alone, which reduces the embedding equation to (64), where EoM 5 can be read from (42) as the quantity that must vanish to satisfy the equation of motion in the five-dimensional case. We see now that the five-dimensional solution given by EoM 5 = 0 would only solve the ten-dimensional equation if either t ′ = 0 or t ′ = 1/U , which is respectively equivalent to taking E = 0 or E → ∞ in equation (43). According to the CV conjecture, neither of these two solutions leads to hypersurfaces that can be used to compute the complexity, since the one described by t ′ = 0 does not connect the two boundaries smoothly, except for the very particular case t = 0 ⇒ τ = 0, while the one generated by t ′ = 1/U is null and therefore fails the requirement of being space-like. This explicitly shows that in the AP model Σ full cannot be simply obtained by uplifting the five-dimensional result, making it different from Σ up , and leading us to conclude that the complexity computed with one of these two hypersurfaces will not coincide with the one that results from using the other, except for τ = 0. Furthermore, and from a wider perspective, since ∆ and cos 2 (θ) are linearly independent as functions of θ, Eq. (64) shows that the only functions of r alone that solve the complete embedding equation are those we already mentioned, and therefore any other solution must also depend on θ.
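Schematically, the argument above can be summarized as follows, with F 1 and F 2 standing for the r-dependent coefficients multiplying the two linearly independent functions of θ (a hedged sketch of the structure of (64), not its explicit form; in particular F 1 is assumed to be proportional to EoM 5 ):

```latex
\Delta(\theta)\,F_1\!\left[t(r)\right]
\;+\; \cos^2\!\theta\;F_2\!\left[t(r)\right] \;=\; 0
\quad \forall\,\theta
\qquad\Longrightarrow\qquad
F_1\!\left[t(r)\right] \;=\; F_2\!\left[t(r)\right] \;=\; 0,
```

so a θ-independent embedding must satisfy both conditions simultaneously, which singles out the two degenerate solutions discussed above.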
Finding this larger family of solutions is beyond the scope of this paper, and consequently in what follows we will limit our analysis to the volume of the Σ up hypersurfaces.

A. AP model: results for Σ up
The numerical calculations in this case show once again that the dimensionless ratio C/T 3 depends only on the two dimensionless parameters B/T 2 and T τ , indicating that our results are still insensitive to the conformal anomaly. To make any further comparison it is necessary to divide out the factor of π 3 by which the volumes of the ten- and five-dimensional hypersurfaces differ at B = 0, and which, as previously stated, is associated with the complexity of the internal degrees of freedom encoded in the volume of the compact dimensions. In FIG. 11 we plot the evolving complexity computed using the volume of Σ up in the ten-dimensional AP model once this scaling has been done. We see that, despite small but nonvanishing quantitative differences, the general behavior is very similar to the one we obtained in the five-dimensional treatment, included in FIG. 11 as transparent lines. Just like in the five-dimensional case, the evolving complexity displayed in FIG. 11 increases with B/T 2 until it peaks at a value of this dimensionless parameter below the critical one, decreasing from that point onward. This means that for certain intensities it is easier to start from the vacuum and create a state with a very intense magnetic field than one with a weaker field.
In FIG. 12 we explore the behavior of the evolving complexity while keeping B/T 2 fixed. From this it can be seen that C E is a monotonically increasing function of T τ and that, as T τ goes to infinity, it grows at a constant rate. While this general behavior is the same for all the explored values of B/T 2 , it is important to note that for large enough B/T 2 the evolving complexity is smaller than the one obtained at vanishing magnetic field for every value of T τ that we checked, that is, C E (B/T 2 , T τ ) < C E (0, T τ ) for states that are part of the stable branch. To better present this effect it is again convenient to study the complexity of magnetization C M which, as stated, isolates the magnetic contribution to the complexity.
As can be seen in FIG. 13, C M at T τ = 1 becomes negative for values of the dimensionless ratio B/T 2 that are below the critical magnetic field intensity and are still part of the thermodynamically stable branch. This explicitly shows that forming a state in which the magnetic field has an intensity in this range of values is simpler than forming a state with the same physical parameters and no magnetic field. This is the magnetic simplification phenomenon that we encountered when studying the complexity in the consistent truncation of the theory: there exists a certain magnetic field intensity B s /T 2 above which the complexity of magnetization becomes negative for a given T τ . However, as can be appreciated in FIG. 13, B s for the ten-dimensional theory is larger than its five-dimensional counterpart: a stronger magnetic field is necessary to reduce the complexity of the internal degrees of freedom appearing in the ten-dimensional scenario. This effect is such that, in contrast to what we found for the five-dimensional truncated theory, for T τ = 0 there is no magnetic simplification phenomenon when working with the ten-dimensional theory.
This behavior can be better appreciated in FIG. 14, where we plot C M as a function of T τ for different values of B/T 2 . We can see that, in contrast to what we found in the five-dimensional theory, the complexity of magnetization of the stable state with B/T 2 ≈ 11.18 remains positive for T τ > 0.6, although it still becomes negative for sufficiently late T τ .

V. DISCUSSION
We computed the computational complexity using the CV conjecture for two different gravitational models dual to quantum field theories with a magnetic field, the D'Hoker-Kraus (DK) model and the Ávila-Patiño (AP) model, and for both we contrasted the five-dimensional effective version with the full ten-dimensional theory.
As a first result, we verified that the evolving complexity and complexity of magnetization are both insensitive to the conformal anomaly present in these theories in both the full ten dimensional setup and its consistent truncation. We checked this by noticing that C E /T 3 and C M /T 3 depend only on the dimensionless quantities B/T 2 and T τ . This had already been proven to happen when the complexity is computed by means of the CA conjecture [39].
For the DK model, it was found that the evolving complexity of the state increases as the magnetic field intensity grows. This is not an unexpected result, as we can think of the evolving complexity as a measure of how difficult it is to prepare a certain state from a reference one at any given time. With this intuition, a state with a strong magnetic field should be more difficult to prepare as the desired field intensity reaches higher values. However, for the AP model the results were drastically different. The state becomes more complex as the magnetic field increases up to a value above which a phenomenon of 'magnetic simplification' occurs, meaning that the evolving complexity starts decreasing, reaching values even below the one at vanishing magnetic field.
To isolate the effect of the magnetic field on the complexity of a state and the aforementioned phenomenon, we introduced a new quantity that we term complexity of magnetization, C M , defined as the difference in the complexities of states that are identical to each other except for the presence of the magnetic field in one of them. One reason for this quantity to be useful is that it permits us to identify the states that have been magnetically simplified as those that satisfy C M < 0, which indeed happens for stable states with intensities of the magnetic field above a certain simplification value B s .
The two systems we used to compute the complexity seem ideal to understand the origin of the phenomenon of magnetic simplification, since the only relevant difference between them is the presence of one extra scalar field. The observation that in the DK model, where no other field has a non-vanishing VEV, the complexity increases with the intensity of the magnetic field confirms our understanding that a state with a more intense magnetic field should be more complex to prepare. In contrast, in the theory dual to the AP model, where the magnetic simplification occurs, there is a single-trace scalar operator of scaling dimension 2 with a non-vanishing VEV that, at fixed source, changes with the intensity of the magnetic field. A coherent way to encompass the above is to ascribe the simplification to the scalar operator, in which case what we present in FIGS. 9 and 13 is that for intensities below B s the complexity grows due to the magnetic field, but starting at B s this increment is smaller than the reduction in the complexity associated to the scalar operator, with the accumulated effect of the latter eventually surpassing that of the former.
Concerning the comparison between the full ten-dimensional theories and the effective five-dimensional ones, we demonstrated that the results obtained from uplifting the extremal hypersurface found in 5D and those derived by following the extremization procedure in 10D are the same in the DK model and different in the AP model. Our results are not enough to support the use of either Σ up or Σ full in the CV conjecture, but they certainly position the dilemma as a relevant one.

ACKNOWLEDGMENTS
The work of DA is partially supported by Mexico's National Council of Science and Technology (CONACyT) grant A1-S-22886 and DGAPA-UNAM grants IN107520 and IN116823, and additionally supported by a DGAPA-UNAM postdoctoral grant. CD is supported by a CONACyT Ph.D. grant. All the plots in this paper were generated using Wolfram Mathematica.

Appendix A: Interior solutions
In this appendix we show the integration procedure needed to obtain the interior solutions for the AP model. The treatment is analogous to the one for the DK model, which can be consulted in [39]. The equations of motion for the metric, scalar and Maxwell fields come from the variation of the 5-dimensional truncated action (14). After substitution of the general ansatz, the Maxwell equations are automatically satisfied and the Einstein and scalar equations can be manipulated into the form (A1). The first step in numerically solving (A1) is to expand the fields in powers of r around r h as in (A2). This near-horizon behavior allows the family of solutions to easily interpolate between the black D3-brane, for B/V 0 = 0 and ϕ h = 0, and the other members, obtained by changing the values of B/V 0 and ϕ h . Additionally, this also ensures that the temperature of every member of the family is given by T = 3r h /2π. Substitution of (A2) into (A1) allows one to solve for any of the undetermined coefficients in terms of B/V 0 , and then use this to provide initial data for the numerical integration, performed from r = r h + ϵ to the boundary at r = ∞ for the exterior solutions, and from r = r h − ϵ to the singularity at r = r s for the interior solutions, with the same r h in both cases. The boundary behavior of the solutions built with this procedure is not directly of the desired asymptotically AdS 5 form. Nonetheless, we can exploit the symmetries of the equations of motion (A1) to re-scale the solutions, which in turn gives the desired AdS 5 behavior at the boundary. Note that this re-scaling needs to be done consistently for both the exterior and interior solutions and that it is necessary to simultaneously scale the value of B to preserve the solution. It is also important to mention that the position of the singularity is not fixed at r s = −r h /2 for every member of the family of solutions, but only for B/T 2 = 0. Instead, the location of the singularity in the r-coordinate turns out to be a function of the magnetic field intensity.
By r s we mean the radius at which the curvature scalar R µναβ R µναβ diverges. This behavior is shared by both the AP and DK models. Now, the family of solutions found with the procedure just described depends on the three independent parameters r h , B/V 0 and ϕ h . This coincides with the number of free parameters from the perspective of the dual gauge theory: the temperature T , the magnetic field intensity B on the plasma, and the source of the scalar operator O ϕ dual to ϕ. The source corresponds to the coefficient ψ 0 that appears in the boundary expansion of the scalar field ϕ → 1 r 2 (ϕ 0 + ψ 0 log r).
Given that from the perspective of the dual gauge theory it makes sense to work at fixed ψ 0 , in practice we solve (A1) for different values of r h , B/V 0 and ϕ h and then use these solutions to numerically determine the value of ϕ h that fixes ψ 0 for any given B and T . The family of solutions studied in the main text corresponds to the one with the source term turned off, ψ 0 = 0. We show the metric functions for the critical magnetic field B/T 2 = 11.24 in the interior and exterior regions in FIG. 15.
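The horizon-expansion shooting strategy described above can be illustrated with a minimal numerical sketch. The snippet below is not the AP system; as a stand-in we use a probe scalar of mass m 2 = −3 in the planar AdS 5 -Schwarzschild background, which exhibits the same structure: the blackening factor vanishing at r = r h makes the horizon a singular point of the ODE, and regularity there fixes the derivative of the field in terms of its horizon value, playing the role that the expansion (A2) plays in the text.

```python
# Toy illustration of horizon-expansion "shooting": impose regularity at the
# horizon via a leading-order series expansion, step slightly off the horizon,
# and integrate outward numerically. This is a stand-in model, not the AP system.

import numpy as np
from scipy.integrate import solve_ivp

rh = 1.0   # horizon radius
m2 = -3.0  # scalar mass^2, above the AdS5 BF bound m^2 = -4 (Delta = 3)

def U(r):   # blackening factor of planar AdS5-Schwarzschild
    return r**2 - rh**4 / r**2

def Up(r):  # dU/dr
    return 2.0 * r + 2.0 * rh**4 / r**3

def rhs(r, y):
    """Scalar equation phi'' + (U'/U + 3/r) phi' - m2*phi/U = 0
    as a first-order system y = (phi, phi')."""
    phi, dphi = y
    return [dphi, -(Up(r) / U(r) + 3.0 / r) * dphi + m2 * phi / U(r)]

# Regularity at r = rh: expanding the equation to leading order in (r - rh)
# gives U'(rh) * phi'(rh) = m2 * phi(rh), the analogue of fixing the
# near-horizon coefficients in (A2).
phi_h = 1.0
dphi_h = m2 * phi_h / Up(rh)

eps = 1e-4  # start slightly outside the horizon, as in the text
sol = solve_ivp(rhs, [rh + eps, 20.0], [phi_h, dphi_h],
                rtol=1e-8, atol=1e-10, dense_output=True)

# For Delta = 3 both boundary falloffs (r^-1 and r^-3) decay, so the regular
# solution remains finite and shrinks toward the boundary.
print(f"phi(rh+eps) = {phi_h:.3f},  phi(20) = {sol.y[0, -1]:.5f}")
```

The same pattern, with the offset ϵ taken on the interior side of the horizon, produces the interior solutions integrated toward the singularity.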

Appendix B: Penrose diagram
In this appendix we show how to construct the Penrose diagram for the two-sided black hole geometries studied in the main text. Starting from the general ansatz for the line element (38), we first change to the tortoise coordinate r ∗ , given by the solution of its defining equation subject to the boundary condition r ∗ (∞) = 0. Near the horizon r ∗ diverges logarithmically because of the behavior of the metric function U (r) given in (A2). Next we transform to the Kruskal-Szekeres coordinates, and finally we change to the compact coordinates, which are globally spacelike and timelike respectively. These are the coordinates in which we plot the Penrose diagram of FIG. 1 presented in the main text.
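The chain of coordinate changes just described follows the standard construction; writing κ for the surface gravity (κ = 2πT , equal to U ′ (r h )/2 for the near-horizon behavior quoted above), it can be sketched as follows, up to the paper's sign and normalization conventions:

```latex
% Schematic coordinate chain: tortoise -> Kruskal-Szekeres -> compact
\frac{dr_*}{dr} \;=\; \frac{1}{U(r)}, \qquad r_*(\infty) = 0;
\qquad
\mathcal{U} \;=\; -\,e^{-\kappa\,(t - r_*)}, \quad
\mathcal{V} \;=\; +\,e^{+\kappa\,(t + r_*)};
\qquad
\tilde{t} \;=\; \arctan\mathcal{V} + \arctan\mathcal{U}, \quad
\tilde{x} \;=\; \arctan\mathcal{V} - \arctan\mathcal{U}.
```

Near the horizon r ∗ ∼ ln(r − r h )/U ′ (r h ), so the product 𝒰𝒱 = −e 2κ r ∗ vanishes linearly in (r − r h ) and the Kruskal-Szekeres chart is smooth there.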
Appendix D: Rate of change of the complexity
In this appendix we present the computation of the rate of change of the complexity dC/dτ for the five-dimensional models studied in the main text. We begin by noting that by means of (43) we can relate τ and r m (τ ) implicitly through τ = ∫ r ∞ r m dr t ′ (r), as by definition t(r ∞ ) = τ and t(r m ) = 0.
Using the previous expression we can rewrite (45) in a form suitable to compute the derivative of the volume with respect to τ . Indeed, a direct computation yields (D3). The first term vanishes by the definition of E given in (44), while the second does by virtue of (D1). Hence we are left with a single remaining term. This expression holds for any value of the boundary time τ . However, in the limit τ → ∞ we have that r m → r min , with r min defined in (48). Thus the late-time behavior of the rate of change of the complexity is given by (D5), which corresponds to the expected constant behavior consistent with Lloyd's bound [7,8,14], as the energy density of the state at temperature T is proportional to T 4 . Given that there is no analytical way to compute r min for any of the solutions with B ≠ 0 in either the DK or the AP model, equation (D5) needs to be evaluated numerically. In all the cases that we explored, said evaluation revealed that C V grows at a constant rate as τ → ∞, which is consistent with the results presented in the main text. Given that the same conclusion was obtained by two different methods, this confirms the validity of our numerical procedures. The question of whether Lloyd's bound is satisfied for states with B ≠ 0 in the DK or AP models is a complicated one since, because of the conformal anomaly present in both, the specification of the energy density requires fixing a renormalization scheme (see [36,45]). We have previously shown that, when working with the CA holographic prescription, it is possible to use the saturation of Lloyd's bound at late times to fix said renormalization scheme for the DK and Mateos-Trancanelli models. We expect that the same conclusion also applies to the CV prescription.
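Schematically, and up to overall factors such as the transverse volume V x , the computation above can be summarized as follows (a hedged sketch consistent with t(r ∞ ) = τ , t(r m ) = 0 and with E being the conserved quantity of (44), not the paper's exact displays):

```latex
\tau \;=\; \int_{r_m}^{r_\infty} dr\; t'(r),
\qquad
\frac{d\operatorname{Vol}}{d\tau} \;=\; E\big(r_m(\tau)\big)
\;\xrightarrow[\ \tau \to \infty\ ]{}\; E\big(r_{\min}\big),
```

so the late-time rate dC V /dτ ∝ E(r min ) is constant, which is precisely the statement evaluated numerically in the text.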
Appendix E: Σ full equations
It was shown in section IV that in the AP model, Σ up and Σ full are different hypersurfaces except for certain particular cases. We claimed there that in general Σ full is given by an embedding function t(r, θ) that necessarily depends on both coordinates r and θ, and we will now see that assuming a sole dependence on r is inconsistent with the embedding equation. To prove this, we need to extremize the volume of a hypersurface described by t(r, θ) that connects the boundaries dual to both theories in the thermofield-double setup for the full ten-dimensional theory. This volume is written in terms of L 10 , defined by equation (61), and its variation with respect to the embedding function results in a second-order non-linear partial differential equation.