First-order electroweak phase transitions: a nonperturbative update

We study first-order electroweak phase transitions nonperturbatively, assuming any particles beyond the Standard Model are sufficiently heavy to be integrated out at the phase transition. Utilising high temperature dimensional reduction, we perform lattice Monte-Carlo simulations to calculate the main quantities characterising the transition: the critical temperature, the latent heat, the surface tension and the bubble nucleation rate, updating and extending previous lattice studies. We focus on the region where the theory gives first-order phase transitions due to an effective reduction in the Higgs self-coupling and give a detailed comparison with perturbation theory.


I. INTRODUCTION
The electroweak phase transition marks a boundary between epochs in the history of the early universe. It was the moment in which the Higgs mechanism activated, and the last time in which baryon and lepton number violating processes took place with any degree of rapidity. In the pure Standard Model (SM) this transition has been accurately studied using a combination of effective field theory and lattice Monte-Carlo simulations [1]. The transition has been found to be a crossover, with pseudocritical temperature 159.5 ± 1.5 GeV, here defined as the temperature at which the susceptibility of the Higgs condensate has a maximum.
Beyond the SM, the nature of the electroweak phase transition depends sensitively on any new physics which lies near the electroweak scale. Models with a first-order electroweak phase transition can provide the necessary departure from equilibrium for successful baryogenesis at the electroweak scale [2-5]. This possibility also requires new sources of CP violation, and although such sources are strongly constrained by experimental bounds on the electron electric dipole moment [6], viable models are nevertheless possible, e.g. those based on CP violation in the τ lepton sector [7,8].
Collider experiments can in principle test scenarios predicting first-order electroweak phase transitions, because such scenarios require new fields with sizeable couplings to the Higgs field [9]. Precise measurements of the Higgs boson's properties and interactions constitute a major aim of the Large Hadron Collider (LHC) through its High Luminosity phase [10], and the construction of a Higgs factory was deemed the highest priority initiative in the recent European Strategy for Particle Physics update [11].
Gravitational wave experiments offer an alternative, and more direct, test of the nature of the electroweak phase transition: Models in which the electroweak phase transition is first-order also produce a stochastic gravitational wave background, with a spectrum peaked around the mHz range; for reviews see Refs. [12,13]. Planned gravitational wave observatories including LISA [14-16], DECIGO [17], BBO [18] and Taiji [19] may be sensitive to this signal.
For a given particle physics model, predicting the thermal evolution of the universe around the electroweak phase transition requires tackling a number of theoretical challenges. At high temperature, infrared bosonic degrees of freedom become highly occupied, thereby increasing their effective couplings. This results in large theoretical uncertainties at low perturbative orders, often amounting to several orders of magnitude uncertainty in the gravitational wave peak amplitude [20,21]. Sufficiently long wavelength modes become strongly coupled. For non-Abelian gauge theories, such as the electroweak theory, this happens generically, and is known as Linde's Infrared Problem [22]. It implies that any perturbative approach to the study of the electroweak phase transition is fundamentally incomplete.
Direct lattice simulation of the electroweak theory at high temperatures is thwarted by the Nielsen-Ninomiya theorem [23,24], which prevents the simulation of Weyl fermions with a chiral gauge coupling. Nevertheless, a nonperturbative solution to Linde's Infrared Problem is possible, and was formulated in Refs. [25-27]. It involves first constructing a three-dimensional effective field theory (3d EFT) for the nonperturbative infrared bosonic modes, and then simulating this EFT on the lattice. In the first step, called high temperature dimensional reduction, the effects of all weakly coupled modes are accounted for perturbatively, including those of the chiral fermions. The result is a much simpler EFT describing only the infrared bosonic modes, which can be straightforwardly simulated on the lattice. This is the approach we will adopt in the following.
To study time-dependent and non-equilibrium quantities, such as the bubble nucleation rate, one must go beyond dimensional reduction, which describes only static quantities. Direct lattice simulations are then thwarted by the sign problem of real-time evolution. However, at high temperatures, an alternative EFT approach is possible: The time evolution of the longest wavelength modes, together with screening and other plasma effects, is described by Langevin-type equations up to corrections suppressed by 1/ log(1/g) [28]. This framework was adopted in Ref. [29] to nonperturbatively compute the bubble nucleation rate, and is the approach we will adopt in this paper.
A wide range of theories beyond the Standard Model (BSM) have the same infrared degrees of freedom in the vicinity of the electroweak phase transition, and are therefore described by the same high temperature EFT as the SM. Any new BSM particles need only be heavy compared to the effective mass of the Higgs, which becomes light in the vicinity of the transition. Only the infrared degrees of freedom may have nonperturbative effective couplings, so only these require numerical lattice simulations. In this paper we will perform new lattice simulations to provide a quantitatively reliable description of the thermodynamics of the SM-like high temperature EFT, extending the results of Refs. [27,29,30]. We will focus on regions of parameter space where the electroweak phase transition is of first order. This can occur due to Higgs-portal couplings, through which BSM particles can induce the effective Higgs self-coupling to be weaker than it is in the SM. This approach has been successfully utilised to study phase transitions in several BSM theories, including the Minimal Supersymmetric Standard Model (MSSM) [31-34], the two Higgs doublet model (2HDM) [35-38], the singlet extended Standard Model (xSM) [39,40], and the real triplet scalar extension of the Standard Model (ΣSM) [41]. In Ref. [40] lattice simulations of the SM-like high temperature EFT were used to provide the first nonperturbative predictions of the gravitational wave signal of a first-order cosmological phase transition. These were however limited by the existence of only a single nonperturbative calculation of the bubble nucleation rate [29]. One motivation for this paper is to remedy this deficit by nonperturbatively computing the bubble nucleation rate at other parameter points, thereby extending the scope for making nonperturbative predictions of the gravitational wave signal of first-order electroweak phase transitions.
A final key motivation for this paper is the delineation of the validity of perturbation theory to describe first-order electroweak phase transitions; see also Ref. [42]. While perturbation theory is fundamentally incomplete at high temperatures due to the Infrared Problem, it is nevertheless still useful when the transition is strongly first order. In this case, one can construct the first few orders of a perturbative expansion in ratios of couplings. This expansion contains half-integer [43,44] and quarter-integer [45-47] powers of the gauge coupling. It therefore both converges slowly and breaks down at finite order. Its quantitative reliability can only be determined by a nonperturbative calculation.
In Sec. II we discuss the high temperature EFT of the electroweak phase transition, giving an overview of our existing knowledge of this EFT and of its applications to BSM theories. In the following sections we present our calculations and results for the thermodynamics of the EFT, starting with the equilibrium thermodynamics in Sec. III. In Sec. IV we present our calculations and results for the bubble nucleation rate. Though requiring significant computational resources, we are able to take controlled continuum limits for the first time, and to study the parametric dependence of the bubble nucleation rate. In Sec. IV B we show some visualisations of critical bubbles taken from our lattice simulations, extended into a movie of bubble growth which can be seen at [48]. In Sec. IV C we utilise our nucleation rate results within a cosmological context. We conclude in Sec. V. A number of appendices collect details of our continuum extrapolations and perturbative results.

II. ELECTROWEAK PHYSICS AT HIGH TEMPERATURES
At high temperatures, the low energy modes of bosonic fields become highly occupied. The thermodynamics of these modes is captured by a classical 3d EFT. For the electroweak phase transition, the relevant EFT contains all the bosonic fields of the SM: the Higgs, and the gauge bosons. In the presence of additional BSM bosonic fields, these fields may or may not enter the EFT, depending on their thermal effective masses.
At a technical level, the 3d nature of the high-temperature EFT can be understood from the imaginary time formalism, whereby thermodynamics is formulated as quantum field theory on R^3 × S^1. The circle S^1 has circumference 1/T and is referred to as the imaginary time direction. Quantum fields are then expanded in Fourier modes of the imaginary time direction with frequencies nπT, where n is an even integer for bosons and an odd integer for fermions. These are called Matsubara (or Kaluza-Klein) modes. At sufficiently high temperatures, such that πT is large compared to other energy scales, all nonzero (n ≠ 0) Matsubara modes become heavy and decouple from the physics of the zero (n = 0) modes. The zero modes are constant in the imaginary time direction; the EFT that describes them is therefore 3d.
This EFT describes the static long-wavelength modes of the thermal bath, in particular those with energies up to O(gT). Of these modes, the Debye-screened temporal gauge fields are heaviest, and in turn can be integrated out [49-51]. Any BSM modes in this energy range can also be integrated out at this stage; see for example Ref. [38]. The result is a simpler 3d EFT, containing only the spatial, or magnetic, components of the gauge fields, as well as any sufficiently light scalar fields.
In principle this EFT contains the full complement of SU(3) × SU(2) × U(1) spatial gauge fields of the Standard Model. However, couplings between the Higgs and the SU(3) gauge fields of the EFT are both indirect and parametrically suppressed, and hence the latter can be dropped if we are only interested in the electroweak phase transition. In addition, we choose to drop the U(1) hypercharge gauge field. This is for two reasons: first, due to the smallness of the Weinberg angle the U(1) gauge field has only a numerically small effect on the phase transition; second, due to the absence of U(1) gauge self-interactions, this field does not suffer from Linde's Infrared Problem, and consequently perturbation theory provides a reliable guide to its contributions [52].
In this paper, we study the 3d SU(2) gauge-Higgs theory, defined by the Lagrangian density

L = (1/4) F^a_{ij} F^a_{ij} + (D_i φ)†(D_i φ) + m_3^2 φ†φ + λ_3 (φ†φ)^2,   (1)

where F^a_{ij} = ∂_i A^a_j − ∂_j A^a_i + g_3 ε^{abc} A^b_i A^c_j and D_i φ = (∂_i − (i/2) g_3 σ^a A^a_i) φ. The letters i, j run over the spatial indices, a, b, c run over the indices of the adjoint representation of SU(2), ε^{abc} is the Levi-Civita symbol and σ^a are the Pauli matrices.
Here the Higgs field φ lies in the fundamental representation of SU(2) with gauge coupling g_3, and we have left the corresponding indices implicit. As all couplings have nonzero mass dimension, there can only be nontrivial dependence on the dimensionless ratios

x ≡ λ_3 / g_3^2,   y ≡ m_3^2(μ_3) / g_3^4.

Including the U(1) hypercharge gauge field, with gauge coupling g'_3, would introduce an additional dimensionless parameter z(T) = g'^2_3 / g_3^2. This 3d EFT (1) has been studied extensively both perturbatively and nonperturbatively in Refs. [26,27,29,30,51,53-56]. The framework for lattice simulations was initially developed in Ref. [26], and exact and O(a) improved lattice-continuum relations were derived in Refs. [57-59]. Efficient multicanonical Monte-Carlo algorithms for the study of first-order phase transitions in this theory were presented in Ref. [27] (see also Ref. [60]).
The phase diagram of the theory contains a line of first-order phase transitions y = y_c(x), for which x ≪ 1. This line ends in a critical point (x, y) = (x_*, y_*) at which there is a second-order phase transition in the 3d Ising universality class [30,61,62]. Numerically, this happens at x_* = 0.0983(15), for MS renormalisation scale μ_3 = g_3^2 (which we use throughout). At other renormalisation scales, the position of this point is modified as

y_*(μ_3) = y_*(g_3^2) + β_y log(μ_3 / g_3^2).

This identity is exact, due to the superrenormalisability of the theory [53], and x_* is unmodified.
To better understand how the parameters of the 3d EFT relate to those of the full ultraviolet theory, let us briefly consider an example: the SM plus an additional real scalar field (xSM). Denoting the additional field by σ, the following Higgs portal couplings will generically be present,

V(φ, σ) ⊃ (a_1/2) σ φ†φ + (a_2/2) σ^2 φ†φ.

These terms modify the 3d effective couplings away from their SM values. At tree level in the xSM, the 3d coupling x reads [39,63]

x = m_H^2 / (8 m_W^2) − sin^2 θ (m_H^2 − m_σ^2) / (8 m_W^2),   (8)

where m_H and m_W are the physical (pole) masses of the Higgs and W boson, m_σ is the physical mass of the BSM scalar particle and θ is its mixing angle with the Higgs. The mixing angle is generically nonzero for a_1 ≠ 0.
In the pure SM, x ≈ 0.3 > x_* and the electroweak phase transition is a crossover [1]. In the xSM, the second term in Eq. (8) can reduce the value of x, if m_σ < m_H, and therefore move the electroweak phase transition towards being first order. In fact, this possibility is favoured by the recent measurement of the W boson mass by the CDF collaboration [64,65].
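As a rough numerical check of the tree-level estimate x ≈ m_H^2/(8 m_W^2), a minimal sketch (the input masses are assumed PDG-like values, not numbers taken from this paper's analysis):

```python
# Back-of-the-envelope estimate of the 3d coupling ratio x in the pure SM,
# using the tree-level relation x ~ lambda/g^2 = m_H^2 / (8 m_W^2).
m_H = 125.25   # GeV, Higgs pole mass (assumed PDG value)
m_W = 80.38    # GeV, W boson pole mass (assumed PDG value)
x_star = 0.0983  # critical endpoint quoted in the text

x_SM = m_H**2 / (8 * m_W**2)
print(f"x_SM ~ {x_SM:.3f}")  # well above x_*, hence a crossover in the SM
```

Any BSM contribution must pull x below x_* ≈ 0.098 to open up a first-order transition.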
In the presence of relatively large portal couplings, one-loop corrections can also modify the value of x significantly, especially when the real scalar is not too heavy. In the Z_2-symmetric limit of the xSM the scalar fields do not mix, sin θ = 0, and the leading corrections from the real scalar field arise at one loop. Assuming the thermal effective mass of the real scalar lies in the range gT ≲ m_{σ,3} ≲ πT, this correction to x is of order

δx ∼ (a_2^2 / (16π g^2)) (T / m_{σ,3}),

where g is the weak gauge coupling. This correction is also generically negative for thermal effective masses of order gT. As a numerical example, for a Z_2-symmetric real scalar with mass m_σ = 200 GeV, one finds x < x_* for a_2 ≳ 1.3 [40].

Current experimental bounds still leave ample room for the value of x to be modified. This is because x is determined by the Higgs four-point self-coupling, and, while the Higgs mass has been relatively precisely measured, m_H = 125.25 ± 0.17 GeV [66], its self-couplings are much less well constrained. Measurements of Higgs pair production have been used to directly constrain the Higgs three-point coupling. The current limit can be expressed as λ/λ_SM ∈ (−5.0, 12.0) at 95% confidence [67]. Direct constraints on the four-point Higgs coupling are even weaker. Such weak constraints leave open the possibility that x < x_*, and hence for a first-order electroweak phase transition. This will be the focus of the current article.

III. EQUILIBRIUM QUANTITIES
The equilibrium thermodynamics of the 3d EFT is relatively straightforward to study on the lattice. There is no true order parameter in the SU(2) Higgs theory, in the sense of the Landau theory of continuous phase transitions [68]: the naive candidate ⟨φ⟩ is equal to zero at all temperatures, being gauge dependent [69]. The volume average of the quadratic scalar condensate, ⟨φ†φ⟩, provides a means to distinguish the different phases, although it is positive for all temperatures. It plays a role analogous to the density in liquid-gas transitions.

A. Critical temperature
As outlined in Sec. II, the phase diagram of the 3d SU(2) Higgs theory consists of a line of first-order phase transitions y_c(x) for x ∈ (0, x_*), terminating in a second-order phase transition at x = x_*. For larger values of x the transition is merely a crossover.
A given 4d BSM theory at a given temperature maps to a point on the space of couplings of the 3d EFT. As the temperature varies, the thermodynamics of the 4d theory trace out a trajectory through the couplings of the 3d EFT, parametrised by the temperature (x(T ), y(T ), g 2 3 (T )). There is a first-order phase transition if this trajectory intersects the surface of first-order phase transitions, y = y c (x).
The speed at which the trajectory moves through the space of 3d couplings, or equivalently its tangent vector, is naturally quantified by the eta functions,

η_c ≡ T (dc/dT),   for c ∈ {x, y, g_3^2}.   (10)

The generic structure of dimensional reduction implies that η_x ∼ g^2 η_y ∼ g^{-2}, where g^2 denotes a perturbative coupling [27,40]. Thus, for any perturbative 4d model, the trajectory is almost parallel to the y axis. Note also that η_{g_3^2} = g_3^2 · (1 + O(g^2)), and hence the relation between units in 3d and 4d tracks the temperature up to small corrections.
To calculate y_c(x), we fix a value of x < x_* and vary y. At large positive (negative) values of y, equivalent to high (low) temperatures, there is a single phase, the symmetric (broken) phase. In between these two extremes, there is a region of y for which both phases coexist. At the critical temperature, y = y_c, the two phases have equal free energies, and hence equal probabilities in the thermal ensemble: a histogram of ⟨φ†φ⟩ will show two peaks of equal probability; see Fig. 2.

FIG. 1. Continuum-extrapolated results for y_c(x). Lattice results from this paper and from Refs. [27,62] are plotted together with two perturbative approximations. Note that the perturbative approach fails to predict an endpoint to the line of first-order phase transitions. The renormalisation group invariant quantity ỹ_c ≡ y_c − β_y log(μ_3/g_3^2) has been plotted (see Eq. (B1)), rather than simply y_c. The green and blue bands for the perturbative results reflect the renormalisation scale dependence as it is varied over μ_3/g_3^2 ∈ [1/√10, …].
An efficient approach to calculating the critical temperature makes use of an exact relation between the probability distributions of φ†φ at different values of y [70,71]. We denote this probability distribution, evaluated at some value ϕ^2, by

P_y(ϕ^2) ∝ ⟨ δ( ϕ^2 − (1/V) ∫_x φ†φ ) ⟩_y,

where δ is the Dirac delta function, ∫_x denotes integration over space, and V is the spatial volume. The crucial relation, which follows from this definition, is

P_{y'}(ϕ^2) ∝ exp[ −(y' − y) g_3^4 V ϕ^2 ] P_y(ϕ^2).

Use of this relation is referred to as reweighting. In principle one can reweight the results of a simulation at a single value of y to determine the probability distribution at all values of y. In practice, there is an overlap problem if one tries to reweight too far, which manifests through larger statistical errors. Once one has found an approximate value for y_c, one can reweight to find a much more accurate value. A corresponding relation exists for the probability distribution of (φ†φ)^2 and changes in x, though we have not made use of it.
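The reweighting relation lends itself to a simple numerical implementation. The following is a minimal sketch (function names and binning choices are our own, not the collaboration's analysis code): in units where g_3^2 = 1, each sample of the volume-averaged condensate picks up a weight exp[−(y' − y) V ϕ^2].

```python
import numpy as np

def reweight_hist(phi2, y0, y1, vol, bins=60):
    """Histogram of the volume-averaged condensate phi2, with samples
    generated at coupling y0 reweighted to a nearby coupling y1.
    In units g3^2 = 1 the y-dependent part of the action is y*vol*phi2,
    so each sample carries weight exp(-(y1 - y0)*vol*phi2)."""
    phi2 = np.asarray(phi2)
    logw = -(y1 - y0) * vol * phi2
    w = np.exp(logw - logw.max())          # guard against overflow
    hist, edges = np.histogram(phi2, bins=bins, weights=w, density=True)
    return hist, edges

def phase_prob_ratio(hist, edges, split):
    """Ratio of total probability below/above `split`, i.e. symmetric
    versus broken phase; at y = y_c this ratio equals one."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    widths = np.diff(edges)
    low = np.sum((hist * widths)[centers < split])
    high = np.sum((hist * widths)[centers >= split])
    return low / high
```

Tuning y_c then amounts to adjusting y1 until `phase_prob_ratio` returns one, i.e. equal areas under the two peaks.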
FIG. 2. Histograms of the spatially averaged Higgs quadratic condensate, renormalised to match the MS scheme in the continuum limit. The two-peak structure demonstrates the coexistence of phases, and the first-order nature of the transition. The upper panel shows the physical volume dependence of the histogram, from which one can see that states between the two phases become exponentially unlikely as the volume L^3 grows. The Gaussian width of each peak scales as 1/√V [26]. The lower panel shows the relatively mild lattice spacing (a) dependence of this quantity.
The two-peak histograms of Fig. 2 have been reweighted to the critical point, at which the probability in each phase (area under each peak) is equal. The boundary between phases can be taken as the local minimum of the probability, though the precise choice of boundary is unimportant: in large lattices the value of y c attained is exponentially insensitive to this choice. Fig. 2 shows examples of the volume and lattice-spacing dependence on a range of cubic lattices. For each physical parameter point, simulations were carried out for at least 4 lattice spacings, and for each of these, between 3 and 6 different volumes were simulated; see Table V. Suitable choices for lattice spacings and volumes can be gleaned from previous work [27], and from comparing results for different lattices.
As the theory is (nonperturbatively) gapped, for sufficiently large volumes the volume dependence of bulk observables, such as the condensate ⟨φ†φ⟩ in a given phase, is exponentially suppressed. This is because the exponential barrier isolates the two phases, ensuring that the thermodynamics of one phase is independent of the presence of the other. The volume dependence of y_c is more subtle, as it depends on both phases, and in fact on the precise definition of y_c at finite volume. Defining y_c through the maximum of the susceptibility, or the minimum of the Binder cumulant, leads to 1/V dependence [27,72], because the relative widths of the two peaks play a role. So too does defining y_c as the temperature at which the two peaks of the histogram (Fig. 2) have equal height [27]. However, defining y_c as the temperature at which the two peaks of the histogram have equal probability leads to exponentially suppressed volume corrections [72-74]. This is the definition we adopt. Extrapolations to infinite volume were therefore carried out simply by averaging the results of the largest lattices, ensuring that they agree within error. Following this, extrapolations to zero lattice spacing were performed with polynomial fits in the lattice spacing, of the lowest degree yielding an acceptable fit. Some examples of our continuum extrapolations for y_c are shown in Appendix A. The full dataset can be found at [75].
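The second extrapolation step, a polynomial fit in the lattice spacing a, can be sketched as follows; this is an illustrative weighted least-squares implementation, not the analysis code used for this paper:

```python
import numpy as np

def continuum_extrapolate(a, yc, yc_err, degree=1):
    """Fit yc(a) = c0 + c1*a + ... + c_degree*a^degree by weighted least
    squares, returning the a -> 0 value c0 and its fit error."""
    w = 1.0 / np.asarray(yc_err)
    # Vandermonde design matrix in the lattice spacing, scaled by weights
    A = np.vander(np.asarray(a), degree + 1, increasing=True) * w[:, None]
    b = np.asarray(yc) * w
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    cov = np.linalg.inv(A.T @ A)           # covariance of fit parameters
    return coeffs[0], np.sqrt(cov[0, 0])
```

For O(a)-improved data the linear term would be dropped, as discussed for the condensates in Sec. III B.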
Errors for individual lattices were estimated by performing a jackknife resampling, after blocking the data into ten blocks, as we do throughout. For continuum extrapolations, we quote the error on the fit parameter.
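A minimal sketch of the blocked jackknife error estimate described above (the interface and defaults are our own choices):

```python
import numpy as np

def blocked_jackknife(samples, n_blocks=10):
    """Mean and jackknife error of a Monte-Carlo time series, after
    blocking into n_blocks to tame autocorrelations."""
    samples = np.asarray(samples)[: (len(samples) // n_blocks) * n_blocks]
    blocks = samples.reshape(n_blocks, -1).mean(axis=1)
    # Delete-one-block means
    jk = np.array([np.delete(blocks, i).mean() for i in range(n_blocks)])
    mean = blocks.mean()
    err = np.sqrt((n_blocks - 1) / n_blocks * np.sum((jk - mean) ** 2))
    return mean, err
```

For uncorrelated data this reproduces the usual standard error of the mean; for correlated chains the blocking makes the estimate more honest.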
We have also made use of an alternative method to calculate the critical temperature. In lattices with one side much longer than the other, which we refer to as cylindrical lattices, the local minimum in the probability distribution becomes wider and flatter as the ratio between long and short sides grows. Physically this wide, flat region in the probability distribution is dominated by configurations where the two phases coexist. The tension of the phase boundary ensures that the boundary will line up perpendicularly to the long axis, as other configurations have much higher free energy. At the critical temperature, the free energy (and hence probability) of such a configuration is independent of the precise fraction of broken phase versus symmetric phase in the lattice. This is because the free energy densities of both phases are the same, and because both configurations have the same area of phase boundaries. Slightly away from the critical temperature, changing the fraction of broken phase will change the free energy, thus the probability distribution of the order parameter will no longer be flat. One can thus tune y to the critical temperature by reweighting the probability distribution until the region between phases becomes flat. A related method was used in Ref. [76]. An advantage of this method is that there is no need to wait for slow tunnellings between phases, as one only needs to know the probability distribution for mixed configurations. However, a disadvantage is that one must first find the appropriate range over which the probability distribution becomes flat, and this is not known a priori. The two methods we have used for calculating y c agree within error. Some example plots showing this can be found in Appendix A.
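The flat-histogram criterion can be combined with reweighting in a simple scan over y. The following sketch (all names and the synthetic setup are hypothetical) tunes y until the mixed-phase plateau of the histogram is flattest:

```python
import numpy as np

def tune_yc(phi2, vol, y0, window, y_grid, bins=50):
    """Scan reweighted couplings y, returning the value for which the
    mixed-phase region of the histogram (phi2 inside `window`) is
    flattest, i.e. the std of log P over the plateau is smallest."""
    best_y, best_score = None, np.inf
    for y1 in y_grid:
        logw = -(y1 - y0) * vol * np.asarray(phi2)   # units g3^2 = 1
        hist, edges = np.histogram(
            phi2, bins=bins, weights=np.exp(logw - logw.max()))
        centers = 0.5 * (edges[:-1] + edges[1:])
        sel = (centers > window[0]) & (centers < window[1]) & (hist > 0)
        score = np.std(np.log(hist[sel]))            # flatness measure
        if score < best_score:
            best_y, best_score = y1, score
    return best_y
```

In practice one would refine the grid iteratively around the flattest point rather than scanning once.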
Our continuum-extrapolated results for y c are presented in Fig. 1, together with previous results from the literature. Altogether the data is now rather dense in x, exposing the smooth behaviour of the function y c (x).
The numerical values are collected in Table I.

While Linde's Infrared Problem renders perturbation theory inherently incomplete, it is nevertheless possible to compute the first few orders of an expansion in x. With a two-loop computation one can determine the leading order (LO) and next-to-leading order (NLO) behaviour in x. This is outlined in Appendix B. The calculation of yet higher order terms involves infinite sets of diagrams within the 3d EFT, which we do not attempt here; see Refs. [46,47]. However, by scaling arguments one can determine that pure gauge diagrams yield an expansion in powers of x, while pure Higgs diagrams yield an expansion in powers of x^{3/2}. The expansion for the full theory is thus a dual expansion in powers of x and x^{3/2}, up to logarithms, thereby motivating the following fit function [77],

y_fit(x) = c_0 + c_1 x + c_{3/2} x^{3/2} + c_2 x^2 + c_{5/2} x^{5/2}.

Performing a least-squares fit, we find relatively large values for the expansion coefficients, suggesting that perturbation theory in x breaks down around x ∼ 0.05. Note that despite the breakdown of perturbation theory, the coefficients c_{3/2}, c_2 and c_{5/2} (but not the coefficients of higher powers of x) can in principle be computed much more accurately within perturbation theory, with a resummed 3-loop computation. If this is performed, the fit to the lattice data should be updated to include higher powers of x, or an alternative functional form which describes the data better at larger x.
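Since the fit function is linear in its coefficients, the least-squares fit reduces to ordinary linear algebra. A minimal sketch (the coefficients in the usage example are invented for illustration and are not our lattice results):

```python
import numpy as np

def fit_yc(x, yc):
    """Least-squares fit of yc(x) to the dual expansion
    c0 + c1*x + c32*x^(3/2) + c2*x^2 + c52*x^(5/2).
    The model is linear in the coefficients, so plain lstsq suffices."""
    x = np.asarray(x, dtype=float)
    A = np.stack([np.ones_like(x), x, x**1.5, x**2, x**2.5], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(yc, dtype=float), rcond=None)
    return coeffs
```

A weighted version, dividing each row by the error on y_c, would be used for real data.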

B. Latent heat
The latent heat L is the change in enthalpy density between phases at the critical temperature. It gives a measure of the energy per unit volume released during the phase transition. In the cosmological evolution, some degree of supercooling to temperatures below the critical temperature will take place, due to the slowness of bubble nucleation in comparison to Hubble expansion (see Sec. IV C). The energy then released is expected to be greater than L, as additional energy, proportional to the free energy density difference ∆f between the phases, will also be released. The latent heat nevertheless gives a bound on the energy released, and a reliable estimate for small supercooling.
The latent heat is determined by the rate of change of the free energy difference between phases, evaluated at the critical temperature,

L = T (d∆f/dT) |_{T=T_c}.

Using the chain rule, and that ∆f = 0 at the critical temperature, this can be written as

L / (T_c g_3^6) = η_y ∆⟨φ†φ⟩/g_3^2 + η_x ∆⟨(φ†φ)^2⟩/g_3^4,   (16)

where the condensates are expressed in units of g_3^2, and the eta functions are defined in Eq. (10). The condensates are defined in terms of the derivatives of the free energy density with respect to the renormalised couplings [26],

⟨φ†φ⟩/g_3^2 = (1/g_3^6) ∂f/∂y,   ⟨(φ†φ)^2⟩/g_3^4 = (1/g_3^6) ∂f/∂x.

We have scaled out powers of g_3^2 so that the condensates are dimensionless. The quantities then correspond more closely to what is measured on the lattice, in which we adopt units where g_3^2 = 1. These definitions for the condensates, being the derivatives of a finite quantity with respect to finite couplings, yield the necessary counterterms to make the condensates finite. The renormalised quartic condensate is equal to a linear combination of the unrenormalised quartic and quadratic condensates, the latter arising as a consequence of the mass counterterm depending on x. Note that the condensate corresponding to the partial derivative with respect to g_3^2 (at fixed x and y) is equal to zero at the critical temperature, being proportional to ∆f.
The natural hierarchy η_y ≫ η_x suggests that the second term in Eq. (16) can be neglected. For relatively weak transitions, this is indeed correct. However, in very strong transitions the quartic condensate can become larger than the quadratic condensate, so in the following we will study both terms.
Figs. 3 and 4 show our continuum-extrapolated results for the quadratic and quartic Higgs condensates, together with existing results in the literature. Extrapolations to infinite volume were carried out as for y c . For the extrapolations to zero lattice spacing, a → 0, the linear term in the polynomial fit is absent for data with O(a) improvement [59,78].
FIG. 3. The discontinuity in the quadratic Higgs condensate. Note that this goes to zero at x_* = 0.0983(15), illustrated as the vertical orange band. Lattice results from this paper and from Ref. [27] are shown. The critical region, where the discontinuity in the condensate obeys critical scaling and dives down to zero, appears to be very narrow on this logarithmic plot.
FIG. 4. The discontinuity in the quartic Higgs condensate. This is the first time lattice results for this quantity have been reported. Again the critical region appears to be rather narrow.
Some examples of these continuum extrapolations are shown in Appendix A, demonstrating clearly the effect of O(a) improvement. The complete data can be found at [75].
The LO and NLO perturbative expressions for the Higgs condensates are plotted alongside the lattice data in Figs. 3 and 4. The expressions are given in Appendix B. In both cases the NLO corrections are large, and move the perturbative results considerably closer to the lattice data. Agreement between the lattice and perturbation theory is noticeably better for the quadratic condensate. This is consistent with the perturbative expansion parameter, as given by the ratio NLO/LO, being smaller for the quadratic condensate.
Computing the latent heat requires the coefficients of the Higgs condensates in Eq. (16), the eta functions η_x and η_y defined in Eq. (10). These are determined by modes with energies of order O(πT) and above, and hence depend on the UV completion of the 3d EFT that we study. In the Standard Model, they take the values η_x ∼ −0.06 and η_y ∼ 4.6 in the vicinity of the electroweak crossover [1]. In perturbative extensions of the Standard Model, the values are expected to be of the same order, as argued in Ref. [40]. Let us consider the example of the xSM. Using the dimensional reduction relations of Ref. [38], and scanning over the one-step phase transitions considered in Ref. [40], one finds roughly that |η_x(T_c)| ≲ 0.4 and |η_y(T_c) − 4.6| ∼ 0.6. Fig. 5 shows one such example with η_x(T_c) = 0.2 and η_y(T_c) = 4.6. In this case, for the strongest transition that we study, at x = 0.0152473, the quartic condensate gives a ∼ 20% contribution to the latent heat, decreasing for weaker transitions.
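Eq. (16) is straightforward to evaluate once the condensate discontinuities and eta functions are known. A minimal sketch with purely illustrative inputs (none of the numbers in the comments are results from this paper):

```python
def latent_heat_over_Tc4(eta_x, eta_y, d_quad, d_quartic, g3sq_over_T):
    """Evaluate Eq. (16): L / (Tc g3^6) = eta_y * Delta<phi+phi>/g3^2
    + eta_x * Delta<(phi+phi)^2>/g3^4, then convert to the 4d measure
    L / Tc^4 by multiplying with (g3^2 / Tc)^3."""
    L_over_Tc_g36 = eta_y * d_quad + eta_x * d_quartic
    return L_over_Tc_g36 * g3sq_over_T**3

# Setting eta_x = 0 isolates the leading (quadratic-condensate) term,
# as is appropriate for relatively weak transitions.
leading = latent_heat_over_Tc4(0.0, 4.6, 1.0, 0.0, 0.4)
```

The relative size of the η_x term then directly quantifies the quartic-condensate contribution discussed above.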

C. Surface tension
The surface tension is the free energy per unit area of a macroscopically flat interface between phases, evaluated at the critical temperature. It gives some measure of the strength of the transition, with the surface tension decreasing to zero as one approaches the endpoint of a line of first-order phase transitions, at which point the phases become miscible. The surface tension is relevant to the bubble nucleation rate through the thin wall approximation, where the energy E of a nucleating spherical bubble of radius R is given by

E(R) = 4πR^2 σ − (4/3)πR^3 p.   (19)

Here p is the pressure, or free energy density difference, between the homogeneous phases. The approximation is justified near the critical temperature, where the bubble radius is much larger than all other scales in the theory, in which case Eq. (19) receives corrections at O(R).
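Extremising Eq. (19) gives the familiar thin-wall critical radius and energy, which the following sketch evaluates (a direct transcription of the formula, in arbitrary units):

```python
import numpy as np

def thin_wall(sigma, p):
    """Thin-wall critical bubble from E(R) = 4 pi R^2 sigma - (4/3) pi R^3 p.
    dE/dR = 0 gives R_c = 2 sigma / p, and substituting back yields
    E_c = (16 pi / 3) sigma^3 / p^2."""
    Rc = 2.0 * sigma / p
    Ec = 16.0 * np.pi / 3.0 * sigma**3 / p**2
    return Rc, Ec
```

As p → 0 at the critical temperature, both R_c and E_c diverge, reflecting the vanishing of the nucleation rate there.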
Note that when working within the 3d EFT, it is more natural to work in terms of temperature-scaled quantities S_3 ≡ E/T, ∆f_3 ≡ p/T, and when working on the lattice we further scale out powers of g_3^2, so that everything computed is measured in units where g_3^2 = 1. In this convention, the dimensionless surface tension is

σ_3 ≡ σ / (T g_3^4).

We measure the surface tension using the standard histogram method [79], through the relation

σ_3 = lim_{V→∞} (1 / (2 L_1 L_2)) log(p_max / p_min),   (21)

with the known contributions of the capillary waves accounted for in the infinite volume limit by fitting subleading corrections to this relation [29,80,81]. Here p_max is the maximum of the order parameter histogram, equal to the probability density of being in either homogeneous phase, and p_min is the minimum of the order parameter histogram between the two phases, equal to the probability density of being in a mixed phase containing two planar interfaces. L_1 = L_2 and L_3 are the physical lengths of the lattice in the 3 spatial directions. An alternative method to calculate the surface tension, by Fourier analysing the spectrum of transverse fluctuations of a phase interface, was presented in Ref. [82], and also used in Ref. [29]. This method has been shown to give more accurate results with fewer computational resources. However, as the computation of the surface tension is not a major goal of our paper, we have not pursued this method.
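The leading-order content of Eq. (21) can be sketched as follows (ignoring the fitted capillary-wave corrections); the factor of two counts the two planar interfaces of a mixed configuration:

```python
import numpy as np

def sigma3_estimate(p_max, p_min, L1, L2):
    """Leading-order histogram estimate of the scaled surface tension:
    a mixed configuration carries a free-energy cost 2 * sigma_3 * L1 * L2
    relative to a homogeneous phase, so
    sigma_3 ~ log(p_max / p_min) / (2 L1 L2)."""
    return np.log(p_max / p_min) / (2.0 * L1 * L2)
```

In practice this estimate is computed on a sequence of lattices and extrapolated, with the capillary-wave terms included in the fit.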
For measurements of the surface tension, one typically uses cylindrical lattices with one side much longer than the others, L₃ ≫ L₁, L₂, so that interactions between interfaces are small. We have not carried out any dedicated simulations on such long lattices, but have instead reused the results of the cubic-lattice simulations of Secs. III A and III B. As a consequence, many of our simulations are far from the infinite-volume limit, and the volume dependence of the surface tension showed marked deviations from the expected form (21). This was particularly the case for the stronger transitions at smaller x, so we have only reported results for x ≥ 0.05, where the infinite-volume extrapolations seemed under better control. Some example plots of the infinite-volume extrapolations are given in Appendix A. Extrapolations to zero lattice spacing were performed as in Sec. III B.
Our continuum-extrapolated results are shown in Fig. 6, together with existing results from the literature.

FIG. 6. The surface tension plotted against x. Note that the continuum limit has not been taken for the lattice point at smallest x. Lattice results from this paper and from Refs. [27,29] are shown.

The approach to zero in the critical region is clearer at smaller x for the surface tension than for the latent heat. Small values of the surface tension, while the latent heat is still sizeable, have also been observed in the 2HDM [42].
We also include the LO and NLO perturbative approximations to the surface tension, for which the expressions can be found in Appendix B. Fig. 6 shows that there is relatively poor agreement between the lattice and perturbative results for the surface tension.

D. Summary of equilibrium results
Our equilibrium lattice results are collected in Table I. We have also included lattice results from the literature in which the continuum limit was taken [27,29,30,62]. For completeness, we note that there are a number of other lattice results for this model in the literature at finite lattice spacing [49,61,83-87].
Perturbation theory shows relatively good agreement with the lattice for sufficiently small x, but diverges from it at larger x. The precise level of agreement depends on the observable, with the surface tension showing the greatest discrepancies between lattice and perturbation theory.

IV. BUBBLE NUCLEATION
Bubble nucleation is the first stage of the dynamical evolution of a first-order phase transition.

TABLE I. Summary of equilibrium lattice results, from this work and Refs. [27,29]. The result in square brackets has not been extrapolated to the continuum limit; it is for a single, nonzero lattice spacing.

After nucleation, the pressure difference between phases causes the bubbles to accelerate, until they reach a terminal velocity at which the friction of the plasma balances the bubbles' outward pressure. The growing bubbles eventually meet, colliding and creating sound waves, which in turn create gravitational waves [88-90]. The average distance between bubbles, and consequently the peak in the gravitational wave spectrum, is determined by the bubble nucleation rate. Therefore, developing a quantitative description of bubble nucleation is crucial for relating gravitational wave observations to the physics of cosmological phase transitions.
To calculate the bubble nucleation rate on the lattice, we make use of the Langevin description of the dynamics of the infrared modes of non-Abelian gauge theories at high temperature [28,91-100]. For the SU(2) Higgs theory, the relevant Langevin equations, Eqs. (22) and (23), take the form given in Refs. [29,100], with Hamiltonian H = ∫d³x L₃, where L₃ is the Euclidean Lagrangian of the 3d EFT, Eq. (1). The parameter σ_el ∼ T/log(1/g) is the SU(2) colour conductivity, and the noise terms ξᵃᵢ and ξ_φ satisfy the corresponding fluctuation-dissipation relations, in which η ∼ 1/g² ≫ 1 is the ratio of the evolution rates of the Higgs field and the gauge bosons, and 𝟙 is the unit matrix for the fundamental SU(2) indices.
For the non-perturbative infrared gauge fields of the symmetric phase, this describes the full quantum dynamics up to corrections of order O(1/log(1/g)). We will assume the description also applies in the vicinity of the critical bubble, though the original derivations of the Langevin equations did not consider this case. The hard-thermal-loop effective theory offers a more accurate dynamical description [101-108], correct up to O(g) corrections, and for which numerical schemes exist [109-113]. However, hard-thermal-loop approaches do not have a continuum limit, and thus we do not pursue them here. The nucleation rate is of cosmological relevance when it is roughly e^−100 [114,115], in which case an error of ∼ e^±1 is only a 1% error on the logarithm of the rate.
In principle, one could calculate the bubble nucleation rate by simply initialising a lattice in the metastable phase, evolving it according to the Langevin equations, and waiting to see how long it takes to decay into the stable phase [116-120]. However, bubble nucleation is an exponentially suppressed process, and in the cosmologically relevant situation the exponent is O(100). Thus, unfeasibly long simulation times would be required to simulate bubble nucleation directly.
Instead, we adopt the method proposed in Refs. [29,121]. A variant of this method was also used to compute the broken-phase sphaleron rate [122,123]. The approach follows the spirit of Langer's seminal work [124], and generalises it beyond the saddlepoint approximation.
When bubble nucleation is very slow, the metastable phase will have time to equilibrate long before nucleation occurs. The relative probability of a field configuration on the metastable side, including bubble configurations, is then given by its Boltzmann weight. This statistical problem, of finding the probability of bubble configurations, can be calculated with standard Monte-Carlo methods. To turn the resulting probability into a rate, some additional dynamical information is required, about how often the dynamical end state of a bubble configuration is indeed in the stable phase, and about the rate of bubble growth. Calculating these dynamical quantities only requires relatively short Langevin simulations. In this way, the difficult problem of simulating an exponentially suppressed process is factorised into two easier problems: one statistical and the other dynamical.

A. Nucleation rate
To compute the nucleation rate, we follow closely the approach of Refs. [29,121], where the reader can find further details. This approach requires a phase-sensitive observable, which we take to be the spatially averaged Higgs quadratic condensate, φ†φ, as this allows us to reweight in y. The probability distribution p(φ†φ) shows a two-peak structure, with the peak at smaller (larger) values of φ†φ corresponding to the symmetric (broken) phase; see Fig. 7.

FIG. 7. Histograms showing the probability distribution of the volume-integrated quadratic Higgs condensate, normalised with respect to its value in the symmetric phase. This observable has been plotted, instead of the volume-averaged quadratic Higgs condensate, because the critical bubble has fixed absolute volume, and hence its position on this plot agrees for different lattice sizes. The relative probability of the critical bubble versus the symmetric phase is what determines the statistical part of the nucleation rate. This quantity is stable under changes of the lattice size, L, provided the bubble takes up a volume fraction ≲ 4π/81 [29].

On an appropriately sized lattice,¹ the minimum of p(φ†φ) between phases consists of spatially localised critical bubbles, the gatekeepers between the phases.
The process of nucleation is limited by the probability of critical bubbles. We denote by φ†φ = ϕ²_C the location of the minimum of p(φ†φ). This condition defines a co-dimension one surface in configuration space, separating the two phases, called the critical surface or separatrix. The statistical input to the rate is the probability P(|φ†φ − ϕ²_C| < ε/2) of being in a small vicinity, of width ε, around the critical surface. In the context of the nucleation rate, this should be normalised relative to the probability of the metastable phase, P(φ†φ < ϕ²_C). Some examples of histograms are shown in Fig. 7, where the region |φ†φ − ϕ²_C| < ε/2 has been highlighted and labelled "critical bubble". To obtain the probability flux through the critical surface in configuration space, one multiplies the probability density at the critical surface by the magnitude of the velocity perpendicular to it, ⟨|∆(φ†φ)/∆t|⟩ evaluated at ϕ²_C. Finally, to obtain the nucleation rate from the probability flux, we must account for the fact that trajectories can cross the critical surface more than once, and may in fact cross an even number of times and hence not tunnel. Some example trajectories are shown in Fig. 8. To account for this effect, one introduces the dynamical prefactor

d = ⟨ δ_tunnel / N_crossings ⟩,

where δ_tunnel = 1 if the trajectory tunnels and δ_tunnel = 0 if not, and N_crossings is the number of crossings of the separatrix. From the definition, one can see that d lies between zero and one.

¹ As demonstrated in Refs. [29,121], an appropriately sized lattice must be sufficiently large that the bubble takes up a volume fraction less than about 4π/81 ≈ 15%. On the other hand, in order that only one bubble is accommodated, the lattice extent should be much smaller than the average distance between nucleating bubbles in infinite volume. In practice this latter condition is irrelevant, as the exponential suppression of bubble configurations implies that their average separation is exponentially large.
If all critical bubbles were equally likely to expand to the broken phase or contract to the symmetric phase, and all crossed the critical surface either once or twice, then d = 1/2. In fact, one expects d < 1/2, due to the spiky nature of Langevin evolution causing multiple crossings; see Fig. 8. Conversely, d increases as the trajectory is downsampled, though this is compensated for by a decrease in the magnitude of ⟨|∆(φ†φ)/∆t|⟩ at ϕ²_C, such that the product is independent of the sampling rate [29].
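The dynamical prefactor can be estimated from an ensemble of short real-time trajectories launched from the separatrix. A minimal sketch, assuming each trajectory is stored as a time series of the order parameter (function name and data layout are illustrative, not the production analysis):

```python
import numpy as np

def dynamical_prefactor(trajectories, phi2_C):
    """Estimate d = < delta_tunnel / N_crossings > over an ensemble of
    trajectories of the order parameter phi^2(t), each started near the
    separatrix phi^2 = phi2_C. delta_tunnel = 1 if the trajectory ends
    on the broken side; N_crossings counts separatrix crossings."""
    values = []
    for traj in trajectories:
        signs = np.sign(traj - phi2_C)
        # A crossing is a sign change of (phi^2 - phi2_C).
        n_cross = np.count_nonzero(np.diff(signs) != 0)
        if n_cross == 0:
            continue  # trajectory never crossed; contributes nothing
        delta_tunnel = 1.0 if traj[-1] > phi2_C else 0.0
        values.append(delta_tunnel / n_cross)
    return float(np.mean(values))
```

As in the text, downsampling the stored trajectory reduces the counted crossings and hence raises this estimate; only the product with the flux factor is sampling-rate independent.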
In sum, the nucleation rate per unit volume, as calculated on the lattice, is given by [29,121]

Γ̃ = (1/V) (d/2) ⟨|∆(φ†φ)/∆t|⟩_{ϕ²_C} (1/ε) P(|φ†φ − ϕ²_C| < ε/2) / P(φ†φ < ϕ²_C).   (29)

The statistical part of this expression may be calculated with standard (multicanonical) Monte-Carlo simulations; the dynamical part requires real-time Langevin simulations. The factor of 1/2 removes overcounting, because half of the transitions between the symmetric and broken phases are in the wrong direction. Compared to a simple wait-and-see approach [116-120], Eq. (29) relies on the crucial assumption that the full rate factorises into dynamical and statistical parts. This can be argued for based on the enormous hierarchy of scales between the total timescale of nucleation and the timescale over which the system remains in the vicinity of the critical bubble. Over the exponentially long times taken to get from the symmetric phase to the vicinity of the critical bubble, all correlations in time are expected to be washed out by the large number of interactions which occur, so that this stage proceeds effectively in equilibrium. Only for the short time spent in the vicinity of the critical bubble is it necessary to account for correlations in time. Such a factorisation can be explicitly demonstrated within a saddlepoint approximation of the path integral [124-126]. However, while the present approach relies on this factorisation, lattice simulations go beyond the saddlepoint approximation.
To numerically simulate the Langevin equations, Eqs. (22) and (23), one could directly discretise the time derivative and evolve as a stochastic initial value problem. However, in this approach finite time step errors lead to deviations from the correct thermodynamics of the system, which may cause havoc for the evolution of a finely balanced critical bubble. Alternatively, one may perform simulations using a different dissipative update, if its relationship to Langevin evolution is known. Following Ref. [29], we choose to perform heatbath updates on the gauge fields, which is equivalent to Langevin evolution with the Langevin time step ∆t proportional to the number of heatbath updates per link, n_hb, Eq. (30). This relation was proven for the case where the lattice sites are updated in a random order [99]. We however adopt a checkerboard order, which is significantly simpler to parallelise. This can be partially justified by noting that, within each group of odd or even sites, updates are uncorrelated, so a sequentially ordered update algorithm is equivalent to a randomly ordered one within each group. In addition, we randomly select whether odd or even sites are updated in each sweep of the lattice. The evolution of the Higgs field is parametrically faster than that of the gauge fields; therefore, to leading order in the couplings, the gauge field sees the Higgs field in equilibrium [29,99]. To ensure this, for every gauge-field update we perform a large number η ≫ 1 of heatbath and overrelaxation updates on the Higgs field. Fig. 10 shows how the dynamical prefactor depends on η at x = 0.0152473: the prefactor initially grows with η before eventually flattening off, suggesting a smooth η → ∞ limit, which corresponds to the leading-order result [99]. We have simply used either η = 10 or η = 40 in our final results (for specifics see [75]).
The error thereby introduced is comparable to the uncertainty in the statistical part of the rate.
The y dependence of the statistical part of the nucleation rate can be determined by reweighting, following Eq. (12). This greatly reduces the computational effort of the calculation, as for a given value of x the whole functional dependence on y follows from one simulation at a single value of y. There is unfortunately no such trick applicable to the dynamical part of the nucleation rate. We have therefore studied the y dependence of the dynamical part through direct simulations, some examples of which are shown in Fig. 9. As can be seen, the y dependence is very mild, being consistent with constant within errors. Fig. 9 also reveals some volume dependence of the dynamical part. However, when incorporated into the full nucleation rate, the total volume dependence is milder, consistent with volume-independence within errors. The finite size of ε may be responsible for the volume dependence of the dynamical part. As the volume grows, the minimum of the probability distribution for φ†φ becomes more strongly curved, so that bubbles at the edges of the range |φ†φ − ϕ²_C| < ε/2 are more strongly represented in the dynamical prefactor calculation. These bubbles at the edges of the range are less likely to tunnel, and hence for them d is smaller. While this effect decreases the dynamical part, it increases the statistical part, because the integrated probability P(|φ†φ − ϕ²_C| < ε/2) grows due to the increased probability density at the edges of the range. These two effects cancel, as argued in the appendix of Ref. [29].
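Reweighting in y admits a compact sketch: the 3d action depends on y only through the mass term, which multiplies the volume-integrated quadratic condensate, so configurations sampled at y₀ can be reweighted to nearby y with exponential weights. The constant c below, which stands in for the lattice normalisation of the condensate, and the function name are illustrative assumptions, not the convention of the paper's Eq. (12):

```python
import numpy as np

def reweight_in_y(Phi_samples, obs_samples, dy, c=1.0):
    """Ferrenberg-Swendsen-style reweighting of an observable from y0
    to y0 + dy. Phi_samples: volume-integrated quadratic condensate per
    configuration; obs_samples: the observable measured on the same
    configurations. The constant c absorbs lattice normalisations and
    is a placeholder (assumption, not the paper's convention)."""
    log_w = -c * dy * np.asarray(Phi_samples, dtype=float)
    log_w -= log_w.max()            # stabilise the exponentials
    w = np.exp(log_w)
    return float(np.sum(w * np.asarray(obs_samples, float)) / np.sum(w))
```

In a multicanonical run the same weights are applied on top of the inverse multicanonical weight of each configuration.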
The relationship between the physical nucleation rate Γ and the rescaled quantity calculated on the lattice, Γ̃, takes the form

Γ = [(g₃²)²/σ_el] (g₃²)³ Γ̃.   (31)

The prefactors on the right-hand side of Eq. (31) arise from the relation between physical units and those used on the lattice [99]. The factors also reflect properties of the typical infrared gauge configurations relevant to bubble nucleation: the first factor is the inverse time scale of their evolution, and the second their inverse volume scale. On a sufficiently large lattice, finite-volume effects on bubble nucleation are exponentially small, owing to the model being gapped. This requires that the bubble fits inside the lattice with enough space around it that it does not interact with itself through the periodic boundary conditions. As shown in Refs. [29,121], this is equivalent to the condition that the bubble takes up less than approximately a 4π/81 fraction of the total lattice volume. In smaller lattices, the lowest-energy configuration is distorted away from spherical, becoming either cylindrical or slab-like. Transitions between these discretely different geometrical configurations appear as kinks in the histograms. There is another kink at the transition between bulk fluctuations and localised bubble fluctuations [29,121,127-129]. Our lattices were chosen to be sufficiently large to accommodate spherical bubbles. For the smallest lattice spacings, results on a number of different lattice volumes are shown, and agree within error. This demonstrates that our lattices are sufficiently large that we may neglect volume dependence. Table II lists the complete set of parameters and lattice volumes on which we have computed the nucleation rate. Also shown in Fig. 11 are the effects of changing the lattice spacing while keeping the physical volume fixed. We perform continuum extrapolations by fitting a polynomial of the form 1 + a², for each value of y_c(a) − y.
This is justified because all our simulations were performed with an O(a) improved lattice action. The only place where O(a) corrections may in principle arise is through the relation between Langevin and heatbath updates [99], though we expect such corrections to be small, if present, as they only affect the dynamical prefactor.
The calculated nucleation rate is a function of y, with error bars, for each value of x. For convenience of future use, we fit our numerical results to a simple parameterisation in y_c − y, with coefficients s_i. For each value of x, we perform the fit over the range of y_c − y shown in Figs. 11 and 12. The parameterisation is motivated by the expected form in the thin-wall limit, (y_c − y) → 0⁺, in which the action grows as σ³/[(y_c − y)∆⟨φ†φ⟩]², written here in terms of the surface tension and the jump in the quadratic condensate, each evaluated at y = y_c. Note, however, that we do not perform the fit in the limit y → y_c, but rather in some range of nonzero y_c − y. Thus, our fit results s_i are not the coefficients of an expansion about y = y_c, but simply parameterise our results over the range of y studied. The results are collected in Table III.

TABLE II. Parameters and lattice volumes used for the nucleation rate [75]. Note that larger lattices were required at larger x because, due to the small surface tension, the cosmologically relevant critical bubbles, for which −log Γ̃ ∼ 100, are closer to the thin-wall limit. All volumes are sufficiently large that the bubble takes up a small volume fraction (≲ 4π/81), ensuring that the bubble does not see itself [29]. Here ϕ²_S and ϕ²_B refer to the positions of the peaks of the symmetric and broken phases respectively.

TABLE III. Lattice Monte-Carlo results for fits to the bubble nucleation rate, from this work and from Ref. [29]. The (large) errors for x = 0.036 follow from assuming ±1 errors on the logarithm of the rate [29]. The asterisks (*) are a reminder that, for these values of x, only two lattice spacings have been used to extrapolate to a → 0.
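Any parameterisation of −log Γ̃ that is linear in the coefficients s_i can be fitted by ordinary least squares. A minimal sketch with an assumed three-term form whose leading term mirrors the thin-wall divergence (the exact form used in the paper may differ):

```python
import numpy as np

def fit_rate(delta_y, minus_log_rate):
    """Fit -log(Gamma~) = s1/dy^2 + s2 + s3*dy over a range of
    dy = y_c - y. The 1/dy^2 term mimics the thin-wall divergence; the
    three-term ansatz is illustrative. Since the model is linear in the
    s_i, an ordinary least-squares solve suffices."""
    dy = np.asarray(delta_y, dtype=float)
    A = np.column_stack([1.0 / dy**2, np.ones_like(dy), dy])
    s, *_ = np.linalg.lstsq(A, np.asarray(minus_log_rate, float), rcond=None)
    return s  # array [s1, s2, s3]
```

A weighted version (dividing each row of A and the data by the error bar) reproduces the error-weighted fits used for the tabulated results.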
Constructing the perturbative expansion for the nucleation rate requires some care. We utilise the EFT approach to bubble nucleation [125], and perform a strict expansion in x to ensure order-by-order gauge and renormalisation-scale invariance [125,130,131]. Details are given in Appendix B. Beyond LO, the difference between the values of y_c in successive approximations poses difficulties for a strict expansion, because at y_c the rate is nonanalytic and the logarithm of the rate is singular. This issue can be overcome by computing the rate at fixed δy ≡ y_c − y, rather than at fixed y. At NLO, this means that one writes y = y_c^LO + y_c^NLO − δy, and then splits the LO and NLO parts of y_c between the corresponding parts of the action. Without this trick, perturbation theory breaks down altogether: log Γ̃_NLO > 0 for all the values of x studied.
Though we have not done so here, the next-to-next-to-leading order (NNLO) perturbative results could be constructed from the numerical results of Refs. [47,132], following the prescription for scale-shifters in Ref. [125].
Comparing lattice and perturbation theory for the nucleation rate shows a similar trend as for the equilibrium quantities: NLO corrections are essential for any quantitative agreement, but sizeable discrepancies still remain. At fixed y_c − y and at LO in powers of x, perturbation theory significantly overestimates the nucleation rate: (−log Γ̃_LO)/(−log Γ̃) ∼ 1/2, or very roughly Γ̃_LO/Γ̃ ∼ e^60. The perturbative rate is also a shallower function of y_c − y over the range studied. Conversely, if one is interested in the degree of supercooling at a fixed value of the rate, LO perturbation theory gives a significant underestimate: (y_c − y)_LO/(y_c − y) ∼ 1/2. Extending to NLO in powers of x, perturbation theory performs significantly better. The logarithm of the NLO nucleation rate at fixed y_c − y agrees with the lattice to within about 20% over the range studied. However, in all cases the slope of the rate appears to disagree significantly, implying disagreement on the duration of the transition (see Sec. IV C) and on the rate at both larger and smaller supercooling.

B. Visualising bubble nucleation
Typical lattice field configurations in the Boltzmann distribution have fluctuations on the lattice scale. In the continuum limit a → 0, these fluctuations give an infinite contribution to the free energy, and to the mass of the Higgs field, essentially a manifestation of the Ultraviolet Catastrophe. These infinite contributions are cancelled by counterterms, which grow as 1/a and log a in the continuum limit. Due to the superrenormalisability of this theory, its ultraviolet behaviour is simple, and hence the counterterms can be computed exactly [57]. This cancels divergences in physical observables such as the latent heat or surface tension, but not in the lattice field configurations themselves, which retain lattice-scale fluctuations. Thus, bare lattice configurations do not have a good continuum limit, and we cannot simply identify the most likely configurations between the two phases as physical critical bubbles.
We can, however, construct a field configuration with a good continuum limit by coarse-graining over the lattice scale. Such coarse-graining is also motivated within perturbative approaches to bubble nucleation [125,133]: In the presence of a hierarchy of scales coarse-graining is necessary to correctly describe the critical bubble at leading order. In radiatively-induced transitions (such as this one) coarse-graining is also necessary for the existence of the critical bubble within perturbation theory.
For our coarse-graining procedure, we smooth the Higgs field by combining the field at each point with its nearest neighbours in the three positive lattice directions i, suitably parallel transported with the lattice gauge link variables U_i(x). After smoothing the field n times, structures smaller than na are washed out, while larger-scale structures are largely unaffected. Thus, if one is interested in the field on physical length scales ∼ l, this is well exhibited by choosing 1 ≪ n ≲ l/a. Figure 13 shows two isosurface plots of φ†φ after smoothing the Higgs field to remove lattice-scale noise. The particular configuration was extracted from the Boltzmann distribution, saved to file as the Markov chain crossed the separatrix φ†φ = ϕ²_C. The origin of the coordinates has been shifted so that the bubble lies in the centre. A video showing a sequence of isosurface plots for an example real-time trajectory of bubble growth can be seen at [48].
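A gauge-covariant smoothing step of this kind can be sketched as follows; the smearing weight alpha, the array layout, and the function name are illustrative assumptions (the paper does not specify its coefficients):

```python
import numpy as np

def smear_higgs(phi, U, alpha=1.0 / 6.0):
    """One gauge-covariant smearing step for a Higgs doublet
    phi[x,y,z,:] with SU(2) links U[mu,x,y,z,:,:] on a periodic
    lattice. Each site is replaced by a weighted average of itself and
    its six parallel-transported neighbours; alpha is an illustrative
    smearing weight."""
    new = (1.0 - 6.0 * alpha) * phi
    for mu in range(3):
        fwd = np.roll(phi, -1, axis=mu)            # phi(x + mu)
        bwd = np.roll(phi, +1, axis=mu)            # phi(x - mu)
        U_bwd = np.roll(U[mu], +1, axis=mu)        # U_mu(x - mu)
        # Transport neighbours to x: U_mu(x) phi(x+mu) and
        # U_mu(x-mu)^dagger phi(x-mu).
        new += alpha * np.einsum('...ab,...b->...a', U[mu], fwd)
        new += alpha * np.einsum('...ab,...b->...a',
                                 U_bwd.conj().swapaxes(-1, -2), bwd)
    return new
```

Iterating this map n times washes out structure below na, as described in the text; with unit links and a constant field the step acts as the identity, as a basic sanity check.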
For weaker transitions (larger x), the bubble configurations relevant for transitions with −log Γ̃ ∼ 100 are comparatively larger. The trend in bubble size can be understood from an argument based on the thin-wall approximation, as a consequence of the surface tension decreasing for weaker transitions: for a bubble of fixed size, decreasing the surface tension increases the nucleation rate, and thus to return to −log Γ̃ ∼ 100 one must increase the size of the bubble.
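The thin-wall argument can be made quantitative. Holding the critical action fixed (so that −log Γ̃ ∼ 100, up to prefactors) while decreasing σ:

```latex
E_c = \frac{16\pi}{3}\,\frac{\sigma^3}{p^2} \approx \text{const}
\;\;\Rightarrow\;\;
p \propto \sigma^{3/2}
\;\;\Rightarrow\;\;
R_c = \frac{2\sigma}{p} \propto \sigma^{-1/2},
```

so the critical radius grows as the surface tension shrinks, consistent with the larger bubbles observed at larger x.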

C. Cosmological evolution
Given the bubble nucleation rate, one can determine the bulk evolution of the phase transition [114,115,134]. The onset of the transition, when there is approximately one bubble nucleated within a Hubble volume, occurs approximately when Γ ∼ H⁴, where H is the Hubble rate. The inverse duration β can be defined, Eq. (36), as the inverse of the time over which f_sym varies by an O(1) amount. In the second equality of Eq. (36), we have made the assumption of radiation domination, so that dT/dt = −HT.
As the phase transition proceeds, the fraction of space which remains in the symmetric phase, f_sym, satisfies the approximate equality

f_sym(t) ≈ exp[ −(4π/3) v_w³ ∫ dt′ Γ(t′) (t − t′)³ ],   (37)

where v_w is the bubble wall speed and the integral runs over times t′ < t. Due to the strong exponential growth of the bubble nucleation rate with time, the temperature at which the phase transition takes place is relatively well defined, independently of the precise choice of f_sym ∼ 1.
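The false-vacuum fraction can be evaluated numerically for a given rate history. A minimal sketch, assuming bubbles nucleate with negligible initial size and grow at constant wall speed v_w:

```python
import numpy as np

def f_sym(t, rate, v_w, t_grid):
    """Fraction of space in the symmetric phase,
    f_sym(t) = exp(-(4*pi/3) * v_w**3 * int dt' rate(t') * (t - t')**3),
    evaluated by trapezoidal quadrature over t_grid, which must start
    early enough that the rate is negligible there. Assumes bubbles of
    negligible initial size growing at constant wall speed v_w."""
    tp = t_grid[t_grid <= t]
    g = rate(tp) * (t - tp) ** 3
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(tp))
    return float(np.exp(-(4.0 * np.pi / 3.0) * v_w**3 * integral))
```

For an exponentially growing rate Γ(t) = Γ₀ e^{βt}, the integral has the closed form 6 Γ₀ e^{βt}/β⁴, which the quadrature reproduces and which underlies the percolation condition below.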
Rewriting Eq. (37) in terms of the rescaled lattice nucleation rate gives the approximate condition for percolation, Eq. (38), where we have defined β̃ ≡ −∂_y log Γ̃. In deriving the numerical value on its right-hand side, we have assumed the transition to take place at T ≈ 140 GeV [9,40], and have substituted SM-like parameters [16,29]. This value, 137, will vary by O(1) depending on the specific 4d model considered. We have also approximated the factor β, Eq. (39), by neglecting the parametrically slower T-dependence of x and g₃²/T in comparison with that of y; see the discussion around Eq. (10). Note the factorisation of ultraviolet and infrared contributions in Eq. (39).

FIG. 15. The (scaled) inverse duration of the transition, evaluated at percolation. Lattice results from this paper and from Ref. [29] are shown.

Fig. 14 shows the degree of supercooling at percolation, y_c − y_p, and Fig. 15 shows the (scaled) inverse duration of the transition, β̃_p. Together with the lattice data, we have included the LO and NLO perturbative results. Again we have used a strict expansion in x to determine the NLO values, thereby ensuring order-by-order gauge invariance.

FIG. 14. The degree of supercooling at percolation. Lattice results from this paper and from Ref. [29] are shown. The asterisks (*) are a reminder that, for these values of x, only two lattice spacings have been used to extrapolate to a → 0.

The NLO perturbative results agree well with
the lattice for the smallest value of x, but by x = 0.05 are no closer than the LO results.

V. CONCLUSIONS
In this paper, we have carried out extensive lattice Monte-Carlo simulations and continuum extrapolations of the SU(2) Higgs 3d EFT. This model describes the high temperature thermodynamics of the Standard Model electroweak sector (up to small corrections), as well as the thermodynamics of a wide range of BSM extensions of the electroweak sector. We have focused on the region of parameter space containing first-order phase transitions in this EFT, motivated by planned gravitational wave experiments, and by the possibility of successful electroweak baryogenesis.
Our results significantly extend those of the original works [27,29], and provide a data resource which can be used to reliably and quantitatively determine properties of first-order phase transitions in extensions of the electroweak sector. Due to the factorisation of infrared and ultraviolet contributions, evidenced in Eqs. (16), (20), (31) and (39), once the effective couplings of this 3d EFT have been calculated for a given 4d model, the nonperturbative thermodynamics of the 4d model can be simply read off. The results for the bubble nucleation rate also allow one to estimate the gravitational wave spectrum produced by sound waves in the fluid plasma [88][89][90]. This approach was adopted in Ref. [40], using the single lattice result for the bubble nucleation rate from Ref. [29].
The results of this paper are directly applicable to BSM scenarios in which new degrees of freedom, with effective masses at the transition of order O(gT) or greater, induce the Higgs symmetry-breaking transition to be first order. However, in scenarios where BSM degrees of freedom are lighter than this, or participate directly in the transition, such as in two-step phase transitions, new lattice Monte-Carlo simulations in other 3d EFTs are necessary; see for example Refs. [42,60,135-138], and Refs. [139,140] in a different context.
By densely scanning the parameter space of the 3d SU(2) Higgs model, we have revealed information on the functional forms of key thermodynamic quantities, such as the critical mass and the jump in the Higgs quadratic condensate. Beyond their intrinsic value, these can be used to provide rigorous tests of different perturbative approaches [46,47,56,141], and alternative nonperturbative approaches [142,143], as a function of the perturbative expansion parameter; see for example Refs. [138,144,145]. While perturbative approaches to the thermodynamics of non-Abelian gauge theories are fundamentally incomplete [22], the computational cost of lattice Monte-Carlo simulations means that complementary approaches are necessary for exploring the high dimensional parameter spaces of possible physical models. Reliably benchmarking our confidence in such complementary methods is therefore imperative.
We have tested the first two orders of the perturbative expansion in x, and find the NLO approximation to provide reasonable estimates for most of the quantities studied, at least for x ≲ 0.05. However, further work at higher orders in this expansion is necessary to determine to what extent it converges onto the lattice results, especially given the large disparity between LO and NLO. The extension to NNLO is presented in contemporary works [46,47].
A possible extension of this work would be to perform a similar lattice Monte-Carlo study of the 3d U(1) Higgs model. This model does not suffer from the Infrared Problem [22], its infrared sector being free. Hence, by extending Refs. [146,147] and simulating a number of different phase transition strengths in the U(1) Higgs model, one would be able to address the question: how numerically important is the Infrared Problem?
The simulations presented in this paper made use of well-understood and highly efficient algorithms for the study of the thermodynamics of 3d bosonic theories [27,148]. However, our study has revealed a need to develop more efficient algorithms specifically for the study of the bubble nucleation rate. The calculation of the bubble nucleation rate requires large lattices to comfortably fit a critical bubble, and in this case the multicanonical algorithm of Ref. [27] requires very long Markov chains in order to sample the full phase space. The underlying reason is that the volume-averaged order parameter φ†φ cannot distinguish between nascent, localised bubbles and delocalised fluctuations spread over the lattice, preventing the multicanonical algorithm from efficiently tunnelling between the symmetric phase and field configurations containing critical bubbles. Development of more efficient algorithms for simulating bubble nucleation is therefore an important next step.

… sufficiently large volumes. The same is true for the bubble nucleation rate on lattices where the bubble takes up a sufficiently small fraction of the lattice, such that it cannot see itself through the periodic boundaries. In these cases, we simply fit a constant to the results from the largest few lattices, ensuring that they agree within error. The lattice spacing dependence of the surface tension is more complicated, but the leading dependence is known analytically, so it can be subtracted off and the remaining dependence fitted.
Without the O(a) improvement, all the physical quantities that we consider suffer from O(a) corrections at finite lattice spacing. In all these cases we have performed polynomial fits, choosing the lowest degree polynomial such that χ 2 /d.o.f. ∼ 1, either 1 + a or 1 + a + a 2 . With the O(a) improvement, the linear lattice spacing dependence of all quantities except y c is cancelled, so the linear term is omitted from the polynomial fits. The benefits of the O(a) improvement for the condensates and surface tension are marked.
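The polynomial fits described here amount to weighted linear least squares. A minimal numpy sketch (function name illustrative):

```python
import numpy as np

def continuum_extrapolate(a, vals, errs, improved=True):
    """Continuum extrapolation of lattice data: fit vals(a) with
    c0 + c1*a + c2*a^2, dropping the linear term when the O(a)-improved
    action is used (improved=True). Returns the a -> 0 value c0 and
    chi^2 per degree of freedom, from weighted linear least squares."""
    a = np.asarray(a, float)
    vals = np.asarray(vals, float)
    errs = np.asarray(errs, float)
    cols = [np.ones_like(a), a**2] if improved else [np.ones_like(a), a, a**2]
    # Weight each row by the inverse error bar.
    A = np.column_stack(cols) / errs[:, None]
    b = vals / errs
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    chi2 = float(np.sum((A @ c - b) ** 2))
    dof = len(a) - len(cols)
    return float(c[0]), chi2 / max(dof, 1)
```

Comparing the returned chi^2/d.o.f. between the two choices of polynomial mirrors the model-selection criterion used in the text.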
At tree-level in the 3d EFT, symmetry-breaking phase transitions occur at m₃² = 0 and appear to be of second order. However, loop corrections may modify the order of the transition, and for a first-order phase transition the critical temperature occurs for m₃² > 0. A strict loop expansion for the transition leads to infrared divergences and stray imaginary parts at two-loop order [149], demonstrating the inapplicability of the loop expansion in this case. For sufficiently small x, a perturbative expansion in x can be constructed [29,77,146]. Doing so requires including some one-loop terms in the LO approximation, thereby resumming the loop expansion. The resulting expansion is closely related to the coupling expansion for the case where the 4d couplings satisfy λ ∼ g³ [44,45]. In contemporary works, this expansion has been formalised and extended to NNLO [46,47].
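To make the resummation explicit: at LO in the x-expansion, the one-loop cubic term generated by the gauge fields is promoted into the leading-order potential. In standard conventions for the SU(2) Higgs 3d EFT, with a real background field φ and gauge-boson mass m_W = g₃φ/2 (the normalisations here are illustrative and may differ from those of the cited references),

```latex
V_{\mathrm{LO}}(\varphi) = \frac{1}{2} m_3^2 \varphi^2
  - \frac{g_3^3}{16\pi} \varphi^3
  + \frac{1}{4} \lambda_3 \varphi^4 ,
```

where the cubic term is the one-loop contribution −6 m_W³/(12π) of the six massive transverse gauge-field modes. It is this radiatively generated barrier that renders the transition first order at small x.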
The EFT approach to thermal bubble nucleation [125] offers a consistent method to calculate the nucleation rate in perturbation theory. This approach is based on the construction of the nucleation scale effective action S_nucl by integrating out all parametrically heavier modes, which can be carried out directly at the level of the path integral [150]. In the context of the SU(2) Higgs theory at small x, the gauge fields can become parametrically heavier than the Higgs field on the bubble, and therefore must be integrated out. The gauge fields are however light in the symmetric phase; they are scale-shifters [125]. Integrating out the gauge fields yields terms in the effective action which are local at LO and NLO, but nonlocal at higher orders. At LO, the nucleation scale effective action is
S_nucl^LO = ∫ d³x [ ½(∇φ)² + ½ m₃² φ² − (g₃³/16π) φ³ + ¼ λ₃ φ⁴ ],
which is of O(x^(−3/2)) when evaluated on the critical bubble, assuming the mass is not parametrically smaller than the critical mass. Here φ is a real-valued background field, and the cubic term is generated at one loop by the gauge fields. The NLO corrections to the nucleation scale effective action, also generated by the gauge fields, are of O(x^(−1/2)) when evaluated on the critical bubble. This nucleation scale effective action has been discussed in Refs. [29,125,130-132].
Corrections to S_nucl^LO + S_nucl^NLO arise at O(x⁰) due to nucleation scale, or lighter, modes fluctuating in the bubble background. This holds as long as the mass is not parametrically smaller than the critical mass. These corrections depend on the shape of the critical bubble, and make up the statistical prefactor of the nucleation rate, which for this theory has been calculated in Refs. [47,132]. At this same order, the nucleation rate receives corrections related to the real-time growth of the critical bubble, the dynamical prefactor [124-126]. We have stopped short of computing these corrections in our perturbative analysis.
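Putting these pieces together, the nucleation rate at this order takes the schematic form (the decomposition below is a sketch; precise definitions of the prefactors are given in the cited references, whose notation may differ):

```latex
\Gamma \simeq A_{\mathrm{dyn}} \, A_{\mathrm{stat}} \,
  e^{-S^{\mathrm{LO}}_{\mathrm{nucl}} - S^{\mathrm{NLO}}_{\mathrm{nucl}}} ,
```

where A_stat is the O(x⁰) statistical prefactor from nucleation-scale fluctuations around the critical bubble, and A_dyn is the dynamical prefactor associated with the real-time growth of the critical bubble.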
The nucleation scale effective action can also be used to calculate the equilibrium properties of the phase transition. This is no accident: it follows because the largest contributions to the change in free energy ∆F are those due to the heaviest modes, and hence ∆F ≈ T S_nucl at LO and NLO. The perturbative results for equilibrium quantities are