Numerical evidence for fractional topological objects in SU(3) gauge theory

The continued development of models that propose the existence of fractional topological objects in the Yang-Mills vacuum has called for a quantitative method to study the topological structure of $\mathrm{SU}(N)$ gauge theory. We present an original numerical algorithm that can identify distinct topological objects in the nontrivial ground-state fields and approximate the net charge contained within them. This analysis is performed for $\mathrm{SU}(3)$ colour at a range of temperatures crossing the deconfinement phase transition, allowing for an assessment of how the topological structure evolves with temperature. We find a promising consistency with the instanton-dyon model for the structure of the QCD vacuum at finite temperature. Several other quantities, such as object density and radial size, are also analysed to elicit a further understanding of the fundamental structure of ground-state gluon fields.


I. INTRODUCTION
The nonperturbative nature of quantum chromodynamics (QCD) precludes the analytic study of many of its most important phenomena, such as quark confinement. In SU(N) gauge theory, an area-law behaviour of large Wilson loops is often taken as an indicator of confinement in the context of static heavy quarks [1]. This picture is complicated by the presence of light quarks, which results in string breaking at large separations [2]. One can instead analyse the Schwinger function of the gluon propagator, where a transition to negative values at large Euclidean times implies the spectral density is not positive definite [3]. It follows that there is no Källén-Lehmann representation of the gluon propagator, a manifestation that the corresponding physical states are confined. This is found in theories with or without dynamical quarks [4,5], suggesting it is the behaviour of the gluon fields that underpins confinement, though no complete theoretical mechanism is currently known. Pure SU(N) Yang-Mills theory is known to experience a phase transition at a critical temperature T_c above which confinement breaks down. This motivates exploring the evolution of the gauge fields through the phase transition to elicit fundamental properties that can be attributed to confinement.
Nonperturbative aspects of QCD are primarily studied through lattice QCD, wherein the theory is formulated on a discrete lattice in Euclidean spacetime. Modelling SU(N) gauge theory in Euclidean spacetime brought about the discovery of the instanton [6], a classical topological configuration which corresponds to the (anti-)self-dual local minima of the Yang-Mills action functional. The instanton solution formed the basis of the instanton liquid model [7], which sought to model the QCD vacuum in terms of an ensemble of interacting semiclassical instantons, with fluctuations around the classical solution. The model was able to explain chiral symmetry breaking, though did not account for confinement [7-12]. Models subsequently emerged that proposed the existence of fractional topological configurations. Analytic self-dual solutions to the Yang-Mills equations with fractional charge ∼ 1/N have been known to exist on the twisted torus T^4 since the early 1980s [13]. Cooling methods with twisted boundary conditions have isolated such "fractional instantons" [14,15], and the solutions have been studied numerically in SU(2) [16,17] and general SU(N) [18,19] gauge theory. Unlike the regular instanton, fractional instantons possess Z_N flux and have thus been put forth as a possible microscopic mechanism for confinement [14,15,20]. The original twisted T^4 solution has since been extended to a vastly broader class of configurations [21,22].
By additionally varying the periods of the torus in each dimension, classical fractional solutions have been constructed on R^n × T^{4-n} for different n. For instance, whilst the caloron solution has n = 3, Refs. [35-37] considered "doubly-periodic" vortex-like solutions on R^2 × T^2 (n = 2) and Refs. [16,17,38] explored the "Hamiltonian geometry" R × T^3 (n = 1). Although specific periodicities and twists are necessary to isolate these fractional solutions, they are certainly not required for their existence [20]. More recently, there have also been constructions of fractional topological objects with charges ∼ 1/N in the confining phase through quantum fluctuations of an effective action [39].
Working in SU(3) pure gauge theory, we present a numerical algorithm that can identify distinct topological objects within an arbitrary distribution and approximate the net topological charge contained within each such object. This analysis is performed at a range of temperatures either side of T_c, providing a direct evaluation of the underlying structure in the gluon fields and how this evolves with temperature. The conclusions are subsequently compared against the instanton-dyon model to test the validity of its main predictions. This paper is structured as follows. In Sec. II, our algorithm is presented in detail and tested. Section III covers the smoothing applied to the gauge fields. The continuum limit is explored in Sec. IV, and our main results are subsequently presented in Sec. V. Thereafter, the significance of the results is discussed in Sec. VI, along with an investigation into several other statistics available through our methods. Finally, we summarise our main findings in Sec. VII.

II. CALCULATING CHARGES
The core of our analysis involves a novel method to approximate the net charge of distinct topological objects within an arbitrary topological charge distribution. This is fundamentally a very complicated task, as we desire a technique that avoids any reference to a specific topological configuration. A previous approach to studying topological structure, presented throughout Refs. [40-46], proceeds by implementing a threshold that divides the distribution up into disconnected clusters. Though interesting properties can be studied through this method, it is unsuitable for our primary goal of providing an unbiased estimate of the charge within each distinct object, for two primary reasons. First, points below the threshold that are not assigned to a cluster are disregarded, even though these may be an important contribution to the charge of some objects. Second, distinct objects can be connected by a path above the threshold and identified as part of the same cluster, which is a clear hindrance to our objective.
To overcome these issues, we have devised a more fundamental strategy: an iterative procedure in which objects are first identified by peaks in the topological charge density and then allowed to grow outwards one step at a time. Subject to a few assumptions, we can be confident the set of points ultimately assigned to an object is a reasonable representation of its distribution. The algorithm is described in detail below.

A. Algorithm
We start by presenting the full algorithm and discuss the motivation for each step in the following subsections. The algorithm is to be applied to a UV-smoothed configuration, as covered in detail in Sec. III. For a topological charge distribution q(x):

1. Identify the peaks of objects through local maxima [for q(x) > 0] and minima [for q(x) < 0] within a 3^4 hypercube. Label each one with an identifying "object number".
2. Proceeding in order of smallest to largest peak, take all points currently assigned to the corresponding object number, {x}, and assign the same object number to neighbouring points, {x′}, in the 3^4 hypercube centred at x which:
   • have not yet been assigned an object number,
   • have the same sign density [q(x′) q(x) > 0], and
   • have a lower absolute topological charge density value [|q(x′)| < |q(x)|].

3. Repeat Step 2 until no more valid points can be assigned as part of an object (i.e. no remaining lattice sites satisfy the above criteria).
4. Filter out the peaks that fail to assign all points within the surrounding hypercube. Repeat Step 3 for the surviving peaks to reallocate the newly available points.
5. Calculate the net charge of each object by summing q(x) over each set of points having the same object number.
This procedure is demonstrated visually in Fig. 1.

1. Peak identification
Treating each object as a localised region of dense topological charge, q(x), it follows that each will have some distribution which decays away from a peak value. This structure has been seen before in visualisations [47,48], and allows one to identify the approximate centres of distinct objects through local extrema. To determine whether a point qualifies as a local maximum or minimum, we consider the 3^4 hypercube centred at that point, which is to say one point in every direction (including diagonals).
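As an illustrative sketch, this peak identification can be expressed with standard array operations; the 4D toy density below, with one positive and one negative lump, is an assumption for demonstration rather than data from this work.

```python
import numpy as np
from itertools import product

def find_peaks(q):
    """Label strict local extrema of q over the 3^4 hypercube (periodic lattice)."""
    offsets = [d for d in product((-1, 0, 1), repeat=4) if any(d)]
    axes = (0, 1, 2, 3)
    # Stack the density at all 80 neighbouring sites of each point.
    neighbours = np.stack([np.roll(q, d, axis=axes) for d in offsets])
    is_max = (q > 0) & (q > neighbours).all(axis=0)   # maxima of positive charge
    is_min = (q < 0) & (q < neighbours).all(axis=0)   # minima of negative charge
    labels = np.zeros(q.shape, dtype=int)
    peaks = is_max | is_min
    labels[peaks] = np.arange(1, peaks.sum() + 1)      # assign object numbers
    return labels

# Toy density: one positive and one negative lump on an 8^4 lattice.
L = 8
x = np.indices((L,) * 4)
lump = lambda c, w: np.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / w)
q = lump((2, 2, 2, 2), 2.0) - lump((6, 6, 6, 6), 2.0)
labels = find_peaks(q)
```

Here the two lump centres are identified as the only peaks, one per sign of the charge density.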

2. Allocating points
The substance of our algorithm consists of iteratively allocating points to one of the objects. In a specified order (see below), all points {x} currently assigned a given object number assign the same object number to neighbouring points {x′} within the 3^4 hypercube surrounding x that pass the below constraints.
• x′ has not previously been assigned an object number. This simply ensures we do not overwrite information already associated with another object.
FIG. 1. A graphic demonstrating our algorithm, proceeding from left to right then top to bottom. The order in which objects grow is red → blue → purple → green (in greyscale, from darkest to lightest shade of grey). First, the peaks are identified and neighbouring hypercubes assigned (top row). The objects are then allowed to grow until all possible points have been assigned (middle row). Finally, we discard the peaks that fail to assign all points within the 3^4 hypercube, and the remaining objects grow until no more points satisfy the required criteria (bottom row). In this case, these are the green (lightest grey) points.
FIG. 2. A graphic illustrating the topological charge density as a function of two coordinates, highlighting a potential scenario our algorithm must deal with, where two objects of differing sizes are located near each other. In order to stop the narrow object incorrectly obtaining points associated with the broader object, we enforce that each object can only grow downwards in topological charge value.
• q(x′) has the same sign as q(x). This requirement treats positively and negatively charged topological excitations as entirely distinct.
• x′ has a lower absolute topological charge density value. In essence, this condition implements a natural boundary between objects, keeping them contained to a region of the lattice within which the topological charge is solely decreasing. A two-dimensional illustration of this idea is shown in Fig. 2, a hypothetical scenario wherein a narrow object and a broader object are situated near each other. Each has a circle drawn near its base to show that one object is unable to grow within the circle associated with the other when it is restricted to grow downhill. In this way, we guarantee the sites assigned to each object reflect their distributions. This concept readily extends to four dimensions.
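Combining the three constraints with the ascending-peak ordering of Step 2, the growth stage can be sketched as follows; this is a simplified, unoptimised rendering under assumed periodic boundaries, not the production implementation.

```python
import numpy as np
from itertools import product

def grow(q, labels):
    """Grow labelled peaks outwards, claiming at most one shell per pass."""
    L = q.shape[0]
    offsets = [d for d in product((-1, 0, 1), repeat=4) if any(d)]
    # Step 2 ordering: objects grow from the smallest |peak| first.
    peaks = sorted(zip(*np.nonzero(labels)), key=lambda s: abs(q[s]))
    order = [labels[s] for s in peaks]
    changed = True
    while changed:  # Step 3: repeat until no valid points remain
        changed = False
        for obj in order:
            for s in zip(*np.nonzero(labels == obj)):
                for d in offsets:
                    n = tuple((a + b) % L for a, b in zip(s, d))
                    if (labels[n] == 0                    # unassigned
                            and q[n] * q[s] > 0           # same sign of charge
                            and abs(q[n]) < abs(q[s])):   # strictly downhill
                        labels[n] = obj
                        changed = True
    return labels

# Toy check: a single positive lump on a 6^4 lattice should claim every site,
# since every site is reachable from the peak by a strictly downhill path.
L = 6
x = np.indices((L,) * 4)
q = np.exp(-sum((xi - 3.0) ** 2 for xi in x) / 4.0)
labels = np.zeros(q.shape, dtype=int)
labels[3, 3, 3, 3] = 1  # the peak, as identified in the previous step
labels = grow(q, labels)
```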

3. Growing order
The order in which the objects assign their neighbouring points is observed to be important for "in-between" points, where it is ambiguous to which specific object they should be assigned. As mentioned in Step 2, our choice is to perform this process in order of ascending peak value. This is based on the below observations and assumptions:
• Points farther away from lower-peaked objects still have a greater relative weight compared to sharply peaked objects.
• If two objects have similar net charges (not necessarily identical), the lower-peaked one must have a broader distribution.
This order therefore introduces a bias towards smaller-peaked objects such that we conform to the above observations. In Sec. II B, we present test results for both our chosen ordering and the reverse order (descending peak value), emphasising the difference between the two extremes and demonstrating that our selection produces the more accurate results.

4. Dislocation filtering
A nonzero lattice spacing a gives rise to dislocations: fluctuations in the action and topological charge density on the scale of the lattice spacing. We stress that these are distinct from the "genuine" topological features we are interested in, and we thus desire a method to distinguish between the two to prevent our results from being skewed by the presence of lattice artefacts.
For this reason, we implement a cutoff such that any identified peak that fails to assign all points within a defined size is discarded. The objective is to filter out the dislocations with size ∼ O(a). Accordingly, we investigate two different choices for the filter in terms of the lattice spacing:

A. Nearest-neighbour filter: the peak must attain all points 1 unit away in each Cartesian direction (i.e. the peak must be resolved by the lattice spacing).
B. Hypercube filter: the peak must attain all neighbouring points, covering the full 3^4 hypercube in addition to the points 1 unit away.
Note that based on the conditions outlined in Step 2, a size cutoff enforces, as a minimum, that:
• no two peaks overlap within the cutoff,
• the topological charge density q(x) has the same sign at every point within the cutoff of each peak, and
• |q(x)| exclusively decreases up to the cutoff away from each peak.
These constraints make intuitive sense when defining a topological object to have a minimum size. The remaining objects that survive the filter are then allowed to grow until no more points satisfy the criteria in Step 2.
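A minimal sketch of the hypercube variant of this filter, again assuming periodic boundaries:

```python
import numpy as np
from itertools import product

def hypercube_filter(labels, peak_sites):
    """Keep only peaks whose object owns all 3^4 - 1 = 80 neighbouring sites."""
    L = labels.shape[0]
    offsets = [d for d in product((-1, 0, 1), repeat=4) if any(d)]
    survivors = []
    for s in peak_sites:
        obj = labels[s]
        if all(labels[tuple((a + b) % L for a, b in zip(s, d))] == obj
               for d in offsets):
            survivors.append(s)
    return survivors

# Toy labelling: object 1 fills a 3^4 block around its peak, while object 2
# is a lone site, mimicking a dislocation on the scale of the lattice spacing.
labels = np.zeros((6,) * 4, dtype=int)
labels[1:4, 1:4, 1:4, 1:4] = 1
labels[5, 5, 5, 5] = 2
kept = hypercube_filter(labels, [(2, 2, 2, 2), (5, 5, 5, 5)])
```

Only the first peak survives; the isolated site is discarded as a dislocation.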
A natural question that arises from introducing a scale-dependent filter is whether this dependence carries through to our final results. However, an appropriate cutoff will achieve the opposite, precisely because dislocations scale with the lattice spacing. Therefore, with no filter or an especially weak filter, our results would be sensitive to the size of the dislocations and would change if the lattice spacing were varied. Conversely, provided the filter is sufficiently strong to separate out a majority of the dislocations, the remaining topological features should be the same irrespective of the lattice spacing. Using this logic, we deduce that the simple nearest-neighbour filter is too weak, producing results that diminish with the lattice spacing. In contrast, the hypercube filter is found to ensure the desired scale independence. The details are provided in Sec. IV. Hence, we specifically adopt the hypercube in Step 4 as an objective choice of filtering method.

B. Classical Limit
Having developed an algorithm, we must next test it on a configuration with an expected outcome. To achieve this we employ gauge cooling [49-51], which seeks to minimise the local action at each lattice site through a sequential update of the link variables U_μ(x). Zero-temperature gauge configurations under extended cooling are known to approach the classical limit, consisting entirely of (anti-)instantons with integer topological charge [50,51].
Following the procedure outlined in Appendix A, 4000 sweeps of O(a^4)-improved cooling are performed on five 32^3 × 64 pure gauge configurations with a = 0.1 fm. The simulation details for these configurations are provided in Sec. V A. Their properties after cooling, including the action S, integrated topological charge Q and number of identified objects, are summarised in Table I. Based on the filtering detailed in Sec. II A, we find there are no dislocations present under extended cooling, as defined by either filter. Therefore, the number of objects in this instance is precisely the number of local extrema.
The action is normalised by the single-instanton action S_0 = 8π^2/g^2 such that it can be directly compared to Q. We observe that the net topological charge consistently converges to within 1% of an integer in fewer than 75 sweeps through the lattice and remains stable thereafter for the duration of the cooling, in agreement with previous work [52]. The large number of cooling sweeps is to ensure the configurations satisfy self-duality, which is reached when the values of S/S_0 and |Q| for each configuration agree to at least one part in one thousand. The number of extrema N tends to be less than S/S_0 in each case, though the fact they are nevertheless similar indicates these features are instanton-like. Analytic self-dual solutions to the Yang-Mills equations are known to exist for arbitrary topological index [53,54], which we refer to as "multi-instanton" objects. Additionally, the profile of overlapping instantons is known to result in "hiding" instantons in extreme circumstances [55,56]. Both of these could be the source of the slight discrepancy between N and S/S_0.
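These consistency checks amount to a simple numerical test; the tolerances below encode the 1% integer criterion and the one-part-in-one-thousand self-duality agreement quoted above.

```python
def passes_classical_checks(S_over_S0, Q):
    """Check |Q| is within 1% of an integer and that S/S0 agrees with |Q|
    to one part in one thousand, as expected in the self-dual limit."""
    q = abs(Q)
    near_integer = abs(q - round(q)) < 0.01
    self_dual = abs(S_over_S0 - q) / q < 1e-3
    return near_integer and self_dual
```

For instance, a configuration with S/S0 = 12.0005 and Q = -12.001 passes, whereas S/S0 = 15.2 against Q = 12 fails the self-duality criterion.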
The collated results of applying the algorithm to the cooled configurations, for both ascending and descending peak order, are shown in Fig. 3. In these histograms, the horizontal axis shows the absolute value of the charges we calculate for each identified object, whilst the vertical axis gives the number of identified objects within a given interval of values.
As mentioned, even this self-dual limit is not comprised of ideal well-separated instantons, and this has ramifications for accurately capturing the single-instanton properties. To highlight this complication, we visualise the topological charge density on a cooled self-dual configuration in Fig. 4. The overlapping nature of individual topological features is clear, and by colouring the topological charge based on object number we can observe the effective boundary created by the algorithm between these objects. Consequently, we expect to find a distribution of charge values, representing fluctuations around the exact |Q| = 1 solution that arise from overlapping distributions.
For growing in ascending peak order, the majority of charge values we calculate lie very near |Q| = 1, with a small spread of values slightly farther away. The same pattern is seen for the reverse ordering, although to a visibly lesser extent, corroborating our choice. On occasion, integer values of larger magnitude are observed, as several objects have a calculated charge near |Q| = 2. As previously surmised, these could arise from extreme cases of overlapping instantons which have only a single peak in their topological charge density, or signal the presence of multi-instanton objects after extended cooling.
These results provide strong evidence that we can take the mode of the distribution for a general configuration as a reliable indicator of the topological charge values which tend to comprise the gluon-field objects. Their distribution reflects both quantum fluctuations around those solutions and inherent uncertainties in assigning topological charge density to objects in four dimensions. The upshot is that we have developed a means by which we can explore the nature of objects in ground-state fields and their evolution with increasing temperature.

III. SMOOTHING
It is well known that lattice operators for the action and topological charge densities encounter renormalisation factors that differ significantly from 1 and are poorly controlled. For example, different O(a^4)-improved lattice operators will produce significantly different topological charge densities, which is elaborated on in Sec. III B. To ensure reliable results, we accordingly seek to minimise these discretisation effects through the application of smoothing.
In addition to cooling, there are various smoothing algorithms in common use, including APE smearing [57,58], stout-link smearing [59], gradient flow [60,61] and their over-improved variants [62,63]. Over-improvement proceeds by defining a one-parameter family of actions S(ε), with ε tuned to preserve instantons in the smoothing process with a size above some minimum dislocation threshold, ρ ≥ ρ_0. We specifically avoid such methods in our work so as not to bias our results towards a particular topological configuration. Instead, our dislocation filtering additionally serves to remove any topological features that shrink below the lattice spacing during the smoothing process. For this reason, the size cutoff remains important regardless of the level of smoothing.
At this stage we are no longer interested in the classical limit, and instead desire to minimise the amount of smoothing required to accurately resolve the topological objects of a typical vacuum configuration. Cooling is unsuitable for such a gradual process, and thus we implement gradient flow, described below.

A. Gradient flow
The evolution of the link variables U_μ(x) under gradient flow is defined by the differential equation [60]

$$\frac{d U_\mu(x)}{d\tau} = i\,Q_\mu(x)\,U_\mu(x)$$

for dimensionless "flow time" τ, where

$$Q_\mu(x) = \frac{i}{2}\left[\Omega_\mu^\dagger(x) - \Omega_\mu(x)\right] - \frac{i}{6}\,\mathrm{tr}\left[\Omega_\mu^\dagger(x) - \Omega_\mu(x)\right], \qquad \Omega_\mu(x) = \Sigma_\mu(x)\,U_\mu^\dagger(x),$$

and Σ_μ(x) is the sum over ν ≠ μ of the staple products of links Σ_μν(x) connecting U_μ(x) in the μ-ν plane. Q_μ(x) is seen to be traceless Hermitian by construction.
In the interest of preserving locality during the smoothing process, we calculate a staple sum which includes the contributions of the plaquette and the 1 × 2 and 2 × 1 rectangles, with the coefficients corresponding to a standard Symanzik O(a^2)-improved lattice action. Numerical integration of the flow is performed using an Euler integration scheme in which the link variables are updated successively in time steps of ϵ via

$$U_\mu(x) \to \exp\left[\,i\,\epsilon\,Q_\mu(x)\,\right] U_\mu(x).$$

FIG. 4. Visualisations of the topological charge density in the self-dual limit, obtained by slicing along the temporal dimension and visualising the remaining three-dimensional spatial structure. These are six consecutive frames, displayed from left to right then top to bottom. The topological charge density is coloured (shaded) according to the object number in the algorithm, allowing insight into how it divides topological objects in four dimensions. We only visualise the topological charge density above some minimum threshold value to observe the behaviour of the algorithm on the most significant topological charge density; this also enables one to see into the three-dimensional space. The overlapping nature of the instantons, even in this "classical limit", is apparent. The challenging four-dimensional nature of the problem is also revealed. For instance, the red object grows on top of the purple object as we advance in the temporal dimension, and subsequently merges with the yellow object. The algorithm is seen to implement an effective and reasonable boundary between each of the topological features.
One can see that with the given choice of Q_μ(x), gradient flow corresponds to an annealed version of stout-link smearing with an extremely small isotropic smearing parameter. Indeed, for sufficiently small ϵ the finite transformation generated by Euler integration of the gradient flow is equivalent to stout-smeared links [59-61,64-66].
Previous work has shown that taking ϵ ≲ 0.02 is sufficiently small to accurately solve the differential equation and ensure the independence of the results from ϵ [64,66]. That said, we desire to perform enough smoothing to guarantee discretisation errors are negligible, whilst at the same time retaining as many genuine topological features as possible. Based on this, we choose a comparatively small value of ϵ = 0.005. This provides a greater degree of control over the level of smoothing, which is highly beneficial to our analysis.
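To illustrate, one Euler step for a single SU(3) link can be sketched as follows, adopting the stout-smearing convention for the traceless Hermitian Q_μ(x); the random link and the stand-in staple sum are assumptions for testing only. Note the update preserves unitarity exactly.

```python
import numpy as np

def random_su3(rng):
    """Random SU(3) matrix via QR decomposition (illustration only)."""
    z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    u, _ = np.linalg.qr(z)
    return u / np.linalg.det(u) ** (1 / 3)   # fix the determinant to 1

def euler_flow_step(U, Sigma, eps=0.005):
    """One Euler update U -> exp(i*eps*Q) U, with Q the traceless
    Hermitian projection built from the staple sum Sigma."""
    Omega = Sigma @ U.conj().T
    Q = 0.5j * (Omega.conj().T - Omega)
    Q -= (np.trace(Q) / 3) * np.eye(3)       # remove the trace part
    w, V = np.linalg.eigh(Q)                 # exact exponential: Q is Hermitian
    expQ = (V * np.exp(1j * eps * w)) @ V.conj().T
    return expQ @ U

rng = np.random.default_rng(1)
U = random_su3(rng)
Sigma = sum(random_su3(rng) for _ in range(6))  # stand-in for the staple sum
U_new = euler_flow_step(U, Sigma)
```

Since Q is traceless Hermitian, exp(iϵQ) is a special unitary matrix, so the updated link remains in SU(3) to machine precision.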
In addition to removing UV fluctuations, the gradient flow is understood to distort the distribution of topological objects, such as through instanton and anti-instanton pair annihilation. One might be concerned about the potential effect this has on our results. To understand this, we draw on previous work comparing the effects of smoothing on the gluonic definition of the topological charge density to that obtained via the overlap Dirac operator with an ultraviolet cutoff, λ_cut [67]. At low levels of stout-link smearing, the structure of the gluonic density is found to strongly coincide with the overlap definition for a specific λ_cut. Given that the UV-filtered overlap topological charge density has no distortion effects, we can be assured that, at the comparable amount of gradient flow performed in this work, this issue is negligible.
An intuitive picture for this is realised by recalling that at leading order, the gradient flow corresponds to a simple convolution of the gauge field with a Gaussian of RMS radius √(8τ) [61]. This results in a smoothing effect at short flow times τ. It takes extended cooling for instanton and anti-instanton pairs to walk across the lattice and begin to annihilate with each other, as revealed through visualisations [48].
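For a sense of scale, this leading-order radius is easily converted to physical units; the flow times and lattice spacings below are those quoted for our ensembles in Sec. IV.

```python
import math

def smoothing_radius_fm(tau, a_fm):
    """Leading-order RMS smoothing radius sqrt(8*tau) in lattice units,
    converted to fm via the lattice spacing."""
    return math.sqrt(8.0 * tau) * a_fm

# Flow times at which each ensemble satisfies the mode criterion:
coarse = smoothing_radius_fm(0.625, 0.10)   # ~0.22 fm
fine = smoothing_radius_fm(0.525, 0.067)    # ~0.14 fm
```

These sub-fermi radii indicate that the smoothing remains local relative to typical instanton sizes.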

B. Comparison of improvement schemes
The suppression of action and UV fluctuations induced by gradient flow can cause substantial changes to the topological charge density over a relatively small number of updates. This makes selecting the flow time at which to analyse the topological charge a nontrivial matter. Some inroads have formerly been made towards solving this problem by comparing different lattice operators for the action. Besides the "standard" action, such as Eq. (8), one can define an alternate "reconstructed action" via a lattice field strength tensor which can be substituted directly into the definition of the continuum action [52]. These discretisations will experience different renormalisation effects, allowing the two actions to be compared when smoothing to gauge the size of the remaining discretisation artefacts. Nevertheless, it is still unclear exactly how similar the operators should be before one can be confident errors have been amply suppressed. Furthermore, it is not obvious whether comparing two discretisations of the action translates directly to the topological charge density, which is our primary interest. Still, motivated by this idea, we instead utilise two different O(a^4)-improved topological charge operators to assess the magnitude of the discretisation errors. The first of these is an "Improved F_{μν}" scheme, calculated by substituting an O(a^4)-improved field strength tensor into the definition of the topological charge,

$$q(x) = \frac{g^2}{32\pi^2}\,\epsilon_{\mu\nu\rho\sigma}\,\mathrm{tr}\left[F_{\mu\nu}(x)\,F_{\rho\sigma}(x)\right].$$

The improvement of F_{μν} proceeds as follows [52]. First, the m × n clover term C^{(m×n)}_{μν}(x) is defined as the sum of the m × n and n × m Wilson loops in the μ-ν plane touching the point x. Each clover term gives an estimate of the field strength tensor, F^{(m×n)}_{μν}(x). The improved operator is then constructed from an appropriate linear combination of these terms. To eliminate the O(a^2) and O(a^4) errors, it is sufficient to consider (m, n) = (1, 1), (2, 2), (1, 2), (1, 3) and (3, 3), with the desired coefficients expressed in terms of k^{(3×3)}, the coefficient of the 3 × 3 clover term, which is a free parameter.

Alternatively, one can proceed by defining a series of discretised topological charge operators q^{(m×n)}(x), one for each clover term [68], where F^{(m×n)}_{μν}(x) is as defined in Eq. (13) and a factor of 1/(m^2 n^2) is included in the definition for convenience. These terms can subsequently be combined to produce a different improved topological charge operator, which we refer to as the "Improved TopQ" scheme. Since q(x) is nonlinear in F_{μν}, an O(a^4)-improved operator constructed via this method will have different renormalisation effects than the Improved F_{μν} scheme. The same five clover terms can be used to eliminate the O(a^2) and O(a^4) corrections, with the coefficients in this case turning out to be identical to those used in constructing an improved action from the same five planar m × n loops.
With these choices, we now present comparisons between various improved topological charge operators:
• three-loop vs five-loop Improved F_{μν},
• three-loop vs five-loop Improved TopQ,
• three-loop Improved F_{μν} vs Improved TopQ, and
• five-loop Improved F_{μν} vs Improved TopQ.
For each possibility, we sum the absolute value of the topological charge density over the lattice, $\bar{Q} = \sum_x |q(x)|$, and compute a "pseudo" relative error between the two forms in question,

$$\Delta = \frac{\left|\bar{Q}_A - \bar{Q}_B\right|}{\tfrac{1}{2}\left(\bar{Q}_A + \bar{Q}_B\right)},$$

where we normalise the difference by the average of the two values to provide a common base for comparison. Figure 5 shows the evolution of the relative error for each of the listed comparisons over a flow time τ = 2. From this, we conclude there is a greater disparity between the operators defined in the contrasting improvement schemes than between different combinations of loops within a single improvement scheme. Hence, we proceed by referring exclusively to the three-loop combination of each improvement scheme as the most reliable indicator for when discretisation effects have been suppressed.
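This quantity is straightforward to evaluate; a minimal sketch, with the density arrays for the two operators as assumed inputs:

```python
import numpy as np

def pseudo_relative_error(q_A, q_B):
    """|sum|q_A| - sum|q_B||, normalised by the average of the two sums."""
    QA = np.abs(q_A).sum()
    QB = np.abs(q_B).sum()
    return abs(QA - QB) / (0.5 * (QA + QB))
```

By construction the measure is symmetric in the two operators and vanishes when they agree exactly.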

C. Selecting the optimal smoothing level
Even though we now have a technique for analysing the discretisation errors in the topological charge density, this has yet to single out the precise ideal smoothing level. A natural solution to this problem is provided by our methods. If we apply the algorithm of Sec. II A to the topological charge densities obtained through both improvement schemes, then, provided the renormalisation factors remain significant, the net charges obtained will in general be different. This emerges from three compounding effects: differences in the topological charge density value at lattice points we identify as part of topological objects, inherent distinctions in the number and locations of local extrema, and variations in how the lattice points are distributed between the objects.
As a consequence, the histograms of the charge assigned to each object reveal divergent modes, inhibiting our ability to draw the same conclusion from both improved topological charge operators. An example of this is presented in Fig. 6. The distribution produced by the Improved TopQ scheme is visibly shifted to the left of the Improved F_{μν} scheme, with the modes clearly incompatible at this smoothing level.

FIG. 6. Histograms of the net object charges obtained with the hypercube dislocation filter for both O(a^4)-improved topological charge operators. This example is taken from our 32^3 × 64 ensemble at τ = 1. The modes are marked by a darker colour and are visibly shifted from each other by ≈ 0.15, which is certainly not an insignificant difference. This implies the level of smoothing is insufficient.

Recall that the modes (and fluctuations thereabout) provide an indicator of the underlying topological structure. It follows that the conclusions we would infer regarding the net charge of distinct topological objects would differ. This motivates our criterion for the optimal smoothing level: the minimum number of updates required for the modes of the two histograms to agree. This is a necessary condition to ensure that discretisation errors have been adequately suppressed, with the two improvement schemes providing consistent conclusions. At a foundational level, this requirement is justified because both improvement definitions are valid ways to calculate the topological charge density on the lattice, and therefore either should be able to be used to the same effect. At the same time, we do not desire to perform any more smoothing than necessary, as doing so risks distorting or destroying genuine topological features. Throughout our results in Sec. V, we will continue to display both histograms to emphasise that this criterion has been satisfied and to illustrate any remaining systematic uncertainties.
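The mode-matching criterion can be sketched as below; the bin width and tolerance are illustrative assumptions rather than the values used in our analysis.

```python
import numpy as np

def histogram_mode(charges, bin_width=0.05):
    """Centre of the most populated histogram bin."""
    edges = np.arange(0.0, max(charges) + 2 * bin_width, bin_width)
    counts, edges = np.histogram(charges, bins=edges)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

def modes_agree(charges_A, charges_B, tol=0.05):
    """Accept the smoothing level only if the two modes coincide within tol."""
    return abs(histogram_mode(charges_A) - histogram_mode(charges_B)) <= tol
```

For example, a pair of charge samples whose bulk sits near |Q| ≈ 0.5 agrees with itself, while a second sample shifted by ≈ 0.15, as in Fig. 6, fails the criterion.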
Before proceeding, we note that this criterion may depend on both the lattice spacing and the filter used. The former will be considered in greater detail in the next section. The latter arises because the size and number of lattice sites associated with the topological objects differ greatly between the two dislocation cutoffs under investigation. This could induce a greater discrepancy between the improved operators when the hypercube filter is applied compared to the nearest-neighbour filter. Nevertheless, as discussed at the end of Sec. II A 4 and detailed in the following section, our best results correspond to the hypercube dislocation criterion.

IV. CONTINUUM LIMIT
Before proceeding to present our finite-temperature findings, it is important to establish that our results scale properly in the continuum limit; that is, they are independent of the lattice spacing a. To achieve this, we utilise two ensembles with equal physical volumes: a 32^3 × 64 ensemble with a ≈ 0.10 fm, and a 48^3 × 96 ensemble with a ≈ 0.067 fm. For simplicity, we refer to these as the "coarse" and "fine" ensembles respectively throughout this section.
We consider two different possibilities for taking the continuum limit. The first of these is a "fixed lattice dislocation filter" method in which the dislocation filter is applied identically on both ensembles. This allows for physically smaller topological objects to be considered as a → 0. The second is a "fixed scale" method, for which care is taken to fix the physical scale for resolving the topological objects as the continuum limit is approached.

A. Fixed lattice dislocation filter
In this approach, we apply a consistent dislocation filter across the two ensembles. In Sec. II A 4, the intention to investigate "nearest-neighbour" and "hypercube" filters was discussed. These are both expressed in terms of the lattice spacing, and thus can admit physically smaller topological features on the finer lattice. This allows us to take advantage of the improved resolution provided by the finer lattice spacing to probe vacuum structure at a smaller scale. We are interested in determining whether the charge contained within each such object nonetheless remains invariant, for instance because their topological charge profiles are sharper. In this case, it is crucial to allow for the possibility that less smoothing is required on the finer lattice (as per the criterion from Sec. III C).
We start by comparing the results obtained with the nearest-neighbour filter between the two ensembles. The histograms are shown in Fig. 7. We can see that both distributions have been sufficiently smoothed such that the two topological charge definitions produce consistent results. For the coarse ensemble, this is achieved after a flow time τ = 0.625, whilst for the fine ensemble it is slightly less at τ = 0.525. This is the expected outcome.
The modes occur at |Q| = 0.111(6) and |Q| = 0.064(5), which are inconsistent with each other. This embodies a considerable relative difference, with an ≈ 40% decrease in value moving to the finer lattice. Given the precision with which the modes have been resolved, we can be assured this is a statistically significant discrepancy arising from the smaller lattice spacing. Indeed, one can observe the extent to which the histogram for the fine ensemble has already fallen off by the coarse mode at |Q| ≈ 0.11. From this, we deduce that the topological structure revealed by the simple nearest-neighbour cutoff is scale dependent. This provides evidence that this filter is insufficient to minimise the effects of dislocations on the lattice, with the results being sensitive to their size ∼ O(a).

FIG. 7. The results of our algorithm with the nearest-neighbour dislocation filter applied to the coarse (top) and fine (bottom) ensembles. The mode for the former is 0.11, but this shifts to 0.064 for the latter, suggesting that a nearest-neighbour cutoff is insufficient to ensure scale independence.
Next, we repeat the above process by applying the hypercube filter to the definition of a topological object. The results are presented in Fig. 8. The hypercube filter is observed to be substantially stronger than the nearest-neighbour cutoff, preserving fewer than 10% of all objects accepted by the nearest-neighbour filter. This is especially pronounced on the finer lattice. The size of the objects and the necessary degree of smoothing accordingly increase, achieved with τ = 1.45 on the coarse ensemble and τ = 1.25 on the fine ensemble.

FIG. 8. The results of our algorithm with the hypercube dislocation filter applied to the coarse (top) and fine (bottom) ensembles. The mode is 0.336 for the former and 0.324 for the latter. These are consistent with each other, suggesting that a hypercube cutoff is sufficient to ensure proper scaling in the continuum limit.
The corresponding modes are now located at |Q| = 0.336(15) and |Q| = 0.324(15). Although there is still a slight discrepancy in central value, the difference here is insignificant compared to the larger charge values calculated and the broader bin width required to maintain a smooth distribution. The two modes are observed to overlap within the uncertainty provided by the bin width, meaning we can draw the same conclusions in both cases: the topological charge is predominantly comprised of individual objects with net charges near |Q| ≈ 0.33, far from the instanton charge of 1.
Based on this we are confident, up to statistical fluctuations, that the hypercube filter is sufficient to render our results insensitive to the size of dislocations and therefore independent of the lattice spacing. Hence, implementing a fixed hypercube cutoff and allowing for less gradient flow provides a valid procedure for taking the continuum limit in the usual manner, where the scale of short-distance physics included in the calculation reduces with the lattice spacing. This is what we sought to achieve.
In Sec. V B, we will consequently present the finite-temperature analysis exclusively for the hypercube filter, as our preferred definition for what is considered a "genuine" topological object.
Before proceeding, it is insightful to examine the radial size of the topological objects under the hypercube filter, having now established that their charges are scale independent. An RMS estimate of their radii is calculated by summing the squared distance between each point assigned to a given object and its centre x_0 (approximated by the local extremum), weighted by the ratio of the topological charge density to the net charge Q of the object. These weights sum to unity over the entire object and discriminate between broad and sharply peaked features. Mathematically, this normalised topological charge density radius is

$\rho_{\rm rms} = \Bigg[ \sum_{x \in \mathrm{object}} \frac{q(x)}{Q}\, |x - x_0|^2 \Bigg]^{1/2} . \qquad (20)$

The calculations are performed at the same flow time as for the charges, and we find that the modes of the RMS radii distributions from our two different topological charge definitions also match. We contrast the ρ_rms results between the two ensembles in Fig. 9. This reveals a decrease in the typical radial size of objects with the lattice spacing. Although this may seem curious given the consistency between the net charge values, upon further investigation one finds that this is to be anticipated. The lattice spacing introduces a cutoff such that any topological features smaller than the lattice spacing fail to be resolved. As previously surmised, by utilising a filter that scales with the lattice spacing, one would therefore expect the resulting distribution to be comprised of topological objects with a smaller radial size. In a similar vein to the instanton solution, one could propose that the topological objects have a free size parameter which can be varied whilst keeping their net charge constant. The combination of Figs. 8 and 9 strongly suggests this is the pattern underlying changes to the gauge field as the lattice spacing is decreased.
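The RMS radius calculation just described can be sketched as follows. This is a minimal illustration in lattice units; the array layout and function name are assumptions of the sketch, and periodic boundary wrapping is neglected.

```python
import numpy as np

def rms_radius(q, points, x0):
    """RMS radius of a single topological object (lattice units).

    q      : charge-density values q(x) at the sites assigned to the object
    points : (n, 4) array of the corresponding site coordinates
    x0     : coordinates of the object's centre (the local extremum)
    """
    w = q / q.sum()                          # weights q(x)/Q, summing to unity
    r2 = np.sum((points - x0) ** 2, axis=1)  # squared distances to the centre
    return np.sqrt(np.sum(w * r2))

# A sharply peaked object yields a smaller radius than a broad one of equal charge.
q = np.array([4.0, 1.0, 1.0])
points = np.array([[0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
print(rms_radius(q, points, np.zeros(4)))  # sqrt(1/3) ≈ 0.577
```

Because the weights are normalised by the net charge, two objects of equal charge but different peak heights are distinguished purely by how their density is distributed in space.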

B. Fixed scale method
Besides probing the topological structure at the improved resolution provided by a finer lattice, the other continuum limit we turn our attention towards is a fixed scale method. This entails maintaining a fixed physical scale for resolving the topological objects under consideration. There are two different aspects at play to ensure this occurs. First, there is the matter of the smearing scale, $r_{\rm sm}/a = \sqrt{8\tau}$, induced by gradient flow. To set a fixed size, this smoothing radius r_sm must remain unchanged (in physical units) between the two lattice spacings. Using primed symbols for the finer lattice, this clearly requires

$a\sqrt{8\tau} = a'\sqrt{8\tau'} \;\Rightarrow\; \tau' = (a/a')^2\, \tau \approx 2.25\, \tau , \qquad (21)$

where we have substituted the values a = 0.10 fm and a′ = 0.067 fm.
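As a quick numerical check of the flow-time matching (a sketch only; the lattice spacings carry their own uncertainties, so the result is approximate):

```python
# Holding the smearing radius r_sm = a * sqrt(8 * tau) fixed in physical
# units requires tau' = (a / a')**2 * tau on the finer lattice.
a_coarse, a_fine = 0.10, 0.067  # fm (approximate spacings quoted in the text)
tau_coarse = 1.65
tau_fine = (a_coarse / a_fine) ** 2 * tau_coarse
print(tau_fine)  # ≈ 3.7, consistent with the tau' = 3.71 used in the text
```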
The second factor concerns the filter applied in the algorithm. To ensure an equal footing between the ensembles, it is vital to implement a dislocation filter of fixed physical size for each ensemble. Instead of the hypercube filter examined in the previous section, which scaled with the lattice spacing, we choose here a radial cutoff r_cut in physical units which is applied on both ensembles. Motivated by the success of the hypercube filter in eliminating dislocations, however, we choose the minimum physical radius needed to cover the hypercube on the coarse ensemble. This is a radius of two lattice units, dictating r_cut = 0.2 fm. On the fine ensemble, this is three lattice units.

FIG. 10. The results of our algorithm with a fixed scale, realised by a physical radial cutoff r_cut = 0.2 fm, applied to the coarse ensemble at a flow time τ = 1.65 (top) and the fine ensemble at τ′ = 3.71 (bottom). The histogram modes again match, implying that this is an equally valid procedure for taking the continuum limit.
As for the flow time τ, we note that a radial cutoff of two lattice units on the coarse ensemble is stronger than the hypercube filter previously applied. Thus, a mild increase in smoothing level is necessary to match the charge values. This is satisfied by τ = 1.65, implying τ′ = 3.71 on the fine ensemble as per Eq. (21). With this setup, the charge histograms in the fixed scale method are shown in Fig. 10. Remarkably, we once again find consistent histogram modes, indicating we have successfully uncovered similar topological structures between the two lattice spacings. Therefore, setting a fixed physical scale provides an alternative continuum limit. As with the fixed cutoff limit, we also investigate the radial size of the topological objects within this framework to ascertain whether their physical sizes are indeed coincident, as one might expect. These ρ_rms histograms are given in Fig. 11.
The modes are observed to be consistent with each other within uncertainty, indicating we have successfully held the typical radial size for resolving topology constant whilst decreasing the lattice spacing. The centre of the modal bin for the fine ensemble is marginally to the right of that for the coarse ensemble, though one can easily imagine this might be due to limited statistics or a slightly inaccurate setting of the smoothing scale.
To summarise, we have considered two possibilities for taking the continuum limit. In one, a consistent hypercubic dislocation filter was applied to analyse physically smaller topological features as a → 0, whilst in the second method a fixed physical scale for resolving topology was utilised. Both provide topological charge distributions insensitive to the lattice spacing, as illustrated in Figs. 8 and 10. Thus either can be used in the subsequent analysis.
Our preferred method is the fixed hypercube cutoff, due to its ability to probe vacuum structure at the improved resolution provided by a smaller lattice spacing. This approach is more in accord with traditional continuum limits, where short-distance physics is allowed to enter the calculations as a → 0. This makes such a limit more interesting, and it is notable that the object charge values remain well defined.

V. RESULTS
Having established the behaviour of the continuum limit, we now present our findings on the evolution of the topological structure with temperature. This is performed by applying the hypercube filter for distinguishing topological objects from dislocations. We ensure the flow time is independent of temperature such that the results on each ensemble can be easily compared, set to τ = 1.45 as per the 32^3 × 64 ensemble in Sec. IV A.

A. Simulation details
To explore the evolution of the topological structure with temperature, we generate ensembles consisting of 100 configurations at three temperatures below the critical temperature T_c, and three temperatures above T_c. Each has a spatial volume of 32^3 and fixed isotropic lattice spacing a = 0.1 fm, with the temporal extent of the lattice varied to change the temperature. The details are provided in Table II, where we take T_c = 270 MeV [70].
A unit trajectory length is used for the Hamiltonian dynamics evolution, with 50 accepted trajectories between sampling a configuration following thermalisation.
For each histogram, we analyse 100 gauge field configurations. We obtain 100 bootstrap resamples of the set of calculated charge values and extract the bin counts for each such resample. This allows an error to be placed on the histogram bins, which we display in our results. The precise location of the mode is ascertained by shifting the bins to maximise the height of the modal bin, and is singled out by a darker colour so the associated charge value is visually clear. We find that the position of the mode shows no variation in the bootstrap resamples, such that its uncertainty is governed by the bin width.
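The bootstrap procedure for the histogram errors can be sketched as follows. The function name and the synthetic charge values are illustrative assumptions; the bin-shifting step used to centre the modal bin is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def bootstrap_histogram(charges, bins, n_boot=100):
    """Histogram bin counts with a bootstrap error on each bin."""
    counts, _ = np.histogram(charges, bins=bins)
    boot = np.empty((n_boot, len(bins) - 1))
    for i in range(n_boot):
        resample = rng.choice(charges, size=len(charges), replace=True)
        boot[i], _ = np.histogram(resample, bins=bins)
    return counts, boot.std(axis=0)

charges = np.abs(rng.normal(0.33, 0.05, size=1000))  # synthetic |Q| values
bins = np.linspace(0.0, 0.6, 25)
counts, errors = bootstrap_histogram(charges, bins)
mode_bin = np.argmax(counts)  # modal bin; its width sets the mode's uncertainty
```

The same resampling machinery yields errors for any derived quantity, which is how the bootstrap statistics quoted in Sec. VI are obtained.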

B. Finite Temperature
We present the charge histograms for each finite-temperature ensemble in Fig. 12. The quantitative value of the mode for each temperature is provided in Table II. We find that below the critical temperature, the topological charge tends to be comprised of objects with net charges near |Q| ≈ 1/3, with the modes for each of these ensembles situated very near each other. Being the ensemble closest to the critical temperature, the marginally smaller value for N_t = 8 could arise from finite-volume effects resulting in a smooth crossover around T_c instead of a discontinuous phase transition. That being said, given that the N_t = 8 mode sits directly adjacent to the other modes below T_c, it could also be attributed to simple statistical fluctuations in the calculated charge values; these are more likely near T_c due to challenges in the Markov chain around the phase transition.
Nevertheless, as the temperature increases into the deconfined phase, there is an undeniable shift in the calculated charges towards smaller values, with the modes visibly separated from each other. The largest decrease occurs in our first ensemble above T_c, where we find |Q| ≈ 0.2, and this steadily declines to |Q| ≈ 0.1 for the largest temperature considered here.

VI. DISCUSSION
Having presented our key results, we now proceed to compare with the instanton-dyon model for the topological structure of the gluon fields at finite temperature. We also investigate the temperature dependence of additional quantities such as object density and radial size, which are in general distinct from the charge contained within each such object. Throughout this section, all statistics are obtained using 100 bootstrap ensembles, with errors calculated through the standard deviation of the bootstrap estimates.

A. Polyakov loop and holonomy
The Polyakov loop is an order parameter for confinement in SU(N) Yang-Mills theory, defined for each spatial position x as

$P(\mathbf{x}) = \mathcal{P} \exp\!\left( ig \int_0^{1/T} A_4(\mathbf{x}, t)\, dt \right),$

where $\mathcal{P}$ is the path-ordering operator. It exhibits a simple relation with the free energy F_q of a single quark [75],

$\langle \mathrm{Tr}\, P \rangle \propto e^{-F_q/T} . \qquad (24)$

From this one concludes ⟨Tr P⟩ = 0 below T_c, where confinement implies F_q → ∞, whilst it jumps to a nonzero value above T_c.

FIG. 12. Net object charge histograms for the finite-temperature ensembles, ordered from the lowest temperature (top left) to the highest (bottom right). Below the critical temperature, the mode is roughly constant at just above 0.3, but this shifts towards smaller values as T increases above T_c. This behaviour is consistent with the free holonomy parameter in SU(3) (Sec. VI B), shown with the dashed vertical line. The results are calculated with a hypercube dislocation filter after a flow time τ = 1.45, the amount required to ensure consistency between the two improvement schemes considered.
The Polyakov loop at spatial infinity, also known as the holonomy, is a topological invariant and (up to gauge symmetry) can be written as [25]

$P_\infty = \exp\!\left[ 2\pi i \,\mathrm{diag}(\mu_1, \mu_2, \ldots, \mu_N) \right], \quad \mu_1 \le \mu_2 \le \cdots \le \mu_N \le \mu_1 + 1, \quad \sum_{i=1}^{N} \mu_i = 0 . \qquad (25)$

Simply put, Eq. (25) says that the eigenvalues of P_∞ lie on the unit circle within one rotation of 2π, with the summation condition enforcing det P_∞ = 1. Whilst at extremely high temperatures the holonomy is trivial, near and below T_c it develops a nontrivial value, P_∞ ≠ I. In particular, the maximally nontrivial holonomy corresponds to Tr P_∞ = 0, which occurs in the confined phase; it is also known as the "confining holonomy".
For convenience, one defines N so-called "holonomy parameters" as ν_i = µ_{i+1} − µ_i. Following the definition µ_{N+1} = µ_1 + 1, the holonomy parameters are constrained to sum to unity, naively leaving (N − 1) free parameters in SU(N) gauge theory. However, as per Eq. (24), ⟨Tr P⟩ is a physical quantity and must accordingly be real. This imposes an additional constraint on the set of holonomy parameters, which in the case of SU(3) leaves just a single parameter ν to uniquely specify each ν_i. The relationship is summarised as [34]

$\mu = (-\nu,\, 0,\, \nu) \;\Rightarrow\; \nu_1 = \nu_2 = \nu, \quad \nu_3 = 1 - 2\nu, \qquad \langle \mathrm{Tr}\, P \rangle = 1 + 2\cos(2\pi\nu) . \qquad (26)$

It is thus immediately clear that one finds ν = 1/3 in the confined phase, whilst above T_c the holonomy parameter decreases towards zero as the Polyakov loop becomes trivial. We note immediately that this quantitatively agrees with the mode of our charge histograms below T_c, and also displays initial qualitative agreement above T_c. This hints at a connection between the holonomy and charge of topological objects, which motivates calculating ν on each finite-temperature ensemble.
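Assuming the standard SU(3) parametrisation µ = (−ν, 0, ν) (an assumption of this sketch), under which ν_1 = ν_2 = ν and ν_3 = 1 − 2ν, the trace satisfies Tr P_∞ = 1 + 2 cos(2πν); this relation can be inverted numerically for ν:

```python
import numpy as np

def holonomy_parameter(trP):
    """Invert Tr P = 1 + 2*cos(2*pi*nu) for nu in [0, 1/3].

    Assumes the SU(3) parametrisation mu = (-nu, 0, nu), so that
    nu_1 = nu_2 = nu and nu_3 = 1 - 2*nu.
    """
    return np.arccos((trP - 1.0) / 2.0) / (2.0 * np.pi)

print(holonomy_parameter(0.0))  # confined phase (Tr P = 0): nu = 1/3
print(holonomy_parameter(3.0))  # trivial holonomy (Tr P = 3): nu = 0
```

The two limiting cases reproduce the confining value ν = 1/3 and the trivial value ν = 0 quoted in the text.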
The Polyakov loop on the lattice is calculated as a product of temporal link variables over all sites in the temporal dimension,

$P(\mathbf{x}) = \prod_{t=0}^{N_t - 1} U_4(\mathbf{x}, t).$

We exploit translational symmetry to calculate ⟨Tr P_∞⟩ as the expectation of the spatially averaged Polyakov loop,

$\langle \mathrm{Tr}\, P \rangle = \Big\langle \frac{1}{V_3} \sum_{\mathbf{x}} \mathrm{Tr}\, P(\mathbf{x}) \Big\rangle.$

The pure gauge theory carries an additional complication in the form of a centre symmetry which does not exist in full QCD. The pure gauge action is invariant under centre transformations, though the Polyakov loop transforms nontrivially as

$\mathrm{Tr}\, P(\mathbf{x}) \to e^{2\pi i m/N}\, \mathrm{Tr}\, P(\mathbf{x}), \qquad m \in \{0, 1, \ldots, N-1\}.$

Thus if centre symmetry is preserved we must have ⟨Tr P(x)⟩ = 0, and deconfinement corresponds to the spontaneous breaking of the centre symmetry. Consequently, the Polyakov loop below T_c is observed to exhibit a symmetry between the three centre phases of SU(3) [76,77]. Above T_c, this symmetry is spontaneously broken, with one of the three phases becoming dominant. In full QCD, the fermion determinant singles out m = 0 as the preferred phase [77], ensuring the Polyakov loop remains real. On the other hand, in the pure gauge theory the dominant phase can vary on a configuration-to-configuration basis, meaning one would find ⟨Tr P⟩ = 0 even above T_c. This is often overcome by taking the modulus of the Polyakov loop as the order parameter, though we take an alternative approach to remove the remaining symmetry by performing centre transformations [78]. This can be interpreted as rotating the phase of Tr P by ±2π/3 to bring the dominant phase of each configuration to zero. Finally, we take the real part to discard any remnant imaginary part that should vanish in the ensemble average. This is subsequently taken to estimate ⟨Tr P⟩. Substituting the result into Eq. (26) gives a corresponding estimate of the free holonomy parameter. The final values are given in Table II and illustrated in Fig. 12.
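A minimal sketch of the lattice Polyakov loop measurement and the centre projection described above is given below. The array layout for the temporal links is an assumption of the sketch; a production code would act on an actual SU(3) gauge configuration.

```python
import numpy as np

def polyakov_trace(U4):
    """Spatially averaged Polyakov loop trace.

    U4 : complex array of shape (Nt, Nx, Ny, Nz, 3, 3) holding the
         temporal links U_4(x, t) (layout is an assumption of this sketch).
    """
    P = U4[0]
    for t in range(1, U4.shape[0]):                 # ordered temporal product
        P = np.einsum('...ab,...bc->...ac', P, U4[t])
    return np.einsum('...aa', P).mean()             # trace, then spatial average

def centre_project(trP):
    """Rotate Tr P by the centre phase (0 or -2*pi*m/3) that brings the
    dominant phase closest to zero, then keep the real part."""
    rotations = np.exp(-2j * np.pi * np.arange(3) / 3)
    best = np.argmin(np.abs(np.angle(trP * rotations)))
    return (trP * rotations[best]).real

# For unit links the Polyakov loop is the identity, so Tr P = 3.
U4 = np.broadcast_to(np.eye(3, dtype=complex), (4, 2, 2, 2, 3, 3)).copy()
print(centre_project(polyakov_trace(U4)))  # 3.0
```

The ensemble average of these centre-projected traces is the estimate of ⟨Tr P⟩ fed into the holonomy parameter.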
We find that for each temperature considered here, the calculated values of the holonomy coincide remarkably well with the histogram modes for the charge contained within each distinct topological object. For each ensemble in the confined phase, we find ⟨Tr P⟩ ≈ 0 as expected, and the associated holonomy parameter is ν = 1/3. This is consistent with our findings below T_c, with the modes located near 1/3. In the deconfined phase, where the holonomy parameter decreases away from 1/3, we continue to find a strong agreement between its value and the histogram mode for each ensemble above T_c. This reveals an intrinsic connection between the holonomy of the field configurations and the distinct topological charges comprising their structure. Such a relationship is built into the instanton-dyon model, discussed in the following section.

B. Instanton-dyons
Given the link between confinement and holonomy, it is natural to seek analytic solutions of the Yang-Mills equations that possess nontrivial asymptotic holonomy. This is realised by the caloron [23-26], a finite-temperature generalisation of the instanton. In SU(N) gauge theory, calorons can be viewed as composed of N monopole constituents known as dyons, whose structure depends on the value of the holonomy.
To be precise, the caloron is divided up into its constituent dyons according to the holonomy parameters ν_i, such that the action of the ith dyon type is S_i = S_0 ν_i. Like the instanton itself, the dyons are self-dual, such that their topological charges satisfy |Q_i| = ν_i. Since the ν_i sum to unity, it is clear that summing the dyons' individual actions and topological charges recovers the single-instanton properties.
Thus, one finds a one-to-one relationship between the temperature of the system (through the Polyakov loop) and the charges of the dyons (through the free holonomy parameter). Two of the dyons have charges |Q_i| = ν, whilst the third dyon has |Q_3| = 1 − 2ν. This provides a prediction for the topological structure at finite temperature that we can compare to our numerical findings. The agreement between the holonomy parameter ν and the charge histogram modes suggests a consistency with the presence of the first two dyon types. This can be interpreted as evidence that dyons form a significant part of the gluon fields' topological structure. At each temperature, the mode provides an indicator of the dominant contribution, with the distribution around the mode representing quantum fluctuations about the semiclassical dyon solution and systematic uncertainties in assigning topological charge density to the objects.
However, the simple decomposition described above holds exclusively for a single-caloron configuration. The fact that we observe consistency between the holonomy parameter and object charges on our configurations, which in general comprise an ensemble of positive and negative topological excitations, is a substantially stronger constraint. This is nonetheless allowed within the framework of instanton-dyons. Analytic "multi-caloron" configurations of higher topological charge |Q| = k have previously been constructed [79-83]. These follow the natural expectation of decomposing into kN constituents in SU(N) gauge theory, with k dyons of each of the N types. All dyons of the ith type have the same mass [25,26], as determined by the corresponding holonomy parameter ν_i of the system.
In addition, the typical procedure for superposing calorons in modelling vacuum structure requires each distinct caloron to have the same holonomy [84-86], and therefore their constituent dyons of the same type have matching charges. The system resulting from the superposition then features that same asymptotic holonomy.
These are both in a similar vein to the scenario reflected by our findings, where we observe a sharply peaked topological charge distribution located at ν_1 = ν_2. From this discussion, we can conclude our results admit the presence of the two dyons possessing charge |Q_1| = |Q_2| = ν, with fluctuations about this value as reflected by the charge histograms.
One would be prescient to point out the stark lack of charge values consistent with the third dyon in Fig. 12, which would be |Q_3| ≈ 0.6, 0.7 and 0.8 respectively for each temperature above T_c. However, this can be understood by considering the number densities of each dyon. In the SU(3) dyonic partition function, each dyon of holonomy parameter ν_i is individually weighted by a factor [34]

$d_i \sim e^{-S_0 \nu_i},$

together with a ν_i-dependent prefactor. The outcome is a compounding effect where both factors favour small holonomy parameters, and the third dyon with ν_3 = 1 − 2ν will be exponentially suppressed. For this reason we would expect our results to be overwhelmingly skewed towards the two dyons with charges equal to ν, as we observe.

One might query how this imbalance manifests in the topological structure, given the topological index is restricted to integer values on the periodic torus by the Atiyah-Singer index theorem [87]. We find that the majority of our N_t = 4 configurations converge to a net topological charge of zero under smoothing, which overcomes the problem entirely by representing an equal quantity of positive and negative topological charge density. The suppression of the third dyon is irrelevant in that context, needing only a balance of dyons with Q = ±ν. There are nevertheless a handful of configurations with small nonzero values for the topological index, which could be attributed to the presence of the larger third dyon. In fact, our highest-temperature histogram in Fig. 12 displays a small clustering of points near |Q| = 1 which originates precisely from this small set of configurations, supporting this idea. Prior analysis of the topological content in the confined and deconfined phases substantiates this discussion [88].
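The exponential suppression of the third dyon can be illustrated with a rough estimate. The value of the action S_0 below is an arbitrary assumption for illustration only, not a value taken from the text.

```python
import math

S0 = 10.0   # assumed single-instanton action; illustrative only
nu = 0.2    # holonomy parameter representative of T just above T_c

light = math.exp(-S0 * nu)            # weight of each light dyon, |Q| = nu
heavy = math.exp(-S0 * (1 - 2 * nu))  # weight of the third dyon, |Q| = 1 - 2*nu
print(heavy / light)                  # = e^{-S0*(1-3*nu)}: strongly suppressed
```

The ratio scales as e^{−S_0(1−3ν)}, so for any ν < 1/3 the heavy dyon is exponentially rarer, consistent with its absence from the charge histograms.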

C. Zero-temperature distribution
All considered, we have found that the instanton-dyon model captures the essence of the observed finite-temperature physics. Even so, the plethora of other fractionally charged topological configurations ∼ 1/N [13-15, 20-22, 35-39] leaves our zero-temperature results open to different interpretations. In fact, if the inverse temperature greatly exceeds the characteristic separation between their constituent dyons, then calorons simply resemble standard instantons [25,26].
In this regard, the peaks we observe near ≈ 1/3 at low temperatures T < T_c could reflect a significant contribution from "ordinary" fractional instantons. Although the current methods do not provide a means to distinguish between these constructions, we can nonetheless conclude the confining vacuum is an ensemble of fractionally charged objects. Many of these configurations are analytically constructed on tori with different twisted boundary conditions, and it is natural to query whether they can arise on a torus without twist, as studied here. The resolution is that these are trivially also solutions on a torus comprising multiple periods of the original, smaller torus. It directly follows that fractional instantons can emerge on the standard periodic torus, with the requirement that they exist in groups of N to conserve the integer topological index.
There are also the "Z_N dyons" of Ref. [39], which in contrast to instanton-dyons and fractional instantons are not predicted to be self-dual in general. This is allowed within the bounds of our present results. At the level of smoothing which ensures consistency between our two topological charge definitions, S/S_0 is still approximately a factor of 3 larger than |q(x)| for an average configuration. This demands the presence of non-self-dual topological objects comprising the gluon fields.
On that account, our low-temperature results could signal the presence of non-self-dual Z_N dyons. In addition, these configurations carry magnetic charge quantised by Z_N, and consequently are relevant exclusively in the confined phase and a narrow region above T_c. In the deconfined phase, the increasing magnetic tension binds the Z_N dyons into dilute instantons at high temperatures [39]. The small clustering of points near |Q| = 1 found at our highest temperature could also signal a contribution from these instantons, though the leading effect remains the alignment with instanton-dyons through a peak in the charge histogram at |Q| = ν.

FIG. 13. The evolution of the observed number of topological objects under gradient flow after hypercubic dislocation filtering, for N_t = 64. We show separate curves for each topological charge improvement scheme, highlighting a subtle difference between the two distributions which is reduced as we continue to smooth the fields.
Further analysis could proceed by summing the action over each object individually, using the partition of the lattice obtained herein, allowing an investigation into self-duality on a per-object basis. There is also the possibility that some topological objects look locally self-dual near their peak, though experience significant deviations from self-duality at the tails of their distributions due to interactions with surrounding objects. We leave such investigations open for future work.

D. Number of objects
In addition to the charge of the topological objects, another basic statistic available through our methods is the total number of objects per configuration. We start by investigating how the number of objects varies as we smooth the configurations. Performing this analysis is enlightening as it reflects the effect smoothing has on the gauge field. This evolution, as defined by the hypercube filter, is shown for the N_t = 64 ensemble in Fig. 13; a very similar trend is observed for the other ensembles.
We find that initially there are zero topological objects that pass the hypercube filter. This quantitatively emphasises how "raw" lattice configurations are comprised entirely of UV fluctuations on the scale of the lattice spacing which obscure the long-distance topological features of the gluon fields. Continuing to smooth gradually reveals larger-scale features at least the size of a hypercube, whilst the short-scale fluctuations are smoothed out. Eventually, this begins to plateau, with the number of objects stabilising as we approach a flow time of τ = 2. Although we have not explored beyond this point, one could imagine that the number of objects would eventually begin to decrease, which is necessary to coincide with the classical limit. Indeed, topological excitations are understood to undergo pair annihilation under extended smoothing [50-52, 89, 90], and it has been revealed through visualisations how the excitations "walk" across the lattice to annihilate with each other [48]. This provides an intuitive picture for when one may wish to cease smoothing: after the majority of the UV fluctuations have been removed, but before the genuine features we are interested in begin to annihilate. Our method of matching the object charges between two topological charge definitions singles out the precise level required.
The number of objects also gives access to the object density, which we can compare across each temperature. For our purposes, we define the number density as

$n = N/V,$

where N is the total number of objects, and V is the (physical) four-dimensional volume of the lattice. To resolve the slight ambiguity between the two topological charge definitions, as seen in Fig. 13, we take the average of the two values as a unified estimate of the density, and combine half the difference with the statistical error in quadrature. The evolution of n with temperature is shown in Fig. 14.
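The combination of the two operator definitions into a unified density estimate, as described above, can be sketched as follows (the function name and example numbers are illustrative):

```python
import math

def combined_density(n1, n2, stat_err):
    """Average the densities from the two topological charge definitions,
    folding half their difference into the error in quadrature."""
    n = 0.5 * (n1 + n2)
    err = math.sqrt(stat_err**2 + (0.5 * abs(n1 - n2))**2)
    return n, err

n, err = combined_density(5.4, 5.6, 0.1)
print(n, err)  # 5.5 with an error of sqrt(0.1**2 + 0.1**2) ≈ 0.141
```

Treating the operator discrepancy as a systematic added in quadrature keeps the quoted uncertainty honest when the two definitions have not fully converged.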
Whilst the density is found to be constant below the critical temperature, there is interesting behaviour above T_c. We observe a slight drop in the density as the phase transition is crossed, but it subsequently increases as the temperature climbs away from T_c. This points towards an increase in "activity" in the gauge fields at very high temperatures.
However, we must note that the scaling of n as a → 0 depends on how one takes the continuum limit. The precise value of n utilising the fixed lattice dislocation filter method is dependent on the lattice spacing. For instance, calculating n with the hypercubic dislocation filter on our finer (a = 0.067 fm) ensemble gives n ≈ 17.46(37), over a factor of 3 larger than the coarse a = 0.10 fm result of 5.498(19). Clearly, the topological objects being physically smaller allows them to be more densely packed. Alternatively, in the fixed scale continuum limit, the density remains insensitive to the lattice spacing. This motivates further investigation into the object density at a broader range of temperatures and lattice spacings.

E. Root-mean-square radius
Next, we investigate any variation in the radial size of the topological objects with temperature, using the RMS radius defined in Eq. (20). These histograms are displayed in Fig. 15. In direct contrast with the charges, we find there is minimal change in the distribution of radial sizes, besides a plausible shift at the highest temperature where the mode drops from ≈ 0.43 fm to ≈ 0.39 fm. Nevertheless, this could simply be a consequence of statistical variability, with the smaller lattice volumes above T_c naturally offering fewer statistics. Either way, the drastic drop in charge values cannot be primarily accounted for by a corresponding decrease in radial size. For instance, taking the decrease at face value, the fractional reduction in four-volume taken up by a typical object would be (0.39/0.43)^4 ≈ 0.67, which fails to account for the reduction of the charge values to a factor of ≈ 1/3 by the highest temperature. Instead, the shift in charge values is likely due predominantly to a reduction in the height of the peaks in the topological charge. In other words, vacuum field fluctuations are suppressed at high temperature.

F. Visualisations
As a final point of discussion, we visualise the topological charge density for the two different lattice spacings, allowing further insight into the nature of the algorithm. The visualisations are produced at the respective levels of smoothing for the fixed hypercube dislocation filter, under which there are interesting changes to the vacuum structure. This also qualitatively reveals the changes to the gauge field as the lattice spacing is decreased, identified in Secs. IV A and VI D. A single temporal slice is presented for each lattice spacing in Fig. 16. In both cases, the effectiveness of the algorithm in capturing the behaviour around each peak is evident, with most "lumps" consisting of a single colour. The instances of overlapping objects are also convincingly managed, with visible boundaries dividing the individual peaks. From these visualisations, it is clear we can be confident in the calculated numerical charge values as reliable estimates of the topological charge distribution. Additionally, there is a substantial shift in the topological structure as the lattice spacing is decreased, consistent with the quantitative findings on object density and radial size. First, the number of distinct objects has significantly increased, matching the increase in object density with decreasing lattice spacing noted in Sec. VI D. Second, the lumps in the bottom visualisation are, on average, noticeably smaller than those in the top figure, coinciding with the decrease in radial size found in Sec. IV A from the RMS estimate of each object's radius. Clearly, shrinking the lattice spacing admits radially smaller topological features (but with the same net charges, as illustrated in Fig. 8), which come to dominate the volume and produce the increase in the abundance of individual topological objects observed in Fig. 16.

VII. CONCLUSION
In this work we have devised a novel method for identifying and calculating the net topological charge contained within topological objects. This was utilised to explore the evolution of the topological structure of SU(3) gauge theory with temperature. We obtained a distribution of charge values which was interpreted as quantum fluctuations around a semiclassical value, identified with the mode of the distribution. This was taken as an indicator of the underlying topological features.
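For readers interested in the identification step, a minimal sketch of this kind of peak-based decomposition follows. The flood-fill assignment and the threshold here are illustrative assumptions, not necessarily the precise procedure used in this work:

```python
import numpy as np
from collections import deque

def identify_objects(q, threshold):
    """Group lattice sites with |q| above threshold into connected regions
    (nearest-neighbour flood fill with periodic boundaries) and return the
    net topological charge summed over each region."""
    shape = q.shape
    active = np.abs(q) > threshold
    visited = np.zeros(shape, dtype=bool)
    charges = []
    for start in zip(*np.nonzero(active)):
        if visited[start]:
            continue
        # Flood fill one connected object, accumulating its net charge
        queue = deque([start])
        visited[start] = True
        total = 0.0
        while queue:
            site = queue.popleft()
            total += q[site]
            for mu in range(len(shape)):
                for step in (-1, 1):
                    nbr = list(site)
                    nbr[mu] = (nbr[mu] + step) % shape[mu]  # periodic boundary
                    nbr = tuple(nbr)
                    if active[nbr] and not visited[nbr]:
                        visited[nbr] = True
                        queue.append(nbr)
        charges.append(total)
    return charges
```

A histogram of the returned charges, as in Fig. 12, is then what is compared against the fractional values predicted by the instanton-dyon model.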
The results exhibit a foundational consistency with the instanton-dyon model [23][24][25][26] for the topological structure of SU(N) gauge theory at finite temperature. They reveal distributions peaked near 1/3 in the confined phase, which decrease above the critical temperature in a trend matching the single free holonomy parameter in SU(3).
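For context, the fractional values expected in the instanton-dyon picture follow from standard relations (quoted here for orientation, not derived in this work): the constituent dyons of a caloron carry topological charges set by differences of adjacent holonomy eigenvalue phases $\mu_i$,

```latex
\nu_i = \mu_{i+1} - \mu_i , \qquad \mu_{N+1} \equiv \mu_1 + 1 , \qquad \sum_{i=1}^{N} \nu_i = 1 .
```

At the confining (maximally nontrivial) holonomy of SU(3), $\mu = (-1/3,\, 0,\, 1/3)$, each dyon carries $\nu_i = 1/3$, matching the histogram modes just above 0.3 observed in the confined phase.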
The lack of self-duality on the analysed configurations leaves our results open to a variety of fractional topological objects, such as Z_N dyons [39], which are predicted to arise from quantum fluctuations of an effective action. Future work can explore the action at the individual-object level to provide a more detailed assessment of the types of objects present in the ground-state gluon fields.
We intend to extend the work presented here to SU(N) gauge theory for N ≠ 3, with a focus on N = 2 and 4, to determine whether there is any discernible change with the number of colours and whether it follows the instanton-dyon prediction for that particular value of N.
Insight into the large-scale topological structure of the Yang-Mills vacuum can also be obtained through eigenmodes of the Dirac operator, which can isolate semiclassical features of the gauge fields as an alternative to smoothing [67]. Although previously studied in the fundamental representation, a recent analysis [91] has provided evidence in favour of eigenmodes in the adjoint representation, needing only a small number of modes to reconstruct the (semiclassical) topological charge density. The "adjoint filtering method" (AFM) [92,93] is used to filter out the UV fluctuations in the configurations, which has benefits and drawbacks compared to smoothing. For instance, the AFM has been shown to capture instanton and anti-instanton pairs which would otherwise annihilate under smoothing, though on occasion it also misses objects that are revealed under smoothing [91]. Looking forward, an especially quantitative approach could involve applying the algorithm presented here to the eigenmodes identified through the AFM. It will be interesting to learn whether the reduced structure seen on individual eigenmodes offers any quantitative advantage in the process of identifying objects in the QCD ground-state fields.
FIG. 5. The evolution of the relative error between the summed absolute topological charge density for several different O(a⁴)-improved lattice topological charge operators under gradient flow. The difference produced by the two improvement schemes is much greater than that from varying the number of loops within the same improvement scheme.

FIG. 8. The results of our algorithm with the hypercube dislocation filter applied to the coarse (top) and fine (bottom) ensembles. The mode is 0.336 for the former and 0.324 for the latter. These are consistent with each other, suggesting that a hypercube cutoff is sufficient to ensure proper scaling in the continuum limit.

FIG. 9. Histograms showing the normalised RMS topological charge density radius results on the coarse (top) and fine (bottom) ensembles. The radial size of the objects tends to be smaller for the smaller lattice spacing.

FIG. 11. Histograms showing the normalised RMS topological charge density radius results in the fixed physical scale continuum limit on the coarse (top) and fine (bottom) ensembles. The typical radial sizes of the objects are indistinguishable between the lattice spacings.

FIG. 12. Histograms showing the results of our algorithm with the hypercube dislocation filter applied to each of our finite-temperature ensembles: Nt = 64 (top left), 12 (top right), 8 (middle left), 6 (middle right), 5 (bottom left) and 4 (bottom right). Below the critical temperature, the mode is roughly constant at just above 0.3, but this shifts towards smaller values as T increases above Tc. This behaviour is consistent with the free holonomy parameter in SU(3) (Sec. VI B), shown with the dashed vertical line. The results are calculated with a hypercube dislocation filter after a flow time τ = 1.45, the amount required to ensure consistency between the two improvement schemes considered.
FIG. 13. The evolution of the observed number of topological objects under gradient flow after hypercubic dislocation filtering, for Nt = 64. We show separate curves for each topological charge improvement scheme, highlighting a subtle difference between the two distributions which is reduced as we continue to smooth the fields.

FIG. 14. The object number density under hypercubic dislocation filtering for each 32³ × Nt ensemble, calculated at the justified flow time τ = 1.45. There appears to be an initial drop in n as we cross Tc, though it increases thereafter.

FIG. 15. Histograms showing the normalised RMS topological charge density radii for objects resolved on each of our finite-temperature ensembles: Nt = 64 (top left), 12 (top right), 8 (middle left), 6 (middle right), 5 (bottom left) and 4 (bottom right). There are only minimal changes to the distribution as temperature is increased. The results are calculated with a hypercube dislocation filter after a flow time τ = 1.45, the amount required to ensure consistency between the topological charge modes of our two improvement schemes.

TABLE II. The statistics for each of our 32³ × Nt ensembles, including: the number of sites Nt in the temporal dimension, the corresponding temperatures, histogram modes, Polyakov loop values ⟨P⟩ = (1/3)⟨Tr P⟩ and respective holonomy parameters. The histogram mode is determined by the centre of the corresponding bin, with uncertainties quoted as half the bin width. The trend displayed by the holonomy parameter matches the modes of our histograms.