Bound State Scattering Simplified

In the description of the AdS5/CFT4 duality by an integrable system the scattering matrix for bound states plays a crucial role: it was initially constructed for the evaluation of finite size corrections to the planar spectrum of energy levels/anomalous dimensions by the thermodynamic Bethe ansatz, and more recently it re-appeared in the context of the glueing prescription of the hexagon approach to higher-point functions. In this work we present a simplified form of this scattering matrix and we make its pole structure manifest. We find some new relations between its matrix elements and also present an explicit form for its inverse. We finally discuss some of its properties including crossing symmetry. Our results will hopefully be useful for computing finite-size effects such as the ones given by the complicated sum-integrals arising from the glueing of hexagons, as well as help towards understanding universal features of the AdS5/CFT4 scattering matrix.


Introduction
In the study of the AdS5/CFT4 correspondence [1] the problem of computing string energy levels or, in the dual N = 4 super Yang-Mills theory (SYM), the planar anomalous dimensions of gauge-invariant composite operators has been related to an integrable system, namely an extended and deformed version of the Heisenberg spin chain [2]. The form of the S-matrix governing the scattering of the excitations on this chain is constrained by symmetry [3] up to one overall phase [4].
This integrable model provides results to all orders in the 't Hooft coupling λ in the asymptotic regime of infinite spin-chain length. Finite-size corrections were addressed first by Lüscher corrections [5] and then, systematically, by the thermodynamic Bethe ansatz (TBA) [6], which requires taking into account the bound states of the theory. An S-matrix for such bound states generalising [3] was first derived in [7] for bound states up to length two and then extended to arbitrary bound states in [8] on grounds of Lie algebra and Yangian symmetry [9]. It has a block-diagonal structure with two equal 1 × 1 blocks called X, two equal 4 × 4 blocks Y and finally a 6 × 6 block named Z. In the original work [8], X is given explicitly (it is essentially a generalised hypergeometric 4F3 function), but the other blocks were only implicitly defined through matrix inverses that seemed hard to simplify. This in principle poses an obstacle to Lüscher-type computations, which rely on the explicit form of the S-matrix.
Recently, the computation of three-point functions in N = 4 SYM became accessible to "integrability" methods by the invention of the hexagon approach [10]. Here one cuts the closed string world sheet into two hexagonal patches; the gauge theory equivalent is cutting up Feynman diagrams on the sphere into two halves. To obtain the full quantum result these patches have to be glued together again [10] by inserting complete sets of bound states on the edges. Hence also in this context the scattering of bound states is of prime importance.
Finally, higher-point functions can apparently be computed by hexagon tessellations, using the hexagon operator of the three-point problem as an elementary patch and glueing appropriately [11,12,13]. At weak coupling the procedure is technically involved already at one loop, not least because of the complexity of the bound state S-matrix needed in the glueing. Yet, in a recent attempt [15] at verifying and extending existing work at five points [14] we noticed that the bound state S-matrix had to be a much simpler object than the original work [8] suggested. In this work we tackle the programme of simplifying the matrix. We are able to provide completely explicit expressions in terms of relatively concise objects. Moreover, we uncover some new structure relating the elements of the bound state S-matrix.
The note has the following structure: first, we recall the basic construction and results of [8]. After this we discuss our approach to simplifying the bound state S-matrix and give compact expressions for its components; their pole structure is manifest in our new expressions. Finally we discuss some discrete symmetries of the S-matrix as well as crossing symmetry.

Review of bound state scattering
Let us briefly review the construction of the bound-state S-matrix as presented in [8]. The two-particle S-matrix S12 has to commute with the symmetry of the problem. Here J, the manifest symmetry of the S-matrix, spans a subalgebra of the superconformal algebra psu(2, 2|4) given by two copies of su(2|2); moreover, and crucially, this algebra is centrally extended as discovered in [3] (see also [16] for a derivation of the central extension from the string worldsheet). Hence the algebra of interest will be the centrally extended su(2|2) of [3].
Lie Superalgebra. There are two su(2)'s, spanned by the generators L^a_b, L̄^α_β with L^a_a = L̄^α_α = 0, two sets of supercharges Q^α_b, Q̄^a_β, and three central elements H, C, C̄. Latin letters a, b, … = 1, 2 run over the Grassmann-even indices and Greek letters α, β, … = 1, 2 run over the odd indices. The non-trivial commutation relations are the standard ones of the central extension. By setting C, C̄ = 0 the algebra reduces to sl(2|2).
Beyond this, the key ingredient of the construction in [8] is the Yangian of the centrally extended su(2|2) [9].

Hopf Algebra. The Hopf algebra structure depends on a central element U, which is called the braiding element. It is used to deform the coproduct of the Lie generators J. By requiring that the coproduct of the central elements be cocommutative, one can derive a relation between the braiding element and the central elements.
Extended Yangian. In addition to the above elements J^I, U ∈ A, the Yangian algebra Y is generated by level-one elements Ĵ^I. They obey the conventional Yangian relations with the structure constants f^{IJ}_K. The only non-trivial part of the Hopf algebra is the coproduct, since the remaining Hopf algebra structures are readily derived from it.
Let us spell out the coproduct of the supercharges Q^α_a, since the rest follows by using the commutation relations. The coupling constant g also takes the role of the deformation parameter in the definition of the Yangian.
The bound state S-matrix is then by definition the invertible operator that intertwines the usual and the opposite coproduct for any generator J of the Yangian of centrally extended su(2|2) in the corresponding representation. Here the opposite coproduct is defined by means of the graded permutation operator Π_g.
The algebra generators of centrally extended su(2|2) are then represented by differential operators. The supersymmetry generators depend on four parameters a, b, c, d, which are parameterized in terms of the Zhukowski variables x^±; the latter satisfy the usual constraints. The representation parameter γ is arbitrary, as it can be changed by rescaling single-particle states, and our results will hold for general γ. It is convenient to choose a value of γ which makes the representation unitary and provides it with nice analytic properties [16]. (Multiplying γ in (15) by a function e^{iφ(p)} such that φ(p) is a real analytic function also yields a unitary representation.) Let us also introduce the rapidity u and the rescaled rapidity v. The braiding factor is given by U = x^+/x^−.
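The kinematics just described can be sketched numerically. The snippet below is a minimal illustration, assuming the common "string scaling" normalisation x + 1/x = u/g with x^± = x(u ± iK/2) for a length-K bound state; these conventions are an assumption and may differ from the ones used here by rescalings.

```python
import cmath

def zhukowski(u, g):
    # One branch of the Zhukowski map x(u) solving x + 1/x = u/g.
    # Normalisation is an assumption; conventions differ by rescalings.
    return (u + cmath.sqrt(u * u - 4 * g * g)) / (2 * g)

def bound_state_x_pm(u, K, g):
    # x^± = x(u ± iK/2) for a length-K bound state with rapidity u.
    return zhukowski(u + 0.5j * K, g), zhukowski(u - 0.5j * K, g)

u, K, g = 1.3, 2, 0.7
xp, xm = bound_state_x_pm(u, K, g)

# Since x + 1/x = u/g holds on either branch, the "shortening" condition
# x^+ + 1/x^+ - x^- - 1/x^- = iK/g follows automatically.
shortening = (xp + 1 / xp) - (xm + 1 / xm)

# braiding factor U = x^+/x^-
U = xp / xm
```

Note that the shortening condition is branch-independent, so the numerical check below does not depend on the square-root branch chosen in `zhukowski`.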

Two-particle basis
The two-particle S-matrix scatters states of the form |m1, m2, m3, m4⟩ ⊗ |n1, n2, n3, n4⟩. We will use the convention that states from space one are labelled by integers K1, k, m and states from space two are labelled by K2, l, n. Moreover, it is convenient to introduce the barred labels k̄, l̄, n̄. We only need to restrict to the eigenspaces V_{r,ℓ} of ΔL^1_1 and ΔL̄^1_1.

Case I. First, let us look at the vector spaces where r = ±1. We only need to label the vectors |k, l⟩^{(±1)} since the eigenvalue of ΔL^1_1 can be directly read off from the labels k, l in the state. The labels k, l take the values k = 0, …, K1 − 1 and l = 0, …, K2 − 1. In [8] these vectors were labeled by IA, IB respectively.

S-matrix
The S-matrix is defined up to a normalisation factor. We choose our S-matrix to be normalized such that the scattering of highest-weight fermionic states has eigenvalue one.
Case I S-matrix. The S-matrix restricted to the subspaces V_{±1} takes a simple form in terms of the matrix X. Notice that X is purely of difference form and actually coincides with the su(2) universal R-matrix evaluated in the symmetric representations [17].
Case II S-matrix. The S-matrix that describes the scattering of the fermionic states from the subspaces V_{±1/2} can be obtained by using the supersymmetry generators. By using Yangian symmetry it is possible to define four operators that relate the four basis vectors |k, l⟩^{(1/2)}_i to the vector |k, l⟩^{(1)}. In this way one can express the matrix Y in terms of X, where A, A^± are 4 × 4 matrices with some rather involved components whose explicit form can be found in [8].
Case III S-matrix. By similar arguments, the S-matrix restricted to V_0 can be obtained from Y^{kl}_n with various shifted indices. We again encounter complicated matrix inversion and multiplication.

Simplifying X

Formula (26) for the X-matrix is rather concise. However, this or any other such writing conceals the pole structure of the object, which is essential knowledge, for example, for residue calculations arising from the glueing procedure of the hexagon approach [14,15]. Furthermore, X is a hypergeometric function and hence obeys a number of contiguity equations.
Pole decomposition. From the explicit expression (26) it is easy to see where the only poles lie in the complex plane. These are simple poles, as can be made apparent in an elegant decomposition over them. The second sum can actually be performed and gives an expression in terms of 4F3.
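As a toy illustration of why such a decomposition is useful for residue calculations (this is a generic partial-fraction example, not the actual decomposition of X), a rational function of the rapidity difference with only simple poles can be split so that each residue is read off directly:

```python
import sympy as sp

du = sp.symbols('delta_u')

# Toy rational function with simple poles only; a stand-in for the kind of
# coefficient one meets when summing residues in the glueing procedure.
f = 1 / ((du + 1) * (du + 2) * (du + 3))

decomp = sp.apart(f, du)        # exposes the simple poles term by term
res = sp.residue(f, du, -2)     # residue at the middle pole
```

Each term of `decomp` has a single simple pole, so a residue sum over such terms can be organised pole by pole.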
Recursion relations. From Yangian symmetry it can be shown that X satisfies recursion relations in its indices [15]. Here k̄ etc. are defined in (17). Notice that the cases X^{k±1,l}_n and X^{k,l±1}_n are related to each other by switching barred and unbarred indices. From these relations we see that for fixed k, l all X-matrices with shifted indices can be brought into a standard form X^{k,l}_n, X^{k,l}_{n±1}, …. Finally, by successively using (31) and (32) we obtain a relation from which we can also remove any X-matrix whose n index is shifted by a negative integer. This can now be used to compare different, possibly equivalent ways of writing the other entries of the bound state S-matrix. Moreover, we can recursively construct the X-matrix starting from X^{00}_0 = 1 and raising the indices by making use of these relations. For instance, using (31) together with X^{00}_{−1} = 0 we can derive any X^{kl}_0. We can then use, for example, (32) to compute X^{kl}_1 in terms of X^{kl}_0 and X^{k−1,l}_0, and work from there to general n.
Useful identities. It is clear that swapping barred and unbarred indices should leave X invariant, and indeed this can be verified explicitly. We also note a symmetry property of X, and observe that the inverse of X(v) takes a simple form. In what follows we will use further identities which are also a consequence of Yangian symmetry.

Factorizing in the presence of Zhukowski variables
By way of example, consider expanding a product of Zhukowski-variable factors. To reverse this step is non-trivial: the expression on the r.h.s. of the last equation cannot be factored without knowledge of the square-root property of the x^± function defined by equation (43). In particular, algebraic computing systems are able to factor polynomials in variables like g, u^± that do not obey such relations, but cannot easily be taught to apply rules like undoing (44). On the other hand (the two ± are independent), there is a product identity (45); for a proof it suffices to expand the product and to use (43) and (46). We can use the property (47) to simplify the r.h.s. of (45): in a first step we multiply e.g. with the "inverse" of one of the factors, upon which we use (47) to eliminate x^+_1, x^+_2 from the denominator. Multiplying out, one obtains up to cubic powers of x^+_1, x^+_2, on which now (48) is used repeatedly. We obtain the factorised result,

¹ As in [8] we use the "string scaling", which differs from that of [2].
where the factorisation of the l.h.s. is easily achieved by Factor[] because the result is by construction multilinear in the x variables (the exceptional case does not occur in this example). Last, we have used (46) backwards to rewrite the result. Cancelling the last factor we have shown the factorisation of the r.h.s. of (45) as desired. To arrive at the same conclusion one can alternatively use the "inverse" of x^+_1 − x^−_2. This procedure seems a little involved, but it gives a way of factoring out any of x^± − y^± (± is again independent in the two terms) or 1 − 1/(x^± y^±): to test for the presence of such a factor, one multiplies by its inverse and takes the steps described above. If Factor[] is able to pull out u^±_1 − u^±_2 we have succeeded. One can also eliminate (positive or negative) powers of x^±, y^± or similar simple factors. Admittedly, the method only works by "shooting", in that we have to try the inverse of any particular factor to detect it. This is not much of an obstacle as long as an idea about the form of the result exists. As we shall see, it is possible to deal with more general polynomials of x^±, y^± in the same way.
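The mechanism can be illustrated in any computer algebra system. The sympy sketch below assumes the standard Zhukowski identity (x − y)(1 − 1/(xy)) = (x + 1/x) − (y + 1/y) as the content of the product formula referred to above; the test polynomial f is hypothetical. Multiplying by the candidate "inverse" converts a hidden factor x − y into a difference of the combinations x + 1/x, which reduce to rapidities and are visible to ordinary factoring routines:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# The identity behind the trick: the "inverse" of (x - y) is (1 - 1/(x*y)),
# since their product is (x + 1/x) - (y + 1/y), a difference of the
# combinations that reduce to rapidities via x + 1/x = u/g.
ident = (x - y) * (1 - 1 / (x * y)) - ((x + 1 / x) - (y + 1 / y))

# "Shooting" test on a hypothetical polynomial f containing the factor:
f = (x - y) * (x * y + 3)
probe = f * (1 - 1 / (x * y))
target = ((x + 1 / x) - (y + 1 / y)) * (x * y + 3)
check = sp.simplify(probe - target)
```

If the factor is absent, the probe product fails to reduce to a polynomial in the rapidity combinations, which is the signal that the guess was wrong.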
Computing Y. Our first application of the technique concerns the simplification of the Y matrix. From (27) we see that it is defined by a matrix equation in which the matrix A has to be inverted. Employing Cramer's rule A^{−1} = A^#/Det(A), we find that the entries of the adjugate matrix are polynomials of up to seventh (total) order in the representation parameters, but maximally cubic in each of them. The determinant in the denominator is a polynomial p whose pieces p_ij are maximally of overall order eight in the representation parameters. Remarkably, p does not depend on k, l, n, a first hint that it might be factorisable in the way sketched above.
Indeed, after some rewritings and running our factorisation scheme on that form of p, we find a completely factorised result. The greatest worry has disappeared: the denominator of the Y matrix does not have a complicated dependence on the coupling constant; we only see the building blocks of the Beisert S-matrix [3]! For the ensuing attempt at factoring Y it is perhaps not necessary, but surely convenient, to appeal to the contiguity equations (31)-(34) to reduce the r.h.s. of (27) to a different basis of X-matrices with index shifts. The most concise formulae seem to arise choosing {X^{k,l}_n, X^{k−1,l}_n, X^{k,l−1}_{n−1}}. Intriguingly, in all entries of Y the coefficients of {X^{k−1,l}_n, X^{k,l}_n} both acquire the same x^±, y^±-dependent coefficient², followed by different albeit simple rational functions of v, K1, K2, k, l, n. We will state these in a form where the contiguity relations are used to reintroduce another instance of X (X^{k,l−1}_{n−1} to be precise) in order to eliminate δu from the coefficients. These expressions are strikingly simple.

Simplified scattering
The Y-matrix can be split into two different parts under component-wise multiplication. The first part depends only on the Zhukowski variables x^±. Recall that U_i = x^+_i/x^−_i and γ_i is the representation parameter for the i-th particle.
² For Y^i_i this was already noticed in [15]. The expressions given in (A.6) in that article motivate the present study.
while Ỹ = Ỹ1 + Ỹ2 + Ỹ3 only depends on X, δv and simple numerical factors. Notice that this form makes the pole structure explicit; in particular, it has no spurious poles. At this point it is also easy to see how the coefficients of the fundamental S-matrix appear, since they simply correspond to the elements of Y. However, owing to the identity (41), we can actually simplify the explicit v dependence and write Ỹ in the form of a compact matrix when n = k. The three-vectors refer to the "basis" {X^{k,l−1}_{n−1}, X^{k−1,l}_n, X^{k,l}_n}. Equation (58) can be taken as a definition, valid when n = k.

Factorization
Similar to the derivation of Y, in [8] the Z matrix is found from a matrix equation (28), where Y′ is a 6×8 block-diagonal compilation of Y elements with index shifts (k − 1, l, n), (k, l − 1, n), (k − 1, l, n − 1), (k, l − 1, n − 1), and the matrices C, D depend on the representation parameters and the various counters. The inverse of C needed to compute Z is much simpler than that of A discussed above. However, all components of C^{−1} have the denominator factor d = x^−_1 x^−_2 − x^+_1 x^+_2, which can hardly be a physical singularity of the S-matrix. For one, in the residue calculation [15] the matrix elements are "mirror rotated", x^−_1 → 1/x^−_1, x^+_2 → 1/x^+_2, and expanded to leading order in the coupling constant, so that d yields a singularity which would spoil any hope of obtaining a Taylor series. In fact, upon explicitly evaluating the diagonal Z elements in this kinematics and to leading order in g, it was seen in [15] that the singularity d′ generically cancels. Obviously one will ask whether the original denominator d cancels from the full Z matrix in the first place.
In order to apply the factorisation approach of Section 4.1 we need to construct a multiplicative "inverse" of d. To this end we write a general ansatz, whose product with d will again take the same form upon employing (48). Imposing c_ijkl = 0 for i + j + k + l > 0, we obtain a set of 15 independent homogeneous equations for the 16 coefficients p_ijkl. Up to overall rescalings the solution is unique. Choosing to scale away the denominator we obtain the coefficients p_ijkl. With this scaling, d e = c_0000 (63). As for the simpler factorisation problems described above, if multiplying e onto any given polynomial and using the rule (48) yields a factor c_0000, we will have detected a factor d in that polynomial. Finally, c_0000 can be cancelled against d e in the denominator.
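The construction of e can be carried out verbatim in a computer algebra system. The sympy sketch below uses arbitrary sample kinematics and takes x² → (u/g)x − 1 as our reading of the reduction rule (48); for convenience it normalises d·e = 1 at numerical kinematics rather than keeping an overall c_0000:

```python
import sympy as sp
from itertools import product

# Four Zhukowski-type variables, each obeying x + 1/x = u/g, i.e. the
# quadratic reduction rule x**2 = (u/g)*x - 1 (our reading of (48)).
xs = sp.symbols('x1m x1p x2m x2p')
us = [sp.Rational(3, 2), sp.Rational(5, 2), sp.Rational(7, 3), sp.Rational(1, 4)]
g = sp.Rational(4, 5)
rels = [x**2 - (u / g) * x + 1 for x, u in zip(xs, us)]

def reduce_poly(p):
    # Canonical remainder modulo the quadratic relations: the result is
    # multilinear in the four x's, i.e. lives on 16 basis monomials.
    return sp.reduced(sp.expand(p), rels, *xs)[1]

exps = list(product([0, 1], repeat=4))
monoms = [xs[0]**a * xs[1]**b * xs[2]**c * xs[3]**e for a, b, c, e in exps]
ps = sp.symbols('p0:16')
ansatz = sum(p * m for p, m in zip(ps, monoms))

d = xs[0] * xs[2] - xs[1] * xs[3]          # d = x1^- x2^- - x1^+ x2^+
dprod = sp.Poly(reduce_poly(d * ansatz), *xs)

# Demand d * e = 1 in the 16-dimensional quotient ring: one linear
# equation per basis monomial.
eqs = [dprod.coeff_monomial(m) - (1 if m == 1 else 0) for m in monoms]
sol = sp.solve(eqs, ps, dict=True)[0]
e_inv = sp.expand(ansatz.subs(sol))
```

At generic kinematics the linear system has a unique solution, in line with the uniqueness up to rescaling stated above.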
To not overcharge Mathematica, it is helpful to decompose the test polynomial, say f, in the same way as e in (61). The product with e is best taken keeping the coefficients in both polynomials abstract, leading to a decomposition of the type … p_i q_j = r_k for the result in terms of the sixteen "basis elements". The dots stand for coefficients expressed in terms of u^±, v^±.
To start on simplifying Z we reduce the problem to the calculation of two coefficient matrices for X^{k−1,l}_n, X^{k,l}_n using the contiguity relations (31)-(34). This is imperative here: only in such a form do all entries in the coefficient matrices factor out c_0000 upon multiplication by e. Barring Z^i_i, i ∈ {1 … 4}, and Z^5_6, Z^6_5, the computation is now as for Y: in any other component, the sixteen r_k for X^{k−1,l}_n have a common (at times fairly involved) polynomial factor depending on v, K1, K2, k, l, n, and the same happens for those multiplying X^{k,l}_n. These two "long" polynomials are in general distinct. The remaining simple factors and the powers (x^−_1)^i (x^−_2)^j (x^+_1)^k (x^+_2)^l are finally put together and dealt with as sketched in Section 4.1 and its application to Y. As happens for Y, we obtain the same rational function of x^±_{1,2} in the coefficients of both X's. To illustrate these features we display the final expression for Z^3_2, which is the most concise example. In the last formula, the numerator factors in the square brackets are essentially what we called the "long polynomials" above.
In the six special cases there are several different such polynomials within either set of sixteen r coefficients. For Z^5_6, Z^6_5 one straightforwardly sees that there are minimally two x^±, y^± structures: a trivial one producing an isolated instance of X, the other a problem similar to the simplification of Y and the more ordinary components of Z. Indeed, such formulae were pre-empted in [15], equations (A.8), (A.9). With some hindsight and a lot of patience we could find a similar split into two groups of terms also in the remaining four cases, where it is far less obvious how the long polynomials combine. Such a writing is, of course, not unique; in these atypical cases the long polynomials reach orders up to O(δu⁴). We will not elaborate on these two somewhat atypical cases in the following, as they are given by Z^5_5, Z^6_6 through (65). Expressing X^{k+δk,l+δl}_{n+δn} through the contiguous X^{k−1,l}_n, X^{k,l}_n using (31)-(34), we obtain coefficients resembling those in (64). Conversely, can the Y, Z elements be cast into a simpler form using more instances of X? Scanning the range δk, δl, δn ∈ {−2 … 1}, it is found that some of the index shifts with δn = δl are individually of the same form as (64): there is one simple pole at δũ + k + l or no pole in δu, and the numerators of the two coefficients are of comparable order; see the table "Properties of the decomposition of X^{k+δk,l+δl}_{n+δl} in terms of X^{k−1,l}_n, X^{k,l}_n". Other cases, especially when the range is extended to larger shifts, introduce new types of poles in δu.
Attempting to use, say, (64) in an analytic resummation of residues as in [15], one would ideally want to construct a form in which each X is multiplied by simple factors that can be absorbed into the defining 4F3. Leaving this programme to future work, we propose here to eliminate δu from the coefficients, which must already entail a simplification because a variable is suppressed. This is in fact possible as long as n = k: with the notation of the table above, we may use (−1, −1) to subtract out the pole in δu, upon which also the order in δu of the two long polynomials decreases by one unit. Successively, (1, −1), (−1, 1), (0, −1) can be employed to subtract powers of δu from the higher to the lower orders. For instance, in the resulting formulae we write (−1, 0), (0, 0) for X^{k−1,l}_n, X^{k,l}_n. In order to write Z in terms of shifted Y elements it will prove useful to trade (−1, 1), (1, −1) for (−2, 0), (0, −2) by a five-term identity.

Z from Y
After the appropriate simplifications we found a very compact and interesting way to define the Z block: it can be expressed quadratically in the Y block by introducing a wedge product, so that we can write Z = Y ∧ Y.
On the level of the basis vectors we identify the wedge products of the Case II vectors, |k, l⟩_i ∧ |k, l⟩_j, with the Case III vectors |k, l⟩^{(0)}_d, and the product acts on X as

X^{kl}_n · X^{kl}_n = X^{kl}_n,  X^{kl}_n · X^{k+a,l+b}_{n+b} = X^{k+a,l+b}_{n+b},  X^{k−1,l}_n · X^{k,l−1}_{n−1} = X^{k−1,l−1}_{n−1}.

Using (41) we always make sure that one of the Y factors has an X^{k−1,l}_n term and the other has a term X^{k,l−1}_{n−1}. This ensures that any component of Z can be written as a linear combination of X^{kl}_n, X^{k−1,l}_n, X^{k,l−1}_{n−1}, X^{k−1,l−1}_{n−1}. Because of the identities that X satisfies, it does not matter which Y factor has the X^{k−1,l}_n term. From this we find additional product rules. Recall that we can write Ỹ as a three-vector w.r.t. the spanning system {X^{k,l−1}_{n−1}, X^{k−1,l}_n, X^{k,l}_n}. Then from the above rules we find a very compact expression for Z in terms of Y. This seems to be a type of fusion relation in which the scattering of two bosons is written as some sort of composite scattering of fermions. At this point it is unclear what the meaning of this observation is, but it hints at some further structure of the bound state S-matrix. Understanding this property might be important, for example, for potentially finding a universal R-matrix. It would be interesting to understand the nature of the wedge product and its non-trivial action on X.
As an example, let us work out Z^1_2 via the above identification. One sees that almost all components of Z are just given by one term. However, this is not true for the diagonal elements Z^i_i, where i = 1, 2, 3, 4, and for Z^5_6, Z^6_5. As a consequence, these elements have two different x^±-dependent prefactors.

Results for Z
Following the decomposition of the wedge product, we can write Z in terms of a few building blocks; for conciseness we have defined some shorthand notation. As we can see, Z^5_6 and Z^6_5 cannot be very elegantly expressed in terms of X^{k+δk,l+δl}_{n+δl}. However, if we allow for atypical index shifts then they simplify, too, since from su(2) invariance we can prove additional relations. We have checked that these relations indeed hold. We would like to stress again that the decomposition in terms of X functions is not unique; one instance of (78) is (90), relevant to the bottom left corner of Z̃_1. Consequently, there are also several ways to express Z in terms of Y.

Properties
In this section we discuss some properties of the bound state S-matrix. We mainly generalize the properties that were found for the fundamental S-matrix, along the lines of their formulation in [18].
Braiding and physical unitarity. Much like the S-matrix of fundamental particles, the bound-state S-matrix enjoys braiding unitarity. This provides us with a simple way to compute the inverse S-matrix, which is important when describing the scattering of particles in the anti-symmetric representation.
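Braiding unitarity can be illustrated on the simplest relative of these objects, the rational su(2) R-matrix (a toy stand-in, not the bound-state S-matrix itself). For a difference-form R-matrix commuting with the permutation operator, braiding unitarity reduces to R(u) R(−u) = 1:

```python
import numpy as np

def R(u):
    # Normalised rational su(2) R-matrix R(u) = (u*1 + i*P)/(u + i),
    # with P the permutation operator on C^2 (x) C^2.
    P = np.eye(4)[[0, 2, 1, 3]]          # swaps |01> and |10>
    return (u * np.eye(4) + 1j * P) / (u + 1j)

u = 0.83
# braiding unitarity: R commutes with P, so S21 = P S12 P = R(-u) here,
# and the product below must be the identity matrix.
braiding = R(u) @ R(-u)
```

The same structure, with far more involved blocks, underlies the statement for the bound-state S-matrix.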
Generalised physical unitarity. If we start from a unitary representation of the symmetry algebra, e.g. by picking γ as in (15), the S-matrix also enjoys generalised physical unitarity. Complex conjugation acts in a definite way on the S-matrix parameters, with γ given by (15).
Symmetry. For γ as in (15), we find that the S-matrix is symmetric. This property is easy to prove from (39) and the explicit form of Y and Z in terms of X. Thus, if we properly normalize our states, this reduces to the regular relation S^T = S.
Inversion. By combining the symmetry property and physical unitarity we find that the inverse S-matrix may be computed by a simple substitution of parameters. Remarkably, this property holds for any γ.
Crossing. It is most convenient to define crossing symmetry analogously to [18]. The charge conjugation transformation then simply corresponds to the trivial automorphism

C · |a, b, c, d⟩ = i^{a+b+c+d} (−1)^{a+c} |b, a, d, c⟩.
The prefactor i^{a+b+c+d} = i^K is used for convenience. It corresponds to a simple transformation acting on the variables that generate the bound state representation, from which it is easy to see how the representation parameters transform. We then find the crossing symmetry of the S-matrix, written in components, where the dressing factor is that of [19] and the crossing transformation acts on the kinematic variables. Upon properly normalizing our basis elements, the crossing relation can now be brought to the standard form

(C ⊗ 1) S^{t_1}(u_1^{cross}, u_2) (C^{−1} ⊗ 1) = F S^{−1}.
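The explicit formula for C can be checked mechanically; for instance, it squares to the identity on a bound-state module. A small Python check (the state content a + b + c + d = K with fermionic labels c, d ∈ {0, 1} is our assumption about the oscillator labels):

```python
from itertools import product

def charge_conj(state):
    # C|a,b,c,d> = i^(a+b+c+d) (-1)^(a+c) |b,a,d,c>, returned as (phase, state)
    a, b, c, d = state
    return (1j)**(a + b + c + d) * (-1)**(a + c), (b, a, d, c)

# C^2 = 1: the two phases i^K (-1)^(a+c) and i^K (-1)^(b+d) multiply to
# i^(2K) (-1)^K = 1 on states with a + b + c + d = K.
K = 3
states = [s for s in product(range(K + 1), range(K + 1), (0, 1), (0, 1))
          if sum(s) == K]
for s in states:
    p1, s1 = charge_conj(s)
    p2, s2 = charge_conj(s1)
    assert s2 == s and abs(p1 * p2 - 1) < 1e-12
```

The same bookkeeping can be used to build C as an explicit matrix on the module and conjugate the S-matrix blocks with it.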
Monodromy. The S-matrix S is also invariant under crossing (in the same way) of both variables.
For a particular choice of γ, namely γ_i = i(x^+ − x^−)U, see e.g. [16], this is precisely the crossing transformation. More generally, it corresponds to the crossing transformation on x^± combined with a redefinition of γ, which follows from a local basis transformation.

Conclusions
The construction of the bound state S-matrix in [8] is complete, though not completely explicit: one is left to work with certain matrix inverses which obfuscate, for instance, the pole structure. The central obstruction to simplification are the Zhukowski variables x^±, y^±, which are root functions and thus impede factorisation when occurring in rational functions. For the case at hand we solved this problem by introducing a concept of "inverse" (modulo readily factorisable expressions) for certain combinations of Zhukowski variables. Our results are split into a part containing the Zhukowski variables, and with them the dependence of the bound state scattering matrix on the 't Hooft coupling λ, and another part of hypergeometric type. The first factor is of the same type as in the Beisert S-matrix for fundamental particles [3]. It has only physical singularities, e.g. poles like u^+ − v^−; for one, the unphysical x^−_1 x^−_2 − x^+_1 x^+_2 singularity of the Z block is shown to cancel.
The hypergeometric part depends on the various counters and the rapidity difference, but not on λ. The Y blocks can be expressed through X elements with shifted counters; likewise those of Z are written in terms of Y, from which one can regain a slightly more complicated form in terms of X. We display completely explicit results for all parts on just a few pages. There are only a few distinct coefficients in these formulae; their appearance suggests that there may be a unifying superspace form. In particular, we have found a very suggestive relation between the Y and Z components that hints at a fused structure.
Finally we have clarified several properties of the bound state S-matrix such as crossing, inversion and braiding unitarity.
The writing we chose was mainly motivated by brevity; it is, of course, not unique. An open question is what form will be most useful for residue calculations as in [14,15] or alternative future approaches to multiple glueings of hexagon tiles. Our findings might also yield interesting reformulations of the TBA [6].
• The S-matrix is then programmed as an operator acting on such states, which evaluates to give the correct components.
• In order not to deal with spurious poles in X, Y, Z, we send K → K + ε and let ε → 0 in the end. This regulates combinatorial factors of the form K_i − A which sometimes naively result in a 0/0.
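The regularisation can be mimicked in any computer algebra system. A sympy toy example follows; the coefficient shown is hypothetical, chosen only to exhibit a naive 0/0 that the shift K → K + ε resolves:

```python
import sympy as sp

K, eps = sp.symbols('K epsilon')

# Hypothetical combinatorial coefficient that degenerates at K = 2:
# numerator and denominator share the root K = 2, so factor-by-factor
# evaluation gives 0/0 although the limit is finite.
coeff = (K**2 - K - 2) / (K**2 - 7*K + 10)   # = (K-2)(K+1) / ((K-2)(K-5))

naive = coeff.subs(K, 2)                               # nan: the naive 0/0
regulated = sp.limit(coeff.subs(K, 2 + eps), eps, 0)   # finite answer
```

The substitution K → 2 + ε turns the degenerate ratio into (ε + 3)/(ε − 3), whose ε → 0 limit is the finite regulated value.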