Minimal percolating sets for mutating infectious diseases

This paper is dedicated to the study of the interaction between dynamical systems and percolation models, with a view toward the study of viral infections whose viruses mutate with time. Recall that r-bootstrap percolation describes a deterministic process in which a vertex of a graph becomes infected once at least r of its neighbors are infected. We generalize this by introducing F(t)-bootstrap percolation, a time-dependent process in which the number of infected neighbors needed for a disease to be transmitted is determined by a percolation function F(t) at each time t. After studying some of the basic properties of the model, we consider smallest percolating sets and construct a polynomial-time algorithm to find one smallest minimal percolating set on finite trees for certain F(t)-bootstrap percolation models.


I. INTRODUCTION
The study of infectious diseases through mathematical models dates back to 1766, when Bernoulli developed a model to examine the mortality due to smallpox in England [1]. Moreover, the germ theory that describes the spreading of infectious diseases was first established in 1840 by Henle and was further developed in the late 19th and early 20th centuries. This laid the groundwork for mathematical models, as it explained the way infectious diseases spread, which led to the rise of compartmental models. These models divide populations into compartments (and are also called coarse-grained models), where individuals in each compartment share the same characteristics; Ross first established one such model in 1911 in Ref. [2] to study malaria, and later on basic compartmental models to study infectious diseases were established in a sequence of three papers by Kermack and McKendrick [3] (see also Ref. [4]).
In this paper we are interested in the interaction between dynamical systems and percolation models, from the point of view of infections which mutate with time. The use of stochastic models to study infectious diseases has been popular for a long time, and dates back to the 1970s (e.g., see the celebrated work of Harris [5] and Metz [4]). There are many ways to mathematically model infections, including statistics-based models such as regression models (e.g., Ref. [6]), cumulative sum charts (e.g., Ref. [7]), hidden Markov models (e.g., Ref. [8]), and spatial models (e.g., Ref. [7]), as well as mechanistic state-space models such as continuum models which are described by differential equations (e.g., Ref. [9]), stochastic models (e.g., Ref. [10]), complex network models (e.g., Ref. [11]), and agent-based simulations (e.g., Ref. [12]; see also Ref. [1] and references therein).
Difficulties when modeling infections include incorporating the dynamics of behavior in models, as it may be difficult to assess the extent to which behaviors should be modeled explicitly, quantifying changes in reporting behavior, and identifying the role of movement and travel [13]. When using data from multiple sources, difficulties may arise in determining how the evidence should be weighted and in handling dependence between datasets [14].
In what follows we shall introduce a novel type of dynamical percolation which we call F(t)-bootstrap percolation, thought of as a generalization of classical bootstrap percolation. This approach allows us to model mutating infections, and thus we dedicate this paper to the study of some of its main features. After recalling classical r-bootstrap percolation in Sec. I A, we introduce a time-dependent percolation function F(t) through which we introduce a dynamical aspect to the percolating model, as described in Definition 1 in Sec. II, given as follows. Given a function F(t) : N → N, we define an F(t)-bootstrap percolation model on a graph G with vertices V and initially infected set A_0 as the process which at time t + 1 has infected set given by

A_{t+1} = A_t ∪ {v ∈ V : |N(v) ∩ A_t| ≥ F(t)},    (1)

where N(v) denotes the set of neighboring vertices of v, and we let A_∞ be the final set of infected vertices once the percolation process has finished. As mentioned before, this model allows one to study situations in which an infection propagates at different rates depending on the time. As an example, one may consider the mutating influenza virus within this setting: instead of having a mutating virus for which the current vaccination becomes ineffective for the new mutation, we can think of a model with a fixed virus whose rate of infection changes with time; in this case, the rate becomes higher as time passes.
In Sec. II we study some basic properties of this model, describe certain (recurrent) functions which ensure the model percolates, and study the critical probability p_c. Since our motivation comes partially from the study of effective vaccination programs which would allow one to contain an epidemic, we are interested both in the percolation time of the model and in minimal percolating sets. We study the former in Sec. III, where, by considering functions equivalent to F(t), we obtain bounds on the percolation time.
Finally, in Secs. IV and V we introduce and study smallest minimal percolating sets for F(t)-bootstrap percolation on (nonregular) trees. This leads to one of our main results in Sec. V D, where we describe an algorithm for finding a smallest minimal percolating set. Last, we conclude the paper in Sec. VII with a comparison between our model and algorithm and those considered in Ref. [15], which solves the same problem for classical bootstrap percolation, and we analyze the effect of taking different functions within our dynamical percolation.

A. Bootstrap percolation
The model introduced in this paper, described in Eq. (1), is a dynamical generalization of what is known as bootstrap percolation, introduced in 1979 in the context of solid state physics to analyze diluted magnetic systems in which a strong competition exists between exchange and crystal-field interactions [16]. Bootstrap percolation has seen applications in diverse areas, including the study of fluid flow in porous media, the orientational ordering process of magnetic alloys, and the failure of units in a structured collection of computer memory (e.g., see Ref. [17]).
Bootstrap percolation has long been studied mathematically on arbitrary trees [18], as well as on finite and infinite rooted trees, including Galton-Watson trees (e.g., see Ref. [19]). Compared with other models for infectious diseases, cellular automata models better simulate the effects of individual behavior and the spatial aspects of epidemic spreading, and better account for the effects of the mixing patterns of individuals, as each individual is modeled separately instead of all individuals being assumed homogeneous. Hence, contagious diseases in which these factors have significant effects are better understood when analyzed with cellular automata models such as bootstrap percolation [20], which is defined as follows. For r ∈ Z^+, we define an r-bootstrap percolation model on a graph G with vertices V and initially infected set A_0 as the process in which at time t + 1 the infected set is given by

A_{t+1} = A_t ∪ {v ∈ V : |N(v) ∩ A_t| ≥ r}.

Here, as before, we denote by N(v) the set of neighboring vertices of v. In contrast, an SIR model relates, at each time t, the number of susceptible individuals S(t) to the number of infected individuals I(t) and the number of recovered individuals R(t) by a system of differential equations; an example of an SIR model used to simulate the spread of the dengue fever disease appears in Ref. [21]. In these models, a fixed parameter β denotes the average number of transmissions from an infected node per time period. In reality, however, the rate of spread of a disease is not necessarily constant in time. This helps motivate the introduction of a time-dependent model of bootstrap percolation where the rate of spread varies with time, done in Sec. II.
In what follows we shall present a dynamical generalization of the above model, for which it will be useful to have an example on which to establish comparisons. Consider the (irregular) tree with three infected nodes at time t = 0, given by A_0 = {2, 4, 5}, as shown in Fig. 1. Through 2-bootstrap percolation, at time t = 1 node 3 becomes infected because its neighbors 4 and 5 are infected at time t = 0. At time t = 2, node 1 becomes infected since its neighbors 2 and 3 are infected at time t = 1. Finally, note that nodes 6, 7, and 8 cannot become infected, because they each have only one neighbor, yet two or more infected neighbors are required for infection.
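The 2-bootstrap dynamics just described can be checked with a short simulation. The exact edge set of Fig. 1 is not reproduced here, so the tree below is one plausible reconstruction consistent with the description (an assumption); the update rule itself is the one of Eq. (1) with the constant function F(t) = 2.

```python
from collections import defaultdict

def r_bootstrap(neighbors, initially_infected, r, max_steps=50):
    """Run r-bootstrap percolation and return the final infected set."""
    infected = set(initially_infected)
    for _ in range(max_steps):
        newly = {v for v in neighbors
                 if v not in infected and len(neighbors[v] & infected) >= r}
        if not newly:          # no update: the process has stabilized
            break
        infected |= newly
    return infected

# A tree consistent with the description of Fig. 1 (edges are assumed):
# 1 is the root with children 2 and 3; 3 has children 4 and 5;
# 6, 7, 8 are leaves hanging from 2, 4, and 5, respectively.
edges = [(1, 2), (1, 3), (3, 4), (3, 5), (2, 6), (4, 7), (5, 8)]
neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

A_inf = r_bootstrap(neighbors, {2, 4, 5}, r=2)
print(sorted(A_inf))  # the leaves 6, 7, 8 are never infected
```

Note that lowering the threshold to r = 1 would infect the whole tree, which is the qualitative difference the time-dependent model below exploits.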

II. TIME-DEPENDENT PERCOLATION
Time-dependent percolation models are motivated by the fact that the rate of spread of a disease may change over time. In the SIR models mentioned before, since β is the average number of transmissions from an infected node in a time period, 1/β is the time it takes to infect a node. If we "divide the work" among several neighbors, then 1/β can also be read as the number of infected neighbors needed to infect the current node.
Consider now an infection which evolves with time. That is, instead of requiring the same number of neighbors at every time as in r-bootstrap percolation, consider a percolation model where the number of neighbors required to be infected for the disease to propagate changes with time, following the behavior of a function F(t). We shall say that a function is a percolation function if it is a function F : I → N, where I is an initial segment of N (or all of N), that we use in a time-dependent percolation process, and which specifies the number of neighbors required to infect a node at time t.
Definition 1 (F(t)-bootstrap percolation). Given a function F(t) : N → N, we define an F(t)-bootstrap percolation model on a graph G with vertices V and initially infected set A_0 as the process in which at time t + 1 the infected set is given by

A_{t+1} = A_t ∪ {v ∈ V : |N(v) ∩ A_t| ≥ F(t)}.

Here, as before, we denote by N(v) the set of neighboring vertices of v, and we let A_∞ be the final set of infected vertices once the percolation process has finished.
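Definition 1 translates directly into code. The one subtlety is that, unlike in r-bootstrap percolation, the process cannot safely be stopped the first time no new vertex is infected, since F(t) may decrease later; a minimal sketch, in which the example graph, function, and horizon are illustrative assumptions:

```python
def ft_bootstrap(neighbors, A0, F, horizon):
    """F(t)-bootstrap percolation: A_{t+1} = A_t ∪ {v : |N(v) ∩ A_t| >= F(t)}."""
    A = set(A0)
    for t in range(horizon):
        A |= {v for v in neighbors if len(neighbors[v] & A) >= F(t)}
        if len(A) == len(neighbors):   # everything is infected: safe to stop
            break
    return A

# Path graph 1-2-3-4-5; the threshold alternates between 2 and 1,
# so the infection only advances on odd time steps.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
F = lambda t: 2 if t % 2 == 0 else 1

print(sorted(ft_bootstrap(path, {3}, F, horizon=20)))
```

With the constant function F(t) = 2 instead, the same initial set {3} would infect nothing further, illustrating how strongly the dynamics depend on F.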

One should note that r-bootstrap percolation can be recovered from F (t )-bootstrap percolation by setting the percolation function to be the constant F (t ) = r. In what follows, unless otherwise stated, the initial set A 0 is chosen in the same way as in r-bootstrap percolation: by randomly selecting a set of initially infected vertices with probability p, for some fixed value of p which is called the probability of infection.
If there are multiple percolation functions and initially infected sets under consideration, then we may use the notation A^F_t to denote the set of infected nodes at time t percolating under the function F(t) with A_0 as the initially infected set. In particular, this would be the case when generalizing the above dynamical model to a multitype bootstrap percolation model such as the one introduced in Ref. [22].
To understand some basic properties of F(t)-bootstrap percolation, we shall first focus on a single update function F(t), and consider the critical probability p_c of infection for which the probability of percolation is 1/2 on finite trees; in the case of infinite trees, this is the value below which there are no infinite clusters, and above which there are infinite clusters, with probability 1. When considering classical bootstrap percolation, note that the resulting set A^r_∞ of r-bootstrap percolation is always contained in the resulting set A^n_∞ of n-bootstrap percolation provided n ≤ r. From the above, setting the value m := min_t F(t), the resulting set A^F_∞ of F(t)-bootstrap percolation will be contained in A^m_∞. Note that if a vertex v has at least m infected neighbors at some time t, then v ∈ A^F_{t'+1} for the next time t' ≥ t for which F(t') = m. Moreover, since for the recurrent functions we are considering there are infinitely many times t such that F(t) = m, one has that the final resulting set A^m_∞ of m-bootstrap percolation is contained in the final resulting set A^F_∞ of F(t)-bootstrap percolation. Then the resulting sets of m-bootstrap percolation and F(t)-bootstrap percolation must be identical, and hence the critical probability for F(t)-bootstrap percolation is that of m-bootstrap percolation. In other words, we have shown that if F(t) equals its minimum for infinitely many times t, then the critical probability of infection p_c for which the probability of percolation is 1/2 is given by the value of the critical probability in m-bootstrap percolation, this is

p_c(F) = p_c(m), for m := min_t F(t).

The types of update functions that satisfy this include sinusoidal functions and, since we restricted the codomain to be positive, weakly decreasing functions. The percolation function F(t) can be written in terms of a one-parameter family β(t) by setting F(t) := 1/β(t).
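The argument above can be checked numerically: a function that attains its minimum m infinitely often yields the same final infected set as constant m-bootstrap percolation. A small sketch, where the path graph, the oscillating function, and the initial sets are illustrative assumptions:

```python
def ft_bootstrap(neighbors, A0, F, horizon):
    """F(t)-bootstrap percolation run for a fixed horizon."""
    A = set(A0)
    for t in range(horizon):
        A |= {v for v in neighbors if len(neighbors[v] & A) >= F(t)}
    return A

# Path 1-2-...-6, and a function oscillating between its minimum 1 and 3.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
F_osc = lambda t: 1 if t % 2 == 0 else 3
F_min = lambda t: 1                      # m = min_t F_osc(t)

# The final infected sets agree for every tested initial set.
for A0 in [{1}, {3}, {2, 5}]:
    assert ft_bootstrap(path, A0, F_osc, 40) == ft_bootstrap(path, A0, F_min, 40)
print("final sets agree")
```

The horizon must be long enough for the oscillating process, which only advances on the time steps where F(t) = m, to catch up with the constant one.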
As we shall see later, different choices of the one-parameter family β(t) defining F(t) will lead to very different dynamical models. A particular setup arises from Ref. [23], which provides data on the time-dependent rate of spread of a specific virus, and through which one has that an interesting family of parameters appears by setting

β(t) = b_f + (b_0 − b_f) k^t,

where b_0 is the initial rate of spread, b_f is the final rate of spread, and 0 < k < 1. Then at time t, the number of infected neighbors it takes to infect a node is F(t) = ⌈1/β(t)⌉. In this case, since β(t) tends to b_f, and 1/β(t) tends to 1/b_f, one can see that there will be infinitely many times t such that F(t) = ⌈1/b_f⌉. Hence, in this setting from Eq. (4), the critical probability will be the same as that of r-bootstrap percolation with r = ⌈1/b_f⌉.
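With a rate that decays from b_0 to b_f (the explicit decay below is an assumed form; any expression with the same limits behaves similarly), the induced percolation function stabilizes at ⌈1/b_f⌉, so its minimum is attained for all large t. The numerical values chosen here are illustrative assumptions:

```python
import math

def make_F(b0, bf, k):
    """Percolation function induced by a rate decaying from b0 to bf."""
    def F(t):
        beta = bf + (b0 - bf) * k ** t   # assumed form of beta(t)
        return max(1, math.ceil(1 / beta))
    return F

F = make_F(b0=1.0, bf=0.25, k=0.5)
print([F(t) for t in range(8)])   # thresholds grow as the rate decays
```

Here the disease becomes harder to transmit over time, and the critical probability is governed by the limiting threshold ⌈1/b_f⌉ = 4.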

III. PERCOLATION TIME
Informally, the percolation time (for finite graphs) is the time it takes for the percolation process to terminate, starting from a specific initially infected set of a graph. In terms of limits, recall that the final percolating set is defined as

A_∞ := ∪_{t ≥ 0} A_t,

and thus one may think of the percolation time as the smallest time t for which A_t = A_∞. Note that for percolation on infinite trees there exist a percolation function and an initially infected set such that a percolation time does not exist, whereas there is always a well-defined percolation time for percolation on finite trees. Thus, we restrict the following discussion to finite trees. By considering different initial probabilities of infection p, which determine the initially infected set A_0, and different percolation functions F(t), one can see that the percolation time of a model can vary drastically. To illustrate this, Fig. 2 shows the evolution of the infection for several combinations of initial probabilities and percolation functions. The model was run 10^3 times for each combination on random graphs with 10^2 nodes and 300 edges.
In the settings of Fig. 2, one can see that all the models stabilize by time 10, implying that the percolation time is less than or equal to 10. Generally, understanding the percolation time is useful in determining when the disease spreading has stabilized. In what follows, we find a method to generate an upper bound on the percolation time given a specific graph and function. Formally, for the percolation functions considered in this paper, we define the percolation time t* as the minimum time t such that A_t = A_∞.

A. Equivalent functions
Expanding on the notation of Eq. (5), we shall denote by A^F_∞ the set of nodes infected by percolating the set A_0 on the graph with percolation function F(t), and we shall simply write A_∞ when the percolation function F(t) is clear from context or irrelevant. Moreover, we shall say that two percolation functions F_1 : I_1 → Z^+ and F_2 : I_2 → Z^+ are equivalent if, for every initially infected set A_0, one has A^{F_1}_∞ = A^{F_2}_∞. This equivalence relation can be understood through the lemma below, which uses an additional function γ(t) to relate two percolation functions F(t) and F'(t) when F'(t) can intuitively be "generated" by removing some values of F(t). This removal procedure is further specified below. Given two subsets I_1 and I_2 of N, we say a function γ : I_2 → I_1 is nice if it is strictly increasing, so that composing with γ amounts to deleting some of the values of a function defined on I_1. The notion of a nice function allows us to understand the relation between two different dynamical percolation models defined through two functions F(t) and F'(t). Given I_1, I_2 ⊂ N, let F(t) be any percolation function with domain I_1, and define the percolation function F'(t) with domain I_2 as

F'(t) := F(γ(t)),

for γ(t) a nice function. Through the function F'(t), for any fixed initially infected set A_0 and t ∈ I_2, one can show by induction (see Appendix A) that

A^{F'}_t = A^F_{γ(t)}.

Intuitively, the above result tells us that, given a fixed time t_0 and some t > t_0, if F(t) = ℓ is the smallest value the function takes on after the time t_0, and F(t) has already taken on that value more than n times, for n the number of nodes in the graph, then there will be no nodes that become infected at that time and the value is safe to be "removed."

B. Removal process
In what follows we shall clarify the removal process by defining an upper bound on the percolation time for a specified tree and function F(t). For this, let G be a regular tree of degree d with n vertices. Given a percolation function F(t), one can define a function F'(t), together with a function γ : N → N ∪ {−1}, by removing from F(t) every value that can no longer infect any node and recording in γ the times that are kept. Then one can show (see Appendix B) that the two functions are equivalent as defined in Eq. (6), this is

A^F_∞ = A^{F'}_∞

for every initially infected set A_0. From the above description of equivalent functions, we can see two things: (i) An upper bound on the percolation time is the largest t such that F'(t) is defined, and we can use this function in an algorithm to find the smallest minimal percolating set, since F(t) and F'(t) are equivalent.
(ii) An upper bound on the percolation time cannot be obtained without regard to the percolation function.
To see item (ii), suppose we have such an upper bound b on some connected graph of degree d with one node initially infected and more than one node not initially infected. If we take a percolation function F(t) such that F(t) = d + 1 for all t ≤ b and F(t) = 1 otherwise, then there will be nodes infected at time b + 1, leading to a contradiction.
To see the implications of the above points within the equivalence of functions, suppose that the degree of the graph in consideration is d, and define a sequence (a_n) by a_1 = d and a_{n+1} = (a_n + 1)d.
Then the size of the domain of F'(t) in Eq. (8) is at most Σ_{i=1}^d a_i. Indeed, suppose each value appears exactly d times after the last value smaller than it appears. To count how large the domain can be, we start with the possible times t such that F'(t) = 1; there are d of them, as the value 1 can appear at most d times. Note that this is equal to a_1. Now suppose we have already counted all the possible times t with F'(t) < n + 1, for 1 ≤ n < d, which amounted to a_n. Then there can be at most d instances of the value n + 1 between consecutive appearances of these times, as well as before and after all such appearances, so there are a_n + 1 places where F'(t) = n + 1 can appear. Thus, there are at most (a_n + 1)d elements t in the domain such that F'(t) = n + 1. Summing all of them yields Σ_{i=1}^d a_i, the total number of elements in the domain in Eq. (9). Hence, in this setting an upper bound on the time of F(t) percolating on such a graph can be found by taking γ^{−1}(Σ_{i=1}^d a_i), as defined in Eq. (9).
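The recursion a_1 = d, a_{n+1} = (a_n + 1)d and the resulting bound Σ_{i=1}^d a_i on the domain size are easy to tabulate; a small sketch:

```python
def domain_bound(d):
    """Return sum_{i=1}^d a_i, where a_1 = d and a_{n+1} = (a_n + 1) * d."""
    a, total = d, d
    for _ in range(d - 1):
        a = (a + 1) * d
        total += a
    return total

# For a tree of degree d = 3: a_1 = 3, a_2 = 12, a_3 = 39, so the bound is 54.
print([domain_bound(d) for d in (1, 2, 3, 4)])
```

The bound grows roughly like d^d, which is coarse but depends only on the degree of the tree, not on the particular percolation function.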

IV. MINIMAL PERCOLATING SETS
When considering percolations within a graph, it is of much interest to understand which subsets of vertices, when infected, would lead to the infection reaching the whole graph.
To study those sets, we shall say that a percolating set of a graph G with percolation function F(t) is a set A_0 for which A^F_∞ = V at some finite time, where V is the vertex set of G. A minimal percolating set is a percolating set A such that if any node is removed from A, then it is no longer a percolating set.
A natural motivation for studying minimal percolating sets is that, as long as the number of infected individuals is kept below the size of a smallest minimal percolating set, we know that the infection cannot reach the entire population. Bounds on minimal percolating sets on grids and other less regular graphs have been studied extensively. For instance, it has been shown in Ref. [24] that for a grid [n]^2 there exists a minimal percolating set of size 4n²/33, but there does not exist one larger than (n + 2)²/6. In the case of trees, Ref. [15] gives an algorithm that finds the largest and smallest minimal percolating sets on trees. Since then, only a few further results have been obtained improving those bounds (see, for example, the work on degree conditions for bootstrap percolation from small sets in Ref. [25] and references therein). However, the results in the above papers cannot be easily extended to the dynamical model, since they make several assumptions, such as the percolation function being constant in time, that do not necessarily hold in the dynamical model.
In the following sections we shall study minimal percolating sets for certain models of F (t )-bootstrap percolations, but before this is done, we shall first consider an example of a minimal percolating set with F (t ) = t, as shown in Fig. 3.
In this case, the minimal percolating set has size 3, as shown in Fig. 3(a). Indeed, we see that if we take away any of the shaded nodes, the remaining initially infected shaded nodes would not percolate to the whole tree, and thus they form a minimal percolating set; further, there exists no minimal percolating sets of size 1 or 2, thus this is the smallest minimal percolating set. It should be noted that minimal percolating sets can have different sizes. For example, another minimal percolating set with 5 vertices appears in Fig. 3(b).
In what follows we shall work with general finite trees T (V, E ) with set of vertices V and set of edges E . In particular, we shall consider the smallest minimal percolating sets in the following section.

V. ALGORITHMS FOR FINDING SMALLEST MINIMAL PERCOLATING SET
Consider F(t)-bootstrap percolation on a tree T(V, E) with initially infected set A_0 ⊂ V. As before, we shall denote by A_t the set of nodes infected at time t. For simplicity, we shall use the word "filled" synonymously with "infected."

A. Smallest and largest times
To build an algorithm to find smallest percolating sets, we first need to introduce a few definitions that will simplify the notation at later stages. First, we shall denote by L(a) the largest time t such that a ≥ F(t), and if there does not exist such a time t, then set L(a) = ∞, this is

L(a) := max{t : F(t) ≤ a}.

Similarly, let B(a) be the smallest time t such that a ≥ F(t), and if such a time t does not exist, set B(a) = ∞, leading to

B(a) := min{t : F(t) ≤ a}.

Given a, b ∈ N, if a < b, then L(a) ≤ L(b). Indeed, this holds because if a node can be infected with a infected neighbors at some time, then it can also be infected with b > a infected neighbors at that time. Note that, in general, a smallest percolating set A_0 must be a minimal percolating set. To see this, suppose not. Then there exists some v in A_0 such that A_0 − {v} percolates the graph. That means that A_0 − {v}, a set smaller than A_0, is a percolating set. However, since A_0 is a smallest percolating set, we have a contradiction. Hence, showing that a percolating set A_0 is smallest implies that A_0 is a minimal percolating set.
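For a percolation function given on a finite domain, the times L(a) and B(a) can be computed directly; the toy function below is an assumption for illustration:

```python
def L(F_values, a):
    """Largest time t with F(t) <= a, or None standing in for infinity."""
    times = [t for t, f in enumerate(F_values) if f <= a]
    return max(times) if times else None

def B(F_values, a):
    """Smallest time t with F(t) <= a, or None standing in for infinity."""
    times = [t for t, f in enumerate(F_values) if f <= a]
    return min(times) if times else None

F_values = [2, 3, 1, 4, 2]             # a toy percolation function on {0,...,4}
print(B(F_values, 2), L(F_values, 2))  # first and last chance with 2 neighbors
```

The monotonicity L(a) ≤ L(b) for a < b is immediate here, since the set of admissible times only grows with a.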
The first algorithm one may think of is to try every case. There are 2^n possible sets A_0, and for each set we must percolate A_0 on T to check whether it is a percolating set. This amounts to an algorithm of complexity O(t 2^n), where t is the upper bound on the percolation time. In what follows we shall describe a polynomial-time algorithm to find a smallest minimal percolating set on T(V, E), described in Sec. V D. For this, we shall introduce two particular times associated to each vertex in the graph, and formally define what isolated vertices are.
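The naive exhaustive search is straightforward to write down, and it is useful as a correctness check on small trees; a sketch, in which the example path and the constant function are illustrative assumptions:

```python
from itertools import combinations

def ft_bootstrap(neighbors, A0, F, horizon):
    """F(t)-bootstrap percolation run for a fixed horizon."""
    A = set(A0)
    for t in range(horizon):
        A |= {v for v in neighbors if len(neighbors[v] & A) >= F(t)}
    return A

def smallest_percolating_set(neighbors, F, horizon):
    """Try all 2^n subsets in order of increasing size: O(t 2^n) overall."""
    V = set(neighbors)
    for size in range(len(V) + 1):
        for A0 in combinations(sorted(V), size):
            if ft_bootstrap(neighbors, set(A0), F, horizon) == V:
                return set(A0)

# Path 1-2-3-4-5 with the constant function F(t) = 2: both endpoints have
# a single neighbor, so they must belong to any percolating set.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(smallest_percolating_set(path, lambda t: 2, horizon=10))
```

Because subsets are enumerated by increasing size, the first percolating set returned is a smallest one, which makes this a convenient oracle for testing faster algorithms on trees with few vertices.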

B. Isolated nodes
For each node v in the graph, we let t_a(v) be the time when it is infected, and t*(v) the time by which it is last allowed to be infected; moreover, when building our algorithm, each vertex will be allocated a truth value recording whether it still needs to be considered. A node v is said to be isolated with regard to A_0 if there is no vertex w ∈ V such that v becomes infected when considering F(t)-bootstrap percolation with initial set A_0 ∪ {w}. In other words, a node is isolated with regard to a set if it is impossible to infect it by adding any single other node to that set.
Building toward the percolating algorithm, we shall first show a few properties. Note that if a node cannot be infected by including any one of its neighbors in the initial set, then it is isolated. Indeed, by filling a neighbor in the initial set, we either increase the number of infected neighbors to a sufficient amount, or we extend the time allowed to percolate with fewer neighbors so that percolation is possible.

A quick test to see whether a vertex is isolated can be done as follows. Let v be an uninfected node such that not all of its n neighbors are in the set A_0. Define the function

N(i) := min{t : |N(v) ∩ A_t| ≥ i},

where N(i) is the smallest time at which i of the neighbors of node v are infected, and set N(0) = 0. Then a vertex v is isolated iff there exists no i such that

F(t) ≤ i + 1 for some t ∈ (N(i), t*].

To see that this test works, suppose s ∈ N(v) \ A_0. If there exists i such that F(t) ≤ i + 1 for some t ∈ (N(i), t*], then using A_0 ∪ {s} as the initially infected set allows percolation to happen at time t, since there would be i + 1 neighbors infected at each time N(i). Thus, by contradiction, the forward direction is proven.
Conversely, let v be not isolated, so that v becomes infected when percolating A_0 ∪ {s} for some neighbor s of v. Then there would be i + 1 neighbors infected at each time N(i). Moreover, for v to be infected, the i + 1 neighbors must be able to fill v in the allowed time (N(i), t*]. Thus, there exists N(i) such that F(t) ≤ i + 1 for some t ∈ (N(i), t*]. By contradiction, we have proven the backward direction.
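The isolation test above can be implemented directly from the times at which the neighbors of v become infected; the deadline t* and the neighbor infection times used below are illustrative assumptions:

```python
def is_isolated(F, neighbor_times, t_star):
    """v (uninfected, with deadline t_star) is isolated iff there is no i
    with F(t) <= i + 1 for some t in (N(i), t_star], where N(i) is the
    earliest time at which i of v's neighbors are infected (N(0) = 0)."""
    times = sorted(neighbor_times)        # infection times of v's neighbors
    def N(i):
        return 0 if i == 0 else times[i - 1]
    for i in range(len(times) + 1):       # i infected neighbors, plus one added
        if any(F(t) <= i + 1 for t in range(N(i) + 1, t_star + 1)):
            return False                  # adding one neighbor can infect v
    return True

# v has two neighbors, infected at times 1 and 3, and deadline t* = 5.
print(is_isolated(lambda t: 3, [1, 3], 5))  # threshold 3 is reachable
print(is_isolated(lambda t: 4, [1, 3], 5))  # threshold 4 never is
```

This check costs O(n t*) per vertex, avoiding the much more expensive alternative of re-percolating the whole tree once for every candidate added vertex.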

C. Variation of initial sets
Note that if a vertex v is uninfected and N(v) ⊂ A_0, then the vertex must be isolated. In what follows we shall study the effect of having different initially infected sets when studying F(t)-bootstrap percolation. For this, let Q be an initial set for which a fixed vertex v with n neighbors is isolated. Denoting the neighbors of v by s_1, s_2, ..., s_n, we let the times at which they are infected be t^Q_1, t^Q_2, ..., t^Q_n. Here, if for some 1 ≤ i ≤ n the vertex s_i is not infected, then set t^Q_i to be some arbitrarily large number. Moreover, consider another initial set P such that the times at which s_1, s_2, ..., s_n are infected are t^P_1, t^P_2, ..., t^P_n, satisfying

t^P_j ≥ t^Q_j

for all 1 ≤ j ≤ n. In the above setting, if v ∉ P, then the vertex v must be isolated with regard to P as well. Indeed, consider N_Q(i) as defined in Eq. (12) for the set Q, and N_P(i) the corresponding function for the set P. Then for all integers k ∈ {0, 1, ..., n}, one has that N_Q(k) ≤ N_P(k). Indeed, this is because with set P each neighbor of v is infected at or after the time at which it is with set Q. Then, since from Eq. (3) v is isolated with regard to Q, there is no m such that

F(t) ≤ m + 1 for some t ∈ (N_Q(m), t*].

However, since N_Q(k) ≤ N_P(k) for all k ∈ {0, 1, ..., n}, we can say that there is no m such that F(t) ≤ m + 1 for some t ∈ (N_P(m), t*], as (N_P(m), t*] ⊆ (N_Q(m), t*]. Thus, we know that v must also be isolated with regard to P.
Given a vertex v which is not isolated with n infected neighbors, we shall define t_p(v) ∈ (0, t*] to be the largest integer such that filling one additional neighbor of v at time t_p(v) still guarantees that v is infected by time t*(v). Note that to fill a node v ∉ A_0, one can either fill one of its neighbors by time t_p(v), or simply add v itself to the initial set. Hence, one needs to fill a node v_n which is either the parent par(v), a child chi(v), or v itself. One can further understand the variation of initially infected sets by noting that, given an isolated node v ∉ A_0, to achieve percolation it is always better (faster) to include v in A_0 than to attempt to make v unisolated. Indeed, it may be possible to make v unisolated by including only descendants of v in A_0, since we must include fewer than deg(v) neighbors. But we know that, given the choice to include a descendant or v itself in the initial set, choosing v is always advantageous, because the upward percolation achieved by v being infected at some positive time is a subset of the upward percolation achieved by filling it at time 0. Thus, including v in the initial set is superior.
The above setup can be used to find which vertex needs to be chosen as v_n. To see this, consider a vertex v ∉ A_0. Then, in finding a node u to add to A_0 so that v ∈ A_∞ for the initial set A_0 ∪ {u}, and such that A_∞ is maximized, the vertex v_n must be the parent par(v) of v. This can be understood by noting that filling v by time t*(v) already ensures that all descendants of v will be infected, and that all percolation upward must go through the parent par(v) of v. This means that by filling any child of v in order to fill v (by including some descendant of v in A_0), we obtain a subset of the percolation obtained by including the parent par(v) of v in A_0. Therefore, the parent par(v) of v, or a further ancestor, needs to be included in A_0, which means that v_n needs to be the parent par(v) of v.

D. Smallest minimal percolating set algorithm
Note that, given a node v ∉ A_0, if we fill its parent par(v) before t_p(v), then the vertex will be infected. We are now ready for our main result, which improves the naive O(t 2^n) bound for finding minimal percolating sets to O(tn), as discussed further in the last section.
To obtain one smallest minimal percolating set of a tree T(V, E) with percolation function F(t), proceed as follows.

Step 1. Initialize the tree: for each node v, set t*(v) to be some arbitrarily large number, and mark v as needing to be considered.

Step 2. Percolate using the current A_0, and save the times t_a at which the nodes were infected. Stop the algorithm if the set of infected nodes equals V.

Step 3. Consider a node v that is furthest away from the root among those needing to be considered; if there are multiple such nodes, then choose one that is isolated, if such a node exists.

(a) If v is isolated or is the root, then add v to A_0, and mark v as considered.

(b) Otherwise, set t*(par(v)) to min{t*(par(v)), t_p(v)}, and mark v as considered.

Step 4. Go to step 2.
After the process has finished, the resulting set A 0 is one of the smallest minimal percolating sets.
Note that the specification that the tree must be finite is important as the algorithm is iterative and relies on the existence of a node furthest from the root.
The proof that the algorithm yields a smallest minimal percolating set is organized as follows: we will first show that the set A_0 constructed through the steps of the algorithm is a minimal percolating set, and then show that it is a smallest such set. To see that A_0 is a minimal percolating set, we first need to show that A_0 percolates. In step 3 we have included in A_0 all isolated nodes, as well as the root if it was not infected already, and we have guaranteed that all other nodes are filled by ensuring that their parents are infected by their times t_p.
Showing that A_0 is a minimal percolating set is equivalent to showing that if any node is removed from A_0, then it no longer percolates to the whole tree. Note that in the process we have only included isolated nodes in A_0, other than possibly the root. This means that if any node v_0 is removed from A_0, then the set will not percolate to v_0: we only fill nodes higher than v_0 after considering v_0, and since turning a node unisolated requires filling at least one node higher than it together with at least one of its descendants, v_0 cannot become infected after removing it from A_0. Moreover, if the root is in A_0, since we considered the root last, the rest of A_0 does not percolate to the root. Thus, A_0 is a minimal percolating set. We now show, by contradiction and using Lemma 15, that the set A_0 constructed through the algorithm is of the smallest percolating size. For this, suppose there is some other minimal percolating set B with |B| < |A_0|. Then we can build an injection from A_0 to B in the following manner: iteratively consider the node a ∈ A_0 furthest from the root that has not been considered, and map it to a vertex of B which is either a itself or one of the descendants of a. We know that such a vertex must exist, by induction.
We first consider the case where a has no descendant in A_0. If there is a vertex b ∈ B that is a descendant of a, we map a to b. Now suppose there is no node b ∈ B that is a descendant of a. Then a ∈ B, because otherwise a would be isolated with respect to B as well, by Lemma 15; this means that we can map a to itself in this case. Next we consider the case where every descendant d of a with d ∈ A_0 has already been mapped to a node b_d ∈ B, where b_d is d or a descendant of d. If there is a node b ∈ B that is a descendant of a and has not yet been matched, then we map a to b. Now suppose there is no such b ∈ B. Then every node of B that is a descendant of a is either some node d ∈ A_0 that is a descendant of a, or a descendant of such a node. This means that, when percolating B, the children of a will all be infected at later times than when percolating A_0, and by Lemma 15 one has that a ∈ B, because otherwise a would be isolated with respect to B. So in this case we can again map a to itself.
The map constructed above is injective because each element of B is mapped to at most once. Since we have constructed an injection from the set A_0 generated by the algorithm to a smaller minimal percolating set B, we have a contradiction: the injection implies |A_0| ≤ |B|, contradicting |B| < |A_0|. Thus, the set generated by the algorithm must be a smallest minimal percolating set.

FIG. 4. Panels (a)-(c) show the first three updates through the algorithm in Sec. V D, where the vertices considered at each time are shaded and each vertex is assigned the value of t^*.
Through the algorithm of Sec. V D one can find a smallest minimal percolating set on any finite tree. Moreover, it gives an intuition for how to think of the vertices of the graph: in particular, the property of being "isolated" is not an absolute property, but a property relative to the set of nodes that have been infected before it. This isolatedness is easy to define and work with on trees, since each node has at most one parent. A similar property may be considered in more general graphs, and we hope to explore this in future work. Below we shall demonstrate the algorithm of Sec. V D with an example.
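To make the update rule concrete, the following Python sketch simulates F(t)-bootstrap percolation and checks minimality of a candidate set by the removal test used in the proof above. The synchronous updates, the convention that infection starts at t = 1, and the cap t_max on the number of steps are our own modeling assumptions, and the path graph at the end is a hypothetical example rather than the tree of Fig. 3.

```python
def percolate(adj, A0, F, t_max=10):
    """Run F(t)-bootstrap percolation from the initially infected set A0.

    At each time t = 1, 2, ..., t_max, every uninfected vertex with at
    least F(t) infected neighbors becomes infected.
    """
    infected = set(A0)
    for t in range(1, t_max + 1):
        newly = {v for v in adj if v not in infected
                 and sum(u in infected for u in adj[v]) >= F(t)}
        infected |= newly
    return infected

def is_minimal_percolating(adj, A0, F, t_max=10):
    """Check that A0 percolates but no set A0 minus one vertex does."""
    everyone = set(adj)
    if percolate(adj, A0, F, t_max) != everyone:
        return False
    return all(percolate(adj, A0 - {v}, F, t_max) != everyone for v in A0)

# Hypothetical example: a path 0-1-2-3 with percolating function F(t) = t.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_minimal_percolating(path, {1, 3}, lambda t: t))  # True
```

Here {1, 3} percolates the path while neither {1} nor {3} alone does, illustrating both halves of the definition of a minimal percolating set.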

E. Smallest minimal percolating sets on trees
We will perform the algorithm on the tree in Fig. 3, with percolating function F(t) = t. We first initialize all the nodes, setting their time t^* to some arbitrarily large number, represented as ∞ in Fig. 4.
Percolating the empty set A_0, the resulting infected set is empty, as shown in Fig. 4(a). We then consider the nodes furthest from the root. None of them are isolated, so we may consider any of them; we begin with node v = 6 in the labeling of Fig. 3. It is not isolated, so we update the corresponding t^*, as can be seen in Fig. 4(b). Then we consider another node furthest from the root, and through the algorithm set the t^* of its parent to t_p − 1 = 0, as can be seen in Fig. 4(c).
The following steps of the algorithm are depicted in Fig. 5 below. As in the first three steps of Fig. 4, we consider the next furthest node v from the root, and by the same reasoning as for node 6, set the t^* of its parent to t^*_par(v) = 1, as can be seen in Fig. 5(a). Now we consider node 4: since it is isolated, we fill it in as in Fig. 5(b). The set of infected nodes can be seen in Fig. 5(c). We then consider node 5, the furthest node from the root not yet considered. Since it is not isolated, we change the t^*_par(v) of its parent to t_p(v) − 1 = 0, as in Fig. 6(a).

FIG. 6. Panels (a)-(c) show the updates through the algorithm in Sec. V D after setting A_0 to be as in Fig. 5.
Then we consider node 3, which is isolated, so we include it in A_0. The nodes infected as a result of percolating this A_0 are shown as red vertices in Fig. 6(c). To finish the process, consider the vertex v = 2, since it is the furthest nonconsidered node from the root. It is not isolated, so we change the t^* of its parent, as shown in Fig. 7(a). Finally, we consider the root: since it is isolated, we include it in A_0, as seen in Fig. 7(b). Percolating this A_0 results in all nodes being infected, as shown in Fig. 7(c), and thus we stop the algorithm.
Through the above algorithm we have constructed a smallest minimal percolating set, shown as red vertices in Fig. 7(c), which is of size 3. Comparing with Fig. 3, we see that the minimal percolating set in that example is indeed smallest, also having 3 elements. Finally, it should be noted that in general the times t_p are defined per node and may differ from one node to another.
From the above example, and its comparison with Fig. 3, one can see that a graph may have several different smallest minimal percolating sets, and the algorithm finds just one of them. The algorithm of Sec. V D minimizes the size of a minimal percolating set by relying on the fact that, as long as a node is not isolated, one can engineer its parent to become infected so as to infect the node itself. The definition of isolated stems from seeking a property that describes whether a node can still become infected by infecting its parent. Because the algorithm runs on trees, we may define isolation as the inability of a node to become infected when only one additional node is infected.
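One possible formalization of this notion of isolation, reflecting our own reading of the definition above rather than the paper's exact implementation, is the following sketch: a node v is isolated relative to the currently infected set if infecting any single additional node still fails to infect v.

```python
def percolate(adj, A0, F, t_max=10):
    # synchronous F(t)-bootstrap percolation, capped at t_max steps
    infected = set(A0)
    for t in range(1, t_max + 1):
        infected |= {v for v in adj if v not in infected
                     and sum(u in infected for u in adj[v]) >= F(t)}
    return infected

def is_isolated(adj, infected_before, v, F, t_max=10):
    # v is isolated relative to infected_before if no single extra
    # seed u can make v become infected
    candidates = [u for u in adj if u != v and u not in infected_before]
    return all(v not in percolate(adj, set(infected_before) | {u}, F, t_max)
               for u in candidates)

# hypothetical path 0-1-2-3, checked against two percolating functions
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_isolated(path, set(), 3, lambda t: t))  # False: seeding 2 infects 3
print(is_isolated(path, set(), 0, lambda t: 2))  # True: degree-1 node, F = 2
```

The second call shows why constant functions r > 1 make every leaf isolated: a degree-1 node can never accumulate two infected neighbors.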

VI. FURTHER PROPERTIES OF F(t )-BOOTSTRAP PERCOLATION AND OUR ALGORITHM
We dedicate this section to a further analysis of our algorithm and its complexity, a comparison with the work in Ref. [15], and the study of our model on random trees.

A. Complexity
First, we consider the complexity of the algorithm in Sec. V D for finding a smallest minimal percolating set on a tree with n vertices. For this, suppose t is an upper bound on the percolation time; we have presented a way to find such an upper bound in the previous sections. The algorithm first initializes the tree, which takes linear time. Steps 2 and 3 are run at most n times, since there can be at most n unconsidered nodes. Because the upper bound on time is t, step 2 takes O(t) to run. Determining whether a node is isolated takes linear time, so determining the isolatedness of all nodes on the same level takes quadratic time, and the bookkeeping of step 3 takes constant time. Thus, the algorithm runs in O[n + n(t + n^2)] = O(tn + n^3), much better than the O(t·2^n) complexity of the naive algorithm.
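For contrast, the naive exponential-time approach mentioned above can be sketched as follows: try all vertex subsets in order of increasing size and return the first one that percolates. The simulator conventions (synchronous updates starting at t = 1, a step cap t_max) are assumptions of this sketch, and the path graph is a hypothetical example.

```python
from itertools import combinations

def percolate(adj, A0, F, t_max):
    # synchronous F(t)-bootstrap percolation, capped at t_max steps
    infected = set(A0)
    for t in range(1, t_max + 1):
        infected |= {v for v in adj if v not in infected
                     and sum(u in infected for u in adj[v]) >= F(t)}
    return infected

def smallest_percolating_set_naive(adj, F, t_max):
    # O(t * 2^n): check every subset, smallest first, so the first
    # percolating subset found is a smallest (hence minimal) one
    nodes = sorted(adj)
    for k in range(len(nodes) + 1):
        for A0 in combinations(nodes, k):
            if percolate(adj, set(A0), F, t_max) == set(adj):
                return set(A0)

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(smallest_percolating_set_naive(path, lambda t: t, t_max=5))  # {0, 2}
```

Even on a four-vertex path this inspects up to 16 subsets, which is exactly the exponential blowup the tree algorithm avoids.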

B. Comparison on perfect trees
We shall now compare our algorithm with classical r-bootstrap percolation. For this, in Fig. 8 we show a comparison of the sizes of the smallest minimal percolating sets on perfect trees of height 4, varying the degree of the tree. Two different percolating functions were compared: one constant and one quadratic. We see that the time-dependent bootstrap percolation model can be superior in modeling diseases whose speed of spread varies in time: if each individual has around 10 social connections, the smallest number of individuals that need to be infected to percolate the whole population differs by around 10^3 between the two models.

C. Comparison on random trees
We continue by comparing the smallest minimal percolating sets found through our algorithm with those constructed by Riedl in Ref. [15]. To understand the difference between the two models, we first consider in Fig. 9 three percolating functions F(t) on random trees of different sizes, where each random tree has been formed by beginning with one node, and then, for each new node i added, drawing a uniformly random number from 1 to i − 1 to determine the node to which it is attached.
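The random trees described above (uniform recursive trees) can be generated in a few lines; the seed and the parent-dictionary representation below are our own choices for this sketch.

```python
import random

def random_recursive_tree(n, seed=0):
    # start from node 1; each new node i attaches to a uniformly
    # random earlier node in {1, ..., i - 1}, as described above
    rng = random.Random(seed)
    parent = {1: None}
    for i in range(2, n + 1):
        parent[i] = rng.randint(1, i - 1)
    return parent

tree = random_recursive_tree(8, seed=42)
# every node except the root has a strictly earlier node as its parent
print(all(1 <= tree[i] <= i - 1 for i in range(2, 9)))  # True
```

Because each attachment choice is independent and uniform over all earlier nodes, early nodes tend to acquire high degree, which affects how large the minimal percolating sets in Fig. 9 must be.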
In Fig. 9, the size of the smallest minimal percolating set can be recovered by multiplying the plotted relative size by the corresponding value of n. In particular, one can see how the exponential function requires an increasingly larger minimal percolating set in comparison with polynomial percolating functions.

D. Comparison with Ref. [15]
Riedl provided in Ref. [15] an algorithm, running in linear time, for finding smallest minimal percolating sets in trees under r-bootstrap percolation. We describe his algorithm in general terms to clarify the comparisons we will make. Riedl defined a trailing star (or trailing pseudostar) as a subtree whose vertices are of distance at most 1 (respectively 2) from a certain center vertex, which is connected to the rest of the tree by only one edge. The first step of Riedl's algorithm is a reduction procedure that ensures every nonleaf has degree at least r: intuitively, one repeatedly finds a vertex with degree less than r, includes it in the minimal percolating set, removes it and all the edges attached to it, and, for each of the resulting connected components, adds a new node of degree 1 connected to the node that was a neighbor of the removed node.
Then, the algorithm identifies a trailing star or pseudostar, whose center we denote by v and whose set of leaves we denote by L. Letting the original tree be T, if the number of leaves of v is less than r, then set T′ = T \ (v ∪ L); otherwise, set T′ = T \ L. Recursively let A′ be the smallest minimal percolating set of T′ under r-bootstrap percolation. Then, the smallest minimal percolating set for T is A′ ∪ L if |L| < r, and A′ ∪ L \ v otherwise. To compare with the work of Ref. [15], we run both algorithms with F(t) = 2 (leading to the 2-bootstrap percolation considered in Ref. [15]), as well as with a linear percolating function, on the graph of Fig. 10. With our algorithm, we see that nodes 2, 3, and 5 are isolated at the corresponding steps, and when we add them to the initial set, all nodes become infected; thus the smallest minimal percolating set found by our algorithm has size 3. Using Riedl's algorithm, we first note that there is a trailing star centered at 3 with 2 leaves, as seen in Fig. 10. Removing the leaves, there is a trailing star at 1 with 1 leaf. Removing 1 and 2, we have one node left, which is in our A′. Adding the leaves back and removing 3, we obtain an A_0 consisting of 2, 3, and 5, a smallest minimal percolating set. Thus, the smallest minimal percolating set found with Riedl's algorithm also has size 3, as expected.
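As an illustration of the first ingredient of Riedl's recursion, the sketch below detects a plain trailing star (not a pseudostar) in a tree given as an adjacency dictionary. The five-node tree is a hypothetical stand-in, since we do not reproduce the exact graph of Fig. 10 here.

```python
def trailing_star_center(adj):
    # a trailing star center: a vertex all of whose neighbors are
    # leaves except exactly one, which connects it to the rest of the tree
    for v, nbrs in adj.items():
        leaves = [u for u in nbrs if len(adj[u]) == 1]
        if leaves and len(nbrs) - len(leaves) == 1:
            return v, leaves
    return None

# hypothetical tree: center 3 carries leaves 4 and 5 and hangs off node 2
tree = {3: [2, 4, 5], 2: [1, 3], 1: [2], 4: [3], 5: [3]}
v, leaves = trailing_star_center(tree)
print(v, sorted(leaves))  # 3 [4, 5]
```

Riedl's algorithm would then remove L = {4, 5} (since |L| = 2 is not less than r = 2) and recurse on the remaining tree.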
We shall now compare our algorithm to that of Riedl. A key step in Riedl's algorithm, namely including the leaves of stars and pseudostars in the final minimal percolating set, relies on the fact that these leaves cannot become infected, since it is assumed that r > 1. In our algorithm, however, we consider functions that may take the value 1 somewhere, so we cannot make that assumption. Further, in r-bootstrap percolation the time of infection of each vertex need not be taken into account when determining the conditions for a node to become infected, since r is constant, whereas in the time-dependent case it is necessary: suppose a node has n neighbors and there is only one time t such that F(t) ≤ n; then all its neighbors must be infected by time t in order for the node to become infected.
The problem our algorithm solves is a generalization of Riedl's, in that it finds one smallest minimal percolating set for a family of functions including the constant ones. It has a higher computational complexity because a nonisolated node is not guaranteed to become infected once one more of its neighbors is infected: the time constraints must also be accounted for.

VII. CONCLUDING REMARKS
This paper is dedicated to the introduction and study of a novel time-dependent percolation model. The setup generalizes standard r-bootstrap percolation by introducing a time-dependent percolation function F(t), through which we define F(t)-bootstrap percolation (see Definition 1). Some basic properties of F(t)-bootstrap percolation are then studied, with particular attention given to the critical probability p_c for certain recurrent functions F(t), for which we give bounds in Sec. II.
Our motivation comes partially from the study of effective vaccination programs which would allow one to contain an epidemic, and thus we are interested both in the percolation time of the model and in its minimal percolating sets. We study the former in Sec. III, where by considering functions equivalent to F(t) we obtain bounds on the percolation time (see Fig. 2). In particular, in Sec. III we show that if ℓ is the smallest value the function takes on after some fixed time t_0, and F(t) has already taken on that value more times than the number of nodes in the graph, then no node will become infected at those times, and the value is safe to be "removed." The removal process is explained in the same section, and it yields an upper bound on the percolation time for a specified tree and function F(t).
In Secs. IV and V we introduce and study smallest minimal percolating sets for F (t )-bootstrap percolation on (nonregular) trees. Our main results appear in Sec. V D, and are given by an algorithm for finding the smallest minimal percolating sets. To show the relevance of our work, we shall conclude this note with a short comparison of our model with those existing in the literature.
Finally, we should mention that the work presented in the previous sections could be generalized in several directions; in particular, we hope to develop a similar algorithm for largest minimal percolating sets, and to study the sizes of largest and smallest minimal percolating sets on lattices.

ACKNOWLEDGMENTS
The authors are thankful to MIT PRIMES-USA for the opportunity to conduct this research together, and in particular Tanya Khovanova for her continued support, to Eric Riedl and Yongyi Chen for comments on a draft of the paper, and to Rinni Bhansali and Fidel I. Schaposnik for useful advice regarding our code. The work of Laura Schaposnik is partially supported through the NSF Grants No. DMS-1509693 and CAREER DMS No. 1749013, and she is thankful to the Simons Center for Geometry and Physics for the hospitality during part of the preparation of the manuscript. This material is also based upon work supported by the National Science Foundation under Grant No. DMS-1440140 while Laura Schaposnik was in residence at the Mathematical Sciences Research Institute in Berkeley, CA.

APPENDIX A: ON NICE FUNCTIONS
In what follows we shall prove Eq. (7). One should first note that F̄(t) is well defined. Indeed, since the domain of F̄(t) is I_2, we have that t ∈ I_2 and thus γ^{-1}(t) is a valid expression. Moreover, γ^{-1}(t) exists because γ is surjective, and it is unique since I_2 is an initial segment of N and hence t ≠ −1, while for any a, b ∈ I_1, if γ(a) = γ(b) ≠ −1, then a = b. Since the domain of γ is I_1, we have γ^{-1}(t) ∈ I_1. This means that γ^{-1}(t) is in the domain of F(t), and thus F̄(t) is defined for all t ∈ I_2.
To prove Eq. (7), note that since γ^{-1}(0) = 0 and the initially infected sets for the models with F(t) and F̄(t) are the same, it must be true that A^F_0 ⊆ A^F̄_{γ^{-1}(0)}, and in particular Eq. (7) holds at t = 0. To perform the inductive step, suppose that for some t ∈ I_2 with t + 1 ∈ I_2 one has A^F_t ⊆ A^F̄_{γ^{-1}(t)}. Moreover, suppose there is a node n such that n ∈ A^F_{t+1} but n ∉ A^F̄_{γ^{-1}(t+1)}. Then there exists a neighbor n′ of n such that n′ ∈ A^F_t but n′ ∉ A^F̄_{γ^{-1}(t+1)−1}. Indeed, otherwise the sets of neighbors of n infected prior to the specified times would be the same for both models, and since F(t + 1) = F̄[γ^{-1}(t + 1)] for t + 1 ∈ I_2, the node n would be infected in both models or in neither. From the above, since t < t + 1 one must have γ^{-1}(t) < γ^{-1}(t + 1), and thus γ^{-1}(t) ≤ γ^{-1}(t + 1) − 1.
Moreover, since n′ ∉ A^F̄_{γ^{-1}(t+1)−1}, it follows that n′ ∉ A^F̄_{γ^{-1}(t)}. However, we assumed n′ ∈ A^F_t, and since A^F_t ⊆ A^F̄_{γ^{-1}(t)} by the inductive hypothesis, we have a contradiction; hence it must be true that A^F_{t+1} ⊆ A^F̄_{γ^{-1}(t+1)}. Thus we have proven that, for any initially infected set A_0, Eq. (7) is satisfied for all t ∈ I_2.
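For readability, the statement just proven can be restated in display form; the overbar notation F̄ for the equivalent function and the superscripts on the infected sets are our reconstruction of symbols lost in typesetting.

```latex
% Eq. (7): the infected sets of the two models are nested at matched times
A^{F}_{t} \subseteq A^{\bar F}_{\gamma^{-1}(t)} \qquad \text{for all } t \in I_2 .
% Inductive step: since F(t+1) = \bar F\!\left[\gamma^{-1}(t+1)\right] and
% \gamma^{-1}(t) \le \gamma^{-1}(t+1) - 1, the inclusion propagates:
A^{F}_{t} \subseteq A^{\bar F}_{\gamma^{-1}(t)}
  \;\Longrightarrow\;
A^{F}_{t+1} \subseteq A^{\bar F}_{\gamma^{-1}(t+1)} .
```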
Through Eq. (7) we can further understand when an F(t)-percolation process finishes, in the following manner. Given a percolation function F(t) and a fixed time t ∈ N, let t_p < t be such that F(t_p) < F(t), and suppose there does not exist another time t_i ∈ N with t_p < t_i < t such that F(t_i) < F(t). Suppose further that we use this percolation function on a graph with ℓ vertices. Then, if the value a := F(t) has already been taken on at ℓ or more times t_i with t_p < t_i < t, there are no nodes that become infected at time t. To see this, suppose some node n is infected at time t; we claim this would imply that all nodes are infected before time t. We show the claim by contradiction: suppose there exist m nodes n_i that are not infected by time t. Then we know that there exist at least m times t_j ∈ N with t_p < t_j < t for which F(t_j) = a and such that no node is infected at t_j. Matching each n_i with some such t_j, let t_k ∈ N with t_j < t_k ≤ t be such that some node is infected at t_k and F(t_k) = a, and such that there is no t_x ∈ N with t_j < t_x < t_k for which some node is infected at t_x and F(t_x) = a. We know such a t_k exists because there is a node infected at time t.
From the above, for each n_i there are two cases: either the set of nodes infected by time t_j is the same as the set of nodes infected by time t_k, or there exists a node p in the set of nodes infected by t_k but not in the set infected by t_j. The first case yields a contradiction: if the sets were the same, then a node would be infected at time t_j, contrary to our choice of t_j, so the first case is not possible. Hence the second case must hold for all m of the n_i's. But then the second case implies that there is a node infected between t_j and t_k. This means that at least m additional nodes are infected; adding these to the at least ℓ − m nodes infected at times t_i with F(t_i) = a at which some node is infected, we have at least ℓ − m + m = ℓ nodes infected before t. But if all ℓ nodes are infected before t, then there are no nodes left to infect at time t, so n does not exist.

APPENDIX B: ON EQUIVALENT FUNCTIONS
To show Eq. (8), note that intuitively the function γ constructed above maps each index associated to F(t) to the corresponding index associated to F̄(t); omitted indices are mapped to −1 by γ. To prove the statement, we will prove that P_{F(t)}(A) = P_{F̄(t)}(A). Suppose we have a node n in P_{F(t)}(A), infected at time t_0, and suppose F(t_0) = a for some a ∈ Z^+; let t_prev be the largest integer t_prev < t_0 such that F(t_prev) < a. Suppose further that t_0 is the mth time t with F(t) = a. If m > v, then there cannot be any node infected at time t_0 under F(t), and thus it follows that m ≤ v. But if m ≤ v, then γ(t_0) ≠ −1, and therefore all nodes that are infected under F(t) became infected at some time t with γ(t) ≠ −1.
Recall that A^F_0 = A^F̄_0, and suppose that for some n with γ(n) ≠ −1 one has A^F_n = A^F̄_{γ(n)}. We know that for any n < t < γ^{-1}[γ(n) + 1] one has γ(t) = −1, so no node becomes infected under F(t) after time n but before time γ^{-1}[γ(n) + 1]. This means that the set of previously infected nodes at time γ^{-1}[γ(n) + 1] − 1 is the same as the set of nodes infected before time n, leading to A^F_n = A^F_{γ^{-1}[γ(n)+1]−1}.
Then, since F{γ^{-1}[γ(n) + 1]} = F̄[γ(n) + 1] and the sets of previously infected nodes for both models are A^F_n, we know that A^F_{n+1} = A^F̄_{γ(n+1)}. Thus, for any time n in the domain of F(t) there exists a corresponding time n′ for percolation under F̄(t) such that the infected set at time n under F(t) and the infected set at time n′ under F̄(t) are the same, and thus A^F_∞ = A^F̄_∞, leading to Eq. (8).