Self-stabilizing balls & bins in batches: The power of leaky bins [Extended Abstract]
Petra Berenbrink, Tom Friedetzky, Peter Kling, Frederik Mallmann-Trenn, Lars Nagel, Chris Wastell
2018. © 2016 ACM.

A fundamental problem in distributed computing is the distribution of requests to a set of uniform servers without a centralized controller. Classically, such problems are modelled as static balls-into-bins processes, where m balls (tasks) are to be distributed among n bins (servers). In a seminal work, Azar et al. [4] proposed the sequential strategy Greedy[d] for m = n. When thrown, a ball queries the load of d random bins and is allocated to a least loaded of these. Azar et al. showed that d = 2 yields an exponential improvement compared to d = 1. Berenbrink et al. [7] extended this to m ≫ n, showing that the maximal load difference is independent of m for d = 2 (in contrast to d = 1).

We propose a new variant of an infinite balls-into-bins process. In each round an expected number of λn new balls arrive and are distributed (in parallel) to the bins, and each non-empty bin deletes one of its balls. This setting models a set of servers processing incoming requests, where clients can query a server's current load but receive no information about parallel requests. We study the Greedy[d] distribution scheme in this setting and show a strong self-stabilizing property: for any arrival rate λ = λ(n) < 1, the system load is time-invariant. Moreover, for any (even super-exponential) round t, the maximum system load is (w.h.p.) O((1/(1-λ)) · log(n/(1-λ))) for d = 1 and O(log(n/(1-λ))) for d = 2. In particular, Greedy[2] has an exponentially smaller system load for high arrival rates.
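As a rough illustration of the batched process described above, the following Python sketch simulates the Greedy[d] dynamics: in each round, balls arrive and are placed in parallel (each ball sees only the loads from the start of the round), and then every non-empty bin deletes one ball. The per-round arrival distribution (Binomial(n, λ)), the tie-breaking rule, the order of arrivals and deletions within a round, and the helper name greedy_d_round are assumptions made only for this sketch, not details fixed by the abstract.

```python
import random


def greedy_d_round(loads, lam, d, rng):
    """One round of the batched Greedy[d] process (illustrative sketch).

    Assumptions: arrivals are Binomial(n, lam) (expected lam*n balls);
    each ball picks d bins uniformly at random and joins a least loaded
    one, observing only the loads from the start of the round; finally
    every non-empty bin deletes one ball.
    """
    n = len(loads)
    snapshot = list(loads)  # parallel arrivals see only the old loads
    arrivals = sum(1 for _ in range(n) if rng.random() < lam)
    for _ in range(arrivals):
        choices = [rng.randrange(n) for _ in range(d)]
        target = min(choices, key=lambda i: snapshot[i])
        loads[target] += 1
    # Deletion step: each non-empty bin removes one ball.
    return [x - 1 if x > 0 else 0 for x in loads]


if __name__ == "__main__":
    rng = random.Random(0)
    n, lam, d, rounds = 1000, 0.9, 2, 10_000
    loads = [0] * n
    for _ in range(rounds):
        loads = greedy_d_round(loads, lam, d, rng)
    print("maximum load after", rounds, "rounds:", max(loads))
```

Running the sketch with d = 1 versus d = 2 at a high arrival rate (e.g. λ = 0.9) gives an informal feel for the exponential gap in maximum system load stated above; it is a simulation aid, not part of the paper's analysis.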