# Distributed computing at STOC 2011 – part one

In this blog entry and the next one, I will discuss two distributed computing papers at STOC 2011: Tight Bounds for Parallel Randomized Load Balancing by Lenzen and Wattenhofer (this entry), and Distributed Verification and Hardness of Distributed Approximation by Das Sarma et al. (next entry).  There have been at least two sessions on distributed systems at STOC, so these papers are representative, not exhaustive by any means.  I am going to start, though, with a personal note about something sad, so if you’d like to jump directly to the science, please skip the subhead below and go directly to the next one.

Michelle Le

Michelle Le is a nursing student at Samuel Merritt College, who disappeared when leaving the hospital after rounds eleven days ago.  A good friend of mine is in the same class as Le.  Monday night, the police changed the status of the case from disappearance to homicide.  (This has been in the local California news for days; one recent link is here.)  The family is asking the public to hold out hope, despite police pronouncements that the effort is now one of recovery.  Suffice it to say that I was on a long phone call Monday night, did not sleep much or well, and I skipped the morning session of STOC Tuesday to be with my friend.  (This was the session where the award-winning papers were presented, so hopefully someone who attended can blog about them.)  According to my friend, Michelle Le is beautiful and hilarious, and is the only student in the nursing program who is also working twenty hours a week.  (Most students are not employed, so that they can keep up with the accelerated reading load.)  So Michelle is driven and dedicated as well.  I hope she returns home very soon.

Tight Bounds for Parallel Randomized Load Balancing

The first paper I will consider is Tight Bounds for Parallel Randomized Load Balancing, by Lenzen and Wattenhofer.  Load balancing is the problem of distributing work as evenly as possible among the different processes/servers capable of doing that work.  It can be generalized to a notion of placing $n$ balls into $n$ bins with as few “collisions” as possible, so that the maximum bin load is minimized, and the time it takes to run the load balancing algorithm is minimized as well.  Parallel load balancing can be viewed as a load balancing problem in which each ball acts as a separate agent.  (That way, there is no centralized control of the motion of the balls.)  As in other areas where centralized control must be avoided, randomized algorithms are the key to success here.
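As a point of reference (my own illustration, not something from the paper): in the simplest communication-free strategy, each ball independently picks a single bin uniformly at random, and the maximum load is then $\Theta(\log n / \log \log n)$ with high probability.  A quick simulation of that baseline:

```python
import random
from collections import Counter

def one_shot_max_load(n, rng):
    """Each of n balls picks one of n bins uniformly at random,
    with no communication at all; return the maximum bin load."""
    loads = Counter(rng.randrange(n) for _ in range(n))
    return max(loads.values())

# For n = 100,000 this typically lands in the single digits,
# consistent with the Theta(log n / log log n) behaviour.
print(one_shot_max_load(100_000, random.Random(0)))
```

The adaptive algorithms below beat this baseline precisely by spending a few rounds of communication.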

The previous best result on this topic was obtained by Adler et al., in Parallel Randomized Load Balancing (STOC 1995).  Those authors obtained a maximum bin load of $\Theta(\log \log n / \log \log \log n)$ with an algorithm that runs for the same number of rounds as the max bin load.  They also obtained a matching lower bound.  However, the lower bound assumed the load balancing algorithm was nonadaptive and symmetric; that is to say, the algorithm must choose which bins to place balls in ahead of time, before any communication between agent balls starts, and the bins must be chosen uniformly and independently at random.

Lenzen and Wattenhofer obtain an algorithm that beats the previous lower bound, because their algorithm is adaptive: ball placement can depend on communication throughout the execution of the algorithm.  Their basic algorithm is just a few lines long, and I will present it after stating their main results.

Theorem 1 (upper bound): There exists an adaptive symmetric parallel randomized load balancing algorithm for $n$ balls and $n$ bins that obtains a bin load of 2 in $\log^* n + \mathcal{O}(1)$ communication rounds.

Theorem 2 (lower bound): Any symmetric randomized load balancing algorithm on $n$ balls and $n$ bins must use at least $(1 - o(1))\log^* n$ communication rounds, where the size of each message communicated is asymptotically $\mathcal{O}(n)$.

So the algorithm that provides Theorem 1 is essentially best possible, timewise.
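To get a sense of just how weak a time bound $\log^* n$ is (my own illustrative snippet, not from the paper): the iterated logarithm $\log^* n$ counts how many times the logarithm must be applied before the value drops to at most 1, and it is at most 5 for any remotely physical value of $n$.

```python
import math

def log_star(x):
    """Iterated logarithm: how many times log2 must be applied
    before the value drops to at most 1."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

print(log_star(2**65536))  # → 5
```

Even for $n = 2^{65536}$, a number with nearly twenty thousand decimal digits, the answer is only 5.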

The basic algorithm

Set $k(1) := 1$ and $i :=1$.  Execute the following loop until termination.

1. Balls contact $\lfloor k(i) \rfloor$ bins, chosen uniformly and independently at random (u.i.r.), and ask to be placed in them.
2. Each bin grants permission to one of the requesting balls (if it receives any requests) and denies all others.
3. Any ball that receives at least one permission chooses a permitting bin arbitrarily, informs that bin, and terminates.
4. Set $k(i+1) := \min \left\{ k(i)^{e^{\lfloor k(i) \rfloor / 5}}, \sqrt{\log n} \right\}$, and $i := i+1$.

That’s it.  The mysterious-looking expression for $k(i+1)$ can be explained as follows.  We don’t want $k(i)$ to be too large relative to the number of balls still unplaced, because otherwise there will be a blowup in the number of messages sent in a given round.  On the other hand, if almost no balls have been placed yet, then, with high probability, a lot of balls are going to be placed in the next round, so the number of balls that send messages will drop significantly.  So it works out to allow $k(i)$ to grow very fast, because the number of balls will drop quickly.  When there are few balls left (few meaning $e^{-\Omega(\log n)}$), it suffices to cap the growth of $k(i)$ at $\sqrt{\log n}$, because each ball will then be placed with probability $1 - e^{-\Omega(\log n)}$, so the algorithm will conclude in a constant number of rounds from that point.
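The request/grant loop is easy to simulate.  Here is a sketch in Python (my own paraphrase, not the authors’ code).  One caveat: with $k(1) = 1$, the update $k^{e^{\lfloor k \rfloor / 5}}$ as transcribed above never leaves 1, so — since the discussion says $k(i)$ is meant to grow very fast — this sketch assumes a multiplicative reading, $k \cdot e^{\lfloor k \rfloor / 5}$, which does grow; treat that substitution as my assumption.

```python
import math
import random

def simulate_balancing(n, rng):
    """Sketch of the basic request/grant loop (my paraphrase).
    Assumption: uses the multiplicative update k * e^(floor(k)/5),
    since the transcribed k^(e^(floor(k)/5)) stays at 1 forever."""
    unplaced = list(range(n))   # balls that have not yet terminated
    loads = [0] * n             # current load of each bin
    k, rounds = 1.0, 0
    while unplaced:
        rounds += 1
        requests = {}           # bin -> balls that contacted it this round
        for ball in unplaced:
            # floor(k) bins chosen uniformly and independently (may repeat)
            for b in (rng.randrange(n) for _ in range(int(k))):
                requests.setdefault(b, []).append(ball)
        placed = set()
        for b, reqs in requests.items():
            winner = rng.choice(reqs)     # the bin grants one request
            if winner not in placed:      # the ball accepts a single grant
                placed.add(winner)
                loads[b] += 1
        unplaced = [ball for ball in unplaced if ball not in placed]
        k = min(k * math.exp(int(k) / 5), math.sqrt(math.log(n)))
    return loads, rounds

loads, rounds = simulate_balancing(10_000, random.Random(1))
print(max(loads), rounds)
```

Note that each bin gains at most one ball per round, so in this sketch the final maximum load is bounded by the number of rounds; the content of the paper’s analysis is that the number of rounds is only $\log^* n + \mathcal{O}(1)$.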

I won’t discuss the lower bound technique, but it is intriguing, since it appears to be new: they bound the total communication available to a symmetric algorithm.  Overall, I like this paper a lot.