Optimal Gossip With Direct Addressing

You had my attention with ‘gossip’ but throw in ‘optimal’ and I’ll have to read you.

Haeupler and Malkhi put together a very clever algorithm for disseminating information in a general network. It terminates in O(log log n) rounds with only O(log n) bits sent per node, with high probability. The paper includes a proof, via a probabilistic argument, that this is optimal.
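For a sense of scale: with a billion nodes, log log n (base 2) comes out to about five, so the promised round count is single digits.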

The technique is composed of five phases. At any given time the nodes are partitioned into groups, each under a single leader, and the sizes of these groups are tightly controlled. The initial phase is a brief burst of explosive random growth. Groups that fail to achieve sufficient size during this burst are decommissioned so they won't add to the total algorithm cost. Groups that grow too large are split up, so that all surviving groups fall into a predictable size range. Groups are then merged by probabilistically activating only a few at a time. Finally, there is a burst of recruitment that pulls in all remaining unmerged nodes. The randomness and the deactivations prevent waste, and the partitioning into single-leader groups allows the explosive growth to continue throughout.
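To make that skeleton concrete, here is a toy, centralized Python simulation of the five-phase bookkeeping as I've summarized it. Everything quantitative here is my own invention for illustration: the target group size k, the grab sizes in the growth phase, and the activation probability p are placeholders, and the real algorithm grows and merges groups through message exchanges rather than a shared grab bag.

```python
import math
import random


def phase_sketch(n, seed=0):
    """Toy simulation of the five-phase group bookkeeping (assumes n >= 1)."""
    rng = random.Random(seed)
    unassigned = set(range(n))
    groups = []  # each group: {"leader": node id, "members": set of ids}

    # Phase 1: explosive random growth. Each would-be leader grabs a random
    # handful of unassigned nodes (a stand-in for the paper's message-driven
    # growth).
    while unassigned:
        leader = unassigned.pop()
        grab = min(len(unassigned), rng.randint(0, 8))
        members = {unassigned.pop() for _ in range(grab)}
        members.add(leader)
        groups.append({"leader": leader, "members": members})

    k = max(2, int(math.log2(max(n, 2))))  # invented target size, ~log n

    # Phase 2: decommission undersized groups so they stop costing messages.
    survivors, stranded = [], set()
    for g in groups:
        if len(g["members"]) >= k // 2:
            survivors.append(g)
        else:
            stranded |= g["members"]  # these wait for the final recruitment

    # Phase 3: split oversized groups into chunks of at most k members,
    # electing the lowest id in each chunk as its leader.
    sized = []
    for g in survivors:
        ordered = sorted(g["members"])
        for i in range(0, len(ordered), k):
            chunk = set(ordered[i:i + k])
            sized.append({"leader": min(chunk), "members": chunk})

    if not sized:  # degenerate case: every group was decommissioned
        leader = min(stranded)
        stranded.discard(leader)
        sized = [{"leader": leader, "members": {leader}}]

    # Phase 4: probabilistic merging. Each round only a random subset of
    # groups activates, and the activated groups merge under one leader.
    p = 0.5  # invented activation probability
    while len(sized) > 1:
        active = [g for g in sized if rng.random() < p]
        if len(active) >= 2:
            merged = {
                "leader": min(g["leader"] for g in active),
                "members": set().union(*(g["members"] for g in active)),
            }
            sized = [g for g in sized if g not in active]
            sized.append(merged)

    # Phase 5: final recruitment pulls in every remaining unmerged node.
    final = sized[0]
    final["members"] |= stranded
    return final


if __name__ == "__main__":
    result = phase_sketch(1000)
    assert len(result["members"]) == 1000  # everyone ends up in one group
    print("leader:", result["leader"], "size:", len(result["members"]))
```

The point is just the shape of the thing: grow explosively, cull the runts, split the giants, merge sparsely, then sweep up the stragglers.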

As you read, the warnings start to appear in the corner of your mind.

  • The algorithm's efficiency is measured in synchronous rounds? Well, partial synchrony is a pain to account for in your formulas, nobody wants to read all that extra notation, and it probably works out much the same.
  • Assuming the network is a complete graph for convenience? A robust transport protocol isn't a terrible assumption, and if you're already living in a synchronous world then you've ditched questions of in-order arrival. Never mind that a generic graph isn't guaranteed to have an O(log log n) diameter.
  • UUIDs for every process? Your system isn't so big that you can't afford strong consistency when creating new nodes. Probably.
  • Global knowledge of the number of processes involved in the algorithm? I don't actually remember how much that calculation costs. Hopefully nothing compared to the complexity of the titular algorithm.

The worst warning comes from the use of leaders. The authors claim in their abstract that the algorithm is robust to failures. However, well-timed failures of leaders should be able to break the algorithm, since leaders aren't established by any consensus algorithm; the scheme leans on the UUIDs instead. Maybe the use of randomness saves them and maybe it doesn't. It turns out they're making a weaker claim: robustness to nodes that fail at the very beginning, before the algorithm starts. That amounts to being robust to an inaccurate guess of the cluster size, which doesn't impress me. It seems tacked on as paper padding.

There are certainly elements of interest here: the bookkeeping is clean and the phases are well-constructed. It's not completely thought through, though.
