Consensus Algorithms

Finality overview. There are three types of network timing models: synchronous (global clock), partially synchronous (local clocks), and asynchronous (no clock).
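
To make these models a bit more concrete, here is a minimal, purely illustrative sketch in Python (the names and fields are my own, not a standard API): each model is essentially an assumption about whether a bound on message delay exists and whether the protocol knows that bound up front.

    from dataclasses import dataclass

    @dataclass
    class TimingModel:
        """Illustrative description of a network timing assumption (hypothetical)."""
        name: str
        delay_bound_exists: bool  # does some bound on message delay exist at all?
        delay_bound_known: bool   # does the protocol know that bound up front?

    SYNCHRONOUS = TimingModel("synchronous", delay_bound_exists=True, delay_bound_known=True)
    # Partially synchronous: a bound exists (or holds after some unknown stabilization
    # point), but the protocol cannot rely on knowing it in advance.
    PARTIALLY_SYNCHRONOUS = TimingModel("partially synchronous", delay_bound_exists=True, delay_bound_known=False)
    ASYNCHRONOUS = TimingModel("asynchronous", delay_bound_exists=False, delay_bound_known=False)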

In their 1985 paper “Impossibility of Distributed Consensus with One Faulty Process,” researchers Fischer, Lynch, and Paterson (aka FLP) show that even a single faulty process makes it impossible to guarantee that deterministic processes in an asynchronous network will reach consensus. Basically, because processes can fail at unpredictable times, it’s also possible for them to fail at exactly the inopportune moment that prevents consensus from ever being reached.

The paper “Consensus in the Presence of Partial Synchrony” by Dwork, Lynch, and Stockmeyer (hence the name “DLS” algorithm) introduced a major advancement in Byzantine fault-tolerant consensus: it defined models for how to achieve consensus in “a partially synchronous system.”

DLS was a major breakthrough because it created a new category of network assumptions — namely, partial synchrony — and proved that consensus is possible under this assumption. The other important takeaway from the DLS paper was separating the concerns of reaching consensus in a Byzantine and asynchronous setting into two buckets: safety and liveness.
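
Informally, safety means no two correct replicas ever decide conflicting values (“nothing bad happens”), while liveness means every correct replica eventually decides something (“something good eventually happens”). A tiny hypothetical sketch, just to pin down the distinction (the function names and data layout are mine, not from the paper):

    from typing import Dict, Optional

    def safety_violated(decisions: Dict[str, Optional[str]]) -> bool:
        """Safety check: have two replicas decided different values?
        `decisions` maps replica id -> decided value, or None if still undecided."""
        decided = {v for v in decisions.values() if v is not None}
        return len(decided) > 1

    def liveness_achieved(decisions: Dict[str, Optional[str]]) -> bool:
        """Liveness check: has every replica decided some value?"""
        return all(v is not None for v in decisions.values())

    # One replica still undecided: safe so far, but liveness not yet achieved.
    snapshot = {"r1": "block_A", "r2": "block_A", "r3": None}
    assert not safety_violated(snapshot)
    assert not liveness_achieved(snapshot)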

  • There is a key distinction between Nakamoto consensus and “classical BFT” protocols that primarily exist in the permissioned setting. Classical techniques typically maintain safety at all times in the protocol execution; if conditions are favorable, the protocol achieves liveness too. On the other hand, Nakamoto consensus maintains liveness at all points in time; safety holds (probabilistically) only under favorable conditions.
  • In a purely permissionless setting (no identities associated with replicas), the only known class of solutions is Nakamoto consensus or variants thereof, e.g., the GHOST protocol. This is why “what the majority says” is decided using such a chain-based protocol. On the other hand, in the permissioned setting and in protocols that use proof-of-stake, one can use classical BFT protocols as well as Nakamoto consensus.
  • In terms of setup assumptions, observe that the permissioned version of this protocol requires setup, e.g., a PKI setup, to verifiably elect leaders. On the other hand, in a proof-of-work world, the only setup needed is agreement on a genesis block (which is a public setup). In both cases, we circumvent the FLM bound and can tolerate an adversary controlling more than one-third of the participants.
  • The expected time to mine a block depends on the difficulty of the proof-of-work and the total computation power invested by the miners participating in the protocol. Since the set of participants (and consequently the computation power) changes over time, Bitcoin adjusts the proof-of-work difficulty every 2016 blocks (roughly two weeks) to maintain a mean rate of one block every ten minutes. However, the protocol cannot react quickly to a sudden surge or drop in computation power, as the retargeting sketch below illustrates.
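
As a rough sketch of the retargeting rule mentioned in the last bullet (this simplifies Bitcoin’s actual rule, which retargets every 2016 blocks, roughly two weeks, and clamps the timespan used in the adjustment to a factor of four in either direction):

    RETARGET_INTERVAL = 2016         # blocks between adjustments (~two weeks at 10 min/block)
    TARGET_BLOCK_TIME = 10 * 60      # seconds
    EXPECTED_TIMESPAN = RETARGET_INTERVAL * TARGET_BLOCK_TIME

    def retarget_difficulty(old_difficulty: float, actual_timespan: float) -> float:
        """Sketch of proof-of-work difficulty retargeting.

        If the last interval's blocks arrived faster than expected (more hash power),
        difficulty goes up; if slower, it goes down. The clamp means a sudden surge or
        drop in computation power is only partially absorbed per retargeting period."""
        clamped = min(max(actual_timespan, EXPECTED_TIMESPAN / 4), EXPECTED_TIMESPAN * 4)
        return old_difficulty * EXPECTED_TIMESPAN / clamped

    # Example: blocks arrived twice as fast as expected -> difficulty doubles.
    print(retarget_difficulty(1.0, EXPECTED_TIMESPAN / 2))  # -> 2.0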

However, there is another way to overcome FLP impossibility: non-determinism.

Enter: Nakamoto Consensus

As we just learned, in traditional consensus, f(x) is defined such that a proposer and a bunch of acceptors must all coordinate and communicate to decide on the next value.

Traditional consensus doesn’t scale well.

This is too complex: it requires knowing every node in the network and having every node communicate with every other node (i.e., quadratic communication overhead). Simply put, it breaks down in open, permissionless systems where anyone can join and leave the network at any time.
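
To see where the quadratic overhead comes from, here is a deliberately naive, hypothetical sketch of a single voting round in which every node broadcasts its vote to every other node and a value is decided only once a strict majority supports it (this is not any particular protocol such as Paxos or PBFT, just an illustration of the communication pattern):

    from collections import Counter
    from typing import Dict, Optional, Tuple

    def all_to_all_round(votes: Dict[str, str]) -> Tuple[Optional[str], int]:
        """One naive all-to-all voting round.

        Every node sends its vote to every other node, so the number of messages is
        n * (n - 1), i.e., quadratic in the number of nodes. A value is decided only
        if a strict majority voted for it."""
        n = len(votes)
        messages_sent = n * (n - 1)          # each node sends to all others
        value, count = Counter(votes.values()).most_common(1)[0]
        decided = value if count > n // 2 else None
        return decided, messages_sent

    # Four nodes already require 12 messages for a single round.
    print(all_to_all_round({"a": "X", "b": "X", "c": "X", "d": "Y"}))  # -> ('X', 12)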

The brilliance of Nakamoto Consensus is making the above probabilistic. Instead of every node agreeing on a value, f(x) works such that all of the nodes agree on the probability of the value being correct.
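
One way to make “agreeing on the probability of the value being correct” concrete is the calculation in the Bitcoin whitepaper: the probability that an attacker controlling a fraction q of the hash power ever catches up with and overtakes the honest chain once a block is buried under z confirmations. A Python sketch following that formula (the printed numbers are illustrative):

    import math

    def attacker_success_probability(q: float, z: int) -> float:
        """Probability that an attacker with fraction q of the hash power ever
        overtakes the honest chain once a block has z confirmations on top of it
        (Bitcoin whitepaper, section 11)."""
        p = 1.0 - q
        lam = z * (q / p)
        total = 1.0
        for k in range(z + 1):
            poisson = math.exp(-lam) * lam ** k / math.factorial(k)
            total -= poisson * (1 - (q / p) ** (z - k))
        return total

    # The deeper a block is buried, the less likely it is ever reversed:
    for z in (0, 2, 6):
        print(z, attacker_success_probability(0.1, z))
    # roughly: 0 -> 1.0, 2 -> 0.051, 6 -> 0.00024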

John Toohey
Co-Founder & CEO