The Simplex Consensus Protocol #

In this post, we present a new consensus protocol called “Simplex” that is faster than the state-of-the-art and (in our eyes) even easier to understand. It is suitable both for teaching (in a distributed algorithms or blockchain class) and for potential implementation in practice. We hope that Simplex makes byzantine consensus more accessible to a wider audience. In particular, it has the easiest liveness proof that we know of.

For the full paper, see eprint.iacr.org/2023/463 or the proceedings version in TCC 2023 (link). For slides, which also present the liveness proof, see (link). For questions, email byc at cs.cornell.edu.

What is Simplex Consensus? #

Simplex is a consensus protocol that is faster than the state-of-the-art (on paper), yet easier to understand (in our eyes). It solves Byzantine Fault Tolerant (BFT) Consensus, also known as the blockchain problem: there are \(n\) people communicating over an unreliable network, all trying to agree on the ordering of a set of transactions. Each player receives transactions over time, and wants to output the same transactions in the same order as everyone else does. Different players may receive transactions in different orders. Importantly, \(f\) out of the \(n\) players may be malicious or greedy and deviate from the protocol. We typically assume that \(f < n/3\). The protocol must also be able to withstand a flaky network that sometimes drops messages, or is partitioned in half.

As an example, we can imagine a blockchain system with \(n\) validators, trying to agree on which transactions are in the finalized blockchain and can be confirmed. In a blockchain setting, we also require that no single player gets too much power in deciding the order of transactions.

Paxos, PBFT, and Blockchains

Simplex is designed for the same network and fault-tolerance setting as the seminal PBFT protocol, and as protocols such as Algorand Agreement, Hotstuff, Tendermint, and Streamlet. The famous Paxos protocol, as well as Raft, targets a similar setting, but one where nodes can only crash and never deviate from the protocol (i.e. “crash faults” rather than “byzantine faults”). Paxos in particular has found widespread deployment in Internet infrastructure ranging from Apache Zookeeper to Google’s Chubby Lock Service.

In the last decade, byzantine consensus has found its way into the heart of most “Proof-of-Stake” blockchains, where byzantine fault tolerance (BFT) is necessary because validators may be malicious in nature. Protocols such as Ethereum and Algorand use a byzantine consensus protocol under the hood to reach agreement on transactions. Unlike Bitcoin, the consensus protocols used tolerate extended periods of network failure without sacrificing security, do not require Proof-of-Work, and offer much faster finality. Unlike PBFT, the underlying consensus protocols typically require each block to be proposed by a different player, for fairness. Despite these widespread applications, the underlying consensus protocols remain complex. We are primarily interested in designing a good consensus protocol for this blockchain setting.

Not only is Simplex potentially easier to understand than the state-of-the-art, it also provides faster transaction confirmation time than essentially all competing protocols designed for the blockchain setting1, under a theoretical analysis2. This makes the protocol an ideal candidate for both teaching and for potential deployment.

The Fastest Protocol Yet

For the sake of demonstration, imagine a situation where \(1/3\) of block proposers are missing or slow, and where every message takes exactly 1 second to be delivered. Suppose that we are considering a blockchain setting, and that a transaction is submitted right before the next opportunity to mine or propose a new block. How long (in expectation) does it take the transaction to be confirmed or finalized?

Algorithm                            Expected confirmation time
Simplex (this post)                  5 sec
Algorand Agreement                   6 sec
ICC                                  7 sec
PaLa                                 15 sec
Pipelined Fast-Hotstuff / Jolteon    20 sec
Chained Tendermint / Casper FFG      23 sec
Hotstuff                             31 sec
Streamlet                            36 sec

Note that we can think of the base Algorand agreement protocol and Fast-Hotstuff as versions of PBFT adapted to the blockchain setting. (For Jolteon, Fast-Hotstuff, and Hotstuff, which don’t specify concrete leader selection mechanisms, we assume that the leader is chosen randomly. For Chained Tendermint and Casper FFG, we consider the version specified in this post.)

Simplex also has great performance in optimistic conditions without faults, when analyzed in our model. See the paper for a more thorough analysis for both the pessimistic and optimistic case, as well as for a deep-dive comparison of popular protocols.

Note that our protocol has no relation to the famous Simplex Method for linear programming.

Problem Statement #

The Permissioned Blockchain problem. (Also known as BFT Consensus, or State Machine Replication, in classical distributed systems. It is called “permissioned” because the set of players is known ahead of time.3) We want a protocol that works as follows:

  • Suppose that there are \(n\) players, \(f < n/3\) of whom are malicious.
  • Over time, players receive transactions \(\mathsf{txs}\) from the environment, and want to order them into a single log (an ordered sequence) of transactions. Each player outputs an updated version of their log periodically. If a player outputs a transaction \(\mathsf{tx}\) in its log, we say that the transaction \(\mathsf{tx}\) is finalized, or confirmed, by that player.
  • Consistency (i.e. no forks). Suppose two honest players output \(\mathsf{LOG}_1\) and \(\mathsf{LOG}_2\) respectively. Then either \(\mathsf{LOG}_1 = \mathsf{LOG}_2\), or \(\mathsf{LOG}_1\) is a prefix of \(\mathsf{LOG}_2\), or \(\mathsf{LOG}_2\) is a prefix of \(\mathsf{LOG}_1\). This is the key property that ensures that players reach consensus on a single log of transactions.
  • Confirmation Time. Suppose a transaction \(\mathsf{tx}\) arrives at some time \(t\). How long after time \(t\) does it take for the transaction \(\mathsf{tx}\) to be finalized by every honest player?
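To make the Consistency requirement above concrete, here is a minimal sketch in Python of the check it imposes on two finalized logs (representing a log as a list of strings is our own choice, purely for illustration):

```python
def consistent(log1: list, log2: list) -> bool:
    """True iff the two logs are equal, or one is a prefix of the other."""
    shorter, longer = sorted((log1, log2), key=len)
    return longer[:len(shorter)] == shorter

# Example: these two views never conflict; one simply lags behind the other.
assert consistent(["tx1", "tx2"], ["tx1", "tx2", "tx3"])
assert not consistent(["tx1", "tx2"], ["tx1", "tx9"])
```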

For scalability, players typically batch transactions together into “blocks”. The log is then comprised of blocks, instead of individual transactions, hence the name “blockchain”. It is then easy to extract an ordered log of transactions from a blockchain.

The Partially Synchronous network setting. In a real-world network, such as the internet, messages may sometimes be delayed, reordered, or dropped entirely. Entire partitions of the network may lose connectivity due to an implementation bug, or an internet outage. It is essential to prevent a consistency violation (e.g. a double spend) in such a situation. We assume a network that behaves as follows:4

  • Good network conditions. Under good network conditions, every message is delivered within \(\Delta\) seconds (e.g. we can pick \(\Delta = 3\) seconds), where \(\Delta\) is known ahead of time.
  • Bad network conditions. Occasionally, the network may be faulty. In these bad network conditions, there is no guarantee on message delivery.

We require the protocol to confirm transactions under good network conditions, and to preserve consistency even under bad network conditions.5

Why is this problem hard?

The challenge of faulty players. In our setting, faulty players can deviate arbitrarily from the protocol (i.e. they are “byzantine faults”). Such a fault model has been studied since the 1980s, and is essential for capturing real-world settings such as cryptocurrencies where some fraction of players cannot be trusted. (After all, it seems downright natural that some participants may try to subvert the protocol for their own benefit.) In particular, a faulty player may decide to stay offline, and can also send different (but conflicting) messages to different people, lying about their state. When combined with an unreliable network, this faulty behavior is difficult to detect and challenging to deal with. (Even crash faults are non-trivial to deal with, despite being more benign; see the famous Paxos protocol.)

A toy example. For example, suppose that there are \(n = 99\) players, choosing between two block proposals \(b\) and \(b'\). Suppose that each honest player “votes” for a single block proposal of its choice. Naïvely, one may think that if a player sees 50 votes for \(b\), then no one else will ever see 50 votes for \(b'\). However, this breaks down even if there is only one faulty player. 49 honest players could vote for \(b\), another 49 honest players could vote for \(b'\), and the faulty player could vote for both \(b\) and \(b'\). If the network is unreliable, some player may see 50 votes for \(b\) and no votes at all for \(b'\), and proceed even though some other player saw 50 votes for \(b'\).

As a consequence, we generally require \(\geq 2n/3\) votes on block proposals to make progress (instead of a simple majority), and additionally assume that \(f < n/3\). This is the same approach taken by most prior work (see e.g. PBFT), and is the best we can do on an unreliable network. Like most solutions to the problem in the literature, we will also need to use public-key cryptography, to prevent adversaries from forging honest players’ messages.

Protocol Description #

We start with some preliminaries and data structures that will set the stage for the main body of the protocol.

Digital Signatures #

Our protocol uses digital signatures, which requires a basic setup ahead of time.

  • Public key infrastructure. Every player has a public key, and a secret key (which it keeps secret), generated for the digital signature scheme. Each player’s public key is known to every other player at the beginning of the protocol execution.

For any string or message \(m\), we denote \(\langle m \rangle_i\) to be the same message but signed by player \(i\). That is, \(m\) is accompanied by a digital signature on \(m\) that is verifiable using player \(i\)’s public key.
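As a concrete illustration of this notation, here is a minimal sketch using Ed25519 signatures from the PyNaCl library (the choice of library and of message encoding is ours; the protocol only assumes some secure digital signature scheme with the setup described above):

```python
from nacl.signing import SigningKey

# Key generation: each player publishes its verify (public) key
# and keeps its signing (secret) key to itself.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# <m>_i : the message m accompanied by player i's signature on it.
m = b"vote,5,<hash of block 5>"
signed_m = signing_key.sign(m)   # carries both m and the signature

# Anyone who knows player i's public key can check the signature;
# this raises nacl.exceptions.BadSignatureError on a forgery.
verify_key.verify(signed_m)
```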

Data Structures #

Simplex uses a number of different data structures, many of which should be familiar to a blockchain enthusiast.

  • Blocks and block heights. The protocol operates on “blocks” of transactions. A block is a tuple \((h, \mathsf{parenthash}, \mathsf{txs})\), where \(h\) is the height of the block, \(\mathsf{parenthash}\) is a cryptographic hash of the parent blockchain, and \(\mathsf{txs}\) is a set of transactions included in the block. Thanks to the parent hash, each block essentially “extends” and commits to a parent blockchain, which represents a history of transactions.

  • Genesis block. The genesis block is the default starting block \((0, \bot, \bot)\) of height 0. Here, \(\bot\) is the empty string "", representing some empty default value. The genesis block is always the first block in the blockchain.

  • Dummy blocks. In Simplex, sometimes we may mine a “dummy block” during periods of time when the network is faulty, or when there is too much malicious activity. A dummy block of height \(h\) is simply the tuple \((h, \bot, \bot)\). Note that the dummy block does not extend a specific parent chain.

  • Blockchains. A blockchain of height (or length) \(h\) is a sequence of blocks \(b_0, b_1, \ldots, b_h\) where \(b_0\) is the genesis block, and for each of the other blocks, either \(b_i\) is the dummy block of height \(i\), or \(b_i = (i, H(b_0, \ldots, b_{i-1}), \mathsf{txs})\) for some choice of transactions. Here, \(H\) is a publicly known collision-resistant hash function.

  • Notarized blocks and blockchains. A “notarization” for a block \(b\) (which may be the dummy block) is a set of signed messages of the form \(\langle\textsf{vote}, h, b\rangle_i\) from \(2n/3\) unique players \(i \in \{1, \ldots, n\}\). A “notarized block” is a block augmented by a notarization for that block. A “notarized blockchain” is a blockchain, where every block has a notarization.
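For concreteness, here is a minimal sketch of these data structures in Python (using SHA-256 for \(H\) and a canonical JSON encoding of a chain, both our own choices for illustration):

```python
import hashlib
import json
from dataclasses import dataclass

BOT = ""  # stands in for the empty value ⊥

@dataclass(frozen=True)
class Block:
    height: int
    parent_hash: str   # BOT for the genesis block and for dummy blocks
    txs: tuple         # empty for the genesis block and for dummy blocks

GENESIS = Block(0, BOT, ())       # the genesis block (0, ⊥, ⊥)

def dummy(h: int) -> Block:
    """The dummy block of height h, i.e. (h, ⊥, ⊥)."""
    return Block(h, BOT, ())

def chain_hash(chain: list) -> str:
    """H(b_0, ..., b_k): a collision-resistant hash of an entire chain."""
    encoded = json.dumps([[b.height, b.parent_hash, list(b.txs)] for b in chain])
    return hashlib.sha256(encoded.encode()).hexdigest()

def extends(block: Block, parent_chain: list) -> bool:
    """Is `block` a well-formed (non-dummy) block extending b_0, ..., b_{h-1}?"""
    return (block.height == len(parent_chain)
            and block.parent_hash == chain_hash(parent_chain))
```

A notarization is then simply a set of vote messages for the same block from at least \(2n/3\) distinct players; the protocol sketch further below counts these votes explicitly.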

High Level Structure #

The protocol runs in iterations \(h = 1, 2, 3, \ldots\) where each player only advances to the next iteration \(h+1\) once it has seen a notarized blockchain of length \(h\). Thus, in each iteration \(h\), the protocol must ensure that every honest player eventually sees a new notarized block of height \(h\), and moreover that this block extends some notarized blockchain of length \(h-1\). In order to help shepherd players towards voting for the same block, each iteration \(h\) has a designated “leader player”, denoted \(L_h\), chosen randomly ahead of time and known to all of the players. More specifically:

  • Random leaders. For each iteration \(h\), its leader \(L_h\) is chosen by hashing the iteration number \(h\) using some public hash function. In other words, \(L_h = H^*(h) \mod n\), where \(H^*\) is a predetermined hash function.6
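For instance, a minimal sketch of this rule (assuming SHA-256 stands in for \(H^*\) and that \(h\) is encoded as a decimal string, both our own choices):

```python
import hashlib

def leader(h: int, n: int) -> int:
    """L_h = H*(h) mod n: a player index in {0, ..., n-1} that everyone computes identically."""
    digest = hashlib.sha256(str(h).encode()).digest()
    return int.from_bytes(digest, "big") % n
```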

As a preview, in each iteration \(h\), the leader \(L_h\) will propose a new block of height \(h\) extending a notarized blockchain of length \(h-1\). Players will then vote for this block, to get it notarized. Of course, we will have to handle the case of a faulty network or faulty leader who sends invalid blocks, or sends different block proposals to different people. A faulty leader may even pretend to be offline, and not send a block proposal at all. We handle this in the full description below.

The Full Protocol Description #

We are now ready to describe the protocol in its entirety.

Starting iteration and protocol state. At the beginning of the execution, every player immediately enters iteration \(1\). As local state, each player keeps track of which iteration it is currently in, and also stores all of the notarized blocks and messages that it has seen so far.7

Execution. Each player \(i\), on entering iteration \(h\) does the following:

  1. Backup step in case of faulty network or faulty leader. Player \(i\) starts a new timer \(T_h\) set to fire locally after \(3\Delta\) seconds if player \(i\) still hasn’t finished iteration \(h\) by then. If the timer fires, and player \(i\) is still in iteration \(h\), player \(i\) sends to everyone a vote for the dummy block \[\langle \textsf{vote}, h, \bot_h \rangle_i\] Here, \(\bot_h = (h, \bot, \bot)\) is just the dummy block.

  2. Leader proposal. If player \(i\) is the leader, namely if \(i = L_h\), then player \(i\) sends to everyone a block proposal of the form \[\langle \textsf{propose}, h, b_0, b_1, \ldots, b_h \rangle_i\] where \(b_h = (h, H(b_0, \ldots, b_{h-1}), \textsf{txs})\) is player \(i\)’s choice of a new block extending some blockchain \(b_0, \ldots, b_{h-1}\) that is notarized in player \(i\)’s view. Note that the parent blockchain has length \(h-1\). Player \(i\) should include in \(\mathsf{txs}\) every transaction that it has seen that is not already in the parent blockchain.

  3. Voting step. Player \(i\) waits to see the leader’s block proposal. On seeing the first (and only the first) such proposal of the form \(\langle \textsf{propose}, h, b_0, \ldots, b_h \rangle_{L_h}\), player \(i\) first checks that the block is well-formed (i.e. that \(b_h = (h, H(b_0, \ldots, b_{h-1}), \textsf{txs})\) for some set of transactions) and that \(b_0, \ldots, b_{h-1}\) is indeed a notarized blockchain of length \(h-1\) in its view. If these checks succeed, then player \(i\) sends to everyone a vote \[\langle \textsf{vote}, h, b_h \rangle_i\] (Recall that on seeing \(2n/3\) votes for any given block \(b_h\), a player considers that block to be notarized.)

  4. Entering the next iteration. At any point during iteration \(h\), if player \(i\) sees a notarized blockchain of length \(h\), it moves on to iteration \(h+1\). At this point, if its timer \(T_h\) has not fired yet, player \(i\) cancels the timer, and sends everyone a message \[\langle \textsf{finalize}, h \rangle_i\]

Finally, at any point during the protocol execution, no matter the current iteration number:

  1. Finalizing transactions. At any point, if player \(i\) sees \(2n/3\) \(\langle\textsf{finalize}, h\rangle_j\) messages for some iteration number \(h\), each signed by a different player \(j\), player \(i\) deems that iteration to be finalized. If an iteration \(h\) is finalized, whenever player \(i\) sees a notarized blockchain of length \(h\), they can finalize all the transactions contained within (and output them to the client).
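To tie the steps together, here is a compressed, event-driven sketch of a single player's logic in Python. It reuses `Block`, `dummy`, `chain_hash`, `extends`, and `leader` from the sketches above; the class layout, the `broadcast` stub (standing in for “sign and send to everyone”), and the handler names are our own framing, and real timers and networking are omitted:

```python
class Player:
    """One player's per-iteration logic (sketch only; no real networking or timers)."""

    def __init__(self, my_id: int, n: int):
        self.my_id = my_id
        self.n = n
        self.h = 1                    # current iteration
        self.voted_proposal = False   # voted for the leader's proposal this iteration?
        self.timer_fired = False      # has the 3*Delta timer for this iteration fired?
        self.votes = set()            # (sender, height, block) vote messages seen
        self.finalizes = set()        # (sender, height) finalize messages seen
        self.pending_txs = ()         # transactions seen but not yet in the chain we extend

    def broadcast(self, msg):
        pass  # stub: a real implementation signs `msg` and sends it to all n players

    def enter_iteration(self, notarized_parent: list):
        """Start iteration h, given a notarized blockchain b_0, ..., b_{h-1}.
        (A real implementation also schedules the 3*Delta back-up timer here.)"""
        self.voted_proposal = False
        self.timer_fired = False
        if leader(self.h, self.n) == self.my_id:                    # step 2: leader proposal
            block = Block(self.h, chain_hash(notarized_parent), self.pending_txs)
            self.broadcast(("propose", self.h, notarized_parent, block))

    def on_timer_fired(self):                                       # step 1: back-up step
        self.timer_fired = True
        self.broadcast(("vote", self.h, dummy(self.h)))

    def on_proposal(self, parent: list, block: Block):              # step 3: voting step
        if (not self.voted_proposal and block.height == self.h
                and extends(block, parent) and self.chain_is_notarized(parent)):
            self.voted_proposal = True
            self.broadcast(("vote", self.h, block))

    def on_vote(self, sender: int, height: int, block: Block):      # step 4: advancing
        self.votes.add((sender, height, block))
        # Simplification: a full implementation advances on seeing a complete
        # notarized blockchain of length h, not just a notarized block at height h.
        if height == self.h and self.is_notarized(block):
            if not self.timer_fired:
                self.broadcast(("finalize", self.h))
            self.h += 1   # a real implementation now calls enter_iteration again

    def on_finalize(self, sender: int, height: int):                # finalizing transactions
        self.finalizes.add((sender, height))

    def iteration_is_finalized(self, height: int) -> bool:
        senders = {p for (p, hh) in self.finalizes if hh == height}
        return len(senders) >= 2 * self.n / 3

    def is_notarized(self, block: Block) -> bool:
        voters = {p for (p, hh, b) in self.votes if hh == block.height and b == block}
        return len(voters) >= 2 * self.n / 3

    def chain_is_notarized(self, chain: list) -> bool:
        return all(self.is_notarized(b) for b in chain[1:])  # genesis needs no notarization
```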

Correctness of the Protocol #

In this section, we argue that the Simplex protocol has all the properties that we desire. Importantly, the protocol is both consistent (that is, two players never finalize two conflicting transactions), and is also fast during periods of network stability. In particular, whenever a new transaction is delivered to some honest player, the protocol finalizes that transaction quickly. Here, our treatment is meant to be informal and intuitive; for a formal analysis, see the full paper.

Consistency Argument #

Consistency requires that if one player outputs a sequence of transactions \(\mathsf{LOG}_1\), and another player outputs a sequence of transactions \(\mathsf{LOG}_2\), then it must be that either the two logs are identical, or that one log is a prefix of the other. This is the key property that ensures agreement on a log of transactions.

For us, it suffices to show that if one player finalizes the contents of a blockchain \(b_0, \ldots, b_h\) of length \(h\), and another player finalizes the contents of \(b_0, b'_1, \ldots, b'_{h'}\) of length \(h'\), then either they are the same chain entirely, or one chain is the prefix of the other. Assume, without loss of generality, that \(h' \geq h\). The argument follows from two simple observations:

  1. Claim. If some honest player sees that a block proposal \(b_h\) is notarized, no honest player will ever see a competing block proposal \(b'_h\) of the same height notarized, unless one of \(b_h\) or \(b'_h\) is the dummy block of height \(h\).

    • Proof. In iteration \(h\), each honest player will vote for at most one of \(b_h\) and \(b'_h\); a faulty player can vote for both. There are \(\geq 2n/3\) honest players and \(< n/3\) faulty players, so there are \(< 4n/3\) votes for \(b_h\) and \(b'_h\) combined. This just isn’t enough votes: if honest players saw both \(b_h\) and \(b'_h\) notarized, there would have to be \(\geq 4n/3\) votes in total. (This is also known as a quorum intersection argument.)
  2. Claim. If some honest player sees that iteration \(h\) is finalized, then no honest player will ever see that the dummy block of height \(h\) is notarized.

    • Proof. In iteration \(h\), each honest player will send either a vote for the dummy block or a finalize message for \(h\), but never both; a faulty player can send both. Thus there are \(< 4n/3\) such messages combined. By the exact same argument as for the previous claim, there just aren’t enough messages for someone to see both \(\geq 2n/3\) finalize messages and \(\geq 2n/3\) votes for the dummy block of height \(h\).
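Both proofs rest on the same counting argument, which is worth spelling out once. Towards the two conflicting quorums, each honest player contributes at most one message and each faulty player contributes at most two, so the total number of relevant messages is at most \[ (n-f)\cdot 1 + f \cdot 2 \;=\; n + f \;<\; n + \frac{n}{3} \;=\; \frac{4n}{3}, \] which falls short of the \(2 \cdot 2n/3 = 4n/3\) messages needed to form two quorums of size \(2n/3\) each.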

Recall that if a player finalizes a chain \(b_0, \ldots, b_h\) it means that the player saw that iteration \(h\) is finalized, and that it also saw that each block in \(b_0, \ldots, b_h\) is notarized. By the second claim, no player can have seen a notarized dummy block at height \(h\). Thus, now applying the first claim, no player can see any other notarized block \(b'_h \neq b_h\) at height \(h\). Thus, if any player sees a notarized block at height \(h\), it must be the same block \(b_h\).

So whenever a player finalizes \(b_0, b'_1, \ldots, b'_{h'}\) of length \(h' \geq h\), their chain must contain \(b_h\) at height \(h\); in other words, \(b'_h = b_h\). (Note that \(b_h\) cannot be the dummy block, by the second claim, so it really does commit to a parent chain.) Recalling that \(b_h\) contains the hash \(H(b_0, b_1, \ldots, b_{h-1})\) of its entire parent chain, the collision resistance of the hash function gives \(b_0, b'_1, \ldots, b'_{h} = b_0, b_1, \ldots, b_h\), as required.

Liveness Argument #

Simplex has a delightfully simple liveness proof, in contrast with every single other protocol that we know of in the literature (in this setting).

We show that the protocol makes progress during good network conditions. Assume that every message takes \(\leq \delta\) seconds to be delivered.

First, observe:

  1. Observation. If some honest player has entered iteration \(h\) by time \(t\), then all honest players have entered iteration \(h\) by time \(t+\delta\). (Make sure you see why.)

The liveness argument breaks down into two cases:

  1. Claim (Honest Leaders). If the leader \(L_h\) of iteration \(h\) is honest, and they entered that iteration by time \(t\), then all honest players will have finished iteration \(h\) and moved on to \(h+1\) (or higher) by time \(t + 2\delta\). Additionally, they all see a finalized block proposed by \(L_h\) by \(t + 3\delta\).

  2. Claim (Faulty Leaders). If every honest player entered an iteration \(h\) by time \(t\), every honest player enters iteration \(h+1\) by time \(t+ 3\Delta + \delta\). This holds even if \(L_h\) is faulty!

Recall that \(\Delta\) is a parameter that configures the length of the timeout, which is usually taken to be a conservative estimate of the true network speed \(\delta\). \(\Delta\) is chosen ahead of time, when the protocol is deployed. (Analyzing the protocol this way gets us more granularity with regards to properties like optimistic responsiveness.)
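For instance, plugging the illustrative numbers from the table above into the two claims (\(\delta = 1\) second of actual delay, \(\Delta = 3\) seconds for the timeout parameter): an iteration with an honest leader moves every honest player forward within \(2\delta = 2\) seconds and finalizes the leader’s block within \(3\delta = 3\) seconds, while an iteration with a faulty leader costs at most \(3\Delta + \delta = 10\) seconds before everyone moves on.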

Now let’s show the claims. In the following figure, we break down exactly what happens during an iteration \(h\) with an honest leader:

Progress During Iterations with Good Leaders

And likewise, for an iteration \(h\) with a missing or malicious leader:

Progress During Iterations with Faulty Leaders

The full proofs of the claims are in the paper.

Theoretical Performance Guarantees #

The last key property that we want to highlight is Simplex’s fast transaction confirmation time. In both the optimistic case when there are no faulty players, and the pessimistic (and more realistic) case where there are faulty players, Simplex has better transaction confirmation time than essentially all competing protocols that rotate their leaders (when analyzed in our model).

Simplex requires only a single honest leader to finalize blocks. Moreover, the design of the protocol also allows us to set the timer to \(3\Delta\) seconds instead of a larger value, minimizing the time it takes to recover from a faulty leader. Popular protocols such as Hotstuff typically require multiple honest leaders in a row to finalize a block, which may take a long time if many potential leaders are faulty. On the other hand, protocols like Algorand require only one honest leader but a larger time-out of \(4\Delta\) seconds. See the paper for a thorough comparison with related work.

Scalability #

Simplex can be made scalable using the same techniques as in Algorand, which is designed for a Proof-of-Stake setting with hundreds of thousands of validators.

Conclusion and Further Steps #

In this post, we presented the Simplex consensus protocol, which has the potential to be the protocol of choice both for pedagogy and for deployment in practice (especially for blockchains). It offers faster transaction confirmation time (under a theoretical analysis), and also offers an easy-to-understand security proof. Work on Simplex is ongoing (e.g. see DispersedSimplex, which adds additional optimizations).

We hope you integrate Simplex into your own projects or your classes. For questions or to help out, email byc at cs.cornell.edu.


Footnotes #


  1. In certain cases, classic protocols like PBFT may still offer slightly better latency than protocols designed for blockchains. This is because, when a transaction is sent to the system, it doesn’t need to wait for the next block proposal, unlike in blockchain systems. PBFT achieves this by having the same player propose every block (as long as it does not detectably misbehave); in other words, PBFT adopts a “stable-leader policy.” However, this is unsuitable for a blockchain application, where for fairness we require that a different player propose each block (see problems with miner extractable value in the Ethereum blockchain, for instance). Of course, we could also modify Simplex to use “stable leaders”, to improve on the latency of PBFT at the expense of fairness. ↩︎

  2. An evaluation of the protocol in a real-world deployment remains to be done. ↩︎

  3. When used for an application such as cryptocurrency, we typically require a blockchain protocol be “permissionless”, where players can join (and leave) the network at will. Proof-of-Stake protocols like Ethereum and Algorand accomplish this by treating each coin as a player and then running a permissioned blockchain protocol (such as Simplex) on those players. If at any point the total number of coins changes, the number of players \(n\) can be changed accordingly, with some care. ↩︎

  4. For a more formal treatment of partial synchrony, see our paper, or this blog post, or the seminal paper of DLS88. Also note that, in practice, blocks may take longer to be delivered than, for instance, vote messages. ↩︎

  5. This is a different assumption than for Bitcoin. Indeed, Bitcoin may lose consistency if honest nodes cannot contact each other for long enough (i.e. due to an Internet outage) and start mining on different forks. ↩︎

  6. \(H^*\) can be a different hash function than the one used to generate the parent hashes in blocks, which we denoted \(H\). In more theoretical terms, when we formally prove the protocol to be correct, we model \(H^*\) as a random oracle, to ensure that leader election is truly random. As observed by PS17, this usage of a random oracle can itself be replaced by a pseudorandom function family and an additional common reference string. However, any hash function that is sufficiently pseudorandom will work in practice (whereas for \(H\), we required it to be collision resistant). ↩︎

  7. This may require additional optimization to reduce storage costs in practice. ↩︎