A very recent paper provides another foundation for us to reason about the archiving confirmation depth.
Previously, this depth was chosen to control the probability of safety violations (e.g., double spending). In other words, we only have probabilistic safety rather than deterministic safety. (By contrast, most BFT protocols enjoy deterministic safety.) Can we have deterministic safety for Nakamoto-style consensus? More ambitiously, can we have deterministic safety for Nakamoto-style consensus even under adversarial majority (say, 99% of farming storage is malicious)?
It turns out that the above paper provides a solution under the bounded-delay assumption. More importantly, their solution is very compatible with our archiving process. Essentially, they turn any safety violation into a liveness violation. This is critical because “a single safety violation can cause users to lose money”, whereas a liveness violation can be recovered from (without losing any money). Hence, we can tolerate a higher probability of liveness violations than we could of safety violations, especially once we have an efficient liveness-recovery method. This allows us to use a relatively small confirmation depth while enjoying stronger security.
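For a bit of intuition on how the old, probabilistic depth is typically chosen (this is my own back-of-envelope illustration, not from the paper): under the classic private-attack analysis, an adversary controlling a fraction beta < 1/2 of the farming storage reverts a k-deep block with probability roughly (beta / (1 - beta))^k, so one picks the smallest k that pushes this below a risk tolerance epsilon. A tiny Rust sketch:

```rust
/// Illustrative only: smallest confirmation depth k such that the simplified
/// private-attack reversal bound (beta / (1 - beta))^k falls below the risk
/// tolerance `epsilon`. Only meaningful for a minority adversary (beta < 0.5).
fn min_confirmation_depth(beta: f64, epsilon: f64) -> u32 {
    assert!(beta > 0.0 && beta < 0.5 && epsilon > 0.0 && epsilon < 1.0);
    let ratio = beta / (1.0 - beta);
    // Solve ratio^k < epsilon  =>  k > ln(epsilon) / ln(ratio).
    (epsilon.ln() / ratio.ln()).ceil() as u32
}

fn main() {
    // e.g. a 30% adversary and a one-in-a-million tolerance needs k = 17.
    println!("{}", min_confirmation_depth(0.30, 1e-6));
}
```

The required k blows up as beta approaches 1/2, and no finite k helps beyond it, which is exactly the gap the deterministic-safety approach above addresses.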
Finally, I’d like to point out that the paper doesn’t provide any liveness recovery method. Our research team at Subspace will work on this open problem.
Recovery from incorrect archiving would be very problematic because some farmers might have already plotted what they think is the canonical version of a particular segment. I didn’t read the papers linked though.
The paper implies that we don’t need to “recover from incorrect archiving” even under adversarial majority. Once a block is finalized (k-deep confirmation + 2\Delta waiting time), we archive it. That is, archiving is final! Previously, our archiving was not final…
Note that, according to the paper, this condition should also hold for newly-joined nodes. That is, a node that joins and starts syncing the network needs to wait the 2\Delta waiting time before it archives history, independent of the size of the history.
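To make the rule concrete, here is a minimal sketch of the “finalize, then archive” condition as I understand it (Rust-flavoured, with names and structure that are mine, not from the paper or our codebase). The key point is that the 2\Delta timer runs on the node’s own local observation of the block becoming k-deep, which is why a freshly synced node also waits 2\Delta before archiving, regardless of how old the history is.

```rust
use std::time::{Duration, Instant};

/// Hypothetical parameters; names are illustrative.
struct FinalityParams {
    k: u64,          // confirmation depth (protocol-level in Stubborn Nakamoto)
    delta: Duration, // assumed upper bound on network delay
}

/// Per-block bookkeeping a node keeps locally.
struct BlockStatus {
    depth: u64,                    // number of blocks built on top of it locally
    k_deep_since: Option<Instant>, // when *this node* first saw it become k-deep
}

/// A block is handed to the archiver only after it has been k-deep, as
/// observed by this node, for at least 2 * delta. Because `k_deep_since` is
/// local observation time, a newly joined node waits the full 2 * delta after
/// syncing, independent of the size of the history.
fn ready_to_archive(p: &FinalityParams, b: &BlockStatus, now: Instant) -> bool {
    match b.k_deep_since {
        Some(t) => b.depth >= p.k && now.duration_since(t) >= 2 * p.delta,
        None => false,
    }
}
```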
The Stubborn Nakamoto proposal in the paper for consistency even under the harshest of conditions is simple and interesting (assuming bounded delay like @Chen_Feng_2023 mentioned). Some thoughts / questions:
1. The k value in Nakamoto-style consensus is treated as the confirmation depth, and different entities can choose different values of k based on their risk tolerance for the transaction under consideration (exchanges, for instance, require k confirmations before crediting a user’s account for an inbound transfer). With Stubborn Nakamoto, k becomes part of the protocol itself, so it needs to be chosen very carefully upfront, and the protocol may also need to support increasing / decreasing it depending on the conditions (e.g. if the cost of attack drops below a certain threshold, liveness failures can become more frequent, and k would have to be increased to bring security back to its original level). See the sketch after this list for the distinction.
2. We may express similar concerns for \Delta: how should we set it, and should we support changing it later?
3. Could it be the case that some honest nodes halt while others keep finalizing blocks? So while all the “alive” honest nodes will agree on all the finalized blocks, the halted nodes will not “comment” on the latest ones (blocks finalized after they halted).
4. If the synchrony assumption is violated for some nodes (slow connection, too much to sync, etc.), could we still claim that the chain of blocks finalized by them will always be a prefix of the canonical finalized chain? Seems like a useful property to have.
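On point 1, a small sketch of the distinction (illustrative only, not from the paper): in plain Nakamoto consensus, k is a local confirmation policy that each entity picks for itself, whereas in Stubborn Nakamoto, k is a consensus-critical parameter that all nodes must agree on, so adjusting it later is a protocol change rather than a local decision.

```rust
/// Plain Nakamoto: k is a local policy. An exchange and a casual wallet can
/// use different depths on the same chain without affecting consensus.
struct LocalConfirmationPolicy {
    k: u64,
}

impl LocalConfirmationPolicy {
    fn is_confirmed(&self, depth: u64) -> bool {
        depth >= self.k
    }
}

/// Stubborn Nakamoto: k (and the delay bound) are consensus parameters that
/// every node must share; changing them, e.g. because attacks got cheaper and
/// liveness failures more frequent, requires a coordinated upgrade.
struct ProtocolParams {
    k: u64,
    delta_ms: u64,
}
```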
For 1, I agree with your observations. Note that in Stubborn Nakamoto k is the maximum one has to wait.
For 3, yes, this is possible, because there is no consensus on when the conflicting block b' arrives relative to block b becoming k-deep: those that hear of b' only after b is k-deep will halt, while those hearing of it before that will continue normally.
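A tiny sketch of that case, just to make my reading explicit (names are mine): whether a node halts depends only on how deep b already is, from that node’s point of view, at the moment it first hears of the conflicting b'.

```rust
/// What a node does upon first hearing of a block b' that conflicts with b.
#[derive(Debug, PartialEq)]
enum Reaction {
    /// b was not yet k-deep: resolve the fork with the usual chain-selection
    /// rule and keep going.
    ContinueNormally,
    /// b was already k-deep (finalized locally): the would-be safety violation
    /// becomes a liveness violation, and the node halts.
    Halt,
}

fn on_conflicting_block(depth_of_b_when_b_prime_heard: u64, k: u64) -> Reaction {
    if depth_of_b_when_b_prime_heard >= k {
        Reaction::Halt
    } else {
        Reaction::ContinueNormally
    }
}

fn main() {
    let k = 6;
    // Nodes that hear of b' early keep going; nodes that only hear of it after
    // finalizing b halt, so honest nodes can end up in different states.
    assert_eq!(on_conflicting_block(3, k), Reaction::ContinueNormally);
    assert_eq!(on_conflicting_block(8, k), Reaction::Halt);
}
```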