After extensive discussions on PoT at Subspace research meetings, we realized that our PoT mechanism is somewhat involved to understand, mostly because it couples two roles: c-correlation (to mitigate nothing-at-stake attacks) and a weaker version of a VDF (to mitigate long-range attacks). Here, we try to decouple these two roles so that PoT becomes easier to understand.
For those who are unfamiliar with VDFs, a VDF is basically an iterated function composition (together with a short proof that the sequential evaluation was done correctly). That is, f^{L}(x) = f \circ f \circ \cdots \circ f(x). Assume that it takes one second to compute the function L times (under a static hardware model). Then, running the VDF gives us an "arrow of time."
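The iterated structure can be sketched as follows. This uses SHA-256 as a stand-in for f; a real VDF needs an inherently sequential function and a succinct correctness proof, both omitted here:

```python
import hashlib

def f(x: bytes) -> bytes:
    # Stand-in round function; a real VDF uses a function believed
    # to be inherently sequential (e.g., squaring in a hidden-order group).
    return hashlib.sha256(x).digest()

def vdf_eval(x: bytes, num_iterations: int) -> bytes:
    # Compute f^L(x) = f(f(...f(x)...)): each step consumes the previous
    # output, so the L evaluations cannot be parallelized.
    y = x
    for _ in range(num_iterations):
        y = f(y)
    return y
```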
Conceptually, we can understand our PoT mechanism in just two sentences.
Each farmer runs its own VDF with entropy injection every c blocks (as discussed in our previous post).
Each farmer uses its VDF output at time slot t (based on its local longest chain) to check whether it has a "winning ticket".
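The per-slot lottery can be sketched as below. The hash-based ticket and the `threshold` parameter are illustrative assumptions, not the actual Subspace challenge format:

```python
import hashlib

def has_winning_ticket(vdf_output: bytes, farmer_id: bytes,
                       threshold: int) -> bool:
    # Hypothetical ticket: hash the slot's VDF output together with the
    # farmer's identity and compare against a difficulty threshold.
    # A smaller threshold means fewer winning slots.
    ticket = hashlib.sha256(vdf_output + farmer_id).digest()
    return int.from_bytes(ticket, "big") < threshold
```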
This mechanism is very similar to the POSAT paper, so we believe that we can reuse their analysis. More specifically, we expect to show the following:
The growth rate of the honest block tree is at least \frac{\lambda_h}{1 + \lambda_h \Delta};
The growth rate of the private block tree is at most \phi_c \lambda_a;
The adversary cannot "go back in time" due to the use of the VDF.
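As a sanity check on the first two bounds (all numbers here are made up purely for illustration; \phi_c is the amplification factor from the c-correlation analysis):

```python
# Illustrative parameters: lambda_h / lambda_a are the honest/adversarial
# block-production rates, delta is the network delay bound, and phi_c > 1
# is the c-correlation amplification factor.
lambda_h, lambda_a, delta, phi_c = 0.5, 0.2, 1.0, 1.5

honest_growth_lower = lambda_h / (1 + lambda_h * delta)  # lower bound
private_growth_upper = phi_c * lambda_a                  # upper bound

# Security requires the honest tree to outgrow any private tree:
# here 1/3 > 0.3, so this parameter point is (barely) safe.
assert honest_growth_lower > private_growth_upper
```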
Unlike the POSAT protocol, our VDF evaluation depends solely on a local longest chain. Therefore, it can be outsourced to a group of timekeepers as long as one of them is honest and always online.
The above new perspective allows us to compare different design choices.
Design choice 1: A new block of height (m+1)c triggers an entropy-injection event where m is a non-negative integer and c is relatively large. The entropy is from its ancestor block of height m c. The injection time is t_{mc} + T, where t_{mc} is the time slot the ancestor block is produced. If t_{mc} + T \le t_{(m+1)c}, we give up this injection opportunity.
Discussion: The growth rate of the honest block tree is still at least \frac{\lambda_h}{1 + \lambda_h \Delta} (because honest farmers will not take advantage of entropy injection at all). The growth rate of the private block tree is still at most \phi_c \lambda_a (because we may lose some injection opportunities). Also, the adversary cannot go back in time.
Design choice 2: A new block of height mc triggers an entropy-injection event where m is a non-negative integer and c is relatively large. The entropy is simply from this new block. The injection time is t_{mc} + T, where t_{mc} is the time slot this new block is produced. Here, T is relatively large.
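Under stated assumptions (heights and slot numbers as plain integers, and a hypothetical `slot_of_height` lookup on the local chain), the two injection rules above can be contrasted in a small sketch:

```python
from typing import Callable, Optional, Tuple

def injection_design_1(m: int, c: int, T: int,
                       slot_of_height: Callable[[int], int]
                       ) -> Optional[Tuple[int, int]]:
    # Block at height (m+1)c triggers injection; entropy comes from the
    # ancestor at height m*c and is injected at slot t_{mc} + T. If that
    # slot is not after the triggering block, the opportunity is lost.
    t_mc = slot_of_height(m * c)
    t_m1c = slot_of_height((m + 1) * c)
    inject_at = t_mc + T
    if inject_at <= t_m1c:
        return None  # give up this injection opportunity
    return (m * c, inject_at)  # (entropy-source height, injection slot)

def injection_design_2(m: int, c: int, T: int,
                       slot_of_height: Callable[[int], int]
                       ) -> Tuple[int, int]:
    # Block at height m*c is itself the entropy source; injection
    # happens T slots after it is produced (with T relatively large).
    t_mc = slot_of_height(m * c)
    return (m * c, t_mc + T)
```

Note that both rules schedule the same (source, slot) pair; they differ in which block triggers the event and in design choice 1's skip condition.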
I feel that a very similar discussion applies here. I will double check.
Is this a new design we have not discussed in the spec or Slack yet? Why would we take entropy from the ancestor block at height mc? The original design had a lookback parameter that was much larger than 1. I'm also not sure why we would use t_{mc} + T when the block at height (m+1)c is already known (which creates an edge case where the injection time might be before the next block, in which case we skip the injection completely).
And the original version written into the specification was like this, but entropy was taken from height (m-1)c, IIUC.
Design choice 1 is a simplified version of the design proposal suggested by Rahul. Now, we can formally reason about its security. Design choice 2 is a new proposal from you, when T is large. The attempt here is to provide a unified framework to compare different design proposals.
Note that when T = 0, design choice 2 reduces to the original design. I feel that the growth rate of the private block tree is still bounded by \phi_c \lambda_a via a sample-path-based argument. If this is true, then design choice 2 seems better because it helps timekeepers handle "short-range" forks directly without sacrificing security.
A quick follow-up of the previous reply: It seems that the optimal adversarial strategy to grow the private block tree fast is to only fork at the parents of godfather-blocks. That is, Lemma 13 in the original c-correlation paper also holds for our case. In other words, Design choice 2 seems to be good under a static hardware model.
An observation from the example we discussed: with c = 10, injections happen at t_{60} + T and t_{70} + T; if T > t_{70} - t_{60}, there might not be c blocks produced during T.
In our protocol, two things are somewhat decoupled: we sample entropy and we inject the (previously sampled) entropy at different times. In the c-correlation paper, these are the same event. We guarantee a sample every c blocks; however, between injections we are currently not guaranteed c blocks. This means that Lemma 13 and the godfather-block notion are about the injection-event frequency, not the sampling frequency.
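The c = 10 concern can be made concrete by counting blocks inside the interval between two injections (toy block times, purely illustrative):

```python
def blocks_in(block_slots, start, end):
    # Number of blocks produced in the slot interval (start, end].
    return sum(1 for t in block_slots if start < t <= end)

# Toy timeline: blocks 1..70 arrive every 3 slots (so t_60 = 180,
# t_70 = 210), then the chain slows to one block per 10 slots.
block_slots = [3 * i for i in range(1, 71)] + \
              [210 + 10 * k for k in range(1, 31)]

T = 40  # T > t_70 - t_60 = 30, as in the example above

# Between the injections at t_60 + T = 220 and t_70 + T = 250 only
# 3 blocks are produced -- far fewer than c = 10. Sampling guarantees
# a sample every c blocks, but injection gives no such guarantee on
# how many blocks fall between consecutive injection events.
n = blocks_in(block_slots, 180 + T, 210 + T)
```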