Currently, the “confirmation depth” that triggers archiving (if there is enough data) is set to 100 blocks (~10 min in regular operation).
According to @nazar-pc, it was just a value we considered “deep enough to not fork”, yet not so deep that archiving would take too long in case someone uploads data to the blockchain and has to wait for it to become retrievable from DSN.
We still have a TODO in the chain spec to change it to the correct value: https://github.com/subspace/subspace/blob/7745e54f52031e6e2430fd0dea90cb3007fc2d1b/crates/subspace-node/src/chain_spec.rs#L154
We contemplated the possibility of setting it to, say, an hour: 600 blocks. In that case we would essentially have to make the node store that many blocks plus a buffer before pruning, which means higher space requirements for the node. @Barak, could you list some of the reasons we discussed for pushing this depth deeper, and for avoiding making it smaller?
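For intuition, here is a minimal sketch of how the depth translates into archiving delay and minimum block retention. The ~6 s block time is inferred from “100 blocks ≈ 10 min” above; `PRUNING_BUFFER` and both function names are hypothetical, not the actual node code:

```rust
// Sketch: wall-clock archiving delay and minimum block retention for a
// given confirmation depth. BLOCK_TIME_SECS is inferred from the figures
// above; PRUNING_BUFFER is a hypothetical extra margin kept before pruning.
const BLOCK_TIME_SECS: u64 = 6;
const PRUNING_BUFFER: u64 = 10;

fn archiving_delay_secs(confirmation_depth_k: u64) -> u64 {
    confirmation_depth_k * BLOCK_TIME_SECS
}

fn min_blocks_to_store(confirmation_depth_k: u64) -> u64 {
    // The node must retain at least depth + buffer blocks before pruning.
    confirmation_depth_k + PRUNING_BUFFER
}

fn main() {
    for depth in [100u64, 600] {
        println!(
            "depth = {depth}: archiving delay ≈ {} min, must store ≥ {} blocks",
            archiving_delay_secs(depth) / 60,
            min_blocks_to_store(depth)
        );
    }
}
```

So moving from 100 to 600 blocks shifts the archiving delay from ~10 min to ~1 h and proportionally raises the node’s retention requirement.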
There’s a tension between predictability and the feasibility of costless simulation:
- The larger the “confirmation depth” (how far back we take the block in consensus before injecting it into the timechain), the more ‘predictable’ the (future) chain is, because all challenges are determined by that injected block. In our case, where challenges are derived from the timechain, this is less severe; still, someone with a faster implementation of the proof-of-time algorithm can “see into the future”.
- The smaller it is, the higher the chance of a fork in the blockchain that affects the injected block, allowing one to grind on the injected block.
Generally, we would like to prevent the possibility of grinding on the injected block, so making the “confirmation depth” large enough to avoid such forks is desirable.
We would like to follow the analysis in the paper Bitcoin’s Latency–Security Analysis Made Simple to determine (or at least gain some insight into) the probability of a deep-enough fork under certain assumptions. However, we currently have some complications that prevent us from using this work as is.
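As a rough starting point (this is the classic Nakamoto-style private-attack bound, not the paper’s tighter analysis, and it ignores our complications), an adversary controlling a fraction β < 1/2 of the block-production power catches up from k blocks behind with probability (β / (1 − β))^k:

```rust
// Sketch: classic random-walk catch-up probability (beta / (1 - beta))^k
// for an adversary with power fraction `beta` trying to revert a block
// buried `k` deep. A simple illustrative bound only, not the refined
// analysis from "Bitcoin's Latency-Security Analysis Made Simple".
fn catch_up_probability(beta: f64, k: u32) -> f64 {
    assert!(beta > 0.0 && beta < 0.5, "only meaningful for a minority adversary");
    (beta / (1.0 - beta)).powi(k as i32)
}

fn main() {
    for k in [6u32, 100, 600] {
        println!(
            "beta = 0.25, k = {k}: P(fork deeper than k) ≲ {:.3e}",
            catch_up_probability(0.25, k)
        );
    }
}
```

Even this crude bound suggests that deep forks become astronomically unlikely well before k = 600; the open question is how the numbers change under our actual assumptions.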
I thought we were talking about archiving here, not entropy injection. Did I misunderstand?
Yes, the values are the same right now and we are thinking about keeping them equal, but I don’t think we should necessarily conflate them just yet.