Subspace node "Archiver exited with error"

After upgrading the Subspace node to the gemini-3h-2024-mar-20 release, running the binary has been quite problematic. I keep getting an error about an attempt to switch to a different fork, and eventually the service shuts down.

2024-03-21T03:12:51.583200Z INFO Consensus: substrate: :gear: Preparing 0.0 bps, target=#723397 (9 peers), best: #722637 (0x7d02…7298), finalized #643607 (0x479a…cf9a), :arrow_down: 94.0kiB/s :arrow_up: 1.6kiB/s
2024-03-21T03:12:56.583730Z INFO Consensus: substrate: :gear: Preparing 0.0 bps, target=#723398 (8 peers), best: #722637 (0x7d02…7298), finalized #643607 (0x479a…cf9a), :arrow_down: 36.8kiB/s :arrow_up: 2.7kiB/s
2024-03-21T03:13:01.584012Z INFO Consensus: substrate: :gear: Preparing 0.0 bps, target=#723399 (9 peers), best: #722637 (0x7d02…7298), finalized #643607 (0x479a…cf9a), :arrow_down: 16.0kiB/s :arrow_up: 4.2kiB/s
2024-03-21T03:13:06.584667Z INFO Consensus: substrate: :gear: Preparing 0.0 bps, target=#723401 (9 peers), best: #722637 (0x7d02…7298), finalized #643607 (0x479a…cf9a), :arrow_down: 18.9kiB/s :arrow_up: 2.9kiB/s
2024-03-21T03:13:10.765563Z WARN Consensus: sc_proof_of_time::source: Proof of time chain reorg happened from_next_slot=4145113 to_next_slot=4142541
2024-03-21T03:13:10.765963Z INFO Consensus: sc_informant: :recycle: Reorg on #722637,0x7d02…7298 to #722638,0xfa6a…e450, common ancestor #722413,0x2412…3b5f
2024-03-21T03:13:10.770824Z INFO Consensus: substrate: :sparkles: Imported #722638 (0xfa6a…e450)
2024-03-21T03:13:10.770674Z ERROR Consensus: subspace_service: Archiver exited with error error=Attempt to switch to a different fork beyond archiving depth, can't do it: parent block hash 0x57ef…b8e0, best archived block hash 0x0703…8737
2024-03-21T03:13:10.775064Z ERROR Consensus: sc_service::task_manager: Essential task subspace-archiver failed. Shutting down service.
2024-03-21T03:13:11.001871Z WARN Consensus: txpool: Failed to update background worker: Closed(…)
Error: SubstrateService(Other("Essential task failed."))

This happened because you were running an old version with a relatively large farm. In the time since the runtime upgrade you managed to produce enough blocks to essentially create a small fork that you can’t jump off anymore. You’ll have to re-sync the node from scratch, but your farms should be fine.
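For intuition, the archiver only follows a reorg whose fork point is at or above the last block it has already archived; anything deeper is treated as immutable history. Below is a minimal, hypothetical sketch of that rule in Rust (simplified, not the actual subspace_service implementation; the function and parameter names are illustrative):

```rust
/// Illustrative only: a simplified stand-in for the archiver's fork check,
/// not the real subspace_service code.
///
/// `fork_point` is the block number of the common ancestor between the
/// current best chain and the fork the node wants to switch to.
/// `best_archived` is the number of the newest block that has already been
/// archived, i.e. turned into immutable archival history.
fn reorg_is_acceptable(fork_point: u64, best_archived: u64) -> bool {
    // Archived blocks can never be replaced, so a reorg is only acceptable
    // if the chains diverge at or after the best archived block. A deeper
    // fork is the "Attempt to switch to a different fork beyond archiving
    // depth" situation from the log above.
    fork_point >= best_archived
}

fn main() {
    // Fork point above the archived tip: the reorg can be followed.
    assert!(reorg_is_acceptable(722_600, 722_500));
    // Fork point below the archived tip: the archiver bails out and the
    // essential task shuts the service down.
    assert!(!reorg_is_acceptable(722_400, 722_500));
    println!("both checks behave as described");
}
```

This is also why, once the node's own fork grows past the archiving depth, a full re-sync is the only way back onto the canonical chain.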

I also encountered this problem. I deleted the ‘network’ folder under the ‘node’ directory, but did not delete the ‘db’ folder, and then the problem no longer occurred.

Deleting the ‘network’ folder has no (positive) effect on this.
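If a full re-sync is what’s needed (as suggested above), the data to clear is the node’s chain database rather than anything network-related, and the farm directory stays untouched. A rough, hypothetical sketch with made-up paths (adjust to the base path your node and farmer actually use):

```rust
use std::fs;
use std::path::Path;

fn main() -> std::io::Result<()> {
    // Hypothetical paths; substitute whatever base path your node and
    // farmer were actually started with.
    let node_db = Path::new("/path/to/node/db"); // chain database: safe to delete for a re-sync
    let farm_dir = Path::new("/path/to/farm");   // plotted farm data: keep this

    if node_db.exists() {
        // Removing the chain database makes the node sync from scratch on
        // the next start.
        fs::remove_dir_all(node_db)?;
    }

    // The farm directory is deliberately left alone; existing plots remain
    // valid after the node has re-synced.
    let _ = farm_dir;

    Ok(())
}
```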

The same error occurred for me; the node resumed normal operation after I restarted it several times. It is likely due to redundant data resulting from a fork.

Hm… that is interesting. I don’t think I have a good explanation for why a node restart could fix this without a full resync; I’ll need to think about it some.