Taurus: slow cache synchronization, with warning prompts

NODE:
VER: subspace-node-ubuntu-x86_64-skylake-taurus-2025-may-27
SERVER: AMD 7950X + 64 GB RAM; NETWORK: X520-XR82599ES 10G; Ubuntu 24.04 CLI

Running: node and operator (operator ID: 3)

Farmer & plotter
VER:subspace-farmer-ubuntu-x86_64-skylake-taurus-2025-may-27 OR subspace-farmer-ubuntu-x86_64-skylake-taurus-2025-may-15

farmer: dual Intel E5-2680 v4 + 96 GB RAM + 169 TB SSD (Samsung PM1733 7.68 TB x8 & Micron 7450 PRO 15.36 TB x8, ext4 filesystem); NETWORK: X520-XR82599ES 10G; Nvidia 3080 Ti x1; Ubuntu 24.04 CLI

Running:
nats-server v2.11.2
& cluster controller
& cluster cache
& cluster farmer
& cluster plotter
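For reference, the components above are typically launched roughly like this. This is only a sketch of my setup: the IPs, paths, sizes, and reward address are placeholders, and exact flag names can differ between farmer releases, so check `subspace-farmer cluster --help` for your build.

```shell
# nats.config – the Subspace cluster needs a larger max message size than
# the NATS default; 2MB is the commonly recommended value:
#   max_payload = 2MB
nats-server -c nats.config

# All cluster components communicate through the NATS server.
# <node-ip>, paths, sizes, and the reward address are placeholders.
subspace-farmer cluster --nats-server nats://<node-ip>:4222 controller \
    --base-path /path/to/controller \
    --node-rpc-url ws://<node-ip>:9944

subspace-farmer cluster --nats-server nats://<node-ip>:4222 cache \
    path=/path/to/cache,size=200GiB

subspace-farmer cluster --nats-server nats://<node-ip>:4222 farmer \
    --reward-address <your-reward-address> \
    path=/mnt/pm1733-1,size=7TiB

subspace-farmer cluster --nats-server nats://<node-ip>:4222 plotter
```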

When there are many on-chain transfers (for example, during the earlier XDM stress testing), the warnings increase and the cache struggles to synchronize. In that state, restarting the cache usually takes several hours or even a dozen hours to finish, and the cache only becomes usable once synchronization reaches 100%. Now that there are very few transfers, synchronization completes in a few minutes to about ten minutes.
nats-log.txt (1.7 KB)
cluster-controller.txt (5.7 KB)
cluster-cache.txt (371 Bytes)
farmer.txt (7.2 KB)
plotter.txt (7.4 KB)

To add: I have a total of 3 farmers like this, each with 169 TB, approximately 525 TB in total. I only run Taurus network nodes, not a mainnet node.

As mentioned on Farmer Office Hours yesterday, slow cache sync is related to problems with piece retrieval on Taurus. This is an issue we have been aware of and working on for a while. You can follow the progress here: Piece downloads are unreliable on Taurus · Issue #3537 · autonomys/subspace · GitHub

Thanks for reporting, but I'm afraid there is a pre-existing GitHub issue, so this post does not qualify for The Watcher's Oath. I'm sure you'll be back with more though!



Addendum: the network, CPU, and memory are not overloaded.