I do not believe this is actually the case. Memory usage reporting on Windows is odd: if you look at the farming process itself, it uses much less memory, and if other apps need RAM, you'll notice nothing crashes with out-of-memory errors. This was discussed on Discord, but now that it is on the forum, hopefully more people will be able to find it.
Previous networks worked very differently and shouldn't be compared to Gemini 3f directly. High network usage is expected during plotting; have you finished that process yet? Once plotting is done, your farmer will help the rest of the network sync (both nodes and farmers now sync from plots), but bandwidth usage should eventually settle at a lower level.
Node sync could indeed be faster; this is likely an upstream Substrate issue that I'll get to once I have more time (see sc_consensus_slots::check_equivocation is not guaranteed to catch equivocation · Issue #1302 · paritytech/polkadot-sdk · GitHub for upstream discussion on a related topic). There are some things we control that are heavily multi-threaded, but they happen relatively infrequently (archiving of blockchain history).
20% usage is during auditing, and the plan is to get it even lower; auditing is supposed to be space- and I/O-heavy, not CPU-heavy. When your farmer does find a solution, though, it has to generate a proof, which is indeed a heavily multi-threaded and computationally expensive process. How often you find solutions depends on the size of the network and the amount of space you have pledged. So this is expected behavior.
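As a rough back-of-the-envelope sketch of that last point: your expected solution rate scales with your share of total pledged space. The slot interval and space figures below are illustrative assumptions, not protocol constants.

```python
# Illustrative estimate of how often a farmer with a given share of the
# network's pledged space can expect to find a solution.
# The "one solution per slot network-wide" model and the numbers below
# are made-up assumptions, NOT protocol constants.

def expected_solutions_per_day(my_space_tb: float,
                               network_space_tb: float,
                               slots_per_day: int = 86_400) -> float:
    """Expected wins per day, assuming one solution per slot network-wide."""
    return slots_per_day * (my_space_tb / network_space_tb)

# e.g. 4 TB pledged against a hypothetical 40,000 TB network:
rate = expected_solutions_per_day(4, 40_000)
print(f"~{rate:.1f} solutions/day, i.e. one every {24 * 60 / rate:.0f} minutes")
```

The takeaway is that the CPU-heavy proof generation is rare and bursty: most of the time the farmer just audits, and doubling the network size halves how often you hit the expensive path.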
Yep, SATA is expected to be perfectly fine. Auditing is basically a lot of small random I/O, but when someone requests data from you, higher read volumes may be observed. However, if you have a lot of RAM, the OS will likely keep caches in RAM and you'll see fewer actual disk reads anyway.
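To make the access pattern concrete, here is a minimal sketch of "lots of small random reads" over a file. The chunk size and file layout are illustrative stand-ins, not the real plot format; the point is that audits touch small, uncorrelated offsets, so random-read latency matters far more than sequential throughput.

```python
# Sketch of an auditing-style access pattern: many small reads at random
# offsets across one large file. Sizes are illustrative; the real plot
# layout and read size differ.
import os
import random
import tempfile

CHUNK = 32            # bytes per "audit" read (illustrative)
PLOT_SIZE = 1 << 20   # 1 MiB stand-in for a multi-terabyte plot

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(PLOT_SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    # Each audit is one small positioned read at a random offset; the OS
    # page cache may absorb repeats, which is the RAM effect noted above.
    offsets = [random.randrange(0, PLOT_SIZE - CHUNK) for _ in range(1000)]
    chunks = [os.pread(fd, CHUNK, off) for off in offsets]
finally:
    os.close(fd)
os.unlink(path)

print(len(chunks), "reads of", CHUNK, "bytes each")
```

Note that `os.pread` is a POSIX call (not available on Windows); it is used here only to illustrate the positioned-read pattern.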
The goal of the protocol is to be bound by the amount of "fast enough" space. There is no goal to burn a lot of energy or destroy disks with unnecessary writes; in fact, we implement things in a way that minimizes both (ask people who ran previous networks how high write amplification was there compared to the latest iteration, for example).
Given how much RAM you have, and the fact that I don't think you use even 10% of it in practice, the only thing that matters is space. That said, we don't recommend that people buy hardware, since they might see negative ROI. The choice is yours, of course.
As long as the farmer is able to generate a solution in time you're good, but those CPUs are quite old and probably not energy-efficient by today's standards.
There should be no need to run multiple nodes and farmers; just use multiple farms if you have multiple physical disks.
One physical disk, one farm. No RAID, no fancy file systems. In the future, the farmer will likely be able to work with raw disks directly, with no file system on them at all, to improve efficiency.
This is mostly an implementation inefficiency. Make sure to upgrade to the latest releases; we're tackling plotting bottlenecks one by one. The goal is to be either CPU-bound or network-bound (whichever is weaker) until plotting is finished, which is not quite the case right now. Also, the farmer uses little memory and tries to be very deliberate with it; we might introduce optional flags to accelerate plotting at the cost of more memory, which would be very helpful in your case given how much RAM you have available.
Eventually it should make a difference. In the meantime, it can help with plotting, at the cost of proportionally larger memory usage.
To a degree. Right now blockchain history is still small, so your farmer will eventually cache everything locally and stop reaching out to the network for pieces, since it'll have everything it needs. That won't be the case once mainnet accumulates a lot of data: there you'll eventually have to download roughly as much data as your plot size, though it will probably take a while before we have terabytes of blockchain history.
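The cache-then-stop behavior described above can be sketched in a few lines. The names here are illustrative, not the farmer's actual API: the point is simply that once a piece has been fetched from the network, subsequent requests are served locally, so network traffic tapers off as the local cache fills.

```python
# Minimal sketch of "fetch once, then serve locally".
# fetch_from_network and get_piece are hypothetical names for illustration.

network_fetches = 0

def fetch_from_network(piece_index: int) -> bytes:
    """Stand-in for a network request; counts how often we hit the network."""
    global network_fetches
    network_fetches += 1
    return b"piece-%d" % piece_index

local_cache: dict = {}

def get_piece(piece_index: int) -> bytes:
    # Serve from the local cache when possible; fetch and remember otherwise.
    if piece_index not in local_cache:
        local_cache[piece_index] = fetch_from_network(piece_index)
    return local_cache[piece_index]

# First pass populates the cache; the second pass is fully local.
for i in range(5):
    get_piece(i)
for i in range(5):
    get_piece(i)
print("network fetches:", network_fetches)  # 5, not 10
```

While history is small the cache can hold everything, so fetches drop to zero; with terabytes of history on mainnet the cache can no longer cover the whole plot, which is why ongoing downloads become unavoidable there.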
Thanks for asking these on the forum, where people will be able to find the answers afterwards. These are great questions; the only thing I'd change is asking independent questions in separate topics so that discussions can evolve around each of them separately.
I hope this helps.