I can report running the AMD machine with concurrency 3 for more than 24 hours; max memory was 27G, as far as I noticed.
On the Intel machine (concurrency 2) I ran one instance plotting, with about 40TB already plotted; memory was around 20G. Later I added another instance to initiate piece cache sync. Some time after it started plotting, I found the first instance using 38G of memory. It fell to 12G immediately after I closed the second instance. I should mention that the second instance was plotting to some of the same SSDs as the first instance, though I don’t know if that’s connected to the memory issue.
Regarding many farms on one SSD instead of one farm: I still do that because the initial piece cache sync takes way longer for a 4TB farm than for a 500GB farm. Of course I could always start earlier…
Very odd! This might be related, but at the same time I wouldn’t expect that memory to be attributed to the process.
It only does so because it downloads all the pieces upfront. Without that you’d have to re-download them from the network all the time, which is even slower. I don’t think you’re saving time by doing this. In fact, with the amount of space pledged, the fastest approach is to add all disks at once, with one farm per disk, from the very beginning.
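For illustration, a one-farm-per-disk setup could look roughly like the sketch below with the advanced farmer CLI. This is only a hedged example: the paths, sizes, node URL and reward address are placeholders, and exact subcommand and flag names can differ between releases, so check `subspace-farmer farm --help` on your build.

```
:: Windows cmd sketch: one farm per physical disk, all added from the start (illustrative values only)
subspace-farmer farm ^
  --node-rpc-url ws://127.0.0.1:9944 ^
  --reward-address REWARD_ADDRESS_PLACEHOLDER ^
  path=D:\farm1,size=4TB ^
  path=E:\farm2,size=4TB ^
  path=F:\farm3,size=4TB
```

The point of the layout is that each disk gets its own farm in a single farmer instance, so the piece cache is synced once and shared, instead of being re-synced by separate instances.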
Thanks for the report, but the information isn’t really actionable. Please read the discussion above to see what information I need in order to figure out what is going on. Simply saying you’re running the farmer on Windows and it exits isn’t enough, unless it happens to absolutely everyone all the time, which is quite clearly not the case.
It’d be nice to have logs from that run
I ran it again on the Intel machine: one instance with all the plots, and after some time I added a second instance that was just plotting. At 18:24:
- the first was replotting and using 11G
- the second was plotting and using 32G

Logs: https://we.tl/t-Nwdd2CyLkD

Both were running with concurrency 2. 33G was the highest I saw, and after some time it came back down.
Hm… so a single 4T farm resulted in 33G of memory usage, very interesting. If you restart it now that piece cache sync is done and everything has settled, does it use a lot of memory again? Does memory usage grow gradually or fairly quickly? I’m curious, for example, what memory usage looks like right after the first sector is plotted.
For things that I have not yet been able to reproduce, I’m looking for anecdotal evidence of various patterns.
You can also try the farmer from Snapshot build · subspace/subspace@f6ed626 · GitHub, which has some optimizations, though nothing that would obviously reduce memory usage dramatically in the second case with just plotting.
I’m also very curious whether `--plot-cache false` makes any difference to memory usage (it is `true` on the second farmer, where you’re plotting).
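For reference, a hedged sketch of how that could be passed on the plotting-only instance; as before, the path, size, node URL and reward address are placeholders, and the exact flag spelling may differ between farmer releases:

```
:: Windows cmd sketch: plotting-only instance with the plot cache disabled (illustrative values only)
subspace-farmer farm ^
  --node-rpc-url ws://127.0.0.1:9944 ^
  --reward-address REWARD_ADDRESS_PLACEHOLDER ^
  --plot-cache false ^
  path=E:\farm2,size=4TB
```

Comparing memory usage of this instance with and without `--plot-cache false` is what would tell us whether the plot cache is involved.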
Assuming the memory usage issue is of the same nature as the one fixed for some users in jun-11, this farmer build should help (don’t use the node from this build, just the farmer): Snapshot build · subspace/subspace@2469498 · GitHub
OK, I can report that after using this version for a few days, its memory usage is low and stable, also when running two instances on each machine (AMD with concurrency 3, Intel with concurrency 2).
Nice, thanks for confirming! We’ll ship it in the next release then. Appreciate all the experiments.
I wish I knew why it helps on Windows though