Solution receiver is closed, likely because farmer was too slow

Creating this thread on behalf of @Lazy_Penguin.

He tried enabling farming during the initial plotting on all of his servers, but on some, farming brought no rewards and the following lines started appearing in the node logs:

2023-11-02T17:49:22.928296Z Solution receiver is closed, likely because farmer was too slow slot=175615 sector_index=30 public_key=st83mXpkPYfLTGAv1XKxESt74Do6Mww6tkCFUeH3h7mLkwjSb

His servers:

  1. CPU: 5900X (12 cores, 24 threads); 32 GB RAM; plots: 2 x 750 GB, 2 x 900 GB; only NVMe SSDs; farmer arguments:
    --target-connections 100 --pending-in-connections 250 --in-connections 50 --farm-during-initial-plotting --farming-thread-pool-size 20 --plotting-thread-pool-size 4

  2. CPU: 5900X (12 cores, 24 threads); 64 GB RAM; plots: 6 x 2300 GB; only SATA SSDs; farmer arguments:
    --target-connections 500 --pending-in-connections 200 --in-connections 200 --farm-during-initial-plotting --farming-thread-pool-size 16 --plotting-thread-pool-size 8

  3. CPU: 5950X (16 cores, 32 threads); 128 GB RAM; plots: 7 x 900 GB, 16 x 850 GB; mixed NVMe/SATA SSDs; farmer arguments:
    --target-connections 100 --pending-in-connections 250 --in-connections 50 --farm-during-initial-plotting --farming-thread-pool-size 16 --plotting-thread-pool-size 16

  4. CPU: 2 x 2696v3 (36 cores, 72 threads); 64 GB RAM; plots: 10 x 900 GB; only NVMe SSDs; farmer arguments:
    --target-connections 400 --pending-in-connections 150 --in-connections 150 --farming-thread-pool-size 10 --plotting-thread-pool-size 14 --farm-during-initial-plotting


They all have similar system settings.
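For context, the flags above are passed to the farmer at launch. A minimal sketch of what such an invocation might look like (the subcommand, plot path, and size below are placeholders, not taken from the original post; only the listed flags come from the report):

```shell
# Hypothetical launch pattern for server 2 — adjust binary name,
# subcommand, and plot paths/sizes to your actual setup.
subspace-farmer farm \
  path=/mnt/ssd1,size=2300G \
  --farm-during-initial-plotting \
  --farming-thread-pool-size 16 \
  --plotting-thread-pool-size 8 \
  --target-connections 500 \
  --pending-in-connections 200 \
  --in-connections 200
```

Removing `--farm-during-initial-plotting` (it is off by default) avoids the behavior discussed below.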

The problem occurred on servers 1 and 2.

Some tests were done on server 2. Increasing the farming thread pool size to 40 didn't help. But after reducing the plot sizes to the already-plotted space (6 x 161 GB) and several restarts, the farmer started earning rewards. Before that, the problem had persisted through several reboots.

The same approach didn't help on server 1.

Farmer logs from server 2 with RUST_LOG=info,subspace_farmer=debug at the time these messages occurred:
https://oshi.at/qccw (link will be valid up to 09.11)

This is 100% expected behavior and the reason why that flag is not enabled by default.

Farming will most likely not succeed during plotting, producing various timeouts like the one you quoted. Yet it will still consume CPU trying, increasing plotting time in the process.

Also, the Gemini 3g software logs all cases when this happens, while 3f silently ignored some of them, which led to confusion about why people with large farms weren't seeing rewards. Now the reason should be clear.


I had exactly the same error, and it went away when I changed my node script to match the one described in the farming doc.

The difference between this and my old script (which I kept copying between testnets) was that the old one still had "archive" set for both pruning arguments and also still included the `--execution wasm` argument.
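A sketch of the difference being described, assuming standard Substrate-style node flags (the exact flag names and values here are my reconstruction from the description above, not copied from the poster's script):

```shell
# Old script (carried over from earlier testnets) — hypothetical reconstruction:
#   --state-pruning archive --blocks-pruning archive --execution wasm
#
# New script, matching the farming doc — drop the deprecated
# --execution flag and the "archive" pruning settings:
subspace-node \
  --chain gemini-3g \
  --state-pruning 256 \
  --blocks-pruning 256
```

Treat the pruning values as placeholders; the point is that the doc's script no longer uses `archive` pruning or `--execution wasm`.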

Now this may be a coincidence, but the issue went away on 8 servers right after restarting the node with the changed script. I noticed this because I set up some new servers using the script from farming.md instead of copying my old one, and didn't see the same error there.

How’s the plotting speed with your configuration?

As I indicated at the very top of the post, I created this thread on behalf of @Lazy_Penguin. You can contact him and ask about that.