Logs are not attached here, but were provided to me in Discord.
The reason here is that the farmer wasn't able to audit the whole plot fast enough.
Here is a snippet of the audit for one slot:
```
2023-08-30T22:05:18.090262Z DEBUG single_disk_farm{disk_farm_index=3}: subspace_farmer::single_disk_farm: New slot slot_info=SlotInfo { slot_number: 1693433118, global_challenge: ..., solution_range: 23941329583, voting_solution_range: 239413295830 }
2023-08-30T22:05:18.090287Z DEBUG single_disk_farm{disk_farm_index=2}: subspace_farmer::single_disk_farm: New slot slot_info=SlotInfo { slot_number: 1693433118, global_challenge: ..., solution_range: 23941329583, voting_solution_range: 239413295830 }
2023-08-30T22:05:18.090296Z DEBUG single_disk_farm{disk_farm_index=1}: subspace_farmer::single_disk_farm: New slot slot_info=SlotInfo { slot_number: 1693433118, global_challenge: ..., solution_range: 23941329583, voting_solution_range: 239413295830 }
2023-08-30T22:05:18.090304Z DEBUG single_disk_farm{disk_farm_index=0}: subspace_farmer::single_disk_farm: New slot slot_info=SlotInfo { slot_number: 1693433118, global_challenge: ..., solution_range: 23941329583, voting_solution_range: 239413295830 }
2023-08-30T22:05:18.090335Z DEBUG single_disk_farm{disk_farm_index=0}: subspace_farmer::single_disk_farm::farming: Reading sectors slot=1693433118 sector_count=1498
2023-08-30T22:05:18.674040Z DEBUG single_disk_farm{disk_farm_index=3}: subspace_farmer::single_disk_farm::farming: Reading sectors slot=1693433118 sector_count=2200
2023-08-30T22:05:19.392995Z DEBUG single_disk_farm{disk_farm_index=1}: subspace_farmer::single_disk_farm::farming: Solution found slot=1693433116 sector_index=692
2023-08-30T22:05:19.393824Z DEBUG single_disk_farm{disk_farm_index=1}: subspace_farmer::single_disk_farm::farming: Reading sectors slot=1693433118 sector_count=2200
2023-08-30T22:05:19.762537Z DEBUG single_disk_farm{disk_farm_index=2}: subspace_farmer::single_disk_farm::farming: Reading sectors slot=1693433118 sector_count=2200
```
There are 4 farms on this machine; they all receive the new slot info within 1 ms of each other.
We can see that one solution was indeed found, but it was found more than a second after the slot arrived, which is too long.
Some farms only started auditing more than a second later, meaning they were likely still busy processing the previous slot by that time.
For a 2350 GB plot the farmer would need to do ~2300 random 32-byte reads every second, roughly one small read per sector per slot, which might be a bit too much for certain SSDs.
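For reference, here is a minimal back-of-the-envelope sketch of where a figure like that comes from. The ~1 GB sector size and the one-second slot interval are assumptions for illustration, not exact protocol constants:

```rust
// Back-of-the-envelope estimate of the audit read rate.
// ASSUMPTIONS (not exact protocol constants): ~1 GB per sector, ~1 s per slot.
fn main() {
    let plot_bytes: u64 = 2_350_000_000_000; // 2350 GB plot
    let sector_bytes: u64 = 1_000_000_000; // assumed ~1 GB per sector
    let sectors = plot_bytes / sector_bytes; // ≈ 2350 sectors
    // One small random read per sector per slot, with slots ~1 s apart,
    // so required reads/sec ≈ number of sectors:
    println!("~{sectors} random 32-byte reads every second");
}
```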
So either the drives are slow or something else impacts the performance; either way, the farmer needs to do A LOT of random reads.
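If you want to sanity-check the drives themselves, a dependency-free sketch along these lines approximates the access pattern. This is not farmer code; the file path and read count are placeholders:

```rust
// Hypothetical micro-benchmark: how many random 32-byte reads per second can a
// drive sustain on one large file? For a meaningful number, the file should be
// much larger than RAM, otherwise the page cache will inflate the result.
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::time::Instant;

fn main() -> std::io::Result<()> {
    let path = "/path/to/plot-file"; // placeholder: any big file on the drive under test
    let mut file = File::open(path)?;
    let len = file.metadata()?.len();
    assert!(len > 32, "need a file bigger than one read");

    // Tiny xorshift PRNG so the sketch has no external dependencies.
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
    let mut next_offset = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        state % (len - 32)
    };

    let reads = 10_000_u32; // placeholder sample size
    let mut buf = [0u8; 32];
    let start = Instant::now();
    for _ in 0..reads {
        file.seek(SeekFrom::Start(next_offset()))?;
        file.read_exact(&mut buf)?;
    }
    let elapsed = start.elapsed().as_secs_f64();
    println!("{:.0} random 32-byte reads/sec", f64::from(reads) / elapsed);
    Ok(())
}
```

Note that this issues one read at a time (queue depth 1), while the farmer audits many sectors concurrently, so treat the result as a rough lower bound rather than the drive's ceiling.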
Future Gemini 3 versions after 3f will have a heavier compute component when producing a proof, but they will also have more time for everything else, so this audit time will not be an issue. Still, if we can improve it, that would be great.
What are those U.2 NVMe drives, BTW? Ideally the exact model.
And is there a significant CPU load on some of the cores during farming?
If compute impacts the audit, we might be able to do something about it.
In htop or a similar tool, if you enable thread names (in htop: F2 → Display options → "Show custom thread names"), you should be able to see which operations on the farmer consume the most CPU.
29% average CPU utilization seems quite high; I'd expect less on such a beefy processor.