I hope Nazar will be interested in looking further into it, tuning the ‘protocol’ if needed so that crappy disks can still be good for Subspace.
It’s worth it: if crappy disks are also good for Subspace, there’ll be more of them on the netspace. But that’s only my personal view; luckily I have only 6 x 4TB Silicon Power SATA drives that could be considered ‘crappy’ ones.
There is no way for something like Subspace to squeeze a proof out if the drive is effectively locked up for 15 seconds.
And I don’t have any problems with Subspace working properly on the drive except for those latency spikes. I just hope to avoid having the drive in an incorrect mode if I’m unlucky during farmer startup. So far that hasn’t caused any issues with reward signing, to be fair (I had my drive set to full-sector proof mode for a day to test, and all rewards were signed successfully).
If you will not consider adding it as a parameter to the set method (and I really do understand why you don’t want to), please consider this alternative solution:
When the user has run the more scientific and accurate benchmark from the CLI, save the results. I believe this is already being done, to a ‘target’ directory. Before the startup benchmark is about to run, consult that file; if it has results, use those instead of delaying startup for a less rigorous measurement.
Pros:
Users have a way to remove the benchmarking delay at every farmer startup
Users can get consistent results regardless of what else their system is doing when they start their farmer
No additional parameters needed for the executable
Can probably reuse the code already written for the benchmarks to read the target directory and get the last results
Fulfills a request already made by multiple people
Aside from the user optionally running the benchmarks once, it remains effectively fully automated
The more scientific and accurate benchmark results can be used, rather than the abbreviated ones currently run at startup
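To make the proposal concrete, here is a minimal sketch of the "reuse saved benchmark results" flow. This is illustration only, not the farmer's actual code (the farmer is written in Rust): the cache path, file format, and function names are all made up for the example.

```python
import json
from pathlib import Path

# Hypothetical cache location; the real farmer would choose its own path/format.
CACHE_FILE = Path("target/proving_benchmark.json")

def quick_startup_benchmark(farm_id):
    """Stand-in for the abbreviated benchmark currently run at startup."""
    return {"farm_id": farm_id, "fastest_mode": "concurrent-chunks"}

def save_full_benchmark(farm_id, fastest_mode):
    """Called after the user runs the rigorous CLI benchmark."""
    results = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    results[farm_id] = {"fastest_mode": fastest_mode}
    CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
    CACHE_FILE.write_text(json.dumps(results))

def choose_proving_mode(farm_id):
    """Prefer saved full-benchmark results; fall back to the quick check."""
    if CACHE_FILE.exists():
        cached = json.loads(CACHE_FILE.read_text()).get(farm_id)
        if cached is not None:
            return cached["fastest_mode"]  # skip the startup delay entirely
    return quick_startup_benchmark(farm_id)["fastest_mode"]
```

With this shape, a disk that has been benchmarked once never delays startup again, while disks without saved results still get the current quick check.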
I’m farming with almost 200 disks, and I can say that I have never seen any disk whose concurrent-chunks proving time is worse than its whole-sector proving time. Of the 200, about 20 are connected via USB, both SATA and NVMe.
So I’m on the same side as Qwinn here. If I could choose, I’d just opt for concurrent chunks as the default proving method for all of my disks, so that I can skip this step at startup.
My suggestion is slightly different from his. We could add a flag, e.g. --whole-sector-prove 1,3,5. Only if this flag is set would whole-sector proving be applied, and only to the specified farms (farm #1, farm #3, and farm #5 in this example).
This would give another option to senior farmers, while keeping the best proving method for the 99% of normal farmers. And there would be no fast check at startup, a check that, as we already think/know, only confirms something we’re pretty sure of: that the fastest proving method is concurrent chunks.
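The proposed flag is easy to sketch. This is a hypothetical illustration of the suggested --whole-sector-prove behavior, not an existing farmer option; the function names are made up.

```python
def parse_whole_sector_flag(value):
    """Parse a flag value like '1,3,5' into a set of farm indices."""
    return {int(part) for part in value.split(",") if part.strip()}

def proving_mode_for_farm(farm_index, whole_sector_farms):
    """Whole-sector proving only for listed farms; concurrent chunks otherwise."""
    if farm_index in whole_sector_farms:
        return "whole-sector"
    return "concurrent-chunks"
```

So `--whole-sector-prove 1,3,5` would force whole-sector proving on farms 1, 3, and 5, while every unlisted farm defaults to concurrent chunks with no startup check.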
This mandatory benchmark is really annoying, and unnecessary as far as I can see, especially since we are currently at the testing stage with many farmer restarts, so the disks have to be tested again and again. I really hope this can be removed, or at least made optional.
@nazar-pc so this is not yet in the latest release (Apr 26). Please still consider adding it. It would be helpful for me, and for farmers like me who farm with 20+ disks in a single PC, to have the option to skip it. After so much running, I know all of my disks perform best with ‘random chunks’ proving, but we have to do this again and again every time we restart the farmer or upgrade to a new release.
I have almost 300 disks now, most of them 2TB, so having the option to skip it would save me a lot of time with each release upgrade.