--max-pieces-in-sector <MAX_PIECES_IN_SECTOR>: how to use?

Can this parameter be used? My hard disk setup is as follows: the capacity is 7 TB, but my CPU is not very powerful. How should I configure it?

You can, but you really shouldn’t; as the documentation for the option says, it is mostly for developers. You will also have to replot if you want to change the value: the parameter affects the sector itself, so a new value only takes effect after replotting.

What it does is affect the size of the sector. At the protocol level we only care about the maximum size, so you can decrease it all the way down to 1, while the default is 1000. By going lower you decrease the cost of proving, but you increase the number of IOPS and the amount of CPU required for auditing, so it will most likely make things worse, not better. Folks with large plots are already having challenges with auditing, and this would make it proportionally worse for you, especially if your CPU does not run at a high frequency.
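To make that trade-off concrete, here is a rough back-of-envelope sketch (not the farmer's actual code; the ~1 MiB piece size and the one-read-per-sector-per-challenge auditing model are simplifying assumptions for illustration):

```python
# Back-of-envelope: how --max-pieces-in-sector affects sector count and
# auditing load. Piece size is assumed to be ~1 MiB for illustration.
PIECE_SIZE_MIB = 1

def sectors_for_plot(plot_size_mib: int, max_pieces_in_sector: int) -> int:
    """Number of whole sectors that fit in a plot of the given size."""
    sector_size_mib = max_pieces_in_sector * PIECE_SIZE_MIB
    return plot_size_mib // sector_size_mib

plot_mib = 7 * 1024 * 1024  # ~7 TiB plot, as in the question

default_sectors = sectors_for_plot(plot_mib, 1000)  # default value
small_sectors = sectors_for_plot(plot_mib, 100)     # 10x smaller sectors

# If every sector contributes a random read per challenge, 10x more
# sectors means roughly 10x more IOPS and CPU work for auditing.
print(default_sectors)  # 7340
print(small_sectors)    # 73400
```

So dividing the value by 10 multiplies the number of sectors, and with it the per-challenge auditing work, by roughly 10.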

Generally, a modern quad-core consumer CPU should be fine for farming. We will improve auditing so that it works better with larger plots, but changing this parameter is not a solution.

You’re saying your CPU is not good enough; can you name the exact model?

My configuration is as follows:
2x EPYC 7V13
256GB
13x Samsung PM1733 U.2 7.68TB
Ubuntu 22.02

My node startup parameters:

./subspace-node-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 \
     --chain gemini-3f \
     --execution wasm \
     --blocks-pruning 256 \
     --state-pruning archive \
     --validator \
     --name "ooplay-node-2" \
     --in-peers 100 \
     --out-peers 100 \
     --rpc-methods unsafe \
     --rpc-external \
     --rpc-cors all \
     --no-private-ipv4 \
     --rpc-max-request-size 1000 \
     --rpc-max-response-size 1000 \
     --rpc-max-subscriptions-per-connection 8094 \
     --rpc-max-connections 200

I tried to start the farms with 13 farmer processes; the commands are as follows:

./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.117.32:58452 path=/f2/1,size=1.75TB path=/f2/2,size=1.75TB path=/f2/3,size=1.75TB path=/f2/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.117.32:58452 path=/f3/1,size=1.75TB path=/f3/2,size=1.75TB path=/f3/3,size=1.75TB path=/f3/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.117.32:58452 path=/f4/1,size=1.75TB path=/f4/2,size=1.75TB path=/f4/3,size=1.75TB path=/f4/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.117.32:58452 path=/f5/1,size=1.75TB path=/f5/2,size=1.75TB path=/f5/3,size=1.75TB path=/f5/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.117.32:58452 path=/f6/1,size=1.75TB path=/f6/2,size=1.75TB path=/f6/3,size=1.75TB path=/f6/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.98.59:58452 path=/f7/ 1,size=1.75TB path=/f7/2,size=1.75TB path=/f7/3,size=1.75TB path=/f7/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.98.59:58452 path=/f8/ 1,size=1.75TB path=/f8/2,size=1.75TB path=/f8/3,size=1.75TB path=/f8/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.98.59:58452 path=/f9/ 1,size=1.75TB path=/f9/2,size=1.75TB path=/f9/3,size=1.75TB path=/f9/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.98.59:58452 path=/f10/1,size=1.75TB path=/f10/2,size=1.75TB path=/f10/3,size=1.75TB path=/f10/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.98.59:58452 path=/f11/1,size=1.75TB path=/f11/2,size=1.75TB path=/f11/3,size=1.75TB path=/f11/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.98.59:58452 path=/f12/1,size=1.75TB path=/f12/2,size=1.75TB path=/f12/3,size=1.75TB path=/f12/4,size=1.75TB
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa --node-rpc-url=ws://43.248.98.59:58452 path=/f13/1,size=1.75TB path=/f13/2,size=1.75TB path=/f13/3,size=1.75TB path=/f13/4,size=1.75TB

But I found that this causes my CPU usage to stay at 100%, and there is no plotting progress on the farms. All the log messages look like this:

2023-09-09T06:29:27.487318Z  INFO single_disk_farm{disk_farm_index=1}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=300
2023-09-09T06:29:27.489845Z  INFO single_disk_farm{disk_farm_index=1}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=66
2023-09-09T06:29:27.632401Z  INFO single_disk_farm{disk_farm_index=3}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=5015
2023-09-09T06:29:27.634838Z  INFO single_disk_farm{disk_farm_index=3}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=524
2023-09-09T06:29:37.212317Z  INFO single_disk_farm{disk_farm_index=2}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=1022
2023-09-09T06:29:37.214979Z  INFO single_disk_farm{disk_farm_index=2}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=4742
2023-09-09T06:29:54.417780Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece succeeded. missing_piece_index=4632
2023-09-09T06:29:54.419876Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer_components::segment_reconstruction: Recovering missing piece... missing_piece_index=4444

You have tuned a bunch of parameters on the node unnecessarily. Also, if you are running all of this on the same machine, you don’t need multiple farmer processes; just use one farmer unless you run into limits.
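For example, a single farmer invocation can take multiple `path=...,size=...` entries at once. A sketch using the reward address, node URL, and paths from your own commands (truncated to a few entries for brevity; adjust to your layout):

```shell
# One farmer process serving many farm directories, using the same
# path=...,size=... syntax as in the commands above.
./subspace-farmer-ubuntu-x86_64-skylake-gemini-3f-2023-sep-05 farm \
    --reward-address stBCTNw9Rxin4YHz5q3VcnzhjYhSFjmcHDu8Ks5L3dXiNdxaa \
    --node-rpc-url=ws://43.248.98.59:58452 \
    path=/f2/1,size=1.75TB \
    path=/f2/2,size=1.75TB \
    path=/f3/1,size=1.75TB
    # ...one path entry per farm directory
```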

Judging from the messages, you are still plotting, so it is no wonder there isn’t much CPU power left for farming yet. Once plotting is over, it should start farming properly.

My other E5 machine has already completed 10%, while my EPYC is only at 0.12% so far. Why is this? AMD’s CPUs seem to be slower.

There are multiple generations of both E5 Xeons and EPYC processors with vastly different performance characteristics. You need to be more specific. Also plotting is not always CPU-bound (at least right now, unfortunately).

Since all my farms are connected to the same node and the hard disks are the same, it should be a CPU problem: one machine has an E5-2697A v4 and the other an EPYC 7V13. At the same time, I have seen many people on Discord saying that plotting speed on EPYC is too low.

The EPYC 7V13 is for sure many times faster, so there must be another reason; probably the other farmer has had less luck downloading pieces from other farmers quickly.

So is there any way to improve this “luck”? I did some calculations: my AMD EPYC plots about 0.12% per day, so it may take me over a year to complete plotting.

I’m not sure there is much in your control, but try to upgrade to newer releases as we publish them.
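For reference, the arithmetic behind that time estimate, assuming the observed 0.12% per day rate stays constant (a big assumption, since plotting speed varies with how quickly pieces can be downloaded):

```python
# Naive time-to-completion estimate at a constant plotting rate.
percent_done = 0.12      # plotted so far
percent_per_day = 0.12   # observed daily progress
days_remaining = (100 - percent_done) / percent_per_day
print(round(days_remaining))  # 832 -- well over two years at this rate
```

At that rate the outlook is actually worse than a year, which is why upgrading to releases with faster piece retrieval matters more than any single flag.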