An Ambassador (@Seryoga Leshii Леший#2191) described their experience running a 2.5 TiB plot and reported that the pledged value for space currently shown on the Explorer is very low.
Server spec:
Linux
Modern 16-core (32-thread) CPU
128 GB RAM
NVMe SSD
1 Gbit/s network bandwidth
"I’m using Substrate-cli with a 2.5 TiB plot size.
During synchronization, the farmer ran alongside the node. The average sync speed was around 2.2 blocks per second. Peak RAM consumption was around 33 GiB. A friend of mine, with a much larger plot size, reported constant consumption of around 16 GiB, but he periodically reboots his server. The main load came from the node (I can't give exact values); the load from the farmer was no more than 100%. Drive load was around 220 write IOPS with almost no reads.
After synchronization finished, the farmer began to load the server more heavily. The average CPU load is around 416%, but it is spiky (most of the time it stays around 200%). Average CPU load increases in proportion to plot size (to ~740% when the plot was increased to 5 TiB). RAM consumption is very low, around 4 GiB for node + farmer combined. Almost all 128 GB of RAM is filled with buffer/cache, but this is not critical. At the moment, the average drive load is around 7,700 read IOPS and 1,000 write IOPS. Writes occur periodically in large batches.
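IOPS figures like these can be reproduced with standard Linux tooling. A minimal sketch, assuming the `sysstat` package is installed and `nvme0n1` as a hypothetical device name (substitute your actual drive):

```shell
# Sample extended device statistics (r/s, w/s columns are read/write IOPS)
# once per second; requires the sysstat package.
iostat -dx 1 nvme0n1

# Alternative without sysstat: read the kernel's raw counters directly.
# In /proc/diskstats, field 4 is completed reads and field 8 is completed
# writes; sampling twice and subtracting gives IOPS over the interval.
awk '$3 == "nvme0n1" { print "reads:", $4, "writes:", $8 }' /proc/diskstats
```

The `iostat` form is easier for live watching; the `/proc/diskstats` form is handy for scripted periodic logging.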
The node + farmer currently have over 1,200,000 file descriptors open! The number of network connections ranged from about 40k to 75k; after some tuning, it decreased to ~25k."
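For anyone wanting to check the same numbers on their own server, here is a minimal sketch; the process name `subspace-node` is an assumption, so substitute whatever binary you actually run:

```shell
# Count open file descriptors for a process (process name is an assumption)
PID=$(pgrep -f subspace-node | head -n1)
ls "/proc/$PID/fd" | wc -l

# Current per-process soft limit on open files
ulimit -n

# Summary of current socket/connection counts
ss -s

# To raise the limit for a systemd-managed service, add to the unit file:
#   [Service]
#   LimitNOFILE=2000000
```

If the soft limit is well below the observed descriptor count you would see "Too many open files" errors, so raising `LimitNOFILE` (or the equivalent `ulimit`/`limits.conf` setting) is the usual fix at this scale.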
Discord thread: Discord