The goal of this thread is to define a good initial solution range for our chain, and to validate whether the current solution range and audit_chunk sizes are adequate for expected chain growth.
Right now the initial solution range is set as follows:

SolutionRange is held in u64 and SLOT_PROBABILITY is 1/6, so INITIAL_SOLUTION_RANGE = (2^{64}-1)*1/6
The goal is to determine a more precise bound.
We audit chunks of size 8 bytes, so each ticket has a value uniformly distributed in the range 0...2^{64}-1.
For simplicity, assume the target challenge is c = 0.
For a given solution range t, 0 < t < 2^{64}, the number of winning tickets is the number of tickets with a value less than or equal to t. The probability of a given ticket having a value below t is t/2^{64}. The expected number of winning tickets across the whole network is therefore n*t/2^{64}, where n is the total number of audited tickets. To find the solution range such that one ticket wins with probability 1/6, we solve n*t/2^{64} = 1/6 for t.
If we start the network with 50 GiB of space pledged, this gives us 50k pieces, or roughly 38 sectors, plotted. In each sector, one s-bucket is audited. This s-bucket contains 1300(1-1/e) chunks, and four times as many tickets (since audit_chunk_size is 8 bytes, not the 32 bytes of a KZG chunk). In total, for each challenge we audit n = 38*1300(1-1/e)*4 tickets in this initial phase.
Solving for t yields t \approx 2^{44}. If we consider that our challenge c is not 0, and that we accept winning solutions on both sides of c, within the range c \pm t/2, we further reduce t to 2^{43} and set INITIAL_SOLUTION_RANGE = 1 << 43.
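The derivation above can be sketched numerically. This is a back-of-envelope check using only the constants from this post (38 sectors, 1300(1-1/e) chunks per s-bucket, 4 tickets per chunk), not the actual consensus implementation:

```rust
// Rough sketch: reproduce the INITIAL_SOLUTION_RANGE derivation for
// ~50 GiB of initially pledged space. Constants come from this thread.
fn main() {
    let slot_probability = 1.0 / 6.0; // SLOT_PROBABILITY
    let sectors = 38.0; // ~50 GiB pledged => ~38 sectors
    let s_bucket_chunks = 1300.0 * (1.0 - (-1.0f64).exp()); // 1300*(1 - 1/e)
    let tickets_per_chunk = 4.0; // 32-byte KZG chunk / 8-byte audit chunk
    // Total tickets n audited per challenge across the network
    let n = sectors * s_bucket_chunks * tickets_per_chunk;
    // Solve n * t / 2^64 = 1/6 for t
    let t = 2f64.powi(64) * slot_probability / n;
    println!("n ≈ {:.0}, t ≈ 2^{:.2}", n, t.log2());
    // Accepting solutions on both sides of the challenge halves the range
    println!("two-sided: t/2 ≈ 2^{:.2}", (t / 2.0).log2());
}
```

Running this lands t between 2^{44} and 2^{45}, consistent with rounding down to t \approx 2^{44} and then to 2^{43} for the two-sided window.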
Next, we should estimate the n at which t tends to 1, to estimate the amount of pledged space that would cause this to happen. Ideally, we never want to reach the point where solution_range = 1.

The previous reasoning was simple: assuming we plot one piece, it will win on every slot with SolutionRange::MAX; if we want it to win only sometimes, we multiply that by the probability.

For the updated consensus I'd need similar reasoning for the solution range with 1 piece. I can then adjust it depending on the number of pieces we assume to be plotted at the start (note: it has nothing to do with the size of the seeded history, only with the amount of space plotted by farmers).

So my understanding in non-scientific words: if we plot one piece, we have a certain probability that an s-bucket will be winning, multiplied by the number of audit chunks in the bucket (we assume that all chunks will win).

It seems to me that the probability of hitting an s-bucket with a chunk in this case is 1/2 (half of them will be empty), and each s-bucket that has a chunk consists of 32/8 = 4 audit chunks.
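If that reading is right, the one-piece version could be sketched as follows. All constants here (the 1/2 non-empty probability, 4 audit chunks per bucket, the 1300-piece scaling) are assumptions taken from this thread, not the actual consensus code:

```rust
// Hedged sketch of the one-piece solution range described in words above.
fn main() {
    let slot_probability = 1.0 / 6.0; // SLOT_PROBABILITY
    let p_bucket_nonempty = 0.5; // assumed: half of s-buckets are empty
    let audit_chunks_per_bucket = 4.0; // 32/8 audit chunks per chunk
    // Expected winning tickets per challenge for a single plotted piece
    let n_one_piece = p_bucket_nonempty * audit_chunks_per_bucket;
    // Solve n * t / 2^64 = SLOT_PROBABILITY for t
    let t_one_piece = 2f64.powi(64) * slot_probability / n_one_piece;
    // Optionally scale down for an assumed 1300 plotted pieces
    let t_1300 = t_one_piece / 1300.0;
    println!("one piece:   t ≈ 2^{:.2}", t_one_piece.log2());
    println!("1300 pieces: t ≈ 2^{:.2}", t_1300.log2());
}
```

Under these assumptions the one-piece range comes out around 2^{60.4} (i.e. 2^{64}/12), and around 2^{50.1} when divided by 1300.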

So should it be this for one piece then (and we can additionally divide by 1300 if we assume 1300 pieces are plotted to start with)?:

You are right, it depends on the amount of plotted space, not seeded history. I have corrected the post.
Why do we assume that?

My understanding is that (while this is OK for testing) we shouldn't start the network with this assumption, because it will cause many forks.
If we take a single plotted sector and want it to win a challenge with probability 1/6 per slot, then your formula needs to be further divided by pieces_in_sector.
However, if we want to apply this to an actual network, it should be further divided by the number of plotted sectors we expect initially.
At this rate we will reach solution_range = 1 at ~1400 ZiB
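As an order-of-magnitude sanity check on that figure, we can ask at what pledged space the expected-winners equation n*t/2^{64} = 1/6 forces t down to 1. The per-sector constants below are taken from the 50 GiB / 38 sector example earlier in the thread, so this is a rough estimate, not an exact reproduction of the ~1400 ZiB number:

```rust
// Hedged sanity check: pledged space at which solution_range reaches 1,
// using the per-sector constants from the initial-phase example above.
fn main() {
    // Tickets audited per sector per challenge: 1300*(1 - 1/e) chunks,
    // 4 audit-chunk tickets each
    let tickets_per_sector = 1300.0 * (1.0 - (-1.0f64).exp()) * 4.0;
    // t = 1 when n = 2^64 * SLOT_PROBABILITY
    let n_target = 2f64.powi(64) / 6.0;
    let sectors = n_target / tickets_per_sector;
    let gib_per_sector = 50.0 / 38.0; // ~50 GiB over ~38 sectors
    let zib = sectors * gib_per_sector / 2f64.powi(40); // 1 ZiB = 2^40 GiB
    println!("t reaches 1 at roughly {:.0} ZiB", zib);
}
```

With these constants the result lands on the order of 10^3 ZiB, consistent with the ~1400 ZiB estimate up to the exact per-sector constants used.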

Simply because the solution range starts at SolutionRange::MAX and goes down. So if we start with SolutionRange::MAX, any solution distance will be within range and win.

This is fine: initially only the genesis farmer can produce blocks, and the solution range will adjust to that. Once we enable participation by everyone, we can set the solution range for blocks and votes to whatever we want, so the initial value doesn't matter as much.

The only question right now is what the correct formula is. If what I wrote above makes sense, I can go from there.