A very early (but supposedly functional) version of the farming cluster is building here: Snapshot build · subspace/subspace@1f61f5b · GitHub
It is not final, and breaking changes to the network layer are expected (so if you decide to run an early version, you'll have to stop all instances and start the new version rather than upgrading one by one).
There are docs in the CLI that should be sufficient to get started, but I'll also provide short examples here.
A nats.io server is required for this to work; running it in Docker is recommended, but it can also be started directly on regular machines. NATS cluster configurations are supported too, but you'll have to read the NATS docs on how to set those up.
NATS should be started with a config file containing the following:
max_payload = 2MB
Simply save it into a nats.config file and start the NATS server with nats-server -c nats.config; with Docker it will look something like this:
docker run \
--name nats \
--restart unless-stopped \
--publish 4222:4222 \
--volume ./nats.config:/nats.config:ro \
nats -c /nats.config
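If you prefer Docker Compose, an equivalent setup might look like this (a sketch of the `docker run` command above; the service name and file locations are my own choices, not from the official docs):

```yaml
# docker-compose.yml -- hypothetical Compose equivalent of the command above
services:
  nats:
    image: nats
    command: ["-c", "/nats.config"]
    restart: unless-stopped
    ports:
      - "4222:4222"
    volumes:
      - ./nats.config:/nats.config:ro
```

Start it with docker compose up -d from the directory containing both files.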
Now you have to start 4 components (as separate instances for now; running several of them in the same app will be possible in the future).
Controller (will create a few small files for networking purposes):
subspace-farmer cluster --nats-server nats://IP:4222 \
controller \
--base-path /path/to/controller-dir \
--node-rpc-url ws://IP:9944
Cache (supports multiple disks just like farmer):
subspace-farmer cluster --nats-server nats://IP:4222 \
cache \
path=/path/to/cache,size=SIZE
Plotter (stateless):
subspace-farmer cluster --nats-server nats://IP:4222 \
plotter
Farmer (supports multiple disks like usual):
subspace-farmer cluster --nats-server nats://IP:4222 \
farmer \
--reward-address REWARD_ADDRESS \
path=/path/to/farm,size=SIZE
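If you want the components supervised and restarted automatically, each one can be wrapped in a service unit. A minimal sketch for the plotter, assuming the binary lives at /usr/local/bin/subspace-farmer and a dedicated subspace user exists (both are my assumptions, adjust to your setup):

```ini
# /etc/systemd/system/subspace-plotter.service (hypothetical name and paths)
[Unit]
Description=Subspace cluster plotter
After=network-online.target
Wants=network-online.target

[Service]
# Same command as above; replace IP with your NATS server address.
ExecStart=/usr/local/bin/subspace-farmer cluster --nats-server nats://IP:4222 plotter
Restart=on-failure
User=subspace

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now subspace-plotter; the other three components can get analogous units.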
Note that all instances can be on different machines, but they need to point to the same NATS server/cluster.
Most of the familiar farmer CLI options are still available, but they are spread across the various subcommands accordingly; use --help to discover them.
The on-disk format of everything is compatible with the regular farmer; hypothetically, you can point all instances to the same directory and it will continue to work just fine.
Expect bugs, crashes, and all kinds of issues for now, though I would appreciate carefully composed bug reports with logs, etc.
More polish and better documentation are to come; I just wanted to share early progress with the community and get early feedback.
Please keep the number of messages in this thread to a minimum. Think carefully about whether your message contributes something substantially valuable to the discussion; for casual chatting, post your message on Discord instead.
I may remove messages that do not follow the above policy.