Taurus node connection problems

Issue Report

Environment

Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-131-generic x86_64)
Taurus Node, 28 Jan release

Problem

Hi! I have a problem with node start - no peers (waited for several hours) :frowning_with_open_mouth: Ports are open and checked with the nc command…
The mainnet node works fine on 30333 and 30433 on a second unit. Please advise.

Start script:
[Unit]
Description=Subspace Node
After=network.target

[Service]
User=root
Type=simple
ExecStart=subspace-node run \
    --chain taurus \
    --name Inoutik \
    --base-path /sub/node \
    --farmer \
    --listen-on /ip4/0.0.0.0/tcp/31333 \
    --dsn-listen-on /ip4/0.0.0.0/tcp/31433 \
    --blocks-pruning archive-canonical \
    --state-pruning archive-canonical \
    --sync full \
    -- \
    --domain-id 0 \
    --blocks-pruning archive-canonical \
    --state-pruning 16000
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Logs.txt (7.0 KB)

At first glance it does look like a port forwarding issue. I know you have verified that the ports are open, but I would encourage you to double-check.

In the meantime, can you make sure that you are starting with no data in your /sub/node folder, and can you verify that you are using the latest Taurus release?
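
In practice that means stopping the service first and then clearing the data directory - a destructive sketch, so double-check the path before running it (the service name below is a placeholder):

systemctl stop <node-service>
rm -rf /sub/node/*
systemctl start <node-service>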

Double-checked and changed to 39333 and 39433 (and tested with nc from outside). Also made a new directory for the node files in root… and still no peers. Also checked firewalls on the node and tried DMZ on the router for this node. It looks like some network problem to me, but the mainnet node works well on the same LAN and router… PS: I use the latest release - version 0.1.0-21fdb9a19dbf58d932c4bf8a3a0b360dcadd7e0e
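
For reference, an external reachability check with nc typically looks like this, run from a machine outside the LAN (<public-ip> is a placeholder for the router's public address):

nc -zv <public-ip> 39333
nc -zv <public-ip> 39433
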
log.txt (4.2 KB)

I don’t think there is a problem on the chain side of things, as there are over 130 fully synced Taurus nodes. I think you are correct about it being a network issue. For me, that usually means opening up the ports and forwarding them, as well as making sure the firewalls allow them.
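
On Ubuntu that usually comes down to something like the following, assuming ufw is the firewall in use (ports taken from the original script):

ufw allow 31333/tcp
ufw allow 31433/tcp
ufw status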

Sometimes ISPs block some ports, but you have already tried different ones.

I think I am going to have to wait for someone else to chime in here. Everyone with zero peers that I have helped so far had port issues, network issues, or ISP blockages.

Note that options after -- are for the domain, and the domain has its own networking configuration, such as --listen-on. You’ll have to forward those ports too, and in case of conflicts you may need to customize them as well.
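
A sketch of what that can look like - everything after the bare -- goes to the domain, and the domain port 40333 below is an arbitrary example, not a value from this thread:

subspace-node run \
    --chain taurus \
    --listen-on /ip4/0.0.0.0/tcp/31333 \
    --dsn-listen-on /ip4/0.0.0.0/tcp/31433 \
    -- \
    --domain-id 0 \
    --listen-on /ip4/0.0.0.0/tcp/40333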

See Port Forwarding & Firewall | Farm from Anywhere for details, and run the node with --help to see available options (--help can be added on either the left or the right side of -- to see consensus and domain options respectively).
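
Concretely:

subspace-node run --help          # consensus options (left of --)
subspace-node run -- --help       # domain options (right of --)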

@Randy-Autonomys @nazar-pc it’s definitely something wrong with the Taurus node binaries - I started both mainnet and Taurus nodes on the same server with the same config, and the mainnet node started without any problem while the Taurus node did not. Please check the logs:
taurus.txt (8.4 KB)

According to the original message you customized the DSN port to 31433, but the logs indicate that you did not. Make sure you reloaded the service config.
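
For a systemd unit, that means reloading and restarting, then watching the journal (unit name as shown later in the thread):

systemctl daemon-reload
systemctl restart subspaced-node-taurus.service
journalctl -u subspaced-node-taurus.service -f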

I can personally also recommend using Docker instead of systemd, which is much easier to configure and to update.

I changed the ports, of course, before starting the nodes. They use the same ports, the same config, and the same server. I also deleted the node files folder for both nodes. The mainnet node found an external address and peers right away, but Taurus didn’t. So the problem is not in ports or networking - it’s in the binaries or the Taurus network itself. Check the fresh logs. Here is the start command for the mainnet node:
[Unit]
Description=Subspace Node
After=network.target

[Service]
User=root
Type=simple
ExecStart=/usr/local/bin/subspace-node run --chain mainnet --name Inoutik_Cluster --base-path /sub/node_m --sync full --farmer --listen-on /ip4/0.0.0.0/tcp/30333
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

And here is the one for Taurus (/etc/systemd/system/subspaced-node-taurus.service):
[Unit]
Description=Subspace Node
After=network.target

[Service]
User=root
Type=simple
ExecStart=/usr/local/bin/subspace-node-taurus run --chain taurus --name Inoutik_Cluster --base-path /sub/node_t --sync full --farmer --listen-on /ip4/0.0.0.0/tcp/30333
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
taurus2.txt (10.6 KB)

You can’t run two different nodes with the same port on the same server!
It will 100% cause hard-to-diagnose issues just like the ones you see, which is why it was suggested to change the ports to make sure they do not overlap between the Mainnet and Taurus nodes. Better yet, use Docker: it’ll fail to bind to the same port twice (at least by default with the setup described in the documentation), helping you correct the configuration sooner.
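
One quick way to see whether something is already listening on a port (standard Linux tooling, not from the thread):

ss -tlnp | grep 30333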

I’m not running two nodes :melting_face: - I stopped the mainnet node before starting the Taurus one. I used the same ports to be sure that the ports are open and work.


I’m sure it will not help, but where can I get the Docker files for the Taurus node to check? Here there is only mainnet - Install | Farm from Anywhere
Also, I changed the ports many times; it’s not the issue.

Container images have the same name; just use taurus-whatever as the tag, which always corresponds to the release you want to use. The latest right now is taurus-2025-jan-28.
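
For example, pulling the image referenced above (same name as mainnet, different tag; the ghcr.io path matches the compose file later in the thread):

docker pull ghcr.io/autonomys/node:taurus-2025-jan-28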

Your logs above only include a few seconds after start, so it’s not clear what issue you’re having, if any.

Well, as expected - no changes. Mainnet works perfectly, Taurus does not.
docker mainnet.txt (18.4 KB)
docker - taurus.txt (11.8 KB)

@ved or @ning could you take a look at this, please?
It seems to indicate potential issues with infra.
I am not running Taurus nodes myself, though.

@Alexander_Zamesov I have run a node on Taurus from scratch.
This is the compose file I used:

services:
  node:
    image: ghcr.io/autonomys/node:taurus-2025-jan-28
    volumes:
      - ~/Subspace/taurus/node:/var/subspace:rw
    ports:
      - "0.0.0.0:30333:30333/tcp"
      - "0.0.0.0:30433:30433/tcp"
    restart: unless-stopped
    command: [
      "run",
      "--chain", "taurus",
      "--base-path", "/var/subspace",
      "--listen-on", "/ip4/0.0.0.0/tcp/30333",
      "--dsn-listen-on", "/ip4/0.0.0.0/tcp/30433",
      "--rpc-listen-on", "0.0.0.0:9944",
      "--rpc-cors", "all",
      "--rpc-methods", "unsafe",
      "--farmer",
      "--name", "ved_local",
    ]
    healthcheck:
      timeout: 5s
      interval: 30s
      retries: 60

I have at least 20 peers connected and the node is synced to the target block. I have not used systemd yet.
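
For completeness, starting it and following the logs with standard Compose commands (not quoted from the thread):

docker compose up -d
docker compose logs -f node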

Apologies for asking again, but have you double-checked your firewall to be sure?