This guide provides the scripts needed to set up and connect to an NVMe-oF (NVMe over Fabrics) remote storage target. NVMe-oF lets hosts communicate with NVMe storage devices across a network with low latency and high throughput, and because the initiator sees a plain block device it can be a simpler, lower-overhead alternative to traditional NFS shares. We'll use TCP as the transport type.
For NVMe-oF over TCP, all you need is a system running a modern Linux kernel.
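Before going further, you can quickly confirm that the required kernel modules are available; a minimal check, assuming modinfo is present (as it is on most distributions):
# Confirm the NVMe-oF target, TCP transport, and initiator modules exist for this kernel
modinfo nvmet nvmet_tcp nvme_tcp | grep -E '^(name|filename):'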
Remote Configuration:
#!/bin/bash
# Define common variables
ip_address="192.168.x.x"
subsystem_prefix="nvme-disk-"
port_number="4420"
transport_type="tcp"
address_family="ipv4"
# List NVMe drives (by-id)
nvme_drives=(
nvme-INTEL_SSDPE2NV153T8_PHLL151235PDGN
nvme-INTEL_SSDPE2NV153T8_PHLL151456PDGN
nvme-INTEL_SSDPE2NV153T8_PHLL151789PDGN
)
# Load necessary kernel modules
sudo modprobe nvme_tcp
sudo modprobe nvmet
sudo modprobe nvmet-tcp
# Capture number of drives to be used
array_length=${#nvme_drives[@]}
for ((i = 1; i <= array_length; i++)); do
subsystem_name="${subsystem_prefix}$(printf "%02d" "$i")"
nvme_drive=$((i - 1))
# Create the NVMe subsystem directory
sudo mkdir -p "/sys/kernel/config/nvmet/subsystems/$subsystem_name"
# Set the subsystem to accept any host
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/${subsystem_name}/attr_allow_any_host > /dev/null
# Create and configure the namespace
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i
echo -n /dev/disk/by-id/${nvme_drives[$nvme_drive]} | sudo tee /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i/device_path > /dev/null
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i/enable > /dev/null
# Configure the NVMe-oF TCP port (one shared port for all subsystems, created on the first pass)
if [ "$i" -eq 1 ]; then
sudo mkdir -p /sys/kernel/config/nvmet/ports/1
echo $ip_address | sudo tee /sys/kernel/config/nvmet/ports/1/addr_traddr > /dev/null
echo $transport_type | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trtype > /dev/null
echo $port_number | sudo tee /sys/kernel/config/nvmet/ports/1/addr_trsvcid > /dev/null
echo $address_family | sudo tee /sys/kernel/config/nvmet/ports/1/addr_adrfam > /dev/null
fi
# Link the subsystem to the port
sudo ln -s /sys/kernel/config/nvmet/subsystems/$subsystem_name /sys/kernel/config/nvmet/ports/1/subsystems/$subsystem_name
done
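After the script finishes, it is worth sanity-checking the target side before moving to the initiator; a quick check, assuming the configfs layout above and that ss is available:
# List the exported subsystems, confirm they are linked to the port, and check the TCP listener
ls /sys/kernel/config/nvmet/subsystems/
ls /sys/kernel/config/nvmet/ports/1/subsystems/
ss -ltn | grep 4420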
Check Connection to the Remote on the Initiator:
sudo nvme discover -t tcp -a 192.168.x.x -s 4420
Initiator Connection:
#!/bin/bash
# Define common variables
ip_address="192.168.x.x"
subsystem="nvme-disk-01" # 01, 02, 03, depending on which disk you want to connect.
port_number="4420"
transport_type="tcp"
address_family="ipv4"
sudo nvme connect -t "$transport_type" -n "$subsystem" -a "$ip_address" -s "$port_number"
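Once connected, the remote namespace appears as an ordinary NVMe device on the initiator; you can verify it, and later detach it, with nvme-cli:
# Show the newly attached remote namespace alongside any local NVMe devices
sudo nvme list
# Disconnect from the subsystem when it is no longer needed
sudo nvme disconnect -n "nvme-disk-01"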
Notes:
These scripts can be scheduled to run using a simple cron job, for example: @reboot /path/to/script.sh. Ensure that the user has the necessary permissions to execute the script with sudo.
You have the flexibility to alter the drive identifier in nvme_drives from by-id to by-label, by-uuid, or any other valid identifier according to your preference. If you make this change, remember to update echo -n /dev/disk/by-id/ to use the appropriate method.
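To see which identifiers exist for your drives before editing the array, something like the following should work (by-label only exists if at least one filesystem has a label):
# List the udev-created identifier symlinks and show labels/UUIDs per block device
ls -l /dev/disk/by-id/ /dev/disk/by-uuid/
lsblk -o NAME,SIZE,LABEL,UUID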
After establishing the connection to the remote using the initiator, you’ll proceed to mount the drive as if it were a local storage device.
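For example, assuming the remote namespace shows up as /dev/nvme1n1 (the actual device name depends on how many NVMe devices the initiator already has):
# Create a mount point and mount the remote namespace like any local disk
sudo mkdir -p /mnt/nvme-disk-01
sudo mount /dev/nvme1n1 /mnt/nvme-disk-01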
A modification of the script that uses the drive label as the identifier when one exists, to help keep track of which drive is which:
#!/bin/bash
# Define common variables
ip_address="192.168.1.x"
subsystem_prefix="HOSTNAME-"
port_number="4420"
transport_type="tcp"
address_family="ipv4"
# List NVMe drives (by-uuid)
nvme_drives=(
fb7cf28f-51ce-4d45-8e83-b222a774c253
fb7cf28f-51ce-4d45-8e83-b222a774c253
)
# Load necessary kernel modules
sudo modprobe nvme_tcp
sudo modprobe nvmet
sudo modprobe nvmet-tcp
# Capture number of drives to be used
array_length=${#nvme_drives[@]}
for ((i = 1; i <= array_length; i++)); do
nvme_drive=$((i - 1))
uuid=${nvme_drives[$nvme_drive]}
# Get the drive label
label=$(lsblk -no LABEL /dev/disk/by-uuid/$uuid 2>/dev/null)
# Use the label if it exists, otherwise fall back to using the number
if [ -n "$label" ]; then
subsystem_name="${subsystem_prefix}${label}"
else
subsystem_name="${subsystem_prefix}$(printf "%02d" "$i")"
fi
# Create the NVMe subsystem directory
sudo mkdir -p "/sys/kernel/config/nvmet/subsystems/$subsystem_name"
# Set the subsystem to accept any host
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/${subsystem_name}/attr_allow_any_host > /dev/null
# Create and configure the namespace
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i
echo -n /dev/disk/by-uuid/${nvme_drives[$nvme_drive]} | sudo tee /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i/device_path > /dev/null
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i/enable > /dev/null
# Configure the NVMe-oF TCP port
sudo mkdir -p /sys/kernel/config/nvmet/ports/$i
echo $ip_address | sudo tee /sys/kernel/config/nvmet/ports/$i/addr_traddr > /dev/null
echo $transport_type | sudo tee /sys/kernel/config/nvmet/ports/$i/addr_trtype > /dev/null
echo $port_number | sudo tee /sys/kernel/config/nvmet/ports/$i/addr_trsvcid > /dev/null
echo $address_family | sudo tee /sys/kernel/config/nvmet/ports/$i/addr_adrfam > /dev/null
# Link the subsystem to the port
sudo ln -s /sys/kernel/config/nvmet/subsystems/$subsystem_name /sys/kernel/config/nvmet/ports/$i/subsystems/$subsystem_name
done
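If you ever need to undo the configuration on the remote (for example before re-running a script with different names), the configfs entries have to be removed in roughly the reverse order they were created; a rough sketch for a single subsystem, using the names from the first script as an example:
# Unlink the subsystem from its port, disable and remove the namespace, then remove the directories
sudo rm /sys/kernel/config/nvmet/ports/1/subsystems/nvme-disk-01
echo 0 | sudo tee /sys/kernel/config/nvmet/subsystems/nvme-disk-01/namespaces/1/enable > /dev/null
sudo rmdir /sys/kernel/config/nvmet/subsystems/nvme-disk-01/namespaces/1
sudo rmdir /sys/kernel/config/nvmet/subsystems/nvme-disk-01
sudo rmdir /sys/kernel/config/nvmet/ports/1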
If you need the drives to be accessible on multiple LAN segments:
#!/bin/bash
# Define common variables
ip_addresses=("192.168.0.3" "192.168.1.208") # LAN IPs of the host
subsystem_prefix="HOSTNAME-" # prefix of the host
port_number="4420"
transport_type="tcp"
address_family="ipv4"
# List NVMe drives (by-uuid)
nvme_drives=(
5f425e17-784c-4c54-bd81-2f6278244e3f
a90078dd-81e7-4f82-a1ab-d3fa664644ae
cafd9e45-177a-4b28-8fc9-a1872871182b
58610d0c-1339-4700-9706-abc203b34810
a52c88ab-41e1-4c3a-9e0c-a276b4146b1e
)
# Load necessary kernel modules
sudo modprobe nvme_tcp
sudo modprobe nvmet
sudo modprobe nvmet_tcp
# Capture number of drives to be used
array_length=${#nvme_drives[@]}
for ip_address in "${ip_addresses[@]}"; do
for ((i = 1; i <= array_length; i++)); do
nvme_drive=$((i - 1))
uuid=${nvme_drives[$nvme_drive]}
# Get the drive label
label=$(lsblk -no LABEL /dev/disk/by-uuid/$uuid 2>/dev/null)
# Use the label if it exists, otherwise fall back to using the number
if [ -n "$label" ]; then
subsystem_name="${subsystem_prefix}${label}"
else
subsystem_name="${subsystem_prefix}$(printf "%02d" "$i")"
fi
# Create the NVMe subsystem directory
sudo mkdir -p "/sys/kernel/config/nvmet/subsystems/$subsystem_name"
# Set the subsystem to accept any host
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/${subsystem_name}/attr_allow_any_host > /dev/null
# Create and configure the namespace
sudo mkdir -p /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i
echo -n /dev/disk/by-uuid/${nvme_drives[$nvme_drive]} | sudo tee /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i/device_path > /dev/null
echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/$subsystem_name/namespaces/$i/enable > /dev/null
# Configure the NVMe-oF TCP port (each drive/segment pair gets its own port index: odd for the first IP, even for the second)
if [ "$ip_address" = "${ip_addresses[0]}" ]; then
port_index=$(( i * 2 - 1 ))
else
port_index=$(( i * 2 ))
fi
sudo mkdir -p /sys/kernel/config/nvmet/ports/$port_index
echo $ip_address | sudo tee /sys/kernel/config/nvmet/ports/$port_index/addr_traddr > /dev/null
echo $transport_type | sudo tee /sys/kernel/config/nvmet/ports/$port_index/addr_trtype > /dev/null
echo $port_number | sudo tee /sys/kernel/config/nvmet/ports/$port_index/addr_trsvcid > /dev/null
echo $address_family | sudo tee /sys/kernel/config/nvmet/ports/$port_index/addr_adrfam > /dev/null
# Link the subsystem to the port
sudo ln -s /sys/kernel/config/nvmet/subsystems/$subsystem_name /sys/kernel/config/nvmet/ports/$port_index/subsystems/$subsystem_name
done
done
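With the multi-segment layout, each initiator simply discovers and connects through whichever address is reachable from its own segment, for example:
# From a host on the 192.168.0.x segment
sudo nvme discover -t tcp -a 192.168.0.3 -s 4420
# From a host on the 192.168.1.x segment
sudo nvme discover -t tcp -a 192.168.1.208 -s 4420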
This is exactly what I was looking for to test and learn NVMe/TCP in my homelab. Any idea how to have it connect to VMware ESXi hosts (initiators)? It mentions something about an NQN.
While I’m not familiar with ESXi or the setup required, VMware documentation and online resources should be able to guide you through configuring the storage adapter and verifying the connection.