
Linux NFS, FC, and iSCSI

Summary:

This guide is for Linux administrators of all levels, from beginner to advanced, and covers basic configuration and connectivity for various Linux operating systems.  It does not cover application-specific changes, recommendations, or tuning requirements.

Introduction:

This guide contains the recommended best practices for connecting your Linux host using NFS or iSCSI with your StorONE array.  Our best practices are designed to optimize host networking and TCP performance to fully unlock the performance of your StorONE array with Linux hosts and applications.

The vast majority of these changes are at the operating system and hardware levels, and will allow almost all applications to achieve performance gains.

Supported releases:

This guide is specifically written for RHEL 7.6 and later based distributions, Debian 9.x and later, and Ubuntu 20.04 LTS and later.  The common Linux settings should benefit all releases at this level and beyond, with specific configuration steps documented for the more popular releases.

Common Linux performance recommendations

StorONE recommends the following best practices for optimum performance with Linux operating systems and StorONE arrays.  These changes are documented in the sections below.

Linux host tuning for 10GbE or greater interfaces

Linux is used in a wide variety of configurations.  Because of this flexibility, the default Linux TCP buffer sizes are still too small for 10G and larger network connections.  Increasing the interface buffer sizes and optimizing the network stack can significantly improve the performance of these interfaces and allows modern hardware to work more efficiently.

Jumbo frames - MTU 9000

For high speed interfaces capable of 10/25/40 GbE or greater, enabling Jumbo Frames is recommended for optimum performance. While these interfaces usually have low latency, performance still benefits from increasing the MTU packet payload size.  Setting the MTU for host interfaces on the various Linux operating systems is covered later in this guide.

CPU governance recommendations

CPU governance can improve certain aspects of IO processing; however, these changes do come with tradeoffs.  Because they are somewhat application dependent, we leave them to the end of the guide, and we recommend consulting both StorONE and your application provider before making changes to CPU governance.

Disk IO scheduler, writeback, and max sectors

As Linux is designed for compatibility with a wide range of hardware, the default settings are built around traditional spinning disk drives rather than newer solid state drives, NVMe devices, or disks and shares backed by storage arrays.  Altering these traditional hard disk IO defaults yields additional performance with the StorONE high speed storage array.
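As an illustration of the scheduler portion of this (a sketch only; sdX is a placeholder device name and the available scheduler names vary by kernel), the active IO scheduler can be inspected and changed through sysfs:

```shell
# Show the available IO schedulers; the one in [brackets] is active
cat /sys/block/sdX/queue/scheduler

# For array-backed or solid state devices, 'none' ('noop' on older
# kernels) skips host-side reordering the array will redo anyway
echo none | sudo tee /sys/block/sdX/queue/scheduler
```

Changes made this way are not persistent across reboots, which is one reason this guide applies per-device settings from a startup service.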

Common Linux host and TCP Tuning for 10GbE or greater interfaces

Sysctl.conf changes for TCP window and buffer sizes

The following parameters can be set via sysctl to improve multi-session and NFS performance.

NOTE: Values commented out can be changed later after an initial performance review.

Add the following lines to a new sysctl configuration file as root or with sudo permissions

sudo vi /etc/sysctl.d/99-storone.conf


# StorONE Storage
#
# These settings are for maximizing 10/25/40/100 network performance

net.ipv4.tcp_wmem = 32768 262143 2147483647
net.ipv4.tcp_rmem = 32768 262143 2147483647
net.core.wmem_max = 2147483647
net.core.rmem_max = 2147483647
net.core.wmem_default = 16777216 
net.core.rmem_default = 16777216
net.core.netdev_budget = 600
net.core.netdev_max_backlog = 300000
net.core.optmem_max = 16777216
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_congestion_control = htcp
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_adv_win_scale = 1
sunrpc.udp_slot_table_entries = 128
sunrpc.tcp_slot_table_entries = 128
vm.min_free_kbytes = 2097152
kernel.sched_autogroup_enabled = 0
kernel.sched_migration_cost_ns = 5000000


To load the new settings, issue the following command as root or with sudo permissions.  Note that sysctl -p alone reads only /etc/sysctl.conf, so --system is required to pick up files under /etc/sysctl.d.


sudo sysctl --system


NOTE: Due to release hardening, OEL systems may require a reboot after sysctl and ifcfg interface file changes for the new configuration to load properly.
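To spot-check that the new values are live (parameter names from the block above), individual parameters can be queried:

```shell
# Query a few of the tuned parameters; the output should mirror
# the values set in 99-storone.conf
sysctl net.ipv4.tcp_rmem
sysctl net.core.rmem_max
sysctl net.ipv4.tcp_congestion_control
```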

Setup S1 Linux Service


The easiest way to apply per-LUN / per-share non-persistent changes is a service that runs the commands at startup.  The service can also be restarted to re-run the commands when volumes or shares are added.


The commands below will set up the s1_linux_service


sudo vi /etc/systemd/system/s1_linux.service


Copy and insert the following text, including one empty line after:


[Unit]

Description=StorONE service

After=network.target

Conflicts=shutdown.target


[Service]

ExecStart=/usr/local/bin/s1_linux_service

Type=oneshot

RemainAfterExit=yes

TimeoutSec=90


[Install]

WantedBy=network.target



Save and quit


sudo chmod 644 /etc/systemd/system/s1_linux.service

sudo vi /usr/local/bin/s1_linux_service


Insert the following text:


#!/bin/bash

# Linux performance tuning

date > /root/s1_settings.log

sysctl -w vm.dirty_writeback_centisecs=100 >> /root/s1_settings.log

sysctl -w vm.dirty_expire_centisecs=100 >> /root/s1_settings.log


# The performance settings last applied are logged to

# /root/s1_settings.log


Save and quit


chmod 744 /usr/local/bin/s1_linux_service

sudo systemctl daemon-reload

sudo systemctl enable s1_linux.service

sudo systemctl start s1_linux.service

sudo systemctl status s1_linux.service


With the service properly configured and started, the results will look like this:


[root@localhost system]# systemctl status s1_linux.service

● s1_linux.service

   Loaded: loaded (/etc/systemd/system/s1_linux.service; enabled; vendor preset: enabled)

   Active: active (exited) since Tue 2023-02-28 21:49:58 CST; 7s ago

  Process: 22989 ExecStart=/usr/local/bin/s1_linux_service (code=exited, status=0/SUCCESS)

 Main PID: 22989 (code=exited, status=0/SUCCESS)


Feb 28 21:49:58 localhost.localdomain systemd[1]: Started s1_linux.service.
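As mentioned earlier, the service can be re-run when volumes or shares are added; a quick way to do this and confirm the settings were applied (log path from the script above) is:

```shell
# Re-run the tuning commands and review the settings log
sudo systemctl restart s1_linux.service
cat /root/s1_settings.log
```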

RHEL / OEL / CentOS base configuration NFS and iSCSI


Release Identification:


To determine your release type and version the following command can be used


cat /etc/redhat-release


Example:


[root@localhost ~]# cat /etc/redhat-release

CentOS Linux release 7.9.2009 (Core)


Review existing interface files:


To view your existing network interfaces and MAC addresses use the following command


ip a


Example


[root@localhost ~]# ip a

1: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:9b:1b:06 brd ff:ff:ff:ff:ff:ff

inet xxx.xxx.xxx.xxx/24 brd xxx.xxx.xxx.xxx scope global noprefixroute ens192

    valid_lft forever preferred_lft forever

inet6 fe80::250:56ff:fe9b:1b06/64 scope link

    valid_lft forever preferred_lft forever

2: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:9b:9a:01 brd ff:ff:ff:ff:ff:ff

3: ens256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:9b:8c:c4 brd ff:ff:ff:ff:ff:ff


We will need to capture the interface names and MAC addresses to use later in the configuration.


Interface DEVICE: ensDATA

Interface MAC: link/ether 00:50:56:9b:9a:01

Interface DEVICE: ensDATA2

Interface MAC: link/ether 00:50:56:9b:8c:c4

To verify that the interface files are present, use the following command


ls -la /etc/sysconfig/network-scripts/ifcfg-<DEVICE>


The <DEVICE> will be the interface name found using the ip a command.  If present, the interface files for DATA and DATA2 will need to be edited to configure them for static IP addresses.


To verify the contents of the interface files we can use the following command.


cat /etc/sysconfig/network-scripts/ifcfg-<DEVICE>


NOTE: If no interface files are present on the host, create the management (MGMT) or non data IP interface file using the template below.


sudo vi /etc/sysconfig/network-scripts/ifcfg-DEVICE


Template:


[root@localhost ~]$ cat /etc/sysconfig/network-scripts/ifcfg-<NAME>

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=no

IPV6_AUTOCONF=no

IPV6_DEFROUTE=no

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=

UUID=

DEVICE=

HWADDR=

ONBOOT=yes

IPADDR=

PREFIX=

GATEWAY=

DNS1=

IPV6_PRIVACY=no


For each interface file not present, create it, insert the template, then save and quit.  Repeat as needed until all interface files are present.

RHEL - CentOS & OEL Interface Configuration


Generate a UUID for new interface device configuration files


For each new interface configuration file created on the host, unique UUID, DEVICE, NAME, IPADDR, and HWADDR values are required.  The ip a command provides the HWADDR, NAME, and DEVICE information for each interface, as long as the network controller is installed and discovered by your Linux OS.


To generate a unique UUID, use the uuidgen command.  A new and unique UUID must be created for each new interface file.


sudo uuidgen <interface-device-name from ip a>


Example


[root@linux network-scripts]# uuidgen DATA

5681c055-b334-4e95-9322-135bc17c3f0b


The new UUID needs to be inserted into the corresponding ifcfg interface file.  Edit the ifcfg file for each device name and insert the following values:


UUID (generated with uuidgen)

HWADDR (MAC address from ip a)

DEVICE (from the ifcfg file name)

NAME (same as the DEVICE and ifcfg file name)

IPADDR (IP address for the interface)


sudo vi /etc/sysconfig/network-scripts/ifcfg-DATA


UUID=

HWADDR=

NAME=

DEVICE=

IPADDR=


Save and quit the editor, then verify the file is correct before proceeding to the next interface file.  It should look like the example below.





Example:


[root@linux ~]$ cat /etc/sysconfig/network-scripts/ifcfg-<NAME>

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=no

IPV6_AUTOCONF=no

IPV6_DEFROUTE=no

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=DATA

UUID=5681c055-b334-4e95-9322-135bc17c3f0b

DEVICE=DATA

HWADDR=xx:xx:xx:xx:xx:xx

ONBOOT=yes

IPADDR=XXX.XXX.XXX.XXX

PREFIX=XX

GATEWAY=XXX.XXX.XXX.XXX

DNS1=XXX.XXX.XXX.XXX (only the MGMT interface requires DNS; leave empty for DATA interfaces)

IPV6_PRIVACY=no


A network restart is required to apply the changes made to the ifcfg files.

After editing each ifcfg file, apply the new interface settings with:


sudo systemctl restart network.service


To verify the NAME, DEVICE, UUID, HWADDR, and IP for each interface, use the command


ip a


Repeat the steps above as needed to verify all interface files have been created.  This process can also be used to create a management (MGMT) interface for the host as needed.


NOTE: The IP, PREFIX (subnet), DNS1, and GATEWAY should be provided by your network administrator.  DNS1 is only required for the management interface and can be left blank for all DATA interfaces.  Depending on your network layout, GATEWAY may also be left blank.



Interface Verification (without jumbo frames)


To verify the newly configured interfaces, the following command can be used to ping an IP on the same network segment as the newly configured interfaces


ping -I DATA-INF_1 storONE_DATA_ADDRESS -c 5


This should return a response from the address used, which looks like


Example


root@linux:~$ ping -I DATA -c 5 xxx.xxx.xxx.xxx

PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) from yyy.yyy.yyy.yyy ens192: 56(84) bytes of data.

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=1 ttl=64 time=0.512 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=2 ttl=64 time=0.538 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=3 ttl=64 time=0.329 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=4 ttl=64 time=0.471 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=5 ttl=64 time=0.346 ms


--- xxx.xxx.xxx.xxx ping statistics ---

5 packets transmitted, 5 received, 0% packet loss, time 4076ms

rtt min/avg/max/mdev = 0.329/0.439/0.538/0.085 ms


NOTE: If using Jumbo Frames, continue to the "Enabling Jumbo frames" section below before testing the interfaces with ping

Enabling Jumbo frames for RHEL / CentOS, OEL


When enabling Jumbo Frames on Linux host interfaces, all virtual and physical switch ports in the path must also be enabled for jumbo frames (i.e. virtual switching, kernel ports, the StorONE array, network switches, etc.).  If the MTU is not set properly from host to array, iSCSI discovery can be problematic, and mounting and formatting a LUN with an MTU mismatch will most often fail.


To enable Jumbo Frames, edit the ifcfg file for each interface that needs jumbo frames and change the MTU value from 1500 to 9000


vi /etc/sysconfig/network-scripts/ifcfg-ensDATA


Example:

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=no

IPV6_AUTOCONF=no

IPV6_DEFROUTE=no

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=DATA

UUID=f34cd81d-e190-4e4a-b01a-37efd849c3b5

DEVICE=DATA

ONBOOT=yes

HWADDR=00:50:56:9b:9a:01

IPADDR=xxx.xxx.xxx.xxx

PREFIX=24

GATEWAY=xxx.xxx.xxx.xxx

IPV6_PRIVACY=no

MTU=9000
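After saving the ifcfg change, a restart of the network service picks up the new MTU; the interface name DATA below is the example name used above:

```shell
# Restart networking and confirm the interface reports MTU 9000
sudo systemctl restart network.service
ip link show DATA | grep mtu
```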







Interface verification with Jumbo Frames:


The ping command below sets "do not fragment" with a jumbo-sized payload.  If Jumbo Frames are set correctly along the entire path, the command will receive replies from the StorONE ports (or any other port with jumbo frames enabled) when run from the Linux iSCSI initiator ports.
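The 8972-byte payload used below is not arbitrary: it is the 9000-byte MTU minus the 20-byte IPv4 header and the 8-byte ICMP header:

```shell
# Jumbo ping payload = MTU minus IPv4 header (20) and ICMP header (8)
mtu=9000
ipv4_header=20
icmp_header=8
payload=$((mtu - ipv4_header - icmp_header))
echo "$payload"   # prints 8972
```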


ping -I DATA-INF_1 storONE_iSCSI_discovery_address -c 10 -M do -s 8972


This will return a ping response only if the MTU is set correctly along the entire path.

If the MTU is not correctly set along the path, the jumbo ping above will fail, while the following standard ping will still return a response, confirming basic connectivity:


ping -I DATA-INF_1 storONE_iSCSI_discovery_address -c 10 


If the host cannot ping the storage at all, then the host, switch, and array configurations will need to be reviewed for proper MTU settings along the path.


NOTE: For iSCSI setup, continue to the "configuring iSCSI and Multipath" section below


With the Linux NFS setup completed and verified, the NFS shares from the array should now be visible and mountable using the commands below


Verify NFS shares from array

showmount -e <storone-floating-ip-address>



Mount options can also be supplied on the mount command line using mount -o


NFS fstab Example:

floating_ip:/shares/vol_name  /mountpoint nfs soft,sync,relatime,nconnect=4,vers=3,timeo=60,retrans=50,_netdev 0 0




Manual NFS mount example

mount -t nfs -o soft,sync,relatime,nconnect=4,vers=3,timeo=60,retrans=50,_netdev floating_ip:/shares/vol_name /mount_dir/vol_dir/mount


Manual SMB mount example (SMB shares use the cifs filesystem type and UNC-style paths; add credentials options as required)

mount -t cifs -o _netdev //floating_ip/share_name /mount_dir/vol_dir/mount

RHEL / OEL / CentOS configuring iSCSI and Multipath


Creating ifaces for data interfaces all releases


Creating ifaces through iscsiadm is a recommended best practice to isolate iSCSI traffic to the specified "DATA" interfaces, and is required when using multiple iSCSI interfaces and MPIO on your Linux host.  The iscsiadm commands can be used for all releases covered by this guide.


To create the new ifaces we will need the interface names and mac addresses from ip a for the DATA or iSCSI interfaces


The iscsiadm -m iface command is used to create the iface.  The iface name should match the interface name for clarity.


iscsiadm -m iface -I <interface_name> --op=new


To bind the iface to the MAC address of the interface.


 iscsiadm -m iface -I <interface_name> -o update -n iface.hwaddress  -v  <MAC_address>
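Putting the two commands together for the example interfaces captured earlier (names and MAC addresses are the placeholders from the ip a output above):

```shell
# Create one iface per data interface and bind it to the port's MAC
iscsiadm -m iface -I DATA --op=new
iscsiadm -m iface -I DATA -o update -n iface.hwaddress -v 00:50:56:9b:9a:01

iscsiadm -m iface -I DATA2 --op=new
iscsiadm -m iface -I DATA2 -o update -n iface.hwaddress -v 00:50:56:9b:8c:c4

# List the ifaces to verify the bindings
iscsiadm -m iface
```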



Adjusting arp for multiple interfaces in the same subnet:


This is only required if there are multiple interfaces configured for the SAME SUBNET.  If these adjustments are not made on a Linux host with multiple interfaces in the same IP segment or subnet, the configured interfaces will not respond correctly for IP and iSCSI traffic.


sudo vi /etc/sysctl.d/99-storone.conf


net.ipv4.conf.all.rp_filter = 2

net.ipv4.conf.default.rp_filter = 2

net.ipv4.conf.all.arp_filter = 1

net.ipv4.conf.default.arp_filter = 1

net.ipv4.conf.all.arp_announce = 2

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.default.arp_ignore = 1


When the changes are complete, reload the settings with the command

sudo sysctl --system

iscsi.conf configuration


The following default iscsi.conf values need to be adjusted for performance and stability.


We recommend the following values be set in iscsi.conf


node.session.timeo.replacement_timeout = 120

node.conn[0].timeo.noop_out_interval = 5

node.conn[0].timeo.noop_out_timeout = 10

node.session.nr_sessions = 4

node.session.cmds_max = 2048

node.session.queue_depth = 1024


NOTE: node.session.nr_sessions is recommended to be set to 4 (four) for most applications.  Where extreme performance is required, this value can be increased to 8 (eight) to provide additional performance; however, because this creates a very large LUN path count, 8 is not recommended as a default value.


The service will need to be restarted for these values to take effect.


sudo systemctl restart iscsid
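With iscsid restarted, the usual next step is discovery and login.  The portal address below is a placeholder for your StorONE iSCSI discovery address, and this is a sketch rather than a required procedure:

```shell
# Discover targets through the bound ifaces, then log in to all portals
iscsiadm -m discovery -t sendtargets -p <storone_iscsi_discovery_ip> -I DATA -I DATA2
iscsiadm -m node --login

# Confirm the active session count matches nr_sessions x interfaces
iscsiadm -m session
```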



Multipath configuration


The StorONE array specific information provided in the device section is required to be added to /etc/multipath.conf.

The defaults and blacklist sections are also recommended additions to multipath.conf.

Values under defaults affect all multipath devices; values under device are array specific.


A complete example of the required and recommended file is below


Example:


defaults {

    find_multipaths yes

    user_friendly_names yes

    path_selector "round-robin 0"

    path_grouping_policy multibus

    no_path_retry 30

    max_sectors_kb 1024

    queue_without_daemon no

    max_fds max

    flush_on_last_del yes

    log_checker_err once

}

devices {

    device {

        vendor "STORONE*"

        product "S1*"

        detect_prio yes

        prio "alua"

        path_selector "queue-length 0"

        path_grouping_policy group_by_prio

        failback immediate

        path_checker tur

    }

}

blacklist_exceptions {

    wwid "36882e5a*"

}
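After saving /etc/multipath.conf, the daemon needs to re-read it; multipath -ll then shows the StorONE LUNs with their path groups and priorities:

```shell
# Reload the multipath configuration and inspect the resulting maps
sudo systemctl restart multipathd
sudo multipath -ll
```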





Block volume mounting recommendations (all releases)



barrier, discard, noatime, nodiratime, _netdev


fstab Example:


/etc/fstab file:

/dev/mapper/volX /mountpoint ext4 _netdev,noatime,nodiratime,barrier=0,discard 0 0


Mount options can also be added as part of the mount command using mount -o option command 


CLI example


Mount command:

[root@linux_host ~]# mount -t ext4 -o _netdev,noatime,nodiratime,barrier=0,discard /dev/mapper/volX /mountpoint


Volumes can be formatted with file systems as needed before mounting, or used as raw block devices.
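For a new volume, the file system is created once on the multipath device before the first mount; volX and /mountpoint are the placeholder names from the fstab example above:

```shell
# One-time format of the multipath device, then mount it
sudo mkfs.ext4 /dev/mapper/volX
sudo mkdir -p /mountpoint
sudo mount -t ext4 -o _netdev,noatime,nodiratime,barrier=0,discard /dev/mapper/volX /mountpoint
```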

Debian / Ubuntu base configuration NFS and iSCSI


Determine your release:


To determine your release type and version use the following command


cat /etc/lsb-release


Example

root@linux:~# cat /etc/lsb-release

DISTRIB_ID=Ubuntu

DISTRIB_RELEASE=20.04

DISTRIB_CODENAME=focal

DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"


Review existing interface files:


To view your existing interfaces use the following command


ip a


Example

root@tw-linux:~# ip a

1: ensMGMT: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:bd:8c:60 brd ff:ff:ff:ff:ff:ff

inet xxx.xxx.xxx.xxx/24 brd xxx.xxx.xxx.xxx scope global ens160

    valid_lft forever preferred_lft forever

inet6 fe80::250:56ff:febd:8c60/64 scope link

    valid_lft forever preferred_lft forever

2: ensDATA: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:bd:72:e2 brd ff:ff:ff:ff:ff:ff

inet xxx.xxx.xxx.xxx/24 brd xxx.xxx.xxx.xxx scope global ens192

    valid_lft forever preferred_lft forever

inet6 fe80::250:56ff:febd:72e2/64 scope link

    valid_lft forever preferred_lft forever

3: ensDATA2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:9b:4a:d0 brd ff:ff:ff:ff:ff:ff

inet xxx.xxx.xxx.xxx/24 brd xxx.xxx.xxx.xxx scope global ens224

    valid_lft forever preferred_lft forever

inet6 fe80::250:56ff:fe9b:4ad0/64 scope link

    valid_lft forever preferred_lft forever

Configure interfaces with netplan network manager


For this guide we use netplan, as it is the default network configuration tool for Ubuntu and recent Debian releases and the easiest way to make and verify changes from the Linux CLI.  Other networking methods can be used, but netplan performs rudimentary validation of the configuration before applying it.


Backup the existing netplan config:


To make a copy of our current netplan, we should first look at the directory where the file(s) are contained.


ls -la /etc/netplan


You should see something like this,


root@tlinux:~# ls -la /etc/netplan/

total 16

drwxr-xr-x  2 root root 4096 Nov 14 22:48 .

drwxr-xr-x 99 root root 4096 Nov 14 20:04 ..

-rw-r--r--  1 root root  706 Nov 14 22:48 01-network-manager-all.yaml


Note: Your configuration file may have a name other than 01-network-manager-all.yaml.  Make sure you use the correct configuration file name in the commands.

First let’s make a backup of the file before making any changes.


sudo cp /etc/netplan/01-network-manager-all.yaml /etc/netplan/01-network-manager-all.yaml.bak

Gathering interface information for changes in Netplan


Gather the interface names and MAC addresses of the data interfaces using the command:

ip a


Example


root@linux:~# ip a

ensMGMT: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:bd:8c:60 brd ff:ff:ff:ff:ff:ff

inet xxx.xxx.xxx.xxx/24 brd xxx.xxx.xxx.xxx scope global ens160

    valid_lft forever preferred_lft forever

inet6 fe80::250:56ff:febd:8c60/64 scope link

    valid_lft forever preferred_lft forever

ensDATA: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:9b:4a:d0 brd ff:ff:ff:ff:ff:ff

inet6 fe80::250:56ff:fe9b:4ad0/64 scope link

    valid_lft forever preferred_lft forever

ensDATA2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

link/ether 00:50:56:9b:6a:d1 brd ff:ff:ff:ff:ff:ff

inet6 fe80::250:56ff:fe9b:6ad1/64 scope link

    valid_lft forever preferred_lft forever



From the above example, take note of the following data as it will be needed throughout the guide.

Interface name: ensDATA

Interface MAC: link/ether 00:50:56:9b:4a:d0


Interface name: ensDATA2

Interface MAC: link/ether 00:50:56:9b:6a:d1

Adding new interfaces with netplan


To add a new interface to the netplan configuration file, copy an existing interface block and insert it below the current interface.  It is recommended to configure the management interface first, before the data interfaces.  Netplan uses the YAML format, which is whitespace sensitive: indentation must be done with the spacebar, and TAB can not be used.


NOTE: This is a YAML file; the spaces are critical, and the tab key should NEVER be used while editing the file.  The network connection names MGMT, DATA, and DATA2 are examples only


Example

vi /etc/netplan/01-network-manager-all.yaml 


network:

  version: 2

  renderer: NetworkManager

  ethernets:

    ensMGMT:

      addresses: [xxx.xxx.xxx.xxx/24]

      gateway4: xxx.xxx.xxx.xxx

      nameservers:

        addresses: [xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx]

      #dhcp4: false


To add a new interface to the YAML file, copy the lines for an existing interface within the file and insert them below the original interface entry, creating additional entries as needed.  The example below shows two additional interfaces added to the previous example.


NOTE: the example below can also be used as the spacing is correct

NOTE: DNS is commented out for the additional interfaces on the DATA / iSCSI network.


Example 


network:

  version: 2

  ethernets:

    ensMGMT:

      addresses: [xxx.xxx.xxx.xxx/YY]

      gateway4: xxx.xxx.xxx.xxx

      nameservers:

        addresses: [xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx]

    ensDATA1:

      addresses: [xxx.xxx.xxx.xxx/YY]

      gateway4: xxx.xxx.xxx.xxx

      #nameservers:

        #addresses: [xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx]

    ensDATA2:

      addresses: [xxx.xxx.xxx.xxx/YY]

      gateway4: xxx.xxx.xxx.xxx

      #nameservers:

        #addresses: [xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx]

      #dhcp4: false


After inserting the new lines, save the file.  Then re-open it and insert the correct IP, NAME, MAC address, and GATEWAY values for the additional interfaces.


Using the information we gathered previously using  ip a


Interface name: ensDATA

Interface MAC:   link/ether 00:50:56:9b:4a:d0


Interface name:  ensDATA2

Interface MAC    link/ether 00:50:56:9b:6a:d1


NOTE: Do NOT set for jumbo frames at this time.  The interface changes need to be tested first before enabling Jumbo frames.


Example:


network:

  version: 2

  # renderer: NetworkManager

  ethernets:

    ensMGMT:

      addresses: [xxx.xxx.xxx.xxx/YY]

      gateway4: xxx.xxx.xxx.xxx

      nameservers:

        addresses: [xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx]

      mtu: 1500

    ensDATA:                           <-----(Interface name)

      addresses: [xxx.xxx.xxx.xxx/YY]  <-----(Interface IP)

      gateway4: xxx.xxx.xxx.xxx        <-----(Interface Gateway)

      match:

        macaddress: 00:50:56:9b:4a:d0  <-----(MAC address)

      mtu: 1500                        <-----(MTU: do not change yet)

    ensDATA2:                          <-----(Interface name)

      addresses: [xxx.xxx.xxx.xxx/YY]  <-----(Interface IP)

      gateway4: xxx.xxx.xxx.xxx        <-----(Interface Gateway)

      match:

        macaddress: 00:50:56:9b:6a:d1  <-----(MAC address)

      mtu: 1500                        <-----(MTU: do not change yet)

      # dhcp4: false


After editing the interface name, addresses, gateway4, and macaddress fields for each interface, save and exit the file.


Now test the new configuration using the following command:

$ sudo netplan try


If the configuration validates, you will be given the option to accept it.  There may be warnings before the confirmation prompt appears; if you do not accept within 120 seconds, netplan rolls back to the previous configuration.  When the new configuration is accepted, the network services are restarted automatically to apply the new settings.


Example


root@linux:~# sudo netplan try


** (generate:3290): WARNING **: 19:35:20.277: Problem encountered while validating default route consistency.Please set up multiple routing tables and use `routing-policy` instead.

Error: Conflicting default route declarations for IPv4 (table: main, metric: default), first declared in ensDATA but also in ensMGMT


** (process:3288): WARNING **: 19:35:20.664: Problem encountered while validating default route consistency.Please set up multiple routing tables and use `routing-policy` instead.

Error: Conflicting default route declarations for IPv4 (table: main, metric: default), first declared in ensDATA but also in ensMGMT

Do you want to keep these settings?


Press ENTER before the timeout to accept the new configuration


These warnings are expected, and it is OK to proceed and apply the network changes.  Applying the netplan configuration will restart the networking services.


After this, confirm the IP address of your machine using the following command:

$ ip a


The output should reflect the new IP addresses added to netplan.
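Once the YAML has been validated with netplan try, later changes can be applied without the interactive confirmation step:

```shell
# Apply the netplan configuration non-interactively (no rollback timer)
sudo netplan apply

# Confirm the interfaces picked up the new addresses
ip a
```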

Interface Verification (without jumbo frames)


To verify the newly configured interfaces, the following command can be used to ping an IP on the same network segment as the newly configured interfaces


ping -I DATA-INF_1 storONE_DATA_ADDRESS -c 5


This should return a response from the address used, which looks like


Example


root@linux:~$ ping -I DATA -c 5 xxx.xxx.xxx.xxx

PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx) from yyy.yyy.yyy.yyy ens192: 56(84) bytes of data.

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=1 ttl=64 time=0.512 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=2 ttl=64 time=0.538 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=3 ttl=64 time=0.329 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=4 ttl=64 time=0.471 ms

64 bytes from xxx.xxx.xxx.xxx: icmp_seq=5 ttl=64 time=0.346 ms


--- xxx.xxx.xxx.xxx ping statistics ---

5 packets transmitted, 5 received, 0% packet loss, time 4076ms

rtt min/avg/max/mdev = 0.329/0.439/0.538/0.085 ms


NOTE: If using Jumbo Frames, continue to the "Enabling Jumbo frames" section below before testing the interfaces with ping

Enabling Jumbo frames for Debian and Ubuntu


When enabling Jumbo Frames on Linux host interfaces, all virtual and physical switch ports in the path must also be enabled for jumbo frames (i.e. virtual switching, kernel ports, the StorONE array, network switches, etc.).  If the MTU is not set properly from host to array, iSCSI discovery can be problematic, and mounting and formatting a LUN with an MTU mismatch will most often fail.


To change the MTU for the data interfaces on the host, edit the following file


vi /etc/netplan/01-network-manager-all.yaml


For the data interface(s), change the mtu value from the 1500 we set earlier in the guide to 9000


Example:


network:

  version: 2

  # renderer: NetworkManager

  ethernets:

    ensMGMT:

      addresses: [xxx.xxx.xxx.xxx/YY]

      gateway4: xxx.xxx.xxx.xxx

      nameservers:

        addresses: [xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx]

      mtu: 1500

    ensDATA:

      addresses: [xxx.xxx.xxx.xxx/YY]

      gateway4: xxx.xxx.xxx.xxx

      match:

        macaddress: 00:50:56:9b:4a:d0

      mtu: 9000

    ensDATA2:

      addresses: [xxx.xxx.xxx.xxx/YY]

      gateway4: xxx.xxx.xxx.xxx

      match:

        macaddress: 00:50:56:9b:6a:d1

      mtu: 9000

      # dhcp4: false


Remember to save the netplan YAML file, and issue the following command


sudo netplan try
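After accepting the netplan change, the data interfaces should report the new MTU; ensDATA and ensDATA2 are the example names used above:

```shell
# Confirm each data interface now reports MTU 9000
ip link show ensDATA | grep mtu
ip link show ensDATA2 | grep mtu
```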


Interface verification with Jumbo Frames


The ping command below sets "do not fragment" with a jumbo-sized payload.  If Jumbo Frames are set correctly along the entire path, the command will receive replies from the StorONE ports (or any other port with jumbo frames enabled) when run from the Linux iSCSI initiator ports.


ping -I DATA-INF_1 storONE_iSCSI_discovery_address -c 10 -M do -s 8972


This will return a ping response only if the MTU is set correctly along the entire path.

If the MTU is not correctly set along the path, the jumbo ping above will fail, while the following command (the same payload without the do-not-fragment flag, so fragmentation is allowed) will still return a response:


ping -I DATA-INF_1 storONE_iSCSI_discovery_address -c 10 -s 8972


If the host cannot ping the storage at all, then the host, switch, and array configurations will need to be reviewed.


With the Linux NFS setup completed and verified, the NFS shares from the array should now be visible and mountable using the commands below


Verify NFS shares from array

showmount -e <storone-floating-ip-address>



Mount options can also be supplied on the mount command line using mount -o


NFS fstab Example:

floating_ip:/shares/vol_name  /mountpoint nfs soft,sync,relatime,nconnect=4,vers=3,timeo=60,retrans=50,_netdev 0 0




Manual NFS mount example

mount -t (nfs/smb) -o soft,sync,relatime,nconnect=4,vers=3,timeo=60,retrans=50,_netdev floating_ip:/shares/vol_name /mount_dir/vol_dir/mount


Manual SMB mount example (SMB uses the cifs filesystem type and a //server/share path; the NFS-specific options above do not apply)

mount -t cifs -o username=<user>,password=<password>,_netdev //floating_ip/vol_name /mount_dir/vol_dir/mount

Debian / Ubuntu: configuring iSCSI and Multipath


Creating ifaces for data interfaces all releases

Creating ifaces through iscsiadm is recommended for clarity, and is required for multiple iSCSI interfaces and MPIO. The iscsiadm commands can be used for all releases covered by this guide.


To create the new ifaces, we will need the interface names and MAC addresses (from ip a) for the DATA or iSCSI interfaces


The iscsiadm -m iface command creates the iface. For clarity, the iface name should match the interface name.


iscsiadm -m iface -I <interface_name> --op=new


Then bind the iface to the MAC address of the interface


 iscsiadm -m iface -I <interface_name> -o update -n iface.hwaddress  -v  <MAC_address>
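The two commands above are repeated for each DATA interface. As a sketch, the helper below only prints the iscsiadm commands for review; the interface names and MAC addresses are placeholders (take yours from ip a):

```shell
# gen_iface_cmds: print the two iscsiadm commands for one DATA interface
# $1 = interface name, $2 = MAC address (both placeholders here)
gen_iface_cmds() {
  echo "iscsiadm -m iface -I $1 --op=new"
  echo "iscsiadm -m iface -I $1 -o update -n iface.hwaddress -v $2"
}

gen_iface_cmds ens224 00:50:56:9b:4a:d0
gen_iface_cmds ens256 00:50:56:9b:6a:d1
```

Review the printed commands, then run them with root privileges.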



iscsid.conf tuning all releases

The following default iscsid.conf values need to be adjusted for performance and stability.


We recommend the following values be set in /etc/iscsi/iscsid.conf


node.session.timeo.replacement_timeout = 120

node.conn[0].timeo.noop_out_interval = 5

node.conn[0].timeo.noop_out_timeout = 10

node.session.nr_sessions = 4

node.session.cmds_max = 2048

node.session.queue_depth = 1024


NOTE: The value node.session.nr_sessions is recommended to be set at 4 (four) for most applications. Where extreme performance is required, this value can be increased to 8 (eight); however, because this multiplies the LUN path count, 8 is not recommended as a default value.


The service will need to be restarted for these values to take effect.


sudo systemctl restart iscsid
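Note that existing iSCSI sessions keep the settings they were created with; a logout/login cycle is needed for the new session values (including nr_sessions) to take effect. A hedged sketch, wrapped in a function so it can be run deliberately during a maintenance window:

```shell
# apply_iscsi_settings: restart iscsid and re-establish all sessions so the
# new settings take effect. Dropping sessions interrupts I/O, so run this
# only when the host's iSCSI storage can tolerate a brief outage.
apply_iscsi_settings() {
  systemctl restart iscsid
  iscsiadm -m node --logoutall=all
  iscsiadm -m node --loginall=all
}

# apply_iscsi_settings   # uncomment to run (requires root)
```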

Adjusting arp for multiple interfaces / same subnet


This is only required if there are multiple interfaces configured for the SAME SUBNET.  If these adjustments are not made on a Linux host with multiple interfaces in the same IP segment or subnet, the configured interfaces will not respond correctly for IP and iSCSI traffic.


sudo vi /etc/sysctl.d/99-storone.conf


net.ipv4.conf.all.rp_filter = 2

net.ipv4.conf.default.rp_filter = 2

net.ipv4.conf.all.arp_filter = 1

net.ipv4.conf.default.arp_filter = 1

net.ipv4.conf.all.arp_announce = 2

net.ipv4.conf.default.arp_announce = 2

net.ipv4.conf.all.arp_ignore = 1

net.ipv4.conf.default.arp_ignore = 1




When the changes are complete, run the following command (sysctl --system loads the files under /etc/sysctl.d, which sysctl -p alone does not)

sudo sysctl --system
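After reloading, the running values can be verified directly from /proc/sys (the paths mirror the sysctl key names above):

```shell
# Print the active ARP / reverse-path-filter settings for all interfaces
for KEY in rp_filter arp_filter arp_announce arp_ignore; do
  printf 'net.ipv4.conf.all.%s = %s\n' "$KEY" "$(cat /proc/sys/net/ipv4/conf/all/$KEY)"
done
```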




Installing Device Mapper Multipath for RHEL / CentOS


Verify and install device-mapper-multipath

To check if the device mapper multipath package is installed, run the following command.


yum list installed | grep -i 'multipath'


Example:


[root@localhost /]# systemctl status | grep -i 'multipath'

        │ └─4049 grep --color=auto -i multipath


The example output contains only the grep process itself, showing that multipathd is not running and that the multipath package is not installed.

To install device-mapper-multipath and any dependent packages


yum install device-mapper-multipath



Installing Multipath tools for Debian


Verify and install multipath-tools

To check if the multipath tools package is installed, run the following command.


sudo dpkg -l | grep ^ii | grep -i multipath


If grep returns no output, then multipath-tools needs to be installed


sudo apt install multipath-tools


Example:


root@linux:~$ sudo dpkg -l | grep ^ii | grep -i multipath

ii  multipath-tools    0.8.3-1ubuntu2.1    amd64    maintain multipath block device access


This example shows multipath-tools installed. If the system will be used to boot from an array LUN, then the multipath-tools-boot package is also needed for proper functionality


sudo apt install multipath-tools-boot

Configuring multipath (all releases)


The StorONE array-specific information provided in the device section must be added to /etc/multipath.conf.

The defaults and blacklist_exceptions sections are also recommended additions to multipath.conf.

Values under defaults affect all multipath devices, while values under device are array-specific.


A complete example of the required and recommended file is below


Example:


defaults {
    find_multipaths        yes
    user_friendly_names    yes
    path_selector          "round-robin 0"
    path_grouping_policy   multibus
    no_path_retry          30
    max_sectors_kb         1024
    queue_without_daemon   no
    max_fds                max
    flush_on_last_del      yes
    log_checker_err        once
}

devices {
    device {
        vendor                 "STORONE*"
        product                "S1*"
        detect_prio            yes
        prio                   "alua"
        path_selector          "queue-length 0"
        path_grouping_policy   group_by_prio
        failback               immediate
        path_checker           tur
    }
}

blacklist_exceptions {
    wwid    "36882e5a*"
}
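After editing /etc/multipath.conf, the configuration must be reloaded and the resulting maps verified. A sketch (the counting helper is illustrative; the STORONE string matches the vendor field above):

```shell
# count_storone_maps: count multipath map lines mentioning the STORONE vendor
count_storone_maps() {
  multipath -ll 2>/dev/null | grep -c "STORONE"
}

# Ask the running daemon to re-read multipath.conf, then report the maps.
multipathd reconfigure 2>/dev/null || true
echo "STORONE multipath maps: $(count_storone_maps)"
```

In multipath -ll output, each LUN should show one path per iSCSI session (for example, two DATA interfaces with nr_sessions = 4 yields eight paths).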


Common Linux block storage performance tuning


Disk IO Scheduler (all releases)


The IO scheduler needs to be set to "deadline" for all array-backed LUNs. To set the IO scheduler for all LUNs currently connected to the host, run the command below.


NOTE: Multipath must be installed and configured before using this command. The setting is not persistent: LUNs added later, or a server reboot, will not automatically apply this parameter.



Example:

[root@linux ~]# multipath -ll | grep sd | awk -F":" '{print $4}' | awk '{print $2}' | while read LUN; do echo deadline > /sys/block/${LUN}/queue/scheduler ; done


To run this command on startup, add this line to the file /usr/local/bin/s1_linux_service and restart the service

sudo systemctl restart s1_linux.service
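Alternatively, a udev rule can apply the scheduler automatically as devices appear, surviving reboots and newly added LUNs; the rule file name below is arbitrary, and on newer blk-mq kernels the scheduler is named mq-deadline rather than deadline:

```
# /etc/udev/rules.d/99-storone-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/scheduler}="deadline"
```

Apply without a reboot by running udevadm control --reload-rules followed by udevadm trigger.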

Block volume mounting recommendations (all releases)



The following mount options are recommended for array-backed block volumes:

barrier=0, discard, noatime, nodiratime, _netdev


fstab Example:


/etc/fstab file:

/dev/mapper/volX /mountpoint ext4 _netdev,noatime,nodiratime,barrier=0,discard 0 0


Mount options can also be supplied directly on the mount command line with the -o flag


CLI example


Mount command:

[root@linux_host ~]# mount -t ext4 -o _netdev,noatime,nodiratime,barrier=0,discard /dev/mapper/volX /mountpoint


Once the multipath devices are visible, volumes can be formatted with a file system as needed or used as raw block devices.
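As an end-to-end sketch, the helper below only assembles the recommended mount command for review; the device and mountpoint names are the placeholders used in the examples above:

```shell
# build_mount_cmd: print the recommended ext4 mount command for a device
# $1 = block device (e.g. /dev/mapper/volX), $2 = mountpoint
build_mount_cmd() {
  echo "mount -t ext4 -o _netdev,noatime,nodiratime,barrier=0,discard $1 $2"
}

build_mount_cmd /dev/mapper/volX /mountpoint
# Format first if the volume is new (destructive!):  mkfs.ext4 /dev/mapper/volX
```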