Basics of iSCSI
In the computing world, iSCSI is an acronym for Internet Small Computer Systems Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. It provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network.
With the emergence of high-speed networks (2.5, 5, 25, 40, 50 and 100 Gbps), iSCSI is becoming more popular. Fibre Channel still dominates production environments, but for non-critical workloads and for customers looking for an inexpensive storage solution, iSCSI is often the best option. In recent years vSphere has seen many improvements in the software iSCSI initiator, and with jumbo frames support iSCSI is spreading widely in the industry.
Let’s start with the basics: iSCSI is one of the main IP storage standards, especially for non-critical workloads. In the figure below, a server on the network is accessing block storage. Type-1 hypervisors can support different storage technologies and protocols for presenting external storage devices. We will mostly discuss VMware here, with a little about KVM. vSphere has supported iSCSI storage since the classic “Virtual Infrastructure 3”.
Adoption of iSCSI
We do not need to build a new network as we do with FC; iSCSI can run over our common LAN, MAN and WAN infrastructure, and TCP/IP has no limits on distance. Skilled manpower and open-source TCP/IP tools are also widely available. These are the main benefits iSCSI has over FC.
Nowadays you do not need a dedicated storage admin the way companies did 10 years ago. Storage arrays have evolved a lot in recent years and are now easy to configure; management software handles RAID setup and hardware monitoring for you. One of the main benefits of iSCSI is that it is inexpensive compared to other storage protocols such as Fibre Channel.
“The bitterness of poor performance lasts long after the sweetness of a cheap price is forgotten” – Michael Webster, VMworld 2013
When iSCSI uses standard network interface cards rather than dedicated iSCSI adapters, the interfaces can be expected to consume a significant amount of your servers’ CPU resources. There are several ways to mitigate this problem; one of them is to use a TOE (TCP Offload Engine) capable NIC.
What does a TOE do?
It simply moves TCP packet processing from the server CPU to a specialized TCP processor on the network adapter (or, in some designs, to the storage device). The concept of offloading work from the main processor is similar to that of graphics coprocessors, which offload 3D calculations and visual rendering tasks from the main CPU.
The ability of a TOE to perform full transport-layer functionality is essential to obtaining tangible benefits; the important aspect of this layer is that it is the process-to-process layer.
In my view, cost is unquestionably the main issue that has hindered TOE adoption in the general enterprise community. Typical TOE-capable cards range in price from $400 to $2,000, and in some servers you need an expansion slot or even a riser, which adds further cost. The benefit is simply not big enough for everybody to consider buying TOE cards.
Moreover, in my view VMware has improved vSphere a lot over time, especially with the freedom to enable jumbo frames, so I would prefer the software iSCSI initiator; with TOE you do not have that flexibility. For VMware support details, consult the VMware HCL.
Difference between iSCSI and Fibre Channel
One of the main differences between iSCSI and Fibre Channel is how they handle I/O congestion. When an iSCSI path is overloaded, drops packets and becomes substantially oversubscribed, the situation quickly grows worse: performance degrades further because dropped packets must be resent. The FC protocol, by contrast, has a built-in pause mechanism for when congestion occurs. So the two protocols have different mechanisms for handling congestion.
Currently many vendors implement delayed ACK and congestion avoidance as part of their TCP/IP stack. VMware recommends consulting your iSCSI array vendor for specific recommendations around delayed ACK.
TCP delayed acknowledgment
TCP delayed acknowledgment is a technique used to improve network performance. In essence, several ACK responses may be combined together into a single response, reducing protocol overhead.
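As a sketch of what this means at the socket level (illustrative Python on Linux, not anything from the vSphere stack; the helper name is mine), TCP_NODELAY and the Linux-only TCP_QUICKACK are the per-socket knobs that interact with delayed ACKs:

```python
import socket

def disable_delayed_ack(sock: socket.socket) -> None:
    """Illustrative helper (the name is mine): ask the kernel to ACK promptly.

    TCP_NODELAY turns off Nagle's algorithm, whose batching interacts badly
    with delayed ACKs; TCP_QUICKACK (Linux-only) disables delayed ACKs for
    the upcoming segments on this socket.
    """
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    if hasattr(socket, "TCP_QUICKACK"):  # present on Linux only
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)

# Demo on a loopback connection.
server = socket.create_server(("127.0.0.1", 0))
client = socket.create_connection(server.getsockname())
disable_delayed_ack(client)
print(client.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```

Storage arrays and the ESXi initiator expose the equivalent setting in their own configuration, which is why VMware points you at the array vendor's guidance rather than a universal rule.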
Difference between iSCSI and NAS
NAS presents storage at the file level. A NAS is specialized for serving files by its hardware, software or configuration, and it is often manufactured as a computer appliance.
iSCSI, on the other hand, is an Internet Engineering Task Force (IETF) standard for encapsulating SCSI commands and data in TCP/IP packets, presenting storage at the block level. In the figure below you can see how iSCSI is encapsulated in TCP/IP and Ethernet frames.
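As a back-of-the-envelope illustration of that encapsulation (standard header sizes; 48 bytes is the iSCSI basic header segment from RFC 7143), the layering also shows why jumbo frames improve efficiency:

```python
# Header bytes added at each layer when an iSCSI PDU rides in one frame.
ETHERNET = 14 + 4   # Ethernet header + frame check sequence (outside the MTU)
IP = 20             # IPv4 header, no options
TCP = 20            # TCP header, no options
ISCSI_BHS = 48      # iSCSI basic header segment (RFC 7143)

for mtu in (1500, 9000):
    data = mtu - IP - TCP - ISCSI_BHS      # SCSI payload left in the frame
    wire = mtu + ETHERNET                  # bytes actually on the wire
    print(f"MTU {mtu}: {data} payload bytes/frame, {data / wire:.1%} efficient")
```

With a 1500-byte MTU roughly 93% of each frame is payload; jumbo frames (MTU 9000) push that near 99%, which is one reason jumbo-frame support matters for iSCSI.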
VMware iSCSI Names
iSCSI names are globally unique and are not bound to any Ethernet adapter or IP address. iSCSI supports two naming forms: the Extended Unique Identifier (EUI) and the iSCSI Qualified Name (IQN).
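A quick illustrative check of the IQN format (iqn.yyyy-mm.reversed-domain[:identifier], per the iSCSI RFCs; the function and example names are mine):

```python
import re

# IQN layout: "iqn." + year-month the naming authority registered its domain,
# + the domain name reversed, + an optional ":" identifier chosen by the owner.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def is_valid_iqn(name: str) -> bool:
    """Rough format check only -- not a full RFC validation."""
    return IQN_RE.match(name) is not None

print(is_valid_iqn("iqn.1998-01.com.vmware:esx-host01"))   # True
print(is_valid_iqn("eui.0123456789abcdef"))                # False: EUI form, not IQN
```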
Basically, iSCSI uses a client-server architecture. The clients of an iSCSI interface are known as initiators, and the servers that share the storage are known as targets.
There are two basic iSCSI components:
iSCSI Initiator
It functions as an iSCSI client. An iSCSI initiator sends SCSI commands over an IP network. There are two kinds of initiators:
A software initiator implements iSCSI in code, typically in a kernel device driver that uses the network card and network stack to emulate SCSI devices for a computer by speaking the iSCSI protocol.
Nowadays almost all popular operating systems come with software initiators. In the table below you can find the dates when operating systems released their software initiators.
| Operating System | First release | Version | Features |
|---|---|---|---|
| VMware ESX | 2006 | ESX 3.0-7.x | Target, Multipath |
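On Linux (and therefore on KVM hosts) the usual software initiator is open-iscsi; a typical discovery-and-login session looks like the sketch below (the portal address and target IQN are placeholders):

```shell
# Discover targets offered by the array's portal (address is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to one of the discovered targets
iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 -p 192.168.1.10:3260 --login

# Show active sessions; the new block device then appears under /dev
iscsiadm -m session
```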
iSCSI Target
iSCSI refers to a storage resource located on an iSCSI server as a target. Targets are typically the data providers: this is your storage array, and it presents distinct iSCSI targets to numerous clients.
In the context of vSphere, iSCSI initiators fall into three distinct categories.
Software iSCSI Adapter
This is VMware code built into the VMkernel; it enables your host to connect to iSCSI storage devices through a standard network adapter.
Dependent Hardware iSCSI Adapter
This type of adapter is a card that presents standard network adapter and iSCSI offload functionality on the same port, and it depends on VMware networking and configuration interfaces. An example of a dependent adapter is the iSCSI-licensed Broadcom 5709.
Independent hardware iSCSI Adapter
This kind of adapter is a card that presents either iSCSI offload functionality alone, or iSCSI offload together with standard NIC functionality. The iSCSI offload functionality has independent configuration management that assigns the IP address, MAC address, and other parameters used for the iSCSI sessions. These are the TOE cards we talked about earlier. To identify whether TOE and other TCP features are enabled, run the command:
To check the status of TSO in ESX/ESXi, run the command:
ethtool -k vmnicX
To disable TSO within a Linux guest OS, run the command:
ethtool -K ethX tso off
Simplest Topology of An iSCSI Array
In the figure below, four ESX hosts are connected in the simplest form: each ESX host has two uplinks connected to two switches, and on the other side the storage array is connected to the switches. All connections are redundant.
Try to avoid vSphere NIC teaming and use port binding instead. With port binding you can use multipathing for availability of access to the iSCSI targets.
There will be some scenarios where you need to use teaming. If that is the case, turn off port security on the switch for the two ports on which the virtual IP address is shared; otherwise the switch may treat the shared address moving between ports as IP spoofing and block it.
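For reference, port binding on ESXi attaches VMkernel ports to the software iSCSI adapter; a sketch with esxcli, where vmhba64, vmk1 and vmk2 are placeholders for your host's adapter and port names:

```shell
# Bind two VMkernel ports (each with a single active uplink) to the adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba64
```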
How to add an iSCSI initiator to the vSphere ESXi
Before adding a new iSCSI initiator, here are some recommendations.
- Make sure that the host recognizes LUNs at start-up.
- The SCSI controller driver in the guest operating system should have a large enough queue. For Windows guests, increase the SCSI TimeoutValue parameter so the OS can tolerate delayed I/O resulting from path failover.
- Configure your environment to have only one VMFS datastore for each LUN.
Select the ESXi host, click Configure, then Storage Adapters, and click + Add Software Adapter. In the pop-up window that appears, choose “Add software iSCSI adapter”.
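The same steps can be scripted from the ESXi shell; a sketch with esxcli, where the adapter name vmhba64 and the array address are placeholders:

```shell
# Enable the software iSCSI adapter (one per host)
esxcli iscsi software set --enabled=true

# Confirm, and note the adapter name the host assigned (e.g. vmhba64)
esxcli iscsi adapter list

# Point dynamic discovery at the array's portal (address is a placeholder)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.10:3260
```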
- ESXi does not support iSCSI-connected tape devices.
- You cannot use virtual-machine multi-pathing software to perform I/O load balancing to a single physical LUN.
- ESXi does not support multi-pathing when you combine independent hardware adapters with either software or dependent hardware adapters.
Frequently Asked Questions
- Software iSCSI and hardware iSCSI, if enabled on the same host, is it supported?
Yes, it is supported, but not for accessing the same iSCSI target.
- Will VMware support iSCSI with less than a gigabit ethernet connection?
In a production environment, gigabit Ethernet is essential.
- Can we mix iSCSI traffic with general traffic?
It is recommended not to mix iSCSI traffic with general traffic; separate it with layer-2 VLANs.
- What is the no.1 problem with iSCSI?
Oversubscription is the biggest enemy of iSCSI.
- Which is recommended, port binding or NIC teaming?
VMware recommends using port binding rather than NIC teaming.
- Is IPv6 support available?
It is supported starting with vSphere 6, and it has been fully supported in RHEL since RHEL 6.
- What about disk alignment?
Misaligned disks can harm performance on block storage, so align your partitions.
- Does iSCSI support Microsoft clustering?
Yes, Microsoft clustering over iSCSI is supported; I believe support started with vSphere 5.1.
iSCSI – Admin steps
- Some basic commands to get the logs on the ESX host:
#grep -r iscsid /var/log/* | less
- Run this command in the log directory to get all the errors across your directories.
#grep -i error *
- Refer to VMware KB article for iSCSI Error codes https://kb.vmware.com/s/article/2012171
- Is your NIC on the VMware hardware compatibility list?
- You can confirm everything is working by logging in from your initiator to the target.
- Ensure all routing, DNS and network information is correctly configured ( esxcfg-route -l)
- In case of the iSCSI software initiator, a VMkernel port must be set up on the same subnet as your iSCSI storage array, with proper IP addressing.
- Can you see your iSCSI storage array’s IP in either the dynamic or static discovery tab?
- Try ping (vmkping <ip_Storage_Array>) and check the firewall, if any (esxcli network firewall ruleset list | grep iSCSI).
- Jumbo frames, if used, must be configured on all devices in the path. (vmkping -s 8972 -d <ip_Storage_Array>)
- Check your advanced settings such as LoginTimeout, RecoveryTimeout, and Header and Data Digests. The default LoginTimeout is 5, meaning the initiator waits 5 seconds for a login response; check the logs to see whether you need to increase it. The default RecoveryTimeout is 10 seconds; it specifies the time that can elapse while a session recovery is performed. Header and Data Digests default to Prohibited; they can be enabled to ensure data integrity.
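As a footnote on the jumbo-frame check above, the 8972-byte vmkping payload is not arbitrary:

```python
# Why "vmkping -s 8972 -d" verifies a 9000-byte MTU path end to end:
MTU = 9000          # jumbo-frame IP MTU
IP_HEADER = 20      # IPv4 header, no options
ICMP_HEADER = 8     # ICMP echo header

payload = MTU - IP_HEADER - ICMP_HEADER
print(payload)      # 8972; with -d (don't fragment) a reply proves the whole path
```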