In this post we will learn how to achieve multipathing when using the software iSCSI adapter to connect to a storage array.
What is Multipathing?
Multipathing means having more than one path from your server to a storage device. At any given time, more than one path can be used to connect to the LUNs on the storage device. It provides the ability to load-balance I/O across paths when all paths are available and to handle the failure of a path at any point between the server and the storage. Multipathing is the de facto standard for most Fibre Channel SAN environments.
Multipathing for software iSCSI
For environments that use software iSCSI to connect to a storage array, multipathing is possible at the VMkernel network adapter level, but it is not the default configuration. The default iSCSI configuration creates only one path from the software iSCSI adapter (vmhba) to each iSCSI target.
To enable failover at the path level and to load-balance I/O traffic between paths, we have to configure port binding to create multiple paths between the software iSCSI adapter on the ESXi host and the storage array.
Without port binding, all iSCSI LUNs are detected over a single path per target. By default, the ESXi host uses only one vmknic (VMkernel network adapter) to connect to each target, and you will be unable to fail over at the path level or load-balance I/O between different paths to the iSCSI LUNs.
This holds true even if we use NIC teaming and have more than one uplink on the VMkernel port group used for iSCSI. With simple NIC teaming, traffic is redirected at the network layer to the second network adapter when connectivity through the first network card fails, but failover at the path level is not possible, nor is load balancing across multiple paths.
vmknic-based software iSCSI multipathing is also known as “Port Binding” or simply as “software iSCSI multipathing”.
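If you want to see how many paths each LUN currently has, you can check from the ESXi shell. This is just a quick sketch; the device identifier naa.xxxxxxxx below is a placeholder for one of your own LUNs.
# List all storage paths known to this host
esxcli storage core path list
# List the paths for a specific device (replace naa.xxxxxxxx with your LUN's identifier)
esxcli storage core path list -d naa.xxxxxxxx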
Configuring vmknic-Based iSCSI Multipathing
To enable vmknic-based software iSCSI multipathing, you must:
- Create two VMkernel port groups and connect one uplink to each of them.
- Bind each VMkernel network adapter to the software iSCSI adapter. Then run a rediscovery of iSCSI targets to detect multiple paths to them.
Follow the steps outlined below to achieve multipathing for software iSCSI.
1: Configure the Network: Enabling port binding removes the ability to route to storage, so the storage array and the VMkernel ports must have IP addresses in the same subnet. Network configuration involves the following steps (an equivalent CLI sketch follows the list):
a) Create a new standard vSwitch (for example, iscsivSwitch).
b) Add two physical uplinks to iscsivSwitch.
c) Create two VMkernel port groups, for example iSCSI-PG1 and iSCSI-PG2, and assign an IP address to each VMkernel adapter.
Initially, all network adapters added to the vSwitch appear as active for each VMkernel port on the vSwitch. We have to override this configuration so that each VMkernel port maps to only one active adapter.
d) In the Ports tab of the vSwitch Properties dialog box, select a VMkernel port and click Edit.
e) Click the NIC Teaming tab and check Override switch failover order.
f) We have to keep only one adapter under Active Adapters and use Move Down to move other adapters under Unused Adapters.
g) Repeat steps d-f for each VMkernel port on the vSwitch, ensuring that each port has its own unique active adapter.
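For reference, here is a minimal sketch of the same network configuration done with esxcli from the ESXi shell. The uplink names (vmnic2, vmnic3), VMkernel adapter names (vmk1, vmk2), and IP addresses are examples only; substitute the values for your environment.
# Create the standard vSwitch and add two physical uplinks
esxcli network vswitch standard add --vswitch-name=iscsivSwitch
esxcli network vswitch standard uplink add --vswitch-name=iscsivSwitch --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=iscsivSwitch --uplink-name=vmnic3
# Create the two VMkernel port groups
esxcli network vswitch standard portgroup add --vswitch-name=iscsivSwitch --portgroup-name=iSCSI-PG1
esxcli network vswitch standard portgroup add --vswitch-name=iscsivSwitch --portgroup-name=iSCSI-PG2
# Create a VMkernel adapter on each port group and assign an IP address
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-PG1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-PG2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.12 --netmask=255.255.255.0 --type=static
# Override the failover order so each port group uses exactly one active uplink
# (verify afterwards that the other uplink shows as Unused, not Standby)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-PG1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-PG2 --active-uplinks=vmnic3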
Enabling the iSCSI Software Adapter
To access iSCSI targets, we must first enable the software iSCSI initiator on the ESXi host. Follow the steps outlined below to do so (a CLI sketch follows the list):
1: Connect to the ESXi server using the vSphere Client.
2: Go to Configuration > Storage Adapters.
3: Click Add and check Add Software iSCSI Adapter.
4: Right-click the newly added software iSCSI adapter and select Properties.
5: Enter the iSCSI target address in Static/Dynamic Discovery.
Dynamic Discovery – Specify the addresses for Send Targets discovery. The iSCSI initiator sends a Send Targets request to each of the specified addresses, and the discovered targets are added to the static discovery list.
Static Discovery – A list of IP addresses and iSCSI names of targets to connect to. The list can be populated from dynamic Send Targets responses or entered manually. These are the targets with which the ESXi host attempts to establish sessions.
6: Click Close to finish iSCSI initiator configuration.
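The same steps can also be done from the ESXi shell. This is a sketch only: the adapter name vmhba33 and the discovery address 192.168.10.100 are assumptions, so check the adapter list output and use your array's portal IP.
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# Find the name assigned to the software iSCSI adapter (something like vmhba33)
esxcli iscsi adapter list
# Add a dynamic (Send Targets) discovery address for the array
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.100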
Activate vmknic-Based Multipathing for Software iSCSI
vSphere 5.0 added a new UI to support multipathing configuration for the software iSCSI adapter using port binding.
1: Click the Configuration tab and select Storage Adapters.
2: Select iSCSI Software Adapter and click Properties.
3: Click the Network Configuration tab and click Add to bind a VMkernel network adapter to the software iSCSI adapter.
4: The Bind with VMkernel Adapter window is displayed, listing all the VMkernel adapters compatible with the iSCSI port binding requirements. Select the VMkernel network adapter you want to bind to the software iSCSI adapter and click OK.
5: Repeat steps 3-4 until you bind all the required VMkernel adapters to the iSCSI adapter.
6: Close the iSCSI Initiator Properties window.
7: Select the software iSCSI adapter and perform a rescan to verify that multiple paths are available for the iSCSI LUNs (the binding and rescan can also be done from the CLI, as sketched below).
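Here is a rough CLI equivalent of the binding and rescan steps, again assuming the software iSCSI adapter is vmhba33 and the bound VMkernel adapters are vmk1 and vmk2:
# Bind each iSCSI VMkernel adapter to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# Rescan the adapter so the additional paths are discovered
esxcli storage core adapter rescan --adapter=vmhba33
# Each LUN should now show more than one path
esxcli storage core path list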
Note: If the port group policy is reported as non-compliant for a VMkernel network adapter, ensure that:
• The VMkernel network adapter is connected to exactly one active physical network adapter, not more than one.
• The VMkernel network adapter is not connected to any standby physical network adapters.
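If you prefer to check this from the shell, the following commands (again assuming vmhba33 and the example port group names used above) show the binding status and the failover order:
# Show the bound VMkernel adapters and their compliance/path status
esxcli iscsi networkportal list --adapter=vmhba33
# Review the teaming policy of an iSCSI port group (one active uplink, no standby uplinks)
esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI-PG1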
I hope you enjoyed reading this post. Feel free to share it on social media if you found it informative. Be Sociable 🙂