Multipathing and Claim Rules

What is Multipathing?

Multipathing means having more than one path from your server to a storage device, so that at any given time one or more of those paths can be used to reach the LUNs on the array. It provides the ability to load-balance across paths when all of them are available and to handle the failure of a path at any point between the server and the storage.

A vSphere host supports multipathing in order to maintain a constant connection between the server and the storage device in case a failure takes out an HBA, fabric switch, storage controller, or Fibre Channel cable.

Multipathing support does not require specific failover drivers. However, in order to support path switching, the server does require two or more HBAs, from which the storage array can be reached by using one or, typically, multiple fabric switches.
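
If you want to see this for yourself, the rough pyVmomi sketch below (untested; the vCenter address and credentials are placeholders) walks every host and prints each storage device along with the state of every path to it.

```python
# Rough sketch: print every storage device on each host and the state of its paths.
# The vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab use only
si = SmartConnect(host="vc.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    storage = host.config.storageDevice
    # Map each ScsiLun key to its canonical name (e.g. naa.xxxx) for display
    lun_names = {lun.key: lun.canonicalName for lun in storage.scsiLun}
    for lu in storage.multipathInfo.lun:
        print(host.name, lun_names.get(lu.lun, lu.lun))
        for path in lu.path:
            # pathState is one of: active, standby, disabled, dead
            print("   ", path.name, path.pathState)
Disconnect(si)
```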

Virtual machine I/O might be delayed for up to sixty seconds while path failover takes place. Read More

Storage IO Control Deep Dive

Storage IO Control (SIOC) was first introduced in vSphere 4.1 and has been getting better with every release of vSphere since. It is one of those features that easily escapes the eye of a vSphere administrator while architecting or configuring an environment.

As we know, storage is the slowest resource compared to its counterparts such as CPU and memory, and when a storage bottleneck occurs in an environment, virtual machines can suffer serious performance degradation.

The introduction of Storage DRS (SDRS) in vSphere 5.0 made the life of a vSphere administrator a bit easier, as SDRS tends to rebalance datastores when I/O imbalances start to occur in the environment. Although this sounds great, there is one caveat: SDRS cannot prevent a single virtual machine from monopolizing I/O consumption. In other words, SDRS is unable to ensure a fair distribution of I/O among virtual machines when contention occurs, and as a result a few virtual machines tend to suffer performance impacts.
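
This is the gap SIOC fills: during contention it enforces the per-virtual-disk shares and IOPS limits you assign. As a rough illustration (untested sketch; the share and limit values are arbitrary, and `vm` is assumed to be an already-retrieved vim.VirtualMachine), this is how those allocations can be set on a VM's first disk with pyVmomi:

```python
# Rough sketch: set custom I/O shares and an IOPS limit on a VM's first virtual disk.
# `vm` is assumed to be an already-retrieved vim.VirtualMachine; values are arbitrary.
from pyVmomi import vim

def set_disk_io_allocation(vm, shares=2000, iops_limit=1000):
    # Pick the first virtual disk on the VM
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))

    alloc = vim.StorageResourceManager.IOAllocationInfo()
    alloc.shares = vim.SharesInfo(level="custom", shares=shares)
    alloc.limit = iops_limit                      # IOPS cap; -1 means unlimited
    disk.storageIOAllocation = alloc

    spec = vim.vm.ConfigSpec(deviceChange=[
        vim.vm.device.VirtualDeviceSpec(operation="edit", device=disk)])
    return vm.ReconfigVM_Task(spec=spec)
```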

So what is Storage IO Control? Read More

VMFS Locking Mechanisms

In a shared storage environment, when multiple hosts access the same VMFS datastore, specific locking mechanisms are used. These locking mechanisms prevent multiple hosts from concurrently writing to the metadata and ensure that no data corruption occurs. This type of locking is called distributed locking.


To handle the situation of an ESXi host crashing, distributed locks are implemented as lease-based. An ESXi host that holds a lock on a datastore has to renew the lease for that lock again and again, and by doing so the host lets the storage know that it is alive and kicking. If an ESXi host has not renewed its lock lease for some time, that host is probably dead.

If the current owner of the lock does not renew the lease for a certain period of time, another host can break that lock. If an ESXi host wants to access a file that was locked by some other host, it looks at the time-stamp of that file, and if it finds that the time-stamp has not been updated for quite a while, it can remove the stale lock and place its own lock. Read More
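
To make the lease idea concrete, here is a tiny conceptual Python sketch (purely illustrative; it has nothing to do with the real VMFS on-disk structures, and the timeout value is made up):

```python
# Conceptual sketch only -- not the real VMFS on-disk format. It mimics the
# lease idea: the owner keeps bumping a timestamp; anyone else may break the
# lock once the timestamp has gone stale for longer than LEASE_TIMEOUT.
import time

LEASE_TIMEOUT = 16  # seconds a lock may go un-renewed before it is considered stale

class DistributedLock:
    def __init__(self):
        self.owner = None
        self.timestamp = 0.0

    def renew(self, host):
        """Called periodically by the owning host (its 'heartbeat')."""
        if self.owner == host:
            self.timestamp = time.time()

    def acquire(self, host):
        """Take the lock if it is free, or break it if the lease is stale."""
        stale = (time.time() - self.timestamp) > LEASE_TIMEOUT
        if self.owner is None or stale:
            self.owner, self.timestamp = host, time.time()
            return True
        return False   # another host still holds a fresh lease
```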

Deploy Virtual Volumes

What is Virtual Volumes?

From VMware KB-2113013

Virtual Volumes (VVols) is a new integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. Virtual Volumes simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed.

VVols were first introduced in vSphere 6.0, and they changed the way virtual machines are stored on storage. Before VVols, the underlying datastores/LUNs were provisioned and tagged in a gold/silver/bronze type of model, which forced a virtualization administrator to pick the storage tier that most closely matched their needs.

VVols enable an administrator to apply a policy to a VM that defines its performance and service-level requirements, such as RAID level, replication, or deduplication. The VM is then automatically placed on the storage that fits those requirements. Read More
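
Conceptually, the placement logic boils down to matching a policy against advertised capabilities. The toy Python sketch below is purely illustrative (the capability names and container names are made up, and this is not the actual SPBM API):

```python
# Conceptual illustration of policy-driven placement (not the actual SPBM API):
# a VM policy lists required capabilities, and the VM lands on the first
# storage container whose advertised capabilities satisfy all of them.
vm_policy = {"raid": "RAID-5", "replication": True, "dedup": True}   # hypothetical

storage_containers = {                                               # hypothetical
    "array-gold":   {"raid": "RAID-1", "replication": True,  "dedup": True},
    "array-silver": {"raid": "RAID-5", "replication": True,  "dedup": True},
    "array-bronze": {"raid": "RAID-5", "replication": False, "dedup": False},
}

def pick_container(policy, containers):
    for name, caps in containers.items():
        if all(caps.get(k) == v for k, v in policy.items()):
            return name
    return None

print(pick_container(vm_policy, storage_containers))   # -> array-silver
```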

Create, Configure and Manage Datastore Clusters

What is a Datastore Cluster?

A datastore cluster is a collection of datastores with shared resources and a shared management interface. Datastore clusters are to datastores what clusters are to hosts.

When you add a datastore to a datastore cluster, the datastore's resources become part of the datastore cluster's resources. Datastore clusters are used to aggregate storage resources, which enables you to support resource allocation policies at the datastore cluster level. A datastore cluster also provides the following benefits:

  • Space utilization load balancing
  • I/O latency load balancing
  • Anti-affinity rules

We will talk about these in greater detail a bit later in this post.
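
For reference, a rough pyVmomi sketch of creating a datastore cluster and switching on SDRS is shown below (untested; it assumes `si` is a connected ServiceInstance, `dc` is the target vim.Datacenter, and `datastores` is a list of existing vim.Datastore objects):

```python
# Rough sketch: create a datastore cluster and enable SDRS on it.
# Assumes `si` is a connected ServiceInstance, `dc` a vim.Datacenter,
# and `datastores` a list of existing vim.Datastore objects.
from pyVmomi import vim

def create_datastore_cluster(si, dc, name, datastores):
    # A datastore cluster is a StoragePod object created in the datastore folder
    pod = dc.datastoreFolder.CreateStoragePod(name=name)
    pod.MoveIntoFolder_Task(list=datastores)          # add the datastores to the pod

    pod_cfg = vim.storageDrs.PodConfigSpec(
        enabled=True,                   # turn SDRS on for the pod
        defaultVmBehavior="automated",  # or "manual"
        ioLoadBalanceEnabled=True)      # balance on I/O latency as well as space
    spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_cfg)
    return si.content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=spec, modify=True)
```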

Datastore Cluster Requirements

Before creating a datastore cluster, one should keep the following important points in mind:

1: Datastore clusters must contain similar or interchangeable datastores: A datastore cluster can contain a mix of datastores of different sizes and I/O capacities, and they can come from different arrays and vendors. However, the following types of datastores cannot coexist in a datastore cluster. Read More

Configure and Manage vSphere Flash Read Cache

What is vSphere Flash Read Cache aka vFlash?

Flash Read Cache helps accelerate virtual machine performance by using flash devices residing in the ESXi host as a cache.

vFlash was first introduced in vSphere 5.5. It allows you to use the local SSD disks of an ESXi host to create a caching layer for your virtual machines. By using host-local SSDs, you can offload some of the I/O from your SAN storage to these local SSD disks.

vFlash aggregates local flash devices into a pool, and this pool is called the "virtual flash resource"; it is what backs the Flash Read Cache (vFRC) given to virtual machines. For example, if you have 3 x 60 GB SSDs, you end up with a 180 GB virtual flash resource. Each local SSD added to the virtual flash resource is formatted with a filesystem called VFFS, aka "Virtual Flash File System".

vFRC helps lower application latency because read I/Os no longer have to go all the way down to the SAN across the physical network and storage controllers; instead they are served from the local flash cache. Read More
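
The benefit is the classic read-through cache pattern, sketched below purely as an illustration (this is a conceptual toy, not how vFRC is actually implemented):

```python
# Conceptual read-through cache (illustration only, not how vFRC is implemented):
# a read served from local flash never touches the SAN path at all.
class ReadCache:
    def __init__(self, backend_read):
        self.backend_read = backend_read    # e.g. a slow read from the array
        self.cache = {}                     # stands in for blocks on local SSD

    def read(self, block_id):
        if block_id in self.cache:          # cache hit: local flash latency
            return self.cache[block_id]
        data = self.backend_read(block_id)  # cache miss: full trip to the array
        self.cache[block_id] = data         # populate the cache for next time
        return data
```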

iSCSI Port Binding in vSphere 6

My first interaction with iSCSI and port binding was back in 2013, when we introduced an iSCSI-based SAN (Dell MD3200i) into our environment. We were a small SMB, and the introduction of a SAN into our vSphere environment was a very big thing for me as an administrator.

This was our architecture back then:

[Architecture diagram]

I clearly remember that before the SAN implementation started, I was contacted by a Dell engineer to do some pre-work, which included creating two VMkernel port groups, each with only one uplink set as active and the other as unused, so as to achieve multipathing with iSCSI.

At that time I was aware of multipathing and what it does, but I was confused about the active/unused adapter configuration (as I was still learning). When the actual implementation started, the implementation engineer explained it, and that was the first time I heard the term port binding.
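
For the record, the same port binding can also be done programmatically. The rough pyVmomi sketch below is untested; the vmk names are placeholders, and it assumes `host` is a vim.HostSystem with the software iSCSI adapter already enabled:

```python
# Rough sketch: bind VMkernel ports to the software iSCSI adapter (port binding).
# Assumes `host` is a vim.HostSystem and the software iSCSI adapter is enabled;
# the vmk names are placeholders.
from pyVmomi import vim

def bind_iscsi_vmks(host, vmk_ports=("vmk1", "vmk2")):
    # Find the software iSCSI HBA (shows up as something like vmhba64)
    iscsi_hba = next(hba for hba in host.config.storageDevice.hostBusAdapter
                     if isinstance(hba, vim.host.InternetScsiHba))
    iscsi_mgr = host.configManager.iscsiManager
    for vmk in vmk_ports:
        # Each bound vmk (with a single active uplink) becomes its own iSCSI path
        iscsi_mgr.BindVnic(iscsi_hba.device, vmk)
```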

It's time to refresh the concepts now, as I am going through my VCAP preparations. Read More

VMFS Re-Signaturing

When you create a new datastore in vSphere, each VMFS volume is assigned a unique identifier (UUID), and this UUID is stored in a metadata file as a unique hexadecimal number.

You can see these UUIDs via an SSH console on the ESXi host. In the /vmfs/volumes directory, each VMFS volume has a long UUID string, and the human-readable name (which we configure from the GUI while creating a datastore) is a link to the UUID.


The UUID is made up of four components. Let's understand this by taking the example of one VMFS volume's UUID: 591ac3ec-cc6af9a9-47c5-0050560346b9

  • System Time (591ac3ec)
  • CPU Timestamp (cc6af9a9)
  • Random Number (47c5)
  • MAC Address – Management Port uplink of the host used to re-signature or create the datastore (0050560346b9)

In my example, the MAC address belonged to the management NIC of my first ESXi host.
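
A quick way to split any VMFS UUID into these four components is a few lines of Python (the MAC reformatting is just for readability):

```python
# Break a VMFS UUID into the four components described above.
def parse_vmfs_uuid(uuid):
    sys_time, cpu_ts, rand, mac = uuid.split("-")
    return {
        "system_time":   sys_time,                 # 591ac3ec
        "cpu_timestamp": cpu_ts,                   # cc6af9a9
        "random":        rand,                     # 47c5
        # reformat the raw hex into a familiar colon-separated MAC address
        "mac_address":   ":".join(mac[i:i + 2] for i in range(0, len(mac), 2)),
    }

print(parse_vmfs_uuid("591ac3ec-cc6af9a9-47c5-0050560346b9"))
# {'system_time': '591ac3ec', 'cpu_timestamp': 'cc6af9a9',
#  'random': '47c5', 'mac_address': '00:50:56:03:46:b9'}
```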


When a LUN is replicated, or a snapshot of it is taken on the storage side, the copied LUN is identical to the original one. Read More

My Notes on Raw Device Mapping (RDM)

Raw Device Mapping, aka RDM, is a way of providing a virtual machine with direct access to a LUN on the SAN storage. The LUN presented to the VM can then be formatted with any filesystem, such as NTFS or FAT for a Windows OS, so there is no need to format the LUN with the VMFS filesystem and then place a VMDK on it.

An RDM can be thought of as a symbolic link from a VMFS volume to the raw LUN. When an RDM is mapped to a virtual machine, a mapping file is created. This mapping file acts as a proxy for the physical device and contains metadata used for managing and redirecting access to the raw disk.

When the virtual machine tries to access the LUN, the mapping file is read to obtain the reference to the raw LUN, and then reads and writes go directly to the raw LUN rather than through the mapping file. Read More
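
To see how a mapping file gets created in practice, here is a rough pyVmomi sketch (untested; the device path, unit number, and compatibility mode are placeholders/assumptions) that adds a physical-mode RDM to a VM:

```python
# Rough sketch: add a raw LUN to a VM as a physical-mode RDM.
# The device path and unit number are placeholders; assumes `vm` is a
# vim.VirtualMachine that already has a SCSI controller.
from pyVmomi import vim

def add_rdm_disk(vm, device_name="/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"):
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))

    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=device_name,             # the raw LUN itself
        compatibilityMode="physicalMode",   # or "virtualMode"
        diskMode="independent_persistent")
    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=controller.key,
        unitNumber=1)                       # must be a free slot on the controller

    # fileOperation="create" is what creates the mapping (proxy) file for the RDM
    spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
        operation="add", fileOperation="create", device=disk)])
    return vm.ReconfigVM_Task(spec=spec)
```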

Upgrade Virtual Machine Hardware and VMware Tools

The virtual machine compatibility setting determines the virtual hardware available to the virtual machine, which corresponds to the physical hardware available on the host. The latest virtual machine hardware version available yields the best performance and most reliable behavior from the applications running in your virtual machine.

When to upgrade virtual machine hardware version?

Upgrading the virtual machine hardware version is applicable in two cases:

1: You have upgraded the ESXi hosts to a new version, say from 6.0 to 6.5. The highest hardware version available in vSphere 6.5 is v13, whereas in vSphere 6.0 it is v11. In this case you have the choice to upgrade the hardware version to the latest in order to avail all the advantages offered by the new hardware version.

At the time of VM creation, you might have observed that the wizard asks you for the VM compatibility. If you did not select the latest version available, you can upgrade it at any time later. Read More
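
Programmatically, the upgrade itself is a single API call. A minimal pyVmomi sketch (it assumes `vm` is an already-retrieved, powered-off vim.VirtualMachine, and the target version string is only an example):

```python
# Minimal sketch: assumes `vm` is an already-retrieved, powered-off vim.VirtualMachine.
task = vm.UpgradeVM_Task(version="vmx-13")   # target version is an example; omit it
                                             # to jump to the newest supported version
# VMware Tools can be upgraded separately while the VM is running:
# task = vm.UpgradeTools_Task()
```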