Tagging a disk as SSD Disk in vSphere 6

There is a very cool feature available in vSphere 6 which allows you to tag (mark) a device as an SSD/flash device. Sometimes an ESXi host does not recognize certain devices as flash when their vendors do not support automatic flash disk detection. The Drive Type column for these devices shows HDD as their type.

Why would you need to do this?

Case 1: If you’re planning to deploy vSAN in your environment, then as a vSAN prerequisite you need SSD disks (at least one per disk group).

Case 2: You want to use the Host Cache Configuration feature, so that host cache can be configured on SSD drives and the virtual machine’s swapfile can be stored on an SSD drive for better performance, as an SSD has much lower latency than a traditional mechanical disk.

So if the SSD disks in your environment have not been recognized automatically as SSD and the vSphere Web Client still shows them as HDD, you can manually mark the drives as SSD. Read More
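Besides the Web Client, the tagging can also be done from the ESXi command line by adding a SATP claim rule with the `enable_ssd` option. A minimal sketch follows; the `naa.*` device identifier is a placeholder for illustration and the SATP name assumes a local device, so substitute the values from your own host:

```shell
# List storage devices to find the identifier of the disk reported as non-SSD
esxcli storage core device list

# Add a claim rule that tags the device as SSD
# (naa.xxxxxxxxxxxxxxxx is a hypothetical device identifier)
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxxxxxxxxxxxxxx -o enable_ssd

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx

# Verify: the device should now report "Is SSD: true"
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx
```

After the reclaim, the Drive Type column in the Web Client should show the device as flash.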

Exporting/Importing vDS Configuration in vSphere 6

This will be a very short post on how to export a vDS configuration from one vCenter and import it into another vCenter. Let’s get started.

If you have several vCenter servers in your datacenter and you want a consistent, identical naming scheme for your distributed switches across all your vCenters, you can save yourself the time and effort of creating the vDS/port groups on each vCenter manually by using the export configuration feature of the vDS.

Log in to the vSphere Web Client, select the (fully configured) vDS, right-click it, and select Settings > Export Configuration.

vds-1

You can export either only the vDS settings or include the port group settings (security policy, NIC teaming, VLAN ID, etc.) as well. I wanted both in my lab, so I chose the first option.

vds-2

Click Yes to save the exported file.

vds-3

Log in to the destination vCenter via the Web Client, navigate to the Networking view, right-click the virtual datacenter, and select Distributed Switch > Import Distributed Switch. Read More

How to locate iso file uploaded in vCloud Director on backend datastore

Some time back I got a case where a customer deployed a Cisco ASA v10 appliance on-premises, attached 2 CD drives to that VM, and then transferred the VM to vCloud Air. After the transfer, the customer was not able to power on the VM because the second CD drive of the VM was not mapped to the ISO which the customer had uploaded to his catalog.

If you are familiar with the vCD UI, then you might be aware of the fact that vCD does not provide an option for the end user to specify which CD-ROM device an ISO file should be inserted into. The only option the user gets is “Insert CD/DVD from Catalog”, and when an ISO is inserted, it is always mapped to the first CD-ROM device at the vCenter level.

cd-catalog.PNG

The customer was looking to have the uploaded ISO mapped to his second CD-ROM device from the backend (vCenter), if possible.

Now, being an administrator, it was easy for me to do the mapping, but the challenge was to find the datastore location where the ISO file was sitting. Read More
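One quick way to hunt for an uploaded ISO, assuming you have SSH access to an ESXi host that mounts the datastores, is a simple filesystem search. This is just a generic sketch (the datastore and file names below are placeholders, not the customer’s actual paths):

```shell
# Search all mounted datastores for ISO files (case-insensitive)
find /vmfs/volumes -iname "*.iso" 2>/dev/null

# Narrow the search to a specific datastore once you know its name
# ("datastore1" is a hypothetical example)
find /vmfs/volumes/datastore1 -iname "*.iso" 2>/dev/null
```

Note that vCD typically stores catalog media in folders named with opaque UUIDs rather than the friendly catalog name, which is why searching by file extension is often easier than browsing the directory tree.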

Multipathing and Claim Rules

What is Multipathing?

Multipathing means having more than one path to a storage device from your server. At any given time, more than one path can be used to connect to the LUNs on the storage device. It provides the ability to load-balance between paths when all paths are present and to handle the failure of a path at any point between the server and the storage.

The vSphere host supports multipathing in order to maintain a constant connection between the server and the storage device in case of a failure that results in an outage of an HBA, fabric switch, storage controller, or Fibre Channel cable.

Multipathing support does not require specific failover drivers. However, in order to support path switching, the server does require two or more HBAs, from which the storage array can be reached using one or, typically, multiple fabric switches.

Virtual machine I/O might be delayed for up to sixty seconds while path failover takes place. Read More
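The paths, claim rules, and path selection policy described above can all be inspected and changed with esxcli. A minimal sketch, assuming SSH access to the host (the `naa.*` device identifier is a hypothetical example):

```shell
# List every path to every device, with its state (active/standby/dead)
esxcli storage core path list

# Show the claim rules that decide which multipathing plugin owns each device
esxcli storage core claimrule list

# Show which SATP and PSP are currently assigned to each device
esxcli storage nmp device list

# Change the path selection policy of one device to Round Robin
# (naa.xxxxxxxxxxxxxxxx is a placeholder identifier)
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR
```

Round Robin spreads I/O across all active paths, which is the usual choice when the array supports it; Fixed and Most Recently Used (MRU) are the other built-in policies.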

Storage IO Control Deep Dive

Storage I/O Control (SIOC) was first introduced in vSphere 4.1, and it has been getting better and better with every release of vSphere since then. This is one of those features which easily escapes the eye of a vSphere administrator while architecting/configuring an environment.

As we know, storage is the slowest among its counterparts like CPU and memory, and when a storage bottleneck occurs in an environment, virtual machines can suffer serious performance degradation.

The introduction of SDRS in vSphere 5.0 made the life of a vSphere administrator a bit easier, as SDRS tends to balance datastores when I/O imbalances start to occur in the environment. Although this sounds great, there is one caveat: SDRS can’t prevent a virtual machine from monopolizing I/O consumption. In other words, SDRS was unable to ensure a fair distribution of I/O among virtual machines when contention occurs, and as a result a few virtual machines tend to suffer performance impacts.

So what is Storage IO Control? Read More

VMFS Locking Mechanisms

In a shared storage environment, when multiple hosts access the same VMFS datastore, specific locking mechanisms are used. These locking mechanisms prevent multiple hosts from concurrently writing to the metadata and ensure that no data corruption occurs. This type of locking is called distributed locking.

distributed lock.png

To handle the situation of an ESXi host crashing, distributed locks are implemented as lease-based. An ESXi host that holds a lock on a datastore has to renew the lease for the lock periodically, and by doing so the host lets the storage know that it is alive and kicking. If an ESXi host has not renewed its lease for some time, that host is probably dead.

If the current owner of the lock does not renew the lease for a certain period of time, another host can break that lock. If an ESXi host wants to access a file that was locked by some other host, it looks at the time-stamp of that file, and if it finds that the time-stamp has not been updated for quite some time, it can remove the stale lock and place its own lock. Read More
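You can see these locks in action from the ESXi shell with `vmkfstools -D`, which dumps the lock metadata for a file, including the lock mode and the MAC address of the owning host. A quick sketch (the datastore and VM names are hypothetical examples):

```shell
# Dump lock information for a VM's disk descriptor file
# (/vmfs/volumes/datastore1/myvm/myvm.vmdk is a placeholder path)
vmkfstools -D /vmfs/volumes/datastore1/myvm/myvm.vmdk
```

In the output, an `owner` field ending in all zeros generally means the file is not currently locked, while a non-zero value embeds the MAC address of the ESXi host holding the lock, which is handy when troubleshooting "file is locked" power-on errors.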

Deploy Virtual Volumes

What are Virtual Volumes?

From VMware KB-2113013

Virtual Volumes (VVols) is a new integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. Virtual Volumes simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed.

VVols were first introduced in vSphere 6.0, and they changed the way virtual machines are stored on storage. Before VVols, the underlying datastores/LUNs were provisioned and tagged in a gold, silver, and bronze type of model that forced a virtualization administrator to pick the storage tier that most closely matched their needs.

VVols enable an administrator to apply a policy to a VM which defines various performance and service-level agreement requirements, such as RAID level, replication, or deduplication. The VM is then automatically placed on the storage array that fits those requirements. Read More
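Once a VVol environment is set up, the host-side pieces can be inspected with the `esxcli storage vvol` namespace introduced in vSphere 6.0. A minimal sketch, assuming shell access to an ESXi 6.0 host already registered with a VASA provider:

```shell
# List the VASA storage providers the host knows about
esxcli storage vvol vasaprovider list

# List the storage containers (what vSphere surfaces as VVol datastores)
esxcli storage vvol storagecontainer list

# List the protocol endpoints through which VVol I/O flows
esxcli storage vvol protocolendpoint list
```

Empty output from these commands is usually the first sign that the VASA provider registration or the protocol endpoint presentation from the array has not completed.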