Configuring and Managing VMkernel TCP/IP Stacks

While working through the VCAP6-Deploy blueprint, I stumbled upon the topic of TCP/IP stack configuration. I have seen this option many times in my lab while configuring networking and had a basic idea of the purpose of a custom TCP/IP stack. Surprisingly, this feature has been around since vSphere 5.5, but I never paid much attention to it.

I decided to explore TCP/IP stacks in my lab and share my experience through this write-up. Let's start with the very basics.

What is a VMkernel TCP/IP stack and why use it?

The purpose of TCP/IP stack configuration on vSphere hosts is to set up the networking parameters that allow communication between the hosts themselves, the virtual machines, other virtual appliances and, last but not least, the network storage.

We all know that, as per networking best practices, it is good to have dedicated VMkernel adapters for different types of traffic such as management, vMotion, vSAN, iSCSI, vSphere Replication and so on. Read More
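For a quick hands-on feel, custom stacks can be created and consumed straight from the ESXi shell. A minimal sketch, where the stack name "Replication", the port group "Replication-PG", vmk2 and the IP details are made-up examples:

    # List the TCP/IP stacks that already exist on the host
    esxcli network ip netstack list

    # Create a custom TCP/IP stack (name is an example)
    esxcli network ip netstack add -N "Replication"

    # Create a VMkernel adapter on an existing port group and bind it to the custom stack
    esxcli network ip interface add -i vmk2 -p "Replication-PG" -N "Replication"

    # Give the new VMkernel adapter a static IPv4 address
    esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.10.21 -N 255.255.255.0

Once the stack exists, it also shows up as a selectable TCP/IP stack when you create a VMkernel adapter in the Web Client.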

Configuring and Administering Storage Policies in vSphere 6

vSphere storage profiles were first introduced with vSphere 5, then renamed to storage policies with the release of vSphere 5.5.

A storage policy aims to logically separate storage by using storage capabilities and storage profiles in order to guarantee a predefined quality of service from the storage resources backing a virtual machine.

The storage policy maps the defined storage capabilities to a virtual machine and, more specifically, to its virtual disks. The policy is built from the storage requirements of each virtual disk, each of which guarantees a certain storage feature or capability.

Gone are the days when an administrator was only concerned about free space on a datastore. Nowadays you need to ensure that your applications/servers are getting enough IOPS and that datastore latency is not impacting server performance.

In an SDDC infrastructure, Storage Policy Based Management (SPBM) is crucial, and using SPBM makes the life of a vSphere administrator easier than ever. Read More
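To make this less abstract, here is a rough PowerCLI sketch of a simple tag-based policy being created and attached to a VM. The tag "Gold", the policy name and the VM name are made up for the example, and the SPBM cmdlet parameters may differ slightly between PowerCLI versions:

    # Build a rule that matches datastores carrying the "Gold" tag (tag assumed to exist already)
    $rule    = New-SpbmRule -AnyOfTags (Get-Tag -Name "Gold")
    $ruleSet = New-SpbmRuleSet -AllOfRules $rule

    # Create the storage policy from the rule set
    $policy  = New-SpbmStoragePolicy -Name "Gold-Storage" -AnyOfRuleSets $ruleSet

    # Attach the policy to a VM (and thereby drive placement/compliance for its disks)
    Get-VM -Name "App01" | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy

    # Check compliance afterwards
    Get-VM -Name "App01" | Get-SpbmEntityConfiguration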

Administer Hardware Acceleration for VAAI

vStorage APIs for Array Integration (VAAI) is an API framework developed by VMware that enables certain storage tasks to be offloaded from ESXi hosts to the storage array, thus lessening the processing workload on the ESXi host. VAAI was first introduced with vSphere 4.1.

It significantly improves the performance of storage-intensive operations such as storage cloning, block zeroing and so on, and thus reduces the overhead on the ESXi host. The main goal of VAAI is to help storage vendors provide hardware assistance to speed up VMware I/O operations.

The APIs create a separation of duty between the hypervisor and its storage devices, enabling each to focus on what it does best: virtualization-related tasks for the hypervisor and storage-related tasks for the storage arrays.

What vSphere functions can be offloaded with VAAI?

1: Copy offload: Certain operations, such as deploying a new VM from a template or cloning a VM, involve copying virtual disk files. Read More
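If you want to check whether your devices actually support these primitives, esxcli reports the VAAI status per device. The naa identifier below is just a placeholder:

    # Show VAAI primitive support (clone, ATS, zero, delete) for all devices
    esxcli storage core device vaai status get

    # Or for a single device (identifier is a placeholder)
    esxcli storage core device vaai status get -d naa.600508b1001c388c92e817e43fcd5237

    # The offload primitives themselves are toggled via host advanced settings, for example
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove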

Tagging a disk as SSD Disk in vSphere 6

There is a very cool feature available in vSphere 6 that allows you to tag (mark) a device as an SSD/flash device. Sometimes an ESXi host does not recognize certain devices as flash because their vendors do not support automatic flash disk detection; the Drive Type column for such devices shows HDD as their type.

Why would you need to do so?

Case 1: If you're planning to deploy vSAN in your environment, then as a vSAN prerequisite you need some SSD disks (at least one per disk group).

Case 2: You want to use the Host Cache Configuration feature, so that host cache is placed on SSD drives and the virtual machine swapfiles can be stored on those drives for better performance, as an SSD has much lower latency than a traditional mechanical disk.

So if your SSD disks have not been recognized automatically as SSD in your environment and the vSphere Web Client still shows them as HDD, you can manually mark the drives as SSD. Read More
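For reference, the same tagging can also be done from the ESXi shell with a SATP claim rule. A sketch for a local device, with the device identifier as a placeholder (shared/SAN devices are claimed by a different SATP, so adjust accordingly):

    # Placeholder device ID -- find yours with: esxcli storage core device list
    DEVICE="naa.6000c290000000000000000000000001"

    # Check how the device is currently detected (look for "Is SSD: false")
    esxcli storage core device list -d $DEVICE

    # Add a claim rule that tags the device as SSD
    esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d $DEVICE -o enable_ssd

    # Reclaim the device so the rule takes effect, then verify "Is SSD: true"
    esxcli storage core claiming reclaim -d $DEVICE
    esxcli storage core device list -d $DEVICE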

Exporting/Importing vDS Configuration in vSphere 6

This will be a very short post on how to export a vDS configuration from one vCenter and import it into another vCenter. Let's get started.

If you have several vCenter Servers in your datacenter and want a consistent, identical naming scheme for your distributed switches across all of them, you can save yourself the time and effort of creating the vDS/port groups on each vCenter manually by using the export configuration feature of the vDS.

Log in to the vSphere Web Client, select the (fully configured) vDS, right-click it and select Settings > Export Configuration.

You can export only the vDS settings, or include the port group settings (security policy, NIC teaming, VLAN ID, etc.) as well. I wanted both in my lab, so I chose the option that includes the port groups.

Click yes to save the exported file.

Log in to the destination vCenter via the Web Client, navigate to the Networking view, right-click the virtual datacenter and select Distributed Switch > Import Distributed Switch. Read More
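If you would rather script it, PowerCLI exposes the same export/import workflow. A sketch with made-up switch, datacenter and file names; check Get-Help Export-VDSwitch and Get-Help New-VDSwitch in your PowerCLI version for the exact parameters:

    # On the source vCenter: export the vDS configuration (port groups are included by default)
    Get-VDSwitch -Name "Prod-vDS" | Export-VDSwitch -Destination "C:\Temp\Prod-vDS.zip"

    # On the destination vCenter: recreate the switch from the backup file
    New-VDSwitch -Name "Prod-vDS" -Location (Get-Datacenter -Name "DC01") -BackupPath "C:\Temp\Prod-vDS.zip"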

How to locate an ISO file uploaded in vCloud Director on the backend datastore

Some time back I got a case where a customer had deployed a Cisco ASA v10 appliance on-premises, attached two CD drives to that VM and then transferred the VM to vCloud Air. Post transfer, the customer was not able to power on the VM, as the second CD drive of the VM was not mapped to the ISO which the customer had uploaded to his catalog.

If you are familiar with the vCD UI, you might be aware of the fact that vCD does not provide the end user with an option to specify a particular CD-ROM device when inserting an ISO file. The only option the user gets is "Insert CD/DVD from Catalog", and when an ISO is inserted, it is always mapped to the first CD-ROM device at the vCenter level.

The customer was looking to map the uploaded ISO to the second CD-ROM device from the backend (vCenter), if possible.

Now, being an administrator, it was easy for me to do the mapping, but the challenge was to find the datastore location where the ISO file was sitting. Read More

Multipathing and Claim Rules

What is Multipathing?

Multipathing means having more than one path from your server to a storage device. At any given time, more than one path can be used to connect to the LUNs on the storage device. It provides the ability to load-balance between paths when all paths are present and to handle the failure of a path at any point between the server and the storage.

The vSphere host supports multipathing in order to maintain a constant connection between the server and the storage device in case of a failure that results in an outage of an HBA, fabric switch, storage controller or Fibre Channel cable.

Multipathing support does not require specific failover drivers. However, in order to support path switching, the server does require two or more HBAs, from which the storage array can be reached by using one or, typically, multiple fabric switches.

Virtual machine I/O might be delayed for up to sixty seconds while path failover takes place. Read More
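To see how this looks on a host, esxcli shows the paths per device, the path selection policy in use and the claim rules that decide which multipathing plugin owns each device:

    # Devices with their SATP and path selection policy (Fixed, MRU, Round Robin)
    esxcli storage nmp device list

    # Individual paths and their runtime state (active, standby, dead)
    esxcli storage core path list

    # Claim rules deciding which multipathing plugin claims which device, and the SATPs available
    esxcli storage core claimrule list
    esxcli storage nmp satp list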

Storage IO Control Deep Dive

Storage IO Control (SIOC) was first introduced in vSphere 4.1 and has been getting better and better with every release of vSphere since then. This is one of those features that easily escapes the eye of a vSphere administrator while architecting/configuring an environment.

As we know, storage is the slowest among its counterparts such as CPU and memory, and when a storage bottleneck occurs in an environment, virtual machines can suffer serious performance degradation.

The introduction of SDRS in vSphere 5.0 made the life of the vSphere administrator a bit easier, as SDRS tends to balance datastores when IO imbalances start to occur in the environment. Although this sounds great, there is one caveat: SDRS can't prevent a virtual machine from monopolizing IO consumption. In other words, SDRS is unable to ensure a fair distribution of IOs among virtual machines when contention occurs, and as a result a few virtual machines tend to suffer performance impacts.

So what is Storage IO Control? Read More
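The full post digs into how it works; as a quick taste, switching SIOC on for a datastore is a one-liner in PowerCLI. A sketch, with a made-up datastore name (the threshold parameter may vary between PowerCLI versions):

    # Enable Storage IO Control on a datastore
    Get-Datastore -Name "Prod-DS01" | Set-Datastore -StorageIOControlEnabled $true

    # Optionally tune the latency threshold (ms) at which SIOC starts throttling
    Get-Datastore -Name "Prod-DS01" | Set-Datastore -CongestionThresholdMillisecond 30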

VMFS Locking Mechanisms

In a shared storage environment, when multiple hosts access the same VMFS datastore, specific locking mechanisms are used. These locking mechanisms prevent multiple hosts from concurrently writing to the metadata and ensure that no data corruption occurs. This type of locking is called distributed locking.

To handle the situation of an ESXi host crashing, distributed locks are implemented as lease-based locks. An ESXi host that holds a lock on a datastore has to renew the lease for the lock time and again, and by doing so the host lets the storage know that it is alive and kicking. If an ESXi host has not renewed the locking lease for some time, it means that the host is probably dead.

If the current owner of the lock does not renew the lease for a certain period of time, another host can break that lock. If an ESXi host wants to access a file that was locked by some other host, it looks at the time-stamp of that file, and if it finds that the time-stamp has not been updated for quite some time, it can remove the stale lock and place its own lock. Read More
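When troubleshooting, you can inspect the lock on a file straight from the ESXi shell. The path below is a placeholder; the owner field in the output ends with the MAC address of the host that holds the lock:

    # Dump lock and ownership metadata for a file
    vmkfstools -D /vmfs/volumes/Prod-DS01/App01/App01-flat.vmdk

    # On vSphere 6 you can also check which locking mechanism (ATS-only vs ATS+SCSI)
    # each VMFS datastore is using
    esxcli storage vmfs lockmode list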

Deploy Virtual Volumes

What is Virtual Volumes?

From VMware KB-2113013

Virtual Volumes (VVols) is a new integration and management framework that virtualizes SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. Virtual Volumes simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time, when they are needed.

VVols were first introduced in vSphere 6.0 and changed the way virtual machines are stored. Before VVols, the underlying datastores/LUNs were provisioned and tagged in a gold/silver/bronze type of model that forced a virtualization administrator to pick the storage tier that most closely matched their needs.

VVols enable an administrator to apply a policy to a VM which defines the various performance and service-level requirements, such as RAID level, replication or deduplication. The VM is then automatically placed on the storage that fits those requirements. Read More
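Deploying VVols typically starts with registering the array's VASA provider in vCenter, after which the VVol datastore it exposes can be mounted. A very rough PowerCLI sketch with made-up names, credentials and URL; the VASA cmdlets and their parameters may differ in your PowerCLI version, so treat this as an outline only:

    # Register the storage array's VASA provider (all values are placeholders)
    New-VasaProvider -Name "Array-VP" -Username "vasa-admin" -Password "VMware1!" -Url "https://array01.lab.local:8443/vasa"

    # Confirm vCenter can see the provider and that it is online
    Get-VasaProvider | Select-Object Name, Status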