VLAN tagging and PVLANs in vSphere 6

VLANs enable a single physical LAN segment to be subdivided further so that groups of ports are isolated from one another as if they were on physically different segments. Using VLANs, administrators get the following advantages:

  • Integrate the host into a pre-existing environment
  • Isolate and secure network traffic
  • Reduce network traffic congestion

In a physical environment, servers are equipped with dedicated physical NICs that are in turn connected to a physical switch. VLANs in the physical world are usually controlled by setting the VLAN ID on the physical switch port and then setting the server’s IP address to correspond to that NIC’s VLAN.

In a virtual environment, dedicating a physical NIC (pNIC) to each VM that resides on the host is not possible. In reality, a physical NIC of the ESXi host services many VMs, and these VMs may need to be connected to different VLANs, so the method of setting a VLAN ID on the physical switch port alone doesn’t work.
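
Instead, the tagging is handled on the virtual switch side, most commonly by assigning a VLAN ID to a port group (Virtual Switch Tagging). To give an idea of what this looks like outside the UI, below is a minimal pyVmomi sketch that tags a distributed port group with VLAN 100; the vCenter address, credentials and the port group name "PG-Web" are placeholders from my lab, so adjust them before trying it.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details -- replace with your own lab values.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed port group we want to tag (the name is an example).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "PG-Web")

# Build a reconfigure spec that sets VLAN ID 100 on the port group's ports.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=100, inherited=False)
spec.defaultPortConfig = port_cfg

task = pg.ReconfigureDVPortgroup_Task(spec)
print("Tagging %s with VLAN 100, task: %s" % (pg.name, task.info.key))
Disconnect(si)
```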

Switch Discovery Protocols

In the physical networking space, switches are connected to one or more adjacent switches, forming a web of switches that can talk to each other. This web of switches is referred to as the “neighbourhood of switching”.

Virtual switches (standard or vDS) are connected to these physical switches via physical uplinks. Each uplink terminates at a particular port of the physical switch, and that port itself has characteristics such as a VLAN ID defined on it. These characteristics are not exposed to virtual switches by default.

What I mean by this is that just by looking at the virtual switch diagram in the vSphere client, we can’t tell which uplink of the vSwitch is connected to which port of the physical switch, or what the make and model of the backend physical switch is.

Switch discovery protocols allow vSphere administrators to determine which physical switch port is connected to a given vSphere standard switch or vSphere distributed switch.
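
vSphere supports CDP (standard switch and vDS) and LLDP (vDS only) for this. Once the physical switch is advertising, the same information can also be pulled per uplink through the API. Here is a rough pyVmomi sketch that reads the CDP details reported on vmnic0 of one host; the vCenter, host name and NIC are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details -- replace with your own lab values.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Pick an ESXi host (name is an example) and query the network hints for vmnic0.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")
hints = host.configManager.networkSystem.QueryNetworkHint(device=["vmnic0"])

for hint in hints:
    cdp = hint.connectedSwitchPort          # populated when CDP data is received
    if cdp:
        print("Uplink %s -> switch %s, port %s (%s)" %
              (hint.device, cdp.devId, cdp.portId, cdp.hardwarePlatform))
    else:
        print("No CDP information received on %s" % hint.device)
Disconnect(si)
```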

Configuring QoS and Traffic Filtering in vSphere 6

During my VCAP6-Deploy exam preparation, I found this topic quite interesting and also difficult, as I had never laid my hands on Quality of Service kinds of things with respect to networking. Also, my concepts were not very clear on topics like DSCP, QoS, CoS etc., so I decided to learn more about them this time and write a blog post on the same.

What is Quality of Service (QoS) and Traffic filtering?

In a vSphere distributed switch 5.5 and later, by using the traffic filtering and marking policy, you can protect the virtual network from unwanted traffic and security attacks or apply a QoS tag to a certain type of traffic.

The goal of using QoS on the network is to ensure that the most important network traffic gets to where it needs to go while suffering the least amount of latency when there is congestion in the network.
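
Since CoS and DSCP were the terms giving me trouble, a quick plain-Python illustration (no vSphere APIs involved) of where each tag actually lives helped me: CoS (802.1p) is a 3-bit priority carried inside the 802.1Q VLAN tag and only survives on the Layer 2 segment, while DSCP is a 6-bit value in the upper bits of the IP header’s ToS/DiffServ byte and travels end to end across routers.

```python
# CoS (802.1p) lives in the top 3 bits of the 16-bit 802.1Q TCI field,
# while DSCP occupies the top 6 bits of the 8-bit IP ToS/DiffServ byte.

def build_vlan_tci(cos: int, vlan_id: int) -> int:
    """Build an 802.1Q Tag Control Information value (DEI bit left at 0)."""
    return (cos & 0x7) << 13 | (vlan_id & 0xFFF)

def dscp_to_tos(dscp: int) -> int:
    """Place a 6-bit DSCP value into the 8-bit ToS/DiffServ byte (ECN bits 0)."""
    return (dscp & 0x3F) << 2

# Example: CoS 5 on VLAN 100, and DSCP 46 (Expedited Forwarding, used for voice).
print(f"TCI for CoS 5 / VLAN 100 : {build_vlan_tci(5, 100):#06x}")
print(f"ToS byte for DSCP 46 (EF): {dscp_to_tos(46):#04x}")
```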

Configuring NetFlow in vSphere 6

NetFlow is a mechanism to analyze network traffic flow and volume to determine where traffic is coming from, where it is going to, and how much traffic is being generated. NetFlow-enabled routers export traffic statistics as NetFlow records which are then collected by a NetFlow collector.

Traffic flows are defined as the combination of source and destination IP addresses, source and destination TCP or UDP ports, the IP protocol, and the IP Type of Service (ToS). Network devices that support NetFlow track and report information on these traffic flows and send it to a NetFlow collector. Using the data collected, network admins gain detailed insight into the types and amounts of traffic flowing across the network.

NetFlow was originally developed by Cisco and has become a de facto industry standard for analysing network traffic. VMware introduced NetFlow for the vDS in vSphere 5.

Note: NetFlow is only supported on the vDS, not on standard switches.

There are various versions of NetFlow, ranging from v1 to v10.
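
On the vDS, NetFlow is configured at the switch level (collector address and port, sampling rate, flow timeouts) and then enabled per distributed port group. As an illustration only, here is a pyVmomi sketch that points a vDS called "DSwitch-Lab" at a hypothetical collector at 192.168.10.50:2055; all of the names and addresses are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter, vDS name and collector address -- adjust for your lab.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "DSwitch-Lab")

# NetFlow (IPFIX) settings live in the VMware vDS config spec.
ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
    collectorIpAddress="192.168.10.50",   # NetFlow collector (example address)
    collectorPort=2055,
    activeFlowTimeout=60,
    idleFlowTimeout=15,
    samplingRate=0,                       # 0 = analyse every packet
    internalFlowsOnly=False)

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.ipfixConfig = ipfix
dvs.ReconfigureDvs_Task(spec)
Disconnect(si)
```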

LACP Configuration in vSphere 6

I am currently going through VCAP 6 objective 3.2, and the very first task in this objective is to deploy a LAG and migrate to LACP. Although I have read about this in the past and we use it in our production environment, I never got a chance to configure it in my homelab because of hardware restrictions (this is true to date 🙁).

Well, before jumping into the LACP configuration, let’s discuss a few networking terms here (networking was/is always a big headache for me).

  • Link Aggregation Group (LAG): In the simplest terms, a LAG is the bonding of multiple Ethernet links together in order to achieve greater throughput.
  • Link Aggregation Control Protocol (LACP): This is a protocol defined in the IEEE 802.1AX standard that provides a method for automating LAG configuration. LACP-capable devices discover each other by sending LACP packets (called LACPDUs) to the Slow_Protocols_Multicast address 01-80-c2-00-00-02. They then negotiate whether or not to form the LAG (a configuration sketch follows this list).
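
To give an idea of what deploying a LAG looks like through the API rather than the Web Client, below is a rough pyVmomi sketch that adds a two-uplink LAG in active mode to a vDS already running enhanced LACP; the vCenter, vDS and LAG names are placeholders from my lab, not anything authoritative.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter and vDS details -- adjust for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "DSwitch-Lab")

# Define a LAG with two uplink ports, negotiating actively via LACPDUs.
lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig(
    name="lag1",
    mode="active",      # 'active' sends LACPDUs, 'passive' only answers them
    uplinkNum=2)

lag_spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
    lacpGroupConfig=lag, operation="add")

# Requires the vDS to be running in enhanced (multiple LAG) LACP mode.
dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[lag_spec])
Disconnect(si)
```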

Configuring and Managing VMkernel TCP/IP Stacks

While working through the VCAP6-Deploy blueprint I stumbled upon the topic of TCP/IP stack configuration. I have seen it many times in my lab while configuring networking and had a basic idea of the purpose of using a custom TCP/IP stack. Surprisingly, this feature has been around since vSphere 5.5, but I never noticed it.

I decided to discover more about TCP/IP stacks in my lab and share my experience through this write-up. I will start with the very basics.

What is a VMkernel TCP/IP stack and why use it?

The purpose of TCP/IP stack configuration on VMware vSphere hosts is to set up the networking parameters that allow communication between the hosts themselves as well as with the virtual machines, other virtual appliances and, last but not least, the network storage.

We all know that, as per networking best practices, it’s good to have dedicated VMkernel adapters for different types of traffic such as management, vMotion, vSAN, iSCSI, vSphere Replication etc.
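
On top of dedicated adapters, vSphere also provides separate TCP/IP stacks: the built-in ones (default, vMotion, provisioning) plus custom stacks that can be created from the command line with esxcli network ip netstack add -N "MyStack". As a rough pyVmomi sketch, the following creates a VMkernel adapter on a standard switch port group and binds it to the built-in vMotion stack; the host name, the port group "vMotion-PG" and the IP addressing are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter/host details -- adjust for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")

# Static IP for the new vMotion VMkernel adapter (example addressing).
ip = vim.host.IpConfig(dhcp=False, ipAddress="192.168.50.11",
                       subnetMask="255.255.255.0")

# Bind the adapter to the built-in 'vmotion' stack instead of the default stack.
nic_spec = vim.host.VirtualNic.Specification(
    ip=ip, mtu=1500, netStackInstanceKey="vmotion")

# 'vMotion-PG' is an existing port group on a standard switch in this sketch.
vmk = host.configManager.networkSystem.AddVirtualNic("vMotion-PG", nic_spec)
print("Created VMkernel adapter:", vmk)
Disconnect(si)
```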

Configuring and Administering Storage Policies in vSphere 6

vSphere storage profiles were first introduced with vSphere 5, then renamed to storage policies with the release of vSphere 5.5.

A storage policy aims to logically separate storage by using storage capabilities and storage profiles in order to guarantee a predesignated quality of service of storage resources to a virtual machine.

The storage policy is used to map the defined storage capabilities to a virtual machine and, specifically, to its virtual disks. The policy is based on the storage requirements of each virtual disk and guarantees a certain storage feature or capability.

Gone are the days when an administrator was only concerned about free space on a datastore. Nowadays you need to ensure that your applications/servers are getting enough IOPS and that datastore latency is not impacting server performance.

In an SDDC infrastructure, Storage Policy Based Management (SPBM) is crucial, and using SPBM makes the life of a vSphere administrator easier than ever.

Administer Hardware Acceleration for VAAI

vStorage APIs for Array Integration (VAAI) is an API framework developed by VMware that enables certain storage tasks to be offloaded from ESXi hosts to the storage array, thus lessening the processing workload on the ESXi host. VAAI was first introduced with vSphere 4.1.

It significantly improves the performance of storage-intensive operations such as storage cloning, block zeroing and so on, and thus reduces the overhead on the ESXi host. The main goal of VAAI is to help storage vendors provide hardware assistance to speed up VMware I/O operations.

The APIs create a separation of duty between the hypervisor and its storage devices, enabling each to focus on what it does best: virtualization-related tasks for the hypervisor and storage-related tasks for the storage arrays.

What vSphere functions can be offloaded with VAAI?

1: Copy offload: Certain operations, such as deploying a new VM from a template or cloning a VM, involve copying virtual disk files.
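
Whether these offloads can actually be used depends on the backing device, and each LUN reports its hardware acceleration status through the API. Here is a small pyVmomi sketch that lists the VAAI (vStorage) support status of every SCSI LUN on one host; the vCenter and host names are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter/host details -- adjust for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")

# Each SCSI LUN reports its hardware acceleration (VAAI) status in vStorageSupport:
# 'vStorageSupported', 'vStorageUnsupported' or 'vStorageUnknown'.
for lun in host.config.storageDevice.scsiLun:
    print("%-40s %s" % (lun.canonicalName, lun.vStorageSupport))
Disconnect(si)
```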

Tagging a disk as SSD Disk in vSphere 6

There is a very cool feature available in vSphere 6 which allows you to tag (mark) a device as an SSD/flash device. Sometimes an ESXi host does not recognize certain devices as flash when their vendors do not support automatic flash disk detection; the Drive Type column for those devices then shows HDD as their type.

Why do you need to do so?

Case 1: If you’re planning to deploy vSAN in your environment, then as a vSAN prerequisite you need some SSD disks (at least one per disk group).

Case 2: You want to use the Host Cache Configuration feature, so that the host cache can be configured on SSD drives and the virtual machine’s swapfile can be stored on those drives for better performance, as an SSD has much lower latency than a traditional mechanical disk.

So if, in your environment, your SSD disks have not been recognized automatically as SSD and the vSphere Web Client still shows them as HDD, then you can manually mark the drives as SSD.
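
The same marking can also be scripted. Below is a rough pyVmomi sketch that marks a single device as flash using the host storage system; the vCenter, host name and the naa identifier are placeholders for whatever device shows up as HDD in your environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter/host/device names -- adjust for your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")
storage = host.configManager.storageSystem

# Find the disk that was detected as HDD even though it is really an SSD.
target = "naa.600508b1001c577e11e0"   # example canonical name only
disk = next(d for d in storage.storageDeviceInfo.scsiLun
            if isinstance(d, vim.host.ScsiDisk) and d.canonicalName == target)

print("Before: ssd flag =", disk.ssd)
# MarkAsSsd_Task takes the disk UUID and re-tags the device as flash.
storage.MarkAsSsd_Task(disk.uuid)
Disconnect(si)
```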

Exporting/Importing vDS Configuration in vSphere 6

This will be a very short post on how to export a vDS configuration from one vCenter and import it into another vCenter. Let’s get started.

If you have several vCenter Servers in your datacenter and you want a consistent, identical naming scheme for your distributed switches across all your vCenters, then you can save yourself the time and effort of creating the vDS/port groups on each vCenter manually by using the vDS export configuration feature.

Log in to the vSphere Web Client, select the vDS (which is fully configured), right-click it and select Settings > Export Configuration.

You can export either the vDS settings alone or include the port group settings (security policy, NIC teaming, VLAN ID etc.) as well. I wanted both in my lab, so I chose the option that includes the port groups.

Click Yes to save the exported file.

Log in to the destination vCenter via the Web Client, navigate to the Networking view, right-click the virtual datacenter and select Distributed Switch > Import Distributed Switch.