vSphere Data Protection-Part 2: Installation & Configuration

In the last post of this series we discussed VDP and its capabilities, along with the VDP architecture and the different deployment options available. In this post we will learn how to install and configure VDP. Let's get started.

Requirements for installing VDP

Make sure your infrastructure meets the following requirements before deploying VDP:

  • A static IP address is required for the VDP appliance and any additional proxy appliances.
  • DNS entries created ahead of time for forward and reverse lookup (see the quick check after this list).
  • Ensure enough capacity is available on the datastore where backups will reside.
  • Editions of vSphere Essentials Plus and above (or vSphere with Operations Management / vCloud Suite) include licensing for vSphere Data Protection.
  • The vCenter Server and attached ESXi hosts must be configured with an NTP server. 
  • vCenter Server 5.5 or higher. If you are using vCenter 5.5 U3 and want to deploy VDP 6.1, 6.1.1, or 6.1.2, see VMware KB-2146825.
  • ESXi host v5.1 or higher.
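Before deploying the appliance, it is worth confirming that the DNS requirement is actually in place. Below is a minimal sketch; the appliance name vdp01.lab.local and address 192.168.109.60 are hypothetical values used only for illustration, so substitute your own:

nslookup vdp01.lab.local     (forward lookup should return the appliance IP)
nslookup 192.168.109.60      (reverse lookup should return the appliance FQDN)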

vSphere Data Protection-Part 1: Introduction

I am now in the final leg of my VCAP6-Deploy exam preparation, and objective 7.2 revolves around VDP. Since I have no prior experience with VDP, this is the best time for me to explore this product.

I have broken this series down into several parts so that the posts do not get too lengthy. This is the first part, where we will discuss what VDP is and what it offers when it comes to backing up and recovering vSphere deployments.

What is vSphere Data Protection (VDP)?

vSphere Data Protection is a backup and recovery solution designed for vSphere environments and powered by EMC Avamar. It provides agentless, image-level virtual-machine backups to disk, application-aware protection for business-critical Microsoft applications (such as Exchange, SQL Server, and SharePoint), and WAN-efficient, encrypted replication of backup data.

Capabilities of vSphere Data Protection

The key capabilities of VDP include (but are not limited to):

  • Agent-less virtual machine backup and restore that reduces complexity and deployment time
  • Integration with EMC Data Domain for additional scale, efficiency, and reliability
  • Flexibility to restore replicated backup data at both the source and target locations
  • Automated backup verification that provides the highest level of confidence in backup data integrity
  • Appliance and backup data protection via a checkpoint-and-rollback mechanism
  • File Level Restore (FLR), which enables granular file and folder restoration without the need for an agent in Microsoft Windows and Linux virtual machines
  • Significantly reduced backup data disk space requirements using Avamar variable-length deduplication technology
  • Use of the vSphere Storage APIs and Changed Block Tracking (CBT) to reduce load on the vSphere host infrastructure and minimize backup window requirements
  • Reliable, efficient replication of backup data between vSphere Data Protection appliances for redundancy and offsite data protection

Consult this whitepaper by VMware to learn about these capabilities in greater detail, as well as the other capabilities available within VDP.

Distributed Switch Port Group Bindings

In a vSphere environment where a vDS is used for network connectivity, there are several options for the type of port binding to use for a port group. Have you ever wondered which port binding setting is most suitable for your distributed port groups to get optimal performance?

In this post we will talk about some use cases for the different types of port binding available with a vDS.

There are three types of port binding available at the port group level:

  1. Static Binding
  2. Dynamic Binding
  3. Ephemeral Binding

We will discuss these one by one.

Static Binding

When you connect a virtual machine to a port group configured with static binding, a port is immediately assigned and reserved for it, guaranteeing connectivity at all times. The port is disconnected only when the virtual machine is removed from the port group. You can connect a virtual machine to a static-binding port group only through vCenter Server.

Network IO Control in vSphere 6

In this post we will discuss what NIOC is and why we need it. We will also configure NIOC in the lab.

What is Network IO Control (NIOC)?

Network I/O Control (NIOC) was first introduced with vSphere 4.1. It is a vDS feature that allows a vSphere administrator to prioritize different types of network traffic using resource pools, shares, and limits. NIOC does for network traffic what SIOC (Storage I/O Control) does for storage traffic.

What problem does NIOC solve?

In the old days, physical servers were equipped with as many as 8 (or more) Ethernet cards, and administrators (as a best practice) configured vSphere to use a dedicated NIC for each traffic type, such as management, vMotion, or fault tolerance traffic. Managing that many physical cables was cumbersome.

Modern servers address this issue with support for 10 Gbps/40 Gbps network speeds; such servers typically have only 2 NICs, and all traffic is passed through these 2 NICs.

vSwitch NIC Teaming and Network Failure Detection Policies

What is NIC teaming and why do you need it?

Uplinks provide connectivity between a vSwitch and a physical switch. These uplinks pass all the traffic generated by virtual machines and VMkernel adapters.

But what happens when that physical network adapter fails, when the cable connecting that uplink to the physical network fails, or the upstream physical switch to which that uplink is connected fails? With a single uplink, network connectivity to the entire vSwitch and all of its ports or port groups is lost. This is where NIC teaming comes in.

NIC teaming means that we are taking multiple physical NICs on a given ESXi host and combining them into a single logical link that provides bandwidth aggregation and redundancy to a vSwitch. NIC teaming can be used to distribute load among the available uplinks of the team.
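As a rough illustration, here is a minimal sketch of building a two-uplink team on a standard vSwitch from the ESXi command line; the vSwitch name vSwitch0 and the uplink names vmnic0/vmnic1 are assumptions for the example:

# add a second physical NIC (assumed vmnic1) as an uplink to the assumed vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# make both uplinks active and use link status for failure detection
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1 --failure-detection=link

The same settings can of course be configured from the vSphere Web Client under the vSwitch teaming and failover policies.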

The diagram below illustrates vSwitch connectivity to the physical world using 2 uplinks.

Configuring vCD 9.0 To Send Metric Data to Cassandra DB

In the last post of this series, we learned how to install and configure Cassandra DB for collecting metrics data from vCD. We also discussed that KairosDB no longer needs to be installed alongside Cassandra for this purpose.

In this post we will learn how to configure vCD 9.0 to send metrics data to Cassandra DB.

This configuration is done using the cell management tool utility, which is located in the /opt/vmware/vcloud-director/bin directory.

Run the cell-management-tool cassandra --help command to see all the options you need to specify to configure vCD correctly so that it starts sending metrics data to Cassandra.

Typically this is the command to do so:

[root@vcd90 ~]# /opt/vmware/vcloud-director/bin/cell-management-tool cassandra --configure --create-schema --cluster-nodes 192.168.109.53 --username cassandra --password cassandra --port 9042 --ttl 15
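After the command completes, restarting the vCD cell service and peeking at the Cassandra keyspaces is a quick way to confirm the schema was created. This is a hedged sketch, assuming the cell service is named vmware-vcd and that Cassandra uses the same cassandra/cassandra credentials shown above:

# on the vCD cell: restart the cell service so the new configuration takes effect
service vmware-vcd restart

# on the Cassandra node (where cqlsh is available): list keyspaces to confirm the schema exists
cqlsh -u cassandra -p cassandra 192.168.109.53 -e "DESCRIBE KEYSPACES;"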


Installing Cassandra DB for collecting vCD 9.0 Metrics Data

Cassandra DB is needed for capturing and storing vCloud Director metrics data so that it can be displayed in the portal, giving end users visibility into VM resource utilization.

Prior to vCD 9.0, we needed KairosDB together with Cassandra to capture and store the metrics data, but things have changed now. VMware has removed the KairosDB requirement, and metrics data can now be sent straight to the Cassandra database. This metrics data in turn can be viewed in the tenant UI.

As per the vCD 9.0 documentation:

Cassandra is an open source database that you can use to provide the backing store for a scalable, high-performance solution for collecting time series data like virtual machine metrics. If you want vCloud Director to support retrieval of historic metrics from virtual machines, you must install and configure a Cassandra cluster and use the cell-management-tool to connect the cluster to vCloud Director.
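Once Cassandra is installed, a couple of quick checks help confirm the cluster is ready for vCD. A minimal sketch, assuming a packaged install where the configuration file lives at /etc/cassandra/conf/cassandra.yaml and the service is named cassandra (paths and names may differ in your environment):

# vCD connects with a username/password, so enable password authentication by setting
#   authenticator: PasswordAuthenticator
# in /etc/cassandra/conf/cassandra.yaml, then restart the service
systemctl restart cassandra

# verify the node is up and responding
nodetool status
cqlsh -u cassandra -p cassandra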


Find vCloud Director Orphaned VMs

We are all familiar with the concept of orphaned VMs in vSphere. However, orphaned VMs in vCloud Director have a slightly different meaning.

From the vCD perspective, virtual machines that are referenced in the vCenter database but not in the vCloud Director database are considered orphaned VMs, because vCD cannot access them even though they may be consuming compute and storage resources. This kind of reference mismatch can arise for a number of reasons, including high-volume workloads, database errors, and administrative actions.

Starting with vCD 8.20, VMware added one more option to the cell management utility to locate such orphaned VMs so that they can be removed or re-imported into vCloud Director. This option is not available in any vCD version prior to 8.20.

The command to find orphaned VMs is find-orphan-vms, which is used in conjunction with the cell-management-tool and enables an administrator to list these VMs.

To list the options available with this command, run:

# /opt/vmware/vcloud-director/bin/cell-management-tool find-orphan-vms --help

If you are using self-signed certificates in vCD, then you also have to specify a truststore file and truststore password, along with supplying the vCD username/password and vCenter credentials.

Migrate vCloud Director 9.0 DB from MSSQL to Postgres

With vCloud Director 9.0, VMware introduced PostgreSQL as a supported database for vCD. If you are planning to use Postgres as the DB, you should install PostgreSQL 9.5 on a supported OS.

In our last post I mentioned that I purposely configured MSSQL as the DB for my new vCD 9.0 installation, as I wanted to test the migration of the vCD DB from MSSQL to Postgres. This post is focused on how to do so.

If you are new to Postgres and do not know how to install it, follow this blog for installation instructions, which are pretty straightforward.

Once you have installed Postgres and started its services, the next step is to create the database for vCD. Follow the steps below to do so; a consolidated sketch of the commands follows the list.

1: Create Database

2: Verify presence of newly created database

3: Create the vCloud user and assign it a password

4: Enable the database owner to log in to the database

5: Grant the vCloud user full permissions on the vCloud database

6: Test the vcloud user's access to the database
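Since the original command output is not reproduced here, the following is a minimal psql sketch of the six steps above; the database name vcloud, the user vcloud, and the password VMware1! are assumptions for the example, so substitute your own values:

# 1: create the database
sudo -u postgres psql -c "CREATE DATABASE vcloud;"

# 2: verify the newly created database is present
sudo -u postgres psql -c "\l"

# 3: create the vcloud user and assign it a password (assumed password VMware1!)
sudo -u postgres psql -c "CREATE USER vcloud WITH PASSWORD 'VMware1!';"

# 4: enable the database owner to log in to the database
sudo -u postgres psql -c "ALTER ROLE vcloud WITH LOGIN;"

# 5: grant the vcloud user full permissions on the vcloud database
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE vcloud TO vcloud;"

# 6: test the vcloud user's access to the database
psql -h localhost -U vcloud -d vcloud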
