vRealize Automation 7.3-Simple Installation: Part 3: Installing IaaS DB

One of the requirements for installing vRA is a database server for the IaaS components. This DB server can reside on the same host where the IaaS role will be configured, or it can be external to the IaaS host.

For the list of supported databases that can be used for the IaaS DB, please see the vRealize Automation Support Matrix document from VMware.

The table below shows the databases supported with vRA 6.x/7.x.

Note: Express editions of MSSQL are not supported. If you don’t have an MSSQL Enterprise edition license for your lab setup and you want to use the Express edition anyway, please see this article by Rob Bastiaansen.

Hardware Requirements for the database server:

  • CPU: 2 vCPU
  • RAM: 2 GB
  • Disk: 40 GB

MSSQL Installation

Launch the MSSQL installer and click “New SQL Server stand-alone installation”.

If you are using a licensed version of MSSQL, enter the product key.

Accept the EULA and hit Next.

Read More
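For repeatable lab builds, the same installation can also be scripted from the SQL Server media instead of clicking through the wizard; a minimal unattended-install sketch, where the instance name and sysadmin account are placeholders for your own environment:

```shell
:: Run from the root of the SQL Server installation media (elevated prompt).
:: INSTANCENAME and SQLSYSADMINACCOUNTS below are example values only.
Setup.exe /Q /ACTION=Install /FEATURES=SQLEngine ^
  /INSTANCENAME=MSSQLSERVER ^
  /SQLSYSADMINACCOUNTS="LAB\Administrator" ^
  /IACCEPTSQLSERVERLICENSETERMS
```

The /Q switch runs the setup fully silent; use /QS instead if you want to watch the progress UI without being prompted.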

vRealize Automation 7.3-Simple Installation: Part 2: NSX Deployment and Configuration

In the last post of this series I discussed my lab setup. In this post we will learn how to deploy and configure NSX.

Last year I did a complete lab on NSX and posted a few blog articles on installation and configuration. So in this post I will not go into much detail on NSX. If you are new to NSX, make sure you read the VMware documentation on NSX deployment.

You can also view the below articles from my blog on NSX.

1: Installing and Configuring NSX

2: Deploying NSX Controllers

3: Preparing ESXi Hosts and Cluster

4: Configure VXLAN on the ESXi Hosts

Let’s first start with deploying NSX.

Nothing fancy here. NSX deployment involves the same steps as deploying any other virtual appliance. Here is a slideshow of the deployment steps.
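If you prefer the command line over the web client wizard, the OVA can also be deployed with VMware’s ovftool; a hedged sketch, where the OVA file name, VM name, datastore, port group, and vCenter inventory path are examples only:

```shell
# Deploy the NSX Manager OVA via ovftool (prompts for the vCenter password).
# All names below are placeholders for my lab; substitute your own.
ovftool --acceptAllEulas \
  --name="nsx-manager" \
  --datastore="openfiler-lun1" \
  --network="Management" \
  --powerOn \
  VMware-NSX-Manager.ova \
  'vi://administrator@vsphere.local@vcenter.lab.local/Datacenter/host/Mgmt-Cluster/'
```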

Once the deployment completes and NSX Manager boots up, log in to the appliance by browsing to https://NSX-fqdn/. The credentials are admin and the password set during deployment.

Read More

vRealize Automation 7.3-Simple Installation: Part 1: Lab Setup

One of my first goals for 2018 is to pass the VCP7-CMA exam, so I decided to set up a home lab on the latest version of vRealize Automation, i.e. 7.3. This post is all about my lab setup.

1: In my lab I am using vSphere 6.0 U3. I have deployed 4 hosts, and each host has 4 vCPUs and 32 GB RAM.

2: Deployed vCSA 6.0 U3 (vCenter with Embedded PSC)

3: I am purely using a vDS (v6.0) in my lab, and I have created port groups for separation of traffic. Each ESXi host has 4 NICs:

  • vmnic0 for the Management network: 192.168.109.X/24
  • vmnic1 for the vMotion network: 192.168.108.X/24
  • vmnic2 and vmnic3 for iSCSI storage connectivity: 192.168.106.X/24

4: Deployed an Openfiler appliance and created 2 volumes. Both are mapped to a single target and service all 4 ESXi hosts in the management cluster; thus each ESXi host mounts 2 LUNs from the Openfiler appliance.

Read More
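To confirm that each host really sees both Openfiler LUNs, a quick check from the ESXi shell; device and datastore names will differ in your setup:

```shell
# List iSCSI adapters and the sessions established to the target
esxcli iscsi adapter list
esxcli iscsi session list

# List storage devices -- the two Openfiler LUNs should appear here
esxcli storage core device list | grep -i openfiler

# Verify the VMFS datastores backed by those LUNs are mounted
esxcli storage filesystem list
```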

VCAP6-DCV Deploy Study Guide

Section 1 – Create and Deploy vSphere 6.x Infrastructure Components

Objective 1.1 – Perform Advanced ESXi Host Configuration

Objective 1.2 – Deploy and Configure Core Management Infrastructure Components

Objective 1.3 – Deploy and Configure Update Manager Components

Objective 1.4 – Perform Advanced Virtual Machine Configurations

Read More

Back To Basics: Migrating from vSS to vDS in vSphere 6

In this post we will see how to migrate from a vSphere Standard Switch (vSS) to a vSphere Distributed Switch (vDS). Let’s get started.

Before performing any migration, make sure you have a vDS deployed and fully configured, i.e. port groups created, uplinks created, and the appropriate uplinks placed in their respective port groups.

Here is a review of my environment.

1: I have a vDS created and different port groups for separation of duties. 

2: Uplinks created and meaningfully named.

3: Teaming and failover configured. Each port group in my lab has only one active uplink; the rest are placed in Unused.

4: And this is how networking is laid out on the host I will be migrating to the vDS. This host has 2 vSS.

  • vSwitch0 has the Management and vMotion VMkernel port groups, along with a VM Network port group to which my vCSA is connected.

  • vSwitch1 has 2 port groups configured for iSCSI storage connectivity. Port binding is enabled here to achieve multipathing.
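Before starting the migration, the current layout can be double-checked from the ESXi shell; a quick sketch:

```shell
# Review the current standard-switch layout before migrating
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list

# VMkernel interfaces (Management, vMotion, iSCSI) and their port groups
esxcli network ip interface list

# Once the host has been added to the vDS, it should show up here too
esxcli network vswitch dvs vmware list
```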
Read More

VCAP6-DCV Deploy Objective 3.1

In this post we will cover the following topics:

  • Create and manage vSS components according to a deployment plan:
    • VMkernel ports on standard switches
    • Advanced vSS settings
  • Configure TCP/IP stack on a host
  • Create a custom TCP/IP stack
  • Configure and analyze vSS settings using command line tools

Let’s get started by going through each topic one by one.

                                          Create and Manage vSphere Standard Switch

When ESXi is installed, a standard switch (aka vSS) is also created by default. The working mechanism of a standard switch is very similar to that of a physical switch, in the sense that a standard switch works at layer 2, forwards frames to other switch ports based on the MAC address, and supports features such as VLANs and port channels.

The ESXi host’s physical NICs serve as uplinks to the standard switch, and through these uplinks the vSS communicates with the rest of the network. A vSS provides network connectivity:

  • between virtual machines within the same ESXi host.
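For the command-line part of this objective, the same vSS components can be created and inspected from the ESXi shell; a minimal sketch, where the switch name, port group name, uplink, and VLAN ID are examples only:

```shell
# Create a standard switch and attach an uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1

# Add a port group and tag it with a VLAN
esxcli network vswitch standard portgroup add --portgroup-name=VM-Net --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=VM-Net --vlan-id=10

# Analyze the resulting configuration
esxcli network vswitch standard list
```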
Read More

My VCAP6-DCV Deploy (3V0-623) Exam Experience

I haven’t blogged for quite a bit of time as I was busy with my VCAP6-Deploy exam, and I finally passed it last Saturday. There are a lot of things I want to share about my exam experience and what I learned during my preparation.

I passed my VCP6 exam back in June 2017, and since then the idea of going for the VCAP exam kept coming back to my mind. I have a few certifications, but none of them were advanced level, and this thought pumped me up to go for this exam.

I work as an operations engineer in the OVH vCloud Air division and interact with virtualization/networking/storage on a day-to-day basis. This certainly was an advantage, as I already had hands-on experience with a few of the topics mentioned in the VCAP exam blueprint.

My preparation

I started my preparation by downloading the VCAP6-Deploy exam blueprint and taking a rough look at all the objectives.

Read More

How To Perform LUN Masking in vSphere 6

What is LUN Masking?

LUN masking is a way to control which LUNs are visible to an ESXi host. If you have a storage array with multiple LUNs and you want an ESXi host to see only a subset of them, you can use the LUN masking technique.

LUN masking in vSphere is done on the host side, in contrast to zoning, where the SAN fabric or storage array configuration determines which LUNs are visible to a host.
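On the vSphere side, masking is implemented with claim rules and the MASK_PATH plugin; a hedged sketch, where the rule number, adapter, channel/target/LUN values, and device identifier are examples only:

```shell
# Show existing claim rules first, to pick an unused rule number
esxcli storage core claimrule list

# Mask LUN 4 behind adapter vmhba33 (example values -- adjust for your paths)
esxcli storage core claimrule add --rule 120 --type location \
  --adapter vmhba33 --channel 0 --target 0 --lun 4 --plugin MASK_PATH

# Load the new rule into the VMkernel and re-run claiming for the device
esxcli storage core claimrule load
esxcli storage core claiming reclaim --device naa.xxxxxxxxxxxxxxxx
```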

Last year I was doing a lab on vSphere Replication and wanted a subset of the LUNs from my Openfiler appliance to be visible at my source site and the remaining LUNs at my protected site. That was the first time I felt the need to mask paths to the storage array, so that the ESXi hosts from both sites would not see/mount all the LUNs I had created on the Openfiler appliance.

Although I ended up making the configuration change on the Openfiler side (much like zoning), the idea of using LUN masking someday always remained in my mind.

Read More

VCAP6-DCV Deploy Objective 2.3

Objective 2.3 of the VCAP6-Deploy exam covers the following topics:

  • Analyze and resolve storage multi-pathing and failover issues
  • Troubleshoot storage device connectivity
  • Analyze and resolve Virtual SAN configuration issues
  • Troubleshoot iSCSI connectivity issues
  • Analyze and resolve NFS issues
  • Troubleshoot RDM issues

Let’s discuss each topic one by one.

                               Analyze and resolve storage multi-pathing and failover issues

There can be hundreds of reasons for multipathing and failover issues, and troubleshooting them comes only with experience. Multipathing issues can be caused by problems on the storage side (SAN switch, fibre configuration, etc.) or on the vSphere side. In this post we will focus only on vSphere-side troubleshooting.

In my lab I am using an Openfiler appliance for shared storage, and my vSphere hosts are configured to use software iSCSI to reach Openfiler. Each host has 2 physical adapters mapped to two distinct port groups configured for the iSCSI connection, and both port groups are compliant with the iSCSI port binding requirements.
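A few esxcli commands that I find useful as a starting point for vSphere-side multipathing troubleshooting:

```shell
# Per-device multipathing view: owning plugin (NMP), PSP, and working paths
esxcli storage nmp device list

# All paths with their state (active, standby, dead)
esxcli storage core path list

# For software iSCSI, check that both port-bound sessions are established
esxcli iscsi session list

# Port binding view for the software iSCSI adapter
esxcli iscsi networkportal list
```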

VMware KB-1027963 explains in great detail the storage path failover sequence in vSphere.

Read More

VCAP6-DCV Deploy Objective 3.4

Objective 3.4 of the VCAP6-Deploy exam covers the following topics:

  • Perform a vDS Health Check for teaming, MTU, mismatches, etc.
  • Configure port groups to properly isolate network traffic
  • Use command line tools to troubleshoot and identify configuration issues
  • Use command line tools to troubleshoot and identify VLAN configurations
  • Use DCUI network tool to correct network connectivity issue

Let’s discuss these topics one by one.

                      Perform a vDS Health Check for teaming, MTU, mismatches, etc.

Network configuration for a vSphere infrastructure is a very cumbersome task, and if the process is not automated there is a good chance of configuration errors. Typical network configuration includes tasks like configuring VLANs, setting uplinks, NIC teaming, setting MTU, etc.

Now if any one of the above settings is misconfigured, it can lead to host disconnection, VM traffic not reaching its destination, storage disconnection (if using iSCSI), or other issues.
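A few host-side checks that help catch such mismatches; the IP address in the vmkping test is an example only:

```shell
# Distributed switch view from the host: uplinks, MTU, client port IDs
esxcli network vswitch dvs vmware list

# Compare VMkernel interface MTU against the switch MTU
esxcli network ip interface list

# Test end-to-end MTU with an oversized, non-fragmenting ping
# (for a 9000-byte MTU: 8972 = 9000 - 28 bytes of IP/ICMP headers)
vmkping -d -s 8972 192.168.106.10
```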

In earlier versions of vSphere, there were no tools available that could help resolve such misconfigurations across the physical and virtual switches.

Read More