Getting Started With vROps Tenant App For vCloud Director 10.x

In this post, I will walk through the step-by-step installation of the vRealize Operations Manager Tenant App for vCloud Director. But before I jump into the lab, I want to take a moment to explain what this solution is all about and what it looks like from an architectural point of view.

What is the vROps Tenant App for vCloud Director?

The vROps Tenant App is a solution that exposes vROps performance metrics to tenants in a vCD environment. Each tenant can see only the metrics relevant to their own organization.

From a service provider's point of view, this is an awesome solution, as the Tenant App gives tenants complete visibility into the performance metrics of their environment. If an environment is not performing as expected, tenants can use this solution for root cause analysis of performance issues, and they can perform L1-L2 maintenance/troubleshooting tasks themselves without raising service tickets with the service provider. Read More

Upgrading Clustered vRLI Deployment

In this post I will walk through the steps of upgrading a clustered vRLI deployment. Before preparing for the upgrade, make sure to read the VMware documentation for the supported upgrade path.

One very important consideration before you start upgrading vRLI:

Upgrading vRealize Log Insight must be done from the master node’s FQDN. Upgrading using the Integrated Load Balancer IP address is not supported.

To start the vRLI upgrade, log in to the web interface of the master node, navigate to Administration > Cluster, and click the Upgrade Cluster button.

Note: In my lab I am upgrading vRLI from 4.5 to 4.6. The upgrade bundle for this version is available here.

Hit the Upgrade button to start the upgrade.

Note: Make sure you have taken snapshots of the master/worker nodes before starting the upgrade.
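If you prefer to script this step, the snippet below is a minimal sketch using pyVmomi that snapshots the vRLI node VMs in one go. The vCenter address, credentials, and VM names are placeholders, so adjust them for your environment.

# Minimal sketch: snapshot the vRLI nodes with pyVmomi before starting the
# upgrade. vCenter address, credentials, and VM names are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

VRLI_NODES = ["vrli-master", "vrli-worker1", "vrli-worker2"]  # assumed VM names

ctx = ssl._create_unverified_context()  # lab only; use valid certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name in VRLI_NODES:
            print(f"Snapshotting {vm.name} ...")
            WaitForTask(vm.CreateSnapshot_Task(name="pre-upgrade",
                                               description="Taken before vRLI upgrade",
                                               memory=False, quiesce=False))
finally:
    Disconnect(si)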

Upload the .pak upgrade bundle file. 

On accepting the EULA, the upgrade process starts.

Wait 5-7 minutes for the upgrade to complete.

The upgrade is performed in a rolling fashion. Read More

Configuring AD Authentication in vRealize Log Insight

vRealize Log Insight supports three authentication methods:

  • Local authentication.
  • VMware Identity Manager authentication.
  • Active Directory authentication.

You can use more than one method in the same deployment; users then select the type of authentication to use at login.
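Because more than one provider can be enabled at the same time, API clients also have to state which provider they are authenticating against. Below is a minimal sketch in Python (requests) against the vRLI REST API; the hostname and credentials are placeholders, and the provider values should be double-checked against the API documentation for your vRLI version.

# Minimal sketch: authenticate to the vRLI REST API, picking which auth
# provider to use. Hostname/credentials are placeholders, and the provider
# values ("Local" / "ActiveDirectory") should be verified against the API
# docs for your vRLI version.
import requests

VRLI = "https://vrli.lab.local:9543"   # 9543 is the default vRLI API port

resp = requests.post(f"{VRLI}/api/v1/sessions",
                     json={"username": "svc-loginsight@lab.local",
                           "password": "VMware1!",
                           "provider": "ActiveDirectory"},
                     verify=False)     # lab only; use proper certs in production
resp.raise_for_status()
session_id = resp.json()["sessionId"]
print("Authenticated, session id:", session_id)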

To add AD authentication to vRLI, log in to the web interface and navigate to the Administration > Authentication page.

Switch to the Active Directory tab and toggle the “Enable Active Directory support” button.

Specify your domain-related details and hit the Test Connection button to verify that vRLI can talk to AD. Hit the Save button if the test is successful.
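If Test Connection fails, it helps to first rule out the domain controller and the bind account from outside vRLI. The sketch below uses Python with the ldap3 library to perform a simple bind from any machine that can reach the DC; the server name and account are placeholders.

# Minimal sketch: verify the DC is reachable and the bind account works
# before pointing vRLI at AD. Server name and credentials are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("dc01.lab.local", port=389, get_info=ALL)
conn = Connection(server,
                  user="svc-loginsight@lab.local",  # same binding account you give vRLI
                  password="VMware1!")
if conn.bind():
    print("Bind OK, naming contexts:", server.info.naming_contexts)
    conn.unbind()
else:
    print("Bind failed:", conn.result)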

Now we need to specify the users/groups who should have access to vRLI. To do so, navigate to the Access Control tab, select “Users and Groups”, and click New User to add an AD account.

Select AD as the authentication method and specify the domain name and the user who should have access to vRLI. Read More

Scaling Up Standalone vRealize Log Insight Deployment

vRealize Log Insight can be deployed as a standalone or as a clustered solution. In a clustered deployment, the first node is the master node and the remaining nodes are termed worker nodes. The process of scaling up is pretty straightforward, and in this post I will walk through the steps of doing so.

A few things you should consider before expanding a vRealize Log Insight deployment are:

  • vRealize Log Insight does not support WAN clustering (also called geo-clustering or remote clustering). All nodes in the cluster should be deployed in the same Layer 2 LAN.
  • Configure a minimum of three nodes in a vRealize Log Insight cluster; a two-node cluster is not supported.
  • Verify that the versions of the vRealize Log Insight master and worker nodes are the same (see the version-check sketch after this list). Do not add an older-version vRealize Log Insight worker to a newer-version vRealize Log Insight master node.
  • External load balancers are not supported for vRealize Log Insight clusters.
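To double-check the version requirement from the list above, you can read the version reported by each node over the REST API, assuming the nodes are reachable and already have an admin account configured. The sketch below uses Python with requests; node names and credentials are placeholders.

# Minimal sketch: read the version reported by each vRLI node before joining
# a worker to the cluster. Node names and credentials are placeholders, and
# the endpoints assume the standard vRLI REST API on port 9543.
import requests

NODES = ["vrli-master.lab.local", "vrli-worker1.lab.local"]
CREDS = {"username": "admin", "password": "VMware1!", "provider": "Local"}

for node in NODES:
    base = f"https://{node}:9543/api/v1"
    session = requests.post(f"{base}/sessions", json=CREDS, verify=False).json()
    headers = {"Authorization": f"Bearer {session['sessionId']}"}
    version = requests.get(f"{base}/version", headers=headers, verify=False)
    print(node, "->", version.json().get("version"))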
Read More

Distributed vRA Automated Upgrade via vRLCM

In this post I will walk through the steps of upgrading a distributed vRA 7.4 environment to v7.5. This is a continuation of my earlier post where I deployed vRA 7.4 via vRLCM.

Upgrade Prerequisites

This post assumes that you have met all the prerequisites of the vRA upgrade mentioned in this document.

Important: If you are upgrading a distributed environment, make sure you have disabled the secondary members of the load balancer pools and removed all monitors for the pool members for the duration of the upgrade.

To upgrade a vRA deployment, log in to vRLCM, navigate to Home > Environments, and click View Details.

Click on the three dotted lines and select Upgrade.

Change the Repository type to “vRLCM Repository” and make sure to check the IaaS snapshot option to take snapshots of your backend VMs. There is only one caveat here: vRLCM doesn’t snapshot the IaaS DB VM, and you have to do this manually. Read More

Cancelling Request in vRealize Suite Lifecycle Manager via API

vRLCM is a great tool, but one shortcoming that is still there in v2.0 is the inability to cancel a running task via the GUI. I faced this situation when I was trying to add a remote collector node to an existing vROps deployment and the task kept running for more than 4 hours.

While searching the internet for how to stop/cancel/delete a request in vRLCM, I came across this thread on the VMware Code website, where it was mentioned that it’s not possible from the GUI and we need to use the REST API.

The steps below show how to use the vRLCM API.

1: Get an auth token: First of all, we need to generate the auth token that we will be using in the next command.

To do so, type https://<vRLCM-FQDN>/api and make sure API v1 is selected as shown below.

Expand the /login section and click the “Try it out” button. It will ask you to enter the credentials for the admin account. Read More
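The same /login call can be scripted instead of using the “Try it out” button. Below is a minimal sketch in Python; the login path (/lcm/api/v1/login), the payload fields, and the token field name are assumptions based on what the API explorer exposes, so confirm them on the /api page of your own appliance before relying on this.

# Minimal sketch: request an auth token from the vRLCM API. The login path
# and payload fields below are assumptions; verify them against the /api
# explorer on your own appliance.
import requests

VRLCM = "https://vrlcm.lab.local"

resp = requests.post(f"{VRLCM}/lcm/api/v1/login",
                     json={"username": "admin@localhost", "password": "VMware1!"},
                     verify=False)     # lab only; use proper certs in production
resp.raise_for_status()
token = resp.json()["token"]
print("Auth token:", token)
# Pass this token in the auth header named by the API explorer on the
# follow-up request that cancels the stuck task.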

vRA Distributed Install using vRealize Suite Lifecycle Manager

In the first post of this series, I talked briefly about what vRealize Suite Lifecycle Manager is and its capabilities. I also covered the installation and initial configuration of the appliance.

In this post I will walk through the steps of deploying a vRA 7.4 distributed install in an automated fashion using vRLCM.

Before trying vRLCM, I did a vRA distributed install manually because I wanted to understand the flow of a distributed install. If you are new to this topic, I would suggest reading the posts below before you start using vRLCM to automate deployments:

1: Introduction & Reference Architecture

2: Lab Setup

3: Load Balancer Configuration

4: vRA Distributed Install (Manual)

Let’s get started with vRLCM.

First we have to create an environment. From the home page, click Create Environment.

Specify the following:

  • Datacenter: The datacenter you created earlier.
  • Environment type: Valid selections are Development, Test, Staging, and Production. Since this is my test environment, I selected Test.
Read More

Installing & Configuring vRealize Suite Lifecycle Manager 2.0

vRealize Suite Lifecycle Manager 2.0 was released in September 2018, and a lot of new features were added with this release. Please refer to this post to learn what’s new in vRLCM 2.0.

What is vRealize Suite Lifecycle Manager?

vRealize Suite Lifecycle Manager automates installation, configuration, upgrade, patching, configuration management, drift remediation, and health monitoring from within a single pane of glass, and automates Day 0 to Day 2 operations of the entire vRealize Suite, enabling a simplified operational experience for customers.

In this post I will walk through the important configuration steps that need to be in place before you can start using vRLCM for automation.

vRLCM Deployment and Configuration

Deployment of the vRLCM appliance is very straightforward, like any other VMware virtual appliance deployment.

Once the vRLCM appliance is deployed and has booted up, connect to it by typing https://<vrlcm-fqdn>/vrlcm

Initial login credentials are: admin@localhost/vmware

After logging in, you need to change the root password of the vRLCM appliance. Read More

vRA 7.4 Distributed Install: Part 4: vRA Distributed Install

In the last post of this series, I talked about how to configure an NSX-based load balancer for the vRA environment. In this post I will walk through the vRA appliance deployment.

If you are not following along with this series, I recommend reading the earlier posts from the links below:

1: Introduction & Reference Architecture

2: Lab Setup

3: Load Balancer Configuration

Download the vRA 7.4 appliance and deploy two instances of the vRA VM.

Once both appliances boot up, connect to the VAMI of the first appliance by typing https://<vra1-fqdn>:5480/

At first login, the deployment wizard will automatically launch in the UI. Hit Next to continue.

Accept the EULA and hit Next.

For a distributed install, select the Enterprise deployment model.

The installation wizard provides a recommendation for the minimum number of VMs needed for each service.

If you are planning to include IaaS along with vRA, make sure the Install Infrastructure as a Service box is selected. Read More

vRA 7.4 Distributed Install: Part 3: Load Balancer Configuration

In the last post of this series, I talked about my lab setup. In this post I will walk through the load balancer configuration that needs to be in place to support the distributed install.

If you are not following along with this series, I recommend reading the earlier posts from the links below:

1: Introduction & Reference Architecture

2: Lab Setup

Although it’s not mandatory to have the load balancer configured when kicking off the distributed install (it can be configured after the vRA deployment), it is recommended to configure it before attempting the install.

I am using NSX 6.4 in my lab for load balancing. You can use any supported load balancer of your choice.

I deployed a dedicated Edge Gateway (size = large) and attached my dvPortgroup named “Production” as the uplink. I also added three IPs on the uplink interface (one primary and two secondary IPs), which will be used during the VIP configuration. Read More
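To make that VIP layout concrete, here is a small illustrative sketch of the three VIPs and member pools a distributed vRA 7.4 install needs (vRA appliance, IaaS Web, and IaaS Manager Service). The hostnames and IPs are placeholders, not values from this lab, so map them to your own addressing before building the Edge pools.

# Illustrative planning sketch: the three load balancer VIPs and member pools
# used for a distributed vRA 7.4 install. Hostnames/IPs are placeholders.
VIPS = {
    "vra-vip": {
        "address": "192.168.10.50",          # primary uplink IP on the Edge
        "port": 443,
        "members": ["vra01.lab.local", "vra02.lab.local"],
    },
    "iaas-web-vip": {
        "address": "192.168.10.51",          # secondary IP #1
        "port": 443,
        "members": ["iaas-web01.lab.local", "iaas-web02.lab.local"],
    },
    "iaas-manager-vip": {
        "address": "192.168.10.52",          # secondary IP #2
        "port": 443,
        "members": ["iaas-mgr01.lab.local", "iaas-mgr02.lab.local"],
    },
}

for name, vip in VIPS.items():
    print(f"{name}: {vip['address']}:{vip['port']} -> {', '.join(vip['members'])}")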