Configuring vCenter Adapter in vROPs via API

In this post we will learn how to configure the vCenter adapter in vROPs via API.

For vCenter we don’t have to install any management pack, as it’s shipped with vROPs by default; we just have to create vCenter credentials and configure the adapter. The steps below can be followed to configure the VC adapter.

1: Obtain Session Token: The vROPs session token is obtained via a POST call.

Sample Output:

The token obtained in the output is passed as an “Authorization: vRealizeOpsToken token_value” header in all subsequent GET and POST calls. Read More
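The token-acquisition flow and the subsequent header can be sketched with Python’s standard library. This is a minimal sketch, assuming the standard vROPs suite-api token endpoint; the appliance FQDN and credentials are placeholders, and certificate verification is disabled only because lab appliances typically use self-signed certs:

```python
import json
import ssl
import urllib.request

def acquire_token(vrops_fqdn, username, password):
    """POST to the suite-api token endpoint and return the session token."""
    url = f"https://{vrops_fqdn}/suite-api/api/auth/token/acquire"
    payload = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        method="POST",
    )
    # Lab appliances usually present self-signed certs; skip verification
    # here only, never in production.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["token"]

def auth_headers(token):
    """Build the header dict passed in all subsequent GET/POST calls."""
    return {"Authorization": f"vRealizeOpsToken {token}",
            "Accept": "application/json"}
```

Every later call then just reuses `auth_headers(token)` as its header set.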

Configuring MYVMware Account in vRSLCM via API

One of the methods of installing vRealize products via vRSLCM is to configure the myvmw repository first and then pull the available product binaries over the internet. In this post I will walk through the steps of configuring the same via API.

1: Add myvmw credentials to the password store

2: Fetch the vmid associated with the alias created in step 1: When a new password is added to the locker store, vRSLCM assigns a unique vmid to that password, and all future references to this password are made via the assigned vmid.

Sample Output
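The two steps above can be sketched in Python. This assumes the vRSLCM 8.x locker endpoint `/lcm/locker/api/v2/passwords` with basic auth as admin@local; the FQDN, alias, and credentials are placeholders:

```python
import base64
import json
import ssl
import urllib.request

VRSLCM = "vrslcm.lab.local"   # placeholder appliance FQDN
AUTH = "Basic " + base64.b64encode(b"admin@local:VMware1!").decode()
CTX = ssl._create_unverified_context()   # lab self-signed cert only

def api(method, path, body=None):
    """Thin wrapper around the vRSLCM REST API."""
    req = urllib.request.Request(
        f"https://{VRSLCM}{path}",
        data=json.dumps(body).encode() if body else None,
        headers={"Authorization": AUTH, "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req, context=CTX) as resp:
        return json.load(resp)

def vmid_for_alias(passwords, alias):
    """Pick the vmid vRSLCM assigned to a given alias."""
    return next(p["vmid"] for p in passwords if p["alias"] == alias)

# Step 1: add the myvmw credentials to the locker (run against a live vRSLCM):
# api("POST", "/lcm/locker/api/v2/passwords",
#     {"alias": "myvmw-creds", "password": "secret",
#      "userName": "user@example.com", "passwordDescription": "My VMware"})
# Step 2: list locker passwords and pull the vmid for our alias:
# passwords = api("GET", "/lcm/locker/api/v2/passwords")["passwords"]
# vmid = vmid_for_alias(passwords, "myvmw-creds")
```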

Read More

vRLI Distributed Install/Upgrade via vRSLCM API

In my last post I demonstrated how to install and upgrade a distributed vROPs via the vRSLCM API. In this post I will walk through how to install/upgrade a distributed vRLI environment.

The steps for deploying/upgrading any vRealize product via the vRSLCM API are pretty much the same; only the JSON payload varies.

vRLI Deployment Procedure

Step 1: Pre-Validate Environment Creation
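As a rough illustration of what the environment-creation request body looks like for vRLI, here is a trimmed sketch. The field names, product id, and node types are illustrative placeholders pulled together for demonstration, not the complete schema vRSLCM expects:

```python
import json

def vrli_environment_payload(env_name, datacenter_vmid, version, nodes):
    """Skeleton of the JSON body posted to the vRSLCM environment-creation
    endpoint; field names are illustrative and heavily trimmed."""
    return {
        "environmentName": env_name,
        "infrastructure": {
            "properties": {"dataCenterVmid": datacenter_vmid},
        },
        "products": [{
            "id": "vrli",
            "version": version,
            "clusterVIP": {"clusterVips": []},
            "nodes": nodes,   # one entry per vRLI node in the cluster
        }],
    }

# A 3-node distributed vRLI cluster: one master plus two workers.
payload = vrli_environment_payload(
    "vrli-env", "dc-vmid-placeholder", "8.1.0",
    [{"type": "vrli-master"}, {"type": "vrli-worker"}, {"type": "vrli-worker"}])
print(json.dumps(payload, indent=2))
```

For an upgrade the same skeleton is reused against the existing environment, with only the product version and binary reference changing.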

Read More

vROPs Distributed Install/Upgrade via vRSLCM API

vRSLCM is such a great tool that it lifts all the complexity of deploying the vRealize suite in just a few clicks. Whether you are looking for a standalone deployment or a distributed one, you are shielded from all the complexity that goes on behind the scenes in setting up an environment.

Recently, while working on my new project, I was tasked to find a way to automate all the UI operations related to vRSLCM. In my last post I listed all the APIs that are needed to perform end-to-end vRSLCM configuration. We need to fetch a few details before starting the install/upgrade of any vRealize product.

This post is an extension of my last post, and I will be demonstrating how to deploy/upgrade various vRealize products using the vRSLCM API. The first in this list will be vROPs. So let's get started.

At this point, I assume you have already fetched details about:

  • Certificate Name/UUID (From Locker)
  • Default Password Name/UUID (From Locker)
  • License Name/UUID (From Locker)
  • Infra details (DNS, NTP, datacenter name, cluster name, etc.)
  • Local repo configured, with all product install/upgrade binaries uploaded there.
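Once those details are fetched, vRSLCM payloads refer to locker items by a combined reference string. The `locker:<type>:<vmid>:<name>` format below is based on what 8.x request bodies look like and should be treated as an assumption to verify against your own environment:

```python
def locker_ref(item_type, vmid, name):
    """Build the locker reference string vRSLCM payloads expect,
    e.g. locker:certificate:<vmid>:<name> (format assumed from 8.x)."""
    return f"locker:{item_type}:{vmid}:{name}"

# Placeholders standing in for the UUIDs/names fetched from the locker:
cert_ref = locker_ref("certificate", "aaaa-1111", "vrops-cert")
pwd_ref = locker_ref("password", "bbbb-2222", "default-password")
lic_ref = locker_ref("license", "cccc-3333", "vrops-license")
```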
Read More

Automating vRSLCM 8.x End to End Configuration via API

In this post we will learn how to leverage the vRealize Suite Lifecycle Manager (vRSLCM) API to automate deployment & configuration.

With the release of vRSLCM 8.x, VMware introduced a simple installer (similar to vCSA) for deployment of vRSLCM. Below is a screenshot of what the simple installer looks like.

As of now, we don’t have any API that can be leveraged to automate the appliance deployment, so we will perform this task via ovftool.

Deploying vRSLCM via ovftool
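The ovftool invocation can be scripted, for example from Python. In this sketch the `--prop:` keys, the `vi://` locator, and every name are illustrative placeholders; inspect your own OVA with ovftool to get the exact property keys it accepts:

```python
import subprocess

def build_ovftool_cmd(ova_path, vc_url, opts):
    """Assemble an ovftool command line. The --prop names used below are
    placeholders for demonstration, not the verified vRSLCM OVA keys."""
    cmd = ["ovftool", "--acceptAllEulas", "--powerOn",
           "--datastore=" + opts["datastore"],
           "--network=" + opts["network"],
           "--name=" + opts["vm_name"]]
    for key, value in opts.get("props", {}).items():
        cmd.append(f"--prop:{key}={value}")
    cmd += [ova_path, vc_url]
    return cmd

cmd = build_ovftool_cmd(
    "vrslcm.ova",
    "vi://administrator@vsphere.local@vcenter.lab.local/DC/host/Cluster",
    {"datastore": "vsanDatastore", "network": "VM Network",
     "vm_name": "vrslcm01",
     "props": {"vami.hostname": "vrslcm01.lab.local"}})
# subprocess.run(cmd, check=True)   # execute only against a real vCenter
```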

Read More

Getting Started With vROPs Tenant App For vCloud Director 10.x

In this post, I will walk through a step-by-step installation of the vRealize Operations Manager Tenant App for vCloud Director. But before I jump into the lab, I want to take a moment to explain what this solution is all about and what it looks like from an architectural point of view.

What is vROPs Tenant App for vCloud Director?

The vROPs Tenant App is a solution that exposes vROPs performance metrics to tenants in a vCD environment. Each tenant can only see metrics data relevant to their organization.

From a service provider's point of view, this is an awesome solution, as the Tenant App enables tenants to have complete visibility into the performance metrics of their environment. If an environment is not performing as per expectations, tenants can leverage this solution to perform root cause analysis of performance issues, and they can carry out L1-L2 level maintenance/troubleshooting tasks themselves without raising service tickets with the service provider. Read More

How To Update/Patch Multi Cell vCD 10.x Environment

In this post we will learn how to patch/update a multi-cell vCD 10.x environment.

Note: The steps in this post are for updating vCD from one build to another (patch release) within the same version. Please do not confuse this with upgrading a vCD deployment, where we jump from one major version to another.

I will break this post up into 4 sections:

  • Pre Update Checks.
  • vCD Update Process.
  • Post Update Checks.
  • Post Update Tasks.

Pre Update Checks

There are a number of checks that must be performed before attempting to update a vCD environment.

1: vCD Health Check & Primary Node Identification: Before updating a multi-cell vCD environment, please ensure all vCD cells in the server group are healthy and functioning correctly.

We also need to identify the vCD primary node, as we will need this info later when taking a backup of the embedded database. To fetch this info, we can make use of the vCD appliance API as shown below:

Method: GET

URL: https://<vcd-fqdn>:5480/api/1.0.0/nodes

Headers: x-vcloud-authorization: auth-token

Response Output:

From the above output we can infer that all 3 vCD nodes are healthy, the cluster health is also good, and node “vcd1” is the current primary node. Read More
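The same GET call can be scripted. The sketch below assumes the response carries name/role fields per node, matching the sample output; the FQDN and auth token are placeholders:

```python
import json
import ssl
import urllib.request

def get_nodes(vcd_fqdn, auth_token):
    """Call the vCD appliance API on port 5480 to list cluster nodes."""
    req = urllib.request.Request(
        f"https://{vcd_fqdn}:5480/api/1.0.0/nodes",
        headers={"x-vcloud-authorization": auth_token,
                 "Accept": "application/json"},
    )
    ctx = ssl._create_unverified_context()  # lab self-signed cert only
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

def primary_node(nodes):
    """Pick the current primary from the node list; the 'name'/'role'
    keys mirror the sample output and are assumptions."""
    return next(n["name"] for n in nodes if n.get("role") == "primary")

# Against a live appliance:
# data = get_nodes("vcd.lab.local", "auth-token-placeholder")
# print(primary_node(data["nodes"]))
```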

UI and Deployment Parameter Changes in Cloud Foundation 3.9.1

VMware Cloud Foundation 3.9.1 was released yesterday, and this release brought a few changes with it. In this post I will jot down a few of those changes.

1: Updated BOM: VCF 3.9.1 deploys the SDDC with the latest release of vSphere, 6.7 U3b. It also includes NSX-V 6.4.6 (latest) for the management domain. For the complete list of the updated BOM, please refer to the screenshot below.

2: Changes in Cloud Builder UI: VCF 3.9.1 is based on Cloud Builder 2.2.1.0. This release of Cloud Builder has incorporated a few changes in the UI. A few of them are listed below:

2A: Workflow Migration from OVA Settings to UI: In older releases of Cloud Builder, the SDDC workflow type (vcf, vvd or vxrail) was defined while deploying the OVA. This has now been moved from the OVA settings into the main UI wizard.

2B: Easy Navigation Through Workflows: There’s a new progress bar to help you navigate more easily through the workflows. Read More

Configuring VCF 3.9 Multi-Instance Management (aka Federation)

One of the cool features introduced in VMware Cloud Foundation 3.9 is multi-instance management of VCF. This feature allows you to monitor multiple VCF instances from a single pane of glass. For customers who have SDDCs deployed via VCF across regions, it was a very difficult task to manage them all from one place.

To solve this problem, VCF 3.9 introduced the concept of Federation.

Federation allows multiple VCF instances to be connected together for aggregated visibility and ease of management. Customers now have the ability to view the health of all workloads running across all VCF instances globally. VCF multi-instance management enables customers to view their data centers as a single resource pool. The main features of Federation are:

  • Connects and pairs multiple VCF private cloud instances.
  • Provides aggregated and site level visibility of connected VCF instances.
  • View and monitor existing utilization, capacity and pending updates of each VCF instance.
Read More

Deploying NSX-T Based Workload Domain in VMware Cloud Foundation

In this post I will walk through the steps of deploying a VI workload domain based on NSX-T. Note: We can only deploy VI domains with NSX-T; as of now, the management workload domain is NSX-V based only.

Before kicking off an NSX-T based VI workload domain deployment, please ensure you have met the following prerequisites:

1: The NSX-T license has been added to SDDC Manager under Administration > Licensing.

2: The NSX-T install bundle has been downloaded from the repository. Below is a screenshot of a downloaded bundle.

3: A network pool has been created for the workload domain. This pool will provide IP addresses for the vMotion & vSAN networks.

4: The ESXi hosts that will take part in the workload domain have been configured and commissioned. If you are new to VCF and don’t know the host commission process, please refer to my earlier post for the steps.

Once we have met the above prerequisites, we are all set to kick off the new VI workload domain deployment. Read More