Introduction
vSphere with Tanzu currently doesn’t provide the AKO orchestration feature out of the box. What I mean by this is that you can’t automate the deployment of AKO pods based on cluster labels: no AkoDeploymentConfig is created when you enable workload management on a vSphere cluster, so nothing runs in your supervisor cluster to watch the cluster labels and trigger an automated AKO installation in the workload clusters.
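For context, this is roughly what an AkoDeploymentConfig looks like in Tanzu Kubernetes Grid (multi-cloud), where an AKO Operator in the management cluster matches it against cluster labels. All names and label values below are illustrative, and none of this machinery exists in vSphere with Tanzu today:

apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AkoDeploymentConfig
metadata:
  name: install-ako-l7            # illustrative name
spec:
  cloudName: Default-Cloud
  controller: alb.tanzu.lab
  serviceEngineGroup: Workload-SEG
  dataNetwork:
    name: Workload-VIP
    cidr: 172.19.84.0/24
  clusterSelector:                # AKO gets deployed in clusters carrying this label
    matchLabels:
      ako-l7: "enabled"           # illustrative label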
However, this does not preclude you from using NSX ALB to provide layer-7 ingress for your workload clusters. In a vSphere with Tanzu environment, AKO is installed via Helm charts and is a completely self-managed solution: you are in charge of maintaining the AKO life cycle, as sketched below.
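Life-cycle management here simply means running the usual Helm commands yourself. For example, a later upgrade would look something like this (the release name is the one generated by the install step later in this post; <new-version> is a placeholder):

# helm repo update
# helm upgrade ako-1648125579 ako/ako --version <new-version> -f values.yaml -n avi-system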
My Lab Setup
My lab’s bill of materials is shown below.
Component | Version
--------- | -------
NSX ALB (Enterprise) | 20.1.7
AKO | 1.6.2
vSphere | 7.0 U3c
Helm | 3.7.4
The current setup of the NSX ALB is shown in the table below.
Component | Details
--------- | -------
NSX ALB Controller | alb.tanzu.lab
ALB VIP Network | TKG-Cluster-VIP – 172.19.83.0/24
TKGs Workload Network | TKGS-Workload – 172.19.82.0/24
Service Engine Group | Default-Group (N+M buffer)
Avi IPAM Profile | tkgvsphere-tkgmgmt-ipam01
Only one VIP network is currently configured in NSX ALB, and it provides L4 load balancing for the control planes of the Supervisor and Tanzu Kubernetes clusters.
To achieve L7 ingress for the workload clusters using a dedicated network, I created/configured a few new items in my lab:
Object | Details
------ | -------
VIP Network | Workload-VIP – 172.19.84.0/24
Service Engine Group | Workload-SEG
The network ‘Workload-VIP’ is added to the existing IPAM profile.
Installation Procedure
1: Connect to the Tanzu Kubernetes cluster (Workload cluster)
Connect to the workload cluster where you wish to install AKO and deploy an ingress application.
kubectl vsphere login --vsphere-username=administrator@vsphere.local --server=172.19.83.3 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name=tkgs-workload01 --tanzu-kubernetes-cluster-namespace=prod
2: Switch the context to the workload cluster
# kubectl config use-context tkgs-workload01
Switched to context "tkgs-workload01".
3: Create the ‘avi-system’ namespace. AKO is deployed in this namespace.
# kubectl create ns avi-system
namespace/avi-system created
4: Configure Helm to use VMware’s public Harbor repository
# helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
"ako" has been added to your repositories
5: Search the Helm charts for available AKO versions
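The chart versions published in the repository can be listed with helm search; the --versions flag shows every available release rather than just the latest:

# helm search repo ako/ako --versions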
6: Generate the values.yaml file for the AKO deployment
helm show values ako/ako --version 1.6.2 > values.yaml
Edit the values.yaml file and enter your NSX ALB and network details. The fields below are the minimum you need to modify:
AKOSettings:
  clusterName: tkgs-workload01
NetworkSettings:
  nodeNetworkList:
    - networkName: "TKG-Workload"
      cidrs:
        - 172.19.82.0/24
  vipNetworkList:
    - networkName: "Workload-VIP"
      cidr: 172.19.84.0/24
L7Settings:
  defaultIngController: 'true'
  serviceType: ClusterIP
ControllerSettings:
  serviceEngineGroupName: Workload-SEG
  controllerVersion: '20.1.7'
  cloudName: Default-Cloud
  controllerHost: 'alb.tanzu.lab'
7: Install AKO in the workload cluster
# helm install ako/ako --generate-name --version 1.6.2 -f values.yaml --set ControllerSettings.controllerHost=alb.tanzu.lab --set avicredentials.username=admin --set avicredentials.password="VMware1!" --namespace=avi-system
NAME: ako-1648125579
LAST DEPLOYED: Thu Mar 24 12:39:44 2022
NAMESPACE: avi-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
Verify that AKO has been installed and is up and running:
# helm list -n avi-system
NAME            NAMESPACE   REVISION  UPDATED                                  STATUS    CHART      APP VERSION
ako-1648125579  avi-system  1         2022-03-24 12:39:44.085902162 +0000 UTC  deployed  ako-1.6.2  1.6.2
8: Verify the AKO pod status
# kubectl get po -n avi-system
NAME    READY   STATUS      RESTARTS   AGE
ako-0   0/1     Completed   3          79s
At this point, an IngressClass has been created in the workload cluster.
# kubectl get ingressclass
NAME     CONTROLLER              PARAMETERS   AGE
avi-lb   ako.vmware.com/avi-lb   <none>       61m
9: Deploy a sample application of type Ingress
In my lab, I deployed a sample Hackazon application using the YAML below. Please refer to my previous post for instructions on deploying the application.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: http-ingress-deployment
  labels:
    app: http-ingress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-ingress
  template:
    metadata:
      labels:
        app: http-ingress
    spec:
      containers:
        - name: http-ingress
          image: ianwijaya/hackazon
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
      imagePullSecrets:
        - name: regcred
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-svc
  labels:
    svc: ingress-svc
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: http-ingress
  type: LoadBalancer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: avisvcingress
  labels:
    app: avisvcingress
  annotations:
    kubernetes.io/ingress.class: avi
spec:
  rules:
    - host: ingress.tanzu.lab
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: ingress-svc
                port:
                  number: 80
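Assuming the manifest above is saved as hackazon.yaml (a file name of my choosing), apply it and check that the Ingress gets an address assigned:

# kubectl apply -f hackazon.yaml
# kubectl get ingress avisvcingress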
An ingress pool will be generated in NSX ALB once the application is deployed.
You should now be able to access the ingress (assuming you have created a DNS record for the ingress IP).
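A quick test from a machine that can resolve that DNS record:

# curl -I http://ingress.tanzu.lab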
I hope you enjoyed reading this post. Feel free to share this on social media if it is worth sharing.