Installing and configuring VMware Integrated OpenStack (VIO) 4.1 with Kubernetes

In this post I will show you how to install VMware Integrated OpenStack (VIO) on your existing vSphere environment, then walk through configuring your first Kubernetes cluster, as a preamble to my next posts on this subject.  If you already know what VIO is, you can skip straight to the install and configuration.

What is VIO? It is a VMware-supported OpenStack distribution that lets you run an enterprise-grade OpenStack cloud on top of VMware virtualization technologies. Use cases include building an IaaS platform, providing OpenStack API access to developers, leveraging edge computing and deploying NFV services on OpenStack.

What is Kubernetes? Kubernetes (often abbreviated to K8s) is an extensible open-source orchestration platform for managing containerized workloads and services that was developed by Google before being open-sourced in 2014.  You can read more about what K8s is by clicking the link at the bottom of this page.

So what then is VIO with Kubernetes? It adds a Kubernetes module to VIO that provides a Kubernetes management interface, allowing traditional vSphere admins to manage Kubernetes clusters; and because Kubernetes runs from within VIO, the provisioning of compute, storage and networking is done for you. Later in this post I will show how creating a cluster from within VIO automatically spawns several Ansible tasks, which you will see within the vSphere Web Client. From a developer’s point of view everything looks the same, with clusters being managed using the same kubectl commands.
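
To illustrate that last point, here is a minimal sketch of the kind of day-to-day kubectl usage that works unchanged against a VIO-provisioned cluster (the kubeconfig path and manifest name are hypothetical):

    # Point kubectl at the kubeconfig for the VIO-provisioned cluster
    # (the path is hypothetical; use wherever you saved yours)
    export KUBECONFIG=~/vio-demo/kubeconfig

    kubectl get nodes          # list the master and worker nodes VIO created
    kubectl apply -f app.yaml  # deploy a workload from a manifest, as on any other cluster
    kubectl get pods -o wide   # see which worker each pod was scheduled onto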

VIO enables you to deploy OpenStack clusters onto your existing vCenter Servers, leveraging the High Availability features of vSphere; when coupled with NSX and vSAN, admins can deploy and manage OpenStack clusters directly from the vSphere Web Client.

Under the hood, the cluster installation steps are carried out using Ansible to configure the nodes with predefined setups.  You can configure Cloud Providers, which are the infrastructure interfaces used by VIO with Kubernetes to create the VM instances that will act as masters, workers and routers.  When integrated with NSX, VIO with Kubernetes can create virtual networks on the fly, placing the Kubernetes nodes behind a Kubernetes router, which filters traffic to the containers hosted on the cluster.

When downloading VIO with Kubernetes, be sure to pick the right download as indicated below.  This guide works for VIO or VIO with Kubernetes, but if you have the wrong OVA you won’t be able to configure Kubernetes below.  I would advise checking out the Hands-on Labs for VIO before you install and configure, so that you’re familiar with the interface and what the resulting install should look like https://my.vmware.com/en/web/vmware/evalcenter?p=openstack-18-hol

 

Installation of VIO with Kubernetes

I have created a short video of the install, or you can use the step-by-step screenshots below.  Please note VIO 4.1 is compatible with vSphere 5.5 U2 and above, but if in doubt please see the requirements section in the link at the bottom of this page.

  • From within the vSphere console, right-click the Datacenter, cluster or host where you want to deploy VIO and choose “Deploy OVF Template…”

  • Click Browse…

  • Select the OVF Template you downloaded and click Open, then hit Next

  • Here you can rename the OVF Template, and choose your deployment location. Select your VM folder and click Next

  • Select the cluster/host/resource pool/vapp where the OVF template will be deployed, and click Next

  • Review the details, and click Next

  • Review the EULA then click Accept, then Next

  • Selecting storage will be unique to your environment, so here you can choose your datastore, VM Policy and vDisk format. I’m fortunate to have an SDDC to hand, so I chose Thick provision lazy zeroed and placed the VM onto my vSAN datastore. Configure and hit Next

  • Select the port group for your Management Network NIC then click Next

  • Here you can customise the template with static IP details, as you can see below. For the purposes of this install, however, I left it all blank and let DHCP take care of it. Note that if you try to click Next at this point you will get an error, as you need to scroll down and enter the root password twice. Once done, hit Next

  • You’re done! Skim over the summary to ensure it is correct and hit Finish
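
As an aside, if you would rather script the deployment than click through the wizard, the same OVA can be pushed with VMware’s ovftool. This is only a rough sketch using placeholder names from my lab; probe the OVA first (ovftool <path-to-ova>) to see the OVF properties it expects, as I have not listed them here:

    # The OVA file name, datastore, port group and inventory path below are placeholders
    ovftool \
      --acceptAllEulas \
      --name=vio-k8s-01 \
      --datastore=vsanDatastore \
      --network="VM Management" \
      --diskMode=thick \
      VMware-Integrated-OpenStack-with-Kubernetes-4.1.ova \
      'vi://administrator%40vsphere.local@vcenter.lab.local/Datacenter/host/Cluster'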

  • If you hop onto the vSphere Web Client and look at the recent tasks pane, you should see your VIO VM being created. Once it is up and running, make a note of the IP address if you did not set a static one

  • If all has gone to plan, you should now be able to see the VIO with Kubernetes blue screen in your Remote Console.

  • Navigate to the IP of your VIO VM and you will see the screen below

  • Log in with your root credentials

Configuration of VIO with Kubernetes

Once logged in, the first order of business is to create a Cloud Provider so that you can provision a Kubernetes cluster. You have two options for this: SDDC (vSphere/NSX/vSAN) or OpenStack.

Use the following steps to create an SDDC Cloud Provider (Note: you can only have one SDDC Cloud Provider per VIO with Kubernetes instance; therefore, if you want to manage multiple vCenters, you will need to provision multiple VIO with Kubernetes management appliances). Attempting to create more than one Cloud Provider per VIO will display the following:

  • From within the VIO console, click Cloud Providers on the left, then Deploy New Provider. If you have a saved JSON configuration file you can upload it here. You likely won’t have one yet, but you can save your configuration at the end for reuse. Hit Next

  • Enter a provider name, which can be anything you like, but of course always try to make it intuitive. Next you have two options. For the purposes of this lab I will select SDDC, which essentially means deploy onto vSphere. Hit Next. You can also choose OpenStack if you have that set up; if you don’t, you can install DevStack onto Ubuntu 16.04, which is very good for learning and lab purposes (see the quick sketch below) https://docs.openstack.org/devstack/latest/
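
For what it’s worth, a minimal DevStack install on a clean Ubuntu 16.04 box looks roughly like the sketch below; the passwords are placeholders, and you should follow the DevStack docs linked above for the real procedure:

    # Create the unprivileged user DevStack will run as
    sudo useradd -s /bin/bash -d /opt/stack -m stack
    echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
    sudo -u stack -i

    # Fetch DevStack and give it the minimum configuration it needs
    git clone https://opendev.org/openstack/devstack
    cd devstack
    echo '[[local|localrc]]'        >  local.conf
    echo 'ADMIN_PASSWORD=secret'    >> local.conf
    echo 'DATABASE_PASSWORD=secret' >> local.conf
    echo 'RABBIT_PASSWORD=secret'   >> local.conf
    echo 'SERVICE_PASSWORD=secret'  >> local.conf

    # Build the all-in-one OpenStack environment (this takes a while)
    ./stack.sh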

  • Enter the FQDN or IP of your vCenter server and your username and password. I am using self-signed certs so I check the Ignore the vCenter Server certificate validation box. Hit Next

  • Select a vSphere cluster to deploy your Kubernetes cluster to and hit Next

  • Select one or more datastores that you will use as storage for the Kubernetes cluster nodes then hit Next (Obvs I have vSAN in my lab which is awesome! Click here for information on its Hyper Converged goodness  https://www.vmware.com/products/vsan.html)

  • The Configure networking page presents you with three options.
    • I am fortunate enough to have the full SDDC stack in my lab, so I chose NSX-V and will explain the config below.
    • Distributed switch is what you would choose if you just have vSphere in your environment, no NSX.
    • NSX-T is a fantastic solution that can be used in conjunction with, or entirely without, vSphere. You can read more about it here https://featurewalkthrough.vmware.com/t/nsx/

  • Enter the NSX manager FQDN or IP, username and password
  • As before, I’m using self-signed certificates, so I ticked the Ignore the NSX-V SSL certificate validation checkbox
  • Select the transport zone from the drop down
  • Select the Edge resource pool
  • Choose your datastore
  • Select the Virtual distributed switch and click Next

  • Select the Port Group that is the external interface the NSX Edges for your Kubernetes clusters will connect to
  • I skipped the VLAN ID, but enter one here if you have VLANs configured
  • Enter the network CIDR for your management network, e.g. 192.168.0.0/24
  • Enter an allocation IP range for your Kubernetes NSX Edges
  • Enter your gateway
  • Enter your DNS server and hit Next (example values are sketched below)
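
To make those fields concrete, here is the shape of the values this page expects, using purely illustrative addressing from my lab (the allocation range must sit inside the CIDR and avoid addresses already in use):

    Network CIDR:         192.168.0.0/24
    Allocation IP range:  192.168.0.200 - 192.168.0.220
    Gateway:              192.168.0.1
    DNS server:           192.168.0.10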

  • I created a Local Admin User, but you can use Active Directory (LDAP) if you wish
  • Create your cluster admin username and password then click Next

  • Scroll down to check over your summary page. To save time in future, this is where you can download your provider JSON file, which, if you recall, can be uploaded at the beginning of this process and fills out most of the wizard for you the next time you run it. When you are happy, click Finish, which will begin creating your SDDC provider

  • The creation process takes approximately 15 to 20 minutes to complete

  • When finished, if all has gone well it should look like this

Create a Kubernetes cluster

Now that you have your SDDC provider, you are ready to deploy your first Kubernetes cluster. A Kubernetes cluster consists of at least one cluster master and multiple worker machines called nodes. You can read more about cluster architecture here https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture

  • Click the “Clusters” link located on the left of the management portal. Now that you have an SDDC provider, the grey Deploy New Cluster button should now be blue

  • Click Deploy New Cluster
  • Again, you can upload a JSON file, which saves you a ton of time in the future, but for now just hit Next

  • Select the SDDC provider you previously created and hit Next

  • Check the Use default node profile checkbox and click Next

  • Enter a cluster name and the number of master and worker nodes
  • Enter a DNS server
  • Enter your vRealize Log Insight Server IP. If you’ve not tried Log Insight, I can’t recommend it enough, and a lesser known fact is that you get 25 free OSI licences with any supported version of vCenter Standard. Check it out here: https://www.vmware.com/uk/products/vrealize-log-insight.html
  • Select either an exclusive or shared cluster type. Exclusive clusters indicate a single tenant. Shared clusters bind users to a namespace for multitenancy on a Kubernetes cluster
  • Click Next

  • Select the users for your cluster. On a shared cluster you can create a namespace here, then add the users to the namespace (see the kubectl sketch below for what this looks like to users). Click Next
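
As a rough illustration of what the shared model means in practice, each tenant ends up scoped to their namespace, so their kubectl view of the cluster is limited accordingly (the namespace names here are made up):

    kubectl get namespaces                   # the cluster admin can see every tenant namespace
    kubectl --namespace=team-a get pods      # a team-a user works only within their own namespace
    kubectl auth can-i list pods -n team-b   # ...and can confirm other namespaces are off limits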

  • Verify the settings on your Kubernetes cluster, remembering to scroll down to see additional detail.
  • Again you can download the JSON config file for ease when repeating this process. If all looks well, click Finish

  • You will now see that your Kubernetes cluster is being created. This usually only takes around five minutes, but obviously depends on how many nodes you specified

  • While you wait, if you go back to your vSphere Web Client, you will see the cluster being created, with the creation of multiple VMs, a Kubernetes Router and NSX virtual wires

And that’s all folks, you have now deployed VIO with Kubernetes onto your SDDC. You can access Kubernetes by navigating to its management IP in your web browser.
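
Once you have kubectl access to the new cluster, a quick smoke test from your workstation might look like this (the deployment name is hypothetical, and on older kubectl versions you may prefer kubectl run):

    kubectl cluster-info                                # confirm the API server and core services respond
    kubectl get nodes -o wide                           # the masters and workers should all report Ready
    kubectl create deployment hello-web --image=nginx   # schedule a throwaway test workload
    kubectl get pods -w                                 # watch it start, then clean up:
    kubectl delete deployment hello-web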

You can read more about VIO here https://docs.vmware.com/en/VMware-Integrated-OpenStack/4.1/com.vmware.openstack.admin.doc/GUID-22A17509-22FA-4C1B-B89E-C1EC196FE867.html

System requirements can be found here https://docs.vmware.com/en/VMware-Integrated-OpenStack/4.1/integrated-openstack-41-install-config-guide.pdf

What is Kubernetes?  https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/