Deploy a Management Cluster to vSphere

This topic describes how to use the Tanzu installer interface to deploy a management cluster. The installer interface launches in a browser and takes you through the steps to configure the management cluster. The input values are saved to ~/.config/tanzu/tkg/clusterconfigs/cluster-config.yaml.

Before you begin

  • Make sure that you have installed Tanzu Community Edition. See Plan Your Install.

  • Make sure that you have completed steps to prepare to deploy a cluster. See Plan Your Install.

  • Make sure you have met the following installer prerequisites:

    • NTP is running on the bootstrap machine on which you are running tanzu management-cluster create and on the hypervisor.
    • A DHCP server is available.
    • The host where the CLI is being run has unrestricted Internet access in order to pull down container images.
    • Docker is running.
  • By default, Tanzu saves the kubeconfig for all management clusters in the ~/.kube-tkg/config file. If you want to save the kubeconfig file to a different location, set the KUBECONFIG environment variable before running the installer, for example:

    export KUBECONFIG=/path/to/mc-kubeconfig.yaml
    

Procedure

Start the Installer in your Browser

On the machine on which you downloaded and installed the Tanzu CLI, run the tanzu management-cluster create command with the --ui option:

tanzu management-cluster create --ui

If the prerequisites are met, the installer interface opens locally, at http://127.0.0.1:8080 in your default browser. To change where the installer interface runs, including running it on a different machine from the Tanzu CLI, use the following parameters:

  • --browser specifies the local browser to open the interface in. Supported values are chrome, firefox, safari, ie, edge, or none. You can use none with --bind to run the interface on a different machine.
  • --bind specifies the IP address and port to serve the interface from. For example, if another process is already using http://127.0.0.1:8080, use --bind to serve the interface from a different local port.

Example:

tanzu management-cluster create --ui --bind 192.168.1.87:5555 --browser none

The Tanzu Installer opens. Click the Deploy button for VMware vSphere, AWS, Azure, or Docker.

Note: If you are bootstrapping from a Windows machine and you encounter the following error, see this troubleshooting entry for a workaround.

Error: unable to ensure prerequisites: unable to ensure tkg BOM file: failed to download TKG compatibility file from the registry: failed to list TKG compatibility image tags: Get "https://projects.registry.vmware.com/v2/": x509: certificate signed by unknown authority

Complete the Installer steps as follows:

Step 1: IaaS Provider

  1. Enter the IP address or fully qualified domain name (FQDN) for the vCenter Server instance on which to deploy the management cluster. Tanzu does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6.
  2. Enter the vCenter Single Sign On username and password for a user account that has the required privileges for Tanzu operation, and click Connect.
  3. Verify the SSL thumbprint of the vCenter Server certificate and click Continue if it is valid. For information about how to obtain the vCenter Server certificate thumbprint, see Obtain vSphere Certificate Thumbprints. A command-line example also follows this list.
  4. Select the datacenter in which to deploy the management cluster from the Datacenter drop-down menu.
  5. Paste the contents of your SSH public key into the text box and click Next.
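
For example (step 3 above), one way to read the SHA-1 fingerprint of the certificate that vCenter Server presents is to use openssl from the bootstrap machine. This is a minimal sketch, not part of the official procedure, and vcenter.example.com is a placeholder for your vCenter address:

# Print the SHA-1 fingerprint of the certificate served on vCenter port 443.
echo | openssl s_client -connect vcenter.example.com:443 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha1

Compare the printed fingerprint with the thumbprint that the installer displays before you click Continue.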

Step 2: Management Cluster Settings

  1. In the Management Cluster Settings section, select an instance size for either Development or Production. If you select Development, the installer deploys a management cluster with a single control plane node. If you select Production, the installer deploys a highly available management cluster with three control plane nodes. Use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.
  2. (Optional) Enter a name for your management cluster under Management Cluster Name. If you do not specify a name, the installer generates a unique name. The name must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.
  3. The Machine Health Check option provides node health monitoring and node auto-repair on the clusters that you deploy with this management cluster. This option is selected by default.
  4. Select the Control Plane Endpoint Provider. This can be either the default kube-vip or, if available, NSX Advanced Load Balancer.
  5. Under Control Plane Endpoint, enter a static virtual IP address or FQDN for API requests to the management cluster. Ensure that this IP address is not in your DHCP range, but is in the same subnet as the DHCP range. If you mapped an FQDN to the VIP address, you can specify the FQDN instead of the VIP address.
  6. Under Worker Node Instance Type, select the configuration for the worker node VM. If you select an instance type in the Production tile, the instance type that you select is automatically selected for the Worker Node Instance Type. If necessary, you can change this.
  7. Select the Enable Audit Logging checkbox to capture additional audit logs.
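
When you click Review Configuration at the end of the installer, these choices are written to the cluster configuration file. As a hedged illustration only, using the standard Tanzu cluster configuration variable names and placeholder values rather than recommendations, the Step 2 settings correspond to entries such as:

CLUSTER_NAME: my-mgmt-cluster
CLUSTER_PLAN: dev                             # dev deploys 1 control plane node, prod deploys 3
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.10   # static VIP or FQDN outside the DHCP range
ENABLE_MHC: true                              # Machine Health Checks
ENABLE_AUDIT_LOGGING: false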

Step 3: VMware NSX Advanced Load Balancer

VMware NSX Advanced Load Balancer provides an L4 load balancing solution for vSphere. NSX Advanced Load Balancer includes a Kubernetes operator that integrates with the Kubernetes API to manage the lifecycle of load balancing and ingress resources for workloads. To use NSX Advanced Load Balancer, you must first deploy it in your vSphere environment. For information, see Install VMware NSX Advanced Load Balancer on a vSphere Distributed Switch.

In the optional VMware NSX Advanced Load Balancer section, you can configure Tanzu to use NSX Advanced Load Balancer. By default all workload clusters will use the load balancer.

  1. For Controller Host, enter the IP address or FQDN of the Controller VM.

  2. Enter the username and password that you set for the Controller host when you deployed it, and click Verify Credentials.

  3. Use the Cloud Name drop-down menu to select the cloud that you created in your NSX Advanced Load Balancer deployment.

    For example, Default-Cloud.

  4. Use the Service Engine Group Name drop-down menu to select a Service Engine Group.

    For example, Default-Group.

  5. For VIP Network Name, use the drop-down menu to select the name of the network where the load balancer floating IP Pool resides.

    The VIP network for NSX Advanced Load Balancer must be present in the same vCenter Server instance as the Kubernetes network that Tanzu uses. This allows NSX Advanced Load Balancer to discover the Kubernetes network in vCenter Server and to deploy and configure Service Engines. The drop-down menu is present in Tanzu v1.3.1 and later. In v1.3.0, you enter the name manually.

    You can see the network in the Infrastructure > Networks view of the NSX Advanced Load Balancer interface.

  6. For VIP Network CIDR, use the drop-down menu to select the CIDR of the subnet to use for the load balancer VIP.

    This comes from one of the VIP Network’s configured subnets. You can see the subnet CIDR for a particular network in the Infrastructure > Networks view of the NSX Advanced Load Balancer interface.

  7. Paste the contents of the Certificate Authority that is used to generate your Controller Certificate into the Controller Certificate Authority text box.

    If you have a self-signed Controller Certificate, the Certificate Authority is the same as the Controller Certificate.

  8. (Optional) Enter one or more cluster labels to identify clusters on which to selectively activate NSX Advanced Load Balancer or to customize NSX Advanced Load Balancer Settings per group of clusters.

    By default, all clusters that you deploy with this management cluster will activate NSX Advanced Load Balancer. All clusters will share the same VMware NSX Advanced Load Balancer Controller, Cloud, Service Engine Group, and VIP Network as you entered previously. This cannot be changed later. To only activate the load balancer on a subset of clusters, or to preserve the ability to customize NSX Advanced Load Balancer settings for a group of clusters, add labels in the format key: value. For example team: tkg.

    This is useful in the following scenarios:

    • You want to configure different sets of workload clusters to different Service Engine Groups to implement isolation or to support more Services of type LoadBalancer than one Service Engine Group’s capacity allows.

    • You want to configure different sets of workload clusters to different Clouds because they are deployed in separate sites.

      NOTE: Labels that you define here will be used to create a label selector. Only workload cluster Cluster objects that have the matching labels will have the load balancer activated. As a consequence, you are responsible for making sure that the workload cluster’s Cluster object has the corresponding labels. For example, if you use team: tkg, then to activate the load balancer on a workload cluster you must perform the following steps after the management cluster is deployed:

      1. Set kubectl to the management cluster’s context.

        kubectl config use-context management-cluster@admin
        
      2. Label the Cluster object of the corresponding workload cluster with the labels defined. If you define multiple key-values, you need to apply all of them.

        kubectl label cluster <cluster-name> team=tkg
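
        To confirm that the labels were applied and match the selector that you defined in the installer, you can list the Cluster objects together with their labels:

        kubectl get clusters --show-labels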
        

Step 4: Metadata

In the optional Metadata section, provide descriptive information about the cluster:

  • Location: The geographical location in which the clusters run.
  • Description: A description of this cluster. The description has a maximum length of 63 characters and must start and end with a letter. It can contain only lower case letters, numbers, and hyphens, with no spaces.
  • Labels: Key/value pairs to help users identify clusters, for example release : beta, environment : staging, or environment : production. For more information, see Labels and Selectors.
    You can click Add to apply multiple labels to the clusters.

Any metadata that you specify here applies to the management cluster and workload clusters, and can be accessed by using the cluster management tool of your choice.

Step 5: Resources

In the Resources section, select vSphere resources for the management cluster to use, and click Next.

  • Select the VM folder in which to place the management cluster VMs.
  • Select a vSphere datastore for the management cluster to use.
  • Select the cluster, host, or resource pool in which to place the management cluster.

If appropriate resources do not already exist in vSphere, go to vSphere to create them without quitting the Tanzu installer, and then click the refresh button so that the new resources can be selected.
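
If you prefer to create these resources from the command line rather than the vSphere Client, one common option outside the scope of this guide is the govc CLI. The sketch below assumes govc is installed and that the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables point at your vCenter; the inventory paths are placeholders:

# Create a VM folder and a resource pool for the management cluster.
govc folder.create /my-datacenter/vm/tce-mgmt
govc pool.create /my-datacenter/host/my-cluster/Resources/tce-mgmt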

Step 6: Kubernetes Network

  1. Under Network Name, select a vSphere network to use as the Kubernetes service network.

  2. (Optional) To send outgoing HTTP(S) traffic from the management cluster to a proxy, toggle Enable Proxy Settings and follow the instructions below to enter your proxy information. Tanzu applies these settings to kubelet, containerd, and the control plane. You can choose to use one proxy for HTTP traffic and another proxy for HTTPS traffic or to use the same proxy for both HTTP and HTTPS traffic.

    • To add your HTTP proxy information: Under HTTP Proxy URL, enter the URL of the proxy that handles HTTP requests. The URL must start with http://. For example, http://myproxy.com:1234. If the proxy requires authentication, under HTTP Proxy Username and HTTP Proxy Password, enter the username and password to use to connect to your HTTP proxy.

    • To add your HTTPS proxy information: If you want to use the same URL for both HTTP and HTTPS traffic, select Use the same configuration for https proxy. If you want to use a different URL for HTTPS traffic, enter the URL of the proxy that handles HTTPS requests. The URL must start with http://. For example, http://myproxy.com:1234. If the proxy requires authentication, under HTTPS Proxy Username and HTTPS Proxy Password, enter the username and password to use to connect to your HTTPS proxy.

    • Under No proxy, enter a comma-separated list of network CIDRs or hostnames that must bypass the HTTP(S) proxy. For example, noproxy.yourdomain.com,192.168.0.0/24. You must enter the CIDR of the vSphere network that you selected under Network Name. The vSphere network CIDR includes the IP address of your Control Plane Endpoint. If you entered an FQDN under Control Plane Endpoint, add both the FQDN and the vSphere network CIDR to No proxy. Internally, Tanzu appends localhost, 127.0.0.1, the values of Cluster Pod CIDR and Cluster Service CIDR, .svc, and .svc.cluster.local to the list that you enter in this field.

    Important: If the management cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu environment, ensure that those endpoints are reachable by the proxies that you configured above or add them to No proxy. Depending on your environment configuration, this may include, but is not limited to, your OIDC or LDAP server, and Harbor.
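
    For reference, a hedged sketch of how the proxy settings typically appear in the generated cluster configuration file, using the standard Tanzu variable names and the example addresses from above:

      TKG_HTTP_PROXY: http://myproxy.com:1234
      TKG_HTTPS_PROXY: http://myproxy.com:1234
      TKG_NO_PROXY: noproxy.yourdomain.com,192.168.0.0/24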

Step 7: Identity Management

  1. In the Identity Management section, optionally uncheck Enable Identity Management Settings. You can deactivate identity management for proof-of-concept deployments, but it is strongly recommended to implement identity management in production deployments. If you deactivate identity management, you can activate it later.

  2. If you enabled identity management, select OIDC or LDAPS.

    OIDC:

    Provide details of your OIDC provider account, for example, Okta.

    • Issuer URL: The IP or DNS address of your OIDC server.
    • Client ID: The client_id value that you obtain from your OIDC provider. For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options to get a client_id and secret.
    • Client Secret: The secret value that you obtain from your OIDC provider.
    • Scopes: A comma-separated list of additional scopes to request in the token response. For example, openid,groups,email.
    • Username Claim: The name of your username claim. This is used to set a user’s username in the JSON Web Token (JWT) claim. Depending on your provider, enter claims such as user_name, email, or code.
    • Groups Claim: The name of your groups claim. This is used to set a user’s group in the JWT claim. For example, groups.
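
    As a hedged illustration, these OIDC fields map to cluster configuration entries along the following lines, using the standard Tanzu variable names with placeholder values (the scopes and claims repeat the examples above):

      IDENTITY_MANAGEMENT_TYPE: oidc
      OIDC_IDENTITY_PROVIDER_ISSUER_URL: <issuer-url>
      OIDC_IDENTITY_PROVIDER_CLIENT_ID: <client-id>
      OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: <client-secret>
      OIDC_IDENTITY_PROVIDER_SCOPES: openid,groups,email
      OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
      OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups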

LDAPS:

Provide details of your company’s LDAPS server. All settings except for LDAPS Endpoint are optional.

  • LDAPS Endpoint: The IP or DNS address of your LDAPS server. Provide the address and port of the LDAP server, in the form host:port.
  • Bind DN: The DN for an application service account. The connector uses these credentials to search for users and groups. Not required if the LDAP server provides access for anonymous authentication.
  • Bind Password: The password for an application service account, if Bind DN is set.

Provide the user search attributes.

  • Base DN: The point from which to start the LDAP search. For example, OU=Users,OU=domain,DC=io.
  • Filter: A filter used by the LDAP search. For example, objectClass=group.
  • Username: The LDAP attribute that contains the user ID. For example, uid, sAMAccountName.

Provide the group search attributes.

  • Base DN: The point from which to start the LDAP search. For example, OU=Groups,OU=domain,DC=io.
  • Filter: A filter used by the LDAP search. For example, objectClass=group.
  • Name Attribute: The LDAP attribute that holds the name of the group. For example, cn.
  • User Attribute: The attribute of the user record that is used as the value of the membership attribute of the group record. For example, distinguishedName, DN.
  • Group Attribute: The attribute of the group record that holds the user/member information. For example, member.

Paste the contents of the LDAPS server CA certificate into the Root CA text box.
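
If you do not have the CA certificate at hand, one common way to inspect the certificate chain that the LDAPS server presents is openssl. This is a generic sketch, with ldaps.example.com standing in for your server:

# Show the certificates served on the standard LDAPS port.
openssl s_client -connect ldaps.example.com:636 -showcerts </dev/null

Copy the PEM block of the issuing CA from the output into the Root CA text box.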

Step 8: OS Image

In the OS Image section, use the drop-down menu to select the OS and Kubernetes version image template to use for deploying Tanzu VMs, and click Next.

Note: Only the images you uploaded in the Import the Base Image Template into vSphere procedure will appear in the drop-down list. You can import an image now, without quitting the installer interface. After you import it, use the Refresh button to make it appear in the drop-down menu.
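
If you prefer to import the base image without the vSphere Client, one common approach, sketched here with placeholder names and not part of the official procedure, is the govc CLI with the same GOVC_* environment variables as above:

# Upload the OVA into a datastore and resource pool, then mark the resulting VM as a template.
govc import.ova -ds my-datastore -pool /my-datacenter/host/my-cluster/Resources /path/to/base-image.ova
govc vm.markastemplate <imported-vm-name>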

Finalize the Deployment

  1. Click Review Configuration to see the details of the management cluster that you have configured. When you click Review Configuration, Tanzu populates the cluster configuration file, which is located in the ~/.config/tanzu/tkg/clusterconfigs subdirectory, with the settings that you specified in the interface. You can optionally copy the cluster configuration file without completing the deployment. You can copy the cluster configuration file to another bootstrap machine and deploy the management cluster from that machine. For example, you might do this so that you can deploy the management cluster from a bootstrap machine that does not have a Web browser.

  2. (Optional) Under CLI Command Equivalent, click the Copy button to copy the CLI command for the configuration that you specified.

    Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment.

  3. (Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.

  4. Click Deploy Management Cluster.

Deployment of the management cluster can take several minutes. The first run of tanzu management-cluster create takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine. Subsequent runs do not require this step, so are faster. You can follow the progress of the deployment of the management cluster in the installer interface or in the terminal in which you ran tanzu management-cluster create --ui. If the machine on which you run tanzu management-cluster create shuts down or restarts before the local operations finish, the deployment will fail. If you inadvertently close the browser or browser tab in which the deployment is running before it finishes, the deployment continues in the terminal.
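
If you copied the cluster configuration file or the CLI command equivalent to another bootstrap machine, you can start the deployment from the command line instead of the interface. A minimal example, assuming the configuration file described at the start of this topic:

tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/cluster-config.yaml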
