Deploy a Management Cluster to Azure

This topic describes how to use the Tanzu installer interface to deploy a management cluster. The installer interface launches in a browser and takes you through the steps to configure the management cluster. The input values are saved in ~/.config/tanzu/tkg/clusterconfigs/cluster-config.yaml.

Before you begin

  • Make sure that you have installed Tanzu Community Edition. See Plan Your Install.

  • Make sure that you have completed steps to prepare to deploy a cluster. See Plan Your Install.

  • Make sure you have met the following installer prerequisites:

    • NTP is running on the bootstrap machine on which you are running tanzu management-cluster create and on the hypervisor.
    • A DHCP server is available.
    • The host where the CLI is being run has unrestricted Internet access in order to pull down container images.
    • Docker is running.
  • By default, Tanzu saves the kubeconfig for all management clusters in the ~/.kube-tkg/config file. If you want to save the kubeconfig file to a different location, set the KUBECONFIG environment variable before running the installer, for example:

    export KUBECONFIG=/path/to/mc-kubeconfig.yaml
    

Procedure

Start the Installer in your Browser

On the machine on which you downloaded and installed the Tanzu CLI, run the tanzu management-cluster create command with the --ui option:

tanzu management-cluster create --ui

If the prerequisites are met, the installer interface opens locally at http://127.0.0.1:8080 in your default browser. To change where the installer interface runs, including running it on a different machine from the Tanzu CLI, use the following parameters:

  • --browser specifies the local browser to open the interface in. Supported values are chrome, firefox, safari, ie, edge, or none. You can use none with --bind to run the interface on a different machine.
  • --bind specifies the IP address and port to serve the interface from. For example, if another process is already using http://127.0.0.1:8080, use --bind to serve the interface from a different local port.

Example:

tanzu management-cluster create --ui --bind 192.168.1.87:5555 --browser none

The Tanzu Installer opens. Click the Deploy button under Microsoft Azure.

Complete the Installer steps as follows:

Step 1: IaaS Provider

  1. In the IaaS Provider section, enter the Tenant ID, Client ID, Client Secret, and Subscription ID values for your Azure account. You recorded these values when you registered an Azure app and created a secret for it using the Azure Portal. For more information, see the Register Tanzu Community Edition as an Azure Client App topic.

  2. Select your Azure Environment, either Public Cloud or US Government Cloud. You can specify other environments by deploying from a configuration file and setting AZURE_ENVIRONMENT.

  3. Click Connect. The installer verifies the connection and changes the button label to Connected.

  4. Select the Azure region in which to deploy the management cluster.

  5. Paste the contents of your SSH public key, such as .ssh/id_rsa.pub, into the text box.

  6. Under Resource Group, select either the Select an existing resource group or the Create a new resource group radio button.

    • If you select Select an existing resource group, use the drop-down menu to select the group, then click Next.
    • If you select Create a new resource group, enter a name for the new resource group and then click Next.
  7. In the VNET for Azure section, select either the Create a new VNET on Azure or the Select an existing VNET radio button.

    • If you select Create a new VNET on Azure, use the drop-down menu to select the resource group in which to create the VNET and provide the following:
      • A name and a CIDR block for the VNET. The default is 10.0.0.0/16.
      • A name and a CIDR block for the control plane subnet. The default is 10.0.0.0/24.
      • A name and a CIDR block for the worker node subnet. The default is 10.0.1.0/24.
    • If you select Select an existing VNET, use the drop-down menus to select the resource group in which the VNET is located, the VNET name, the control plane and worker node subnets, and then click Next.
    • To make the management cluster private, select the Private Azure Cluster checkbox. By default, Azure management and workload clusters are public, but you can configure them to be private, which means their API server uses an Azure internal load balancer (ILB) and is therefore accessible only from within the cluster’s own VNET or peered VNETs. For more information, see the Azure Private Clusters topic.
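
If you later deploy from a configuration file instead of the installer interface, the choices in this section map to Azure-specific variables in the cluster configuration file. The following is a minimal sketch with placeholder values; the variable names reflect the standard Tanzu Azure settings, so verify them against the file that the installer generates in ~/.config/tanzu/tkg/clusterconfigs:

# Placeholder values; replace with the values for your Azure account and network.
AZURE_ENVIRONMENT: "AzurePublicCloud"
AZURE_TENANT_ID: <tenant-id>
AZURE_CLIENT_ID: <client-id>
AZURE_CLIENT_SECRET: <client-secret>
AZURE_SUBSCRIPTION_ID: <subscription-id>
AZURE_LOCATION: westus2
AZURE_SSH_PUBLIC_KEY_B64: <base64-encoded contents of .ssh/id_rsa.pub>
AZURE_RESOURCE_GROUP: my-resource-group
AZURE_VNET_RESOURCE_GROUP: my-resource-group
AZURE_VNET_NAME: my-vnet
AZURE_VNET_CIDR: 10.0.0.0/16
AZURE_CONTROL_PLANE_SUBNET_NAME: my-control-plane-subnet
AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24
AZURE_NODE_SUBNET_NAME: my-worker-subnet
AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24
AZURE_ENABLE_PRIVATE_CLUSTER: "false"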

Step 2: Management Cluster Settings

  1. In the Management Cluster Settings section, select an instance size for either Development or Production. If you select Development, the installer deploys a management cluster with a single control plane node. If you select Production, the installer deploys a highly available management cluster with three control plane nodes. Use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs. The minimum configuration is 2 CPUs and 8 GB memory. The list of compatible instance types varies in different regions. For information about the configurations of the different sizes of node instances for Azure, see Sizes for virtual machines in Azure.
  2. (Optional) Enter a name for your management cluster. If you do not specify a name, the installer generates a unique name. The name must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.
  3. Under Worker Node Instance Type, select the configuration for the worker node VM. If you select an instance type in the Production tile, the instance type that you select is automatically selected for the Worker Node Instance Type. If necessary, you can change this.
  4. The MachineHealthCheck option provides node health monitoring and node auto-repair on the clusters that you deploy with this management cluster. MachineHealthCheck is selected by default. You can activate or deactivate MachineHealthCheck on clusters after deployment by using the CLI.
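
In the cluster configuration file, these settings correspond to variables like the following. This is a sketch that assumes a Development-plan cluster; the instance type shown, Standard_D2s_v3, is just one example that meets the 2-CPU, 8 GB memory minimum:

# Example values; adjust the plan, cluster name, and instance types for your deployment.
CLUSTER_NAME: my-mgmt-cluster
CLUSTER_PLAN: dev            # prod deploys three control plane nodes
AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2s_v3
AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3
ENABLE_MHC: "true"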

Step 3: Metadata

In the optional Metadata section, provide descriptive information about the cluster:

  • Location: The geographical location in which the clusters run.
  • Description: A description of this cluster. The description has a maximum length of 63 characters and must start and end with a letter. It can contain only lower case letters, numbers, and hyphens, with no spaces.
  • Labels: Key/value pairs to help users identify clusters, for example release : beta, environment : staging, or environment : production. For more information, see Labels and Selectors.
    You can click Add to apply multiple labels to the clusters.

Any metadata that you specify here applies to the management cluster, standalone clusters, and workload clusters, and can be accessed by using the cluster management tool of your choice.

Step 4: Kubernetes Network

  1. Review the Cluster Service CIDR and Cluster Pod CIDR ranges. If the recommended CIDR ranges of 100.64.0.0/13 and 100.96.0.0/11 are unavailable, update the values under Cluster Service CIDR and Cluster Pod CIDR.

  2. (Optional) To send outgoing HTTP(S) traffic from the management cluster to a proxy, toggle Enable Proxy Settings and follow the instructions below to enter your proxy information. Tanzu applies these settings to kubelet, containerd, and the control plane. You can choose to use one proxy for HTTP traffic and another proxy for HTTPS traffic or to use the same proxy for both HTTP and HTTPS traffic.

    • To add your HTTP proxy information: Under HTTP Proxy URL, enter the URL of the proxy that handles HTTP requests. The URL must start with http://. For example, http://myproxy.com:1234. If the proxy requires authentication, under HTTP Proxy Username and HTTP Proxy Password, enter the username and password to use to connect to your HTTP proxy.

    • To add your HTTPS proxy information: If you want to use the same URL for both HTTP and HTTPS traffic, select Use the same configuration for https proxy. If you want to use a different URL for HTTPS traffic, enter the URL of the proxy that handles HTTPS requests. The URL must start with http://. For example, http://myproxy.com:1234. If the proxy requires authentication, under HTTPS Proxy Username and HTTPS Proxy Password, enter the username and password to use to connect to your HTTPS proxy.

    • Under No proxy, enter a comma-separated list of network CIDRs or hostnames that must bypass the HTTP(S) proxy. For example, noproxy.yourdomain.com,192.168.0.0/24. Internally, Tanzu appends localhost, 127.0.0.1, your VNET CIDR, Cluster Pod CIDR, and Cluster Service CIDR, .svc, .svc.cluster.local, and 169.254.0.0/16 to the list that you enter in this field.

    Important: If the management cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu environment, ensure that those endpoints are reachable by the proxies that you configured above or add them to No proxy. Depending on your environment configuration, this may include, but is not limited to, your OIDC or LDAP server, and Harbor.
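
In the configuration file, the network and proxy choices in this section correspond to variables like the following. This is a sketch using the example values above; the proxy URLs and no-proxy list are placeholders:

# Example values; omit the TKG_*_PROXY variables if you do not use a proxy.
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11
TKG_HTTP_PROXY_ENABLED: "true"
TKG_HTTP_PROXY: http://myproxy.com:1234
TKG_HTTPS_PROXY: http://myproxy.com:1234
TKG_NO_PROXY: noproxy.yourdomain.com,192.168.0.0/24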

Step 5: Identity Management

  1. In the Identity Management section, optionally uncheck Enable Identity Management Settings. You can deactivate identity management for proof-of-concept deployments, but it is strongly recommended to implement identity management in production deployments. If you deactivate identity management, you can activate it later.

  2. If you enabled identity management, select OIDC or LDAPS.

    OIDC:

    Provide details of your OIDC provider account, for example, Okta.

    • Issuer URL: The IP or DNS address of your OIDC server.
    • Client ID: The client_id value that you obtain from your OIDC provider. For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options to get a client_id and secret.
    • Client Secret: The secret value that you obtain from your OIDC provider.
    • Scopes: A comma-separated list of additional scopes to request in the token response. For example, openid,groups,email.
    • Username Claim: The name of your username claim. This is used to set a user’s username in the JSON Web Token (JWT) claim. Depending on your provider, enter claims such as user_name, email, or code.
    • Groups Claim: The name of your groups claim. This is used to set a user’s group in the JWT claim. For example, groups.
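
In the cluster configuration file, the OIDC settings above map to variables like the following. This is a sketch with placeholder values; the issuer URL, client ID, and client secret come from your own provider:

# Placeholder values from a hypothetical Okta account; replace with your provider's details.
IDENTITY_MANAGEMENT_TYPE: oidc
OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://dev-123456.okta.com
OIDC_IDENTITY_PROVIDER_CLIENT_ID: <client-id>
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: <client-secret>
OIDC_IDENTITY_PROVIDER_SCOPES: openid,groups,email
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups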

LDAPS:

Provide details of your company’s LDAPS server. All settings except for LDAPS Endpoint are optional.

  • LDAPS Endpoint: The IP or DNS address of your LDAPS server. Provide the address and port of the LDAP server, in the form host:port.
  • Bind DN: The DN for an application service account. The connector uses these credentials to search for users and groups. Not required if the LDAP server provides access for anonymous authentication.
  • Bind Password: The password for an application service account, if Bind DN is set.

Provide the user search attributes.

  • Base DN: The point from which to start the LDAP search. For example, OU=Users,OU=domain,DC=io.
  • Filter: A filter used by the LDAP search. For example, objectClass=group.
  • Username: The LDAP attribute that contains the user ID. For example, uid, sAMAccountName.

Provide the group search attributes.

  • Base DN: The point from which to start the LDAP search. For example, OU=Groups,OU=domain,DC=io.
  • Filter: A filter used by the LDAP search. For example, objectClass=group.
  • Name Attribute: The LDAP attribute that holds the name of the group. For example, cn.
  • User Attribute: The attribute of the user record that is used as the value of the membership attribute of the group record. For example, distinguishedName, DN.
  • Group Attribute: The attribute of the group record that holds the user/member information. For example, member.

Paste the contents of the LDAPS server CA certificate into the Root CA text box.
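
As with OIDC, the LDAPS settings map to variables in the cluster configuration file. The following sketch uses placeholder values based on the example attributes above; verify the variable names against the file that the installer generates:

# Placeholder values; the bind settings are only needed if your server does not allow anonymous binds.
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldaps.example.com:636
LDAP_BIND_DN: CN=svc-tanzu,OU=Users,OU=domain,DC=io
LDAP_BIND_PASSWORD: <bind-password>
LDAP_USER_SEARCH_BASE_DN: OU=Users,OU=domain,DC=io
LDAP_USER_SEARCH_USERNAME: uid
LDAP_GROUP_SEARCH_BASE_DN: OU=Groups,OU=domain,DC=io
LDAP_GROUP_SEARCH_FILTER: objectClass=group
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: member
LDAP_ROOT_CA_DATA_B64: <base64-encoded CA certificate>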

Finalize the Deployment

  1. Click Review Configuration to see the details of the management cluster that you have configured. When you click Review Configuration, Tanzu populates the cluster configuration file, which is located in the ~/.config/tanzu/tkg/clusterconfigs subdirectory, with the settings that you specified in the interface. You can optionally copy this configuration file, without completing the deployment, to another bootstrap machine and deploy the management cluster from that machine instead. For example, you might do this to deploy the management cluster from a bootstrap machine that does not have a Web browser; see the example after this list.

  2. (Optional) Under CLI Command Equivalent, click the Copy button to copy the CLI command for the configuration that you specified.

    Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment.

  3. (Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.

  4. Click Deploy Management Cluster.
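
If you copied the cluster configuration file to another bootstrap machine, or want to script the deployment, you can pass the file to the CLI directly. The following is a sketch; the file name is a placeholder for the generated file under ~/.config/tanzu/tkg/clusterconfigs:

tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/<generated-name>.yaml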

Deployment of the management cluster can take several minutes. The first run of tanzu management-cluster create takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap machine; subsequent runs do not require this step, so they are faster. You can follow the progress of the deployment in the installer interface or in the terminal in which you ran tanzu management-cluster create --ui.

If the machine on which you run tanzu management-cluster create shuts down or restarts before the local operations finish, the deployment fails. If you inadvertently close the browser or the browser tab in which the deployment is running before it finishes, the deployment continues in the terminal.
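
When the deployment completes, you can confirm from the same terminal that the management cluster is available, for example:

tanzu management-cluster get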
