Deploy a Management Cluster to AWS

This topic describes how to use the Tanzu installer interface to deploy a management cluster to AWS. The installer interface launches in a browser and takes you through the steps to configure the management cluster. The input values are saved to ~/.config/tanzu/tkg/clusterconfigs/cluster-config.yaml.

Before you begin

  • Make sure that you have installed Tanzu Community Edition. See Plan Your Install.

  • Make sure that you have completed the steps to prepare to deploy a cluster. See Plan Your Install.

  • Make sure you have met the following installer prerequisites:

    • NTP is running on the bootstrap machine on which you are running tanzu management-cluster create and on the hypervisor.
    • A DHCP server is available.
    • The host where the CLI is being run has unrestricted Internet access in order to pull down container images.
    • Docker is running.
  • By default, Tanzu saves the kubeconfig for all management clusters in the ~/.kube-tkg/config file. If you want to save the kubeconfig file to a different location, set the KUBECONFIG environment variable before running the installer, for example:

    export KUBECONFIG=/path/to/mc-kubeconfig.yaml
    

Procedure

Start the Installer in your Browser

On the machine on which you downloaded and installed the Tanzu CLI, run the tanzu management-cluster create command with the --ui option:

tanzu management-cluster create --ui

If the prerequisites are met, the installer interface opens locally at http://127.0.0.1:8080 in your default browser. To change where the installer interface runs, including running it on a different machine from the Tanzu CLI, use the following parameters:

  • --browser specifies the local browser to open the interface in. Supported values are chrome, firefox, safari, ie, edge, or none. You can use none with --bind to run the interface on a different machine.
  • --bind specifies the IP address and port to serve the interface from. For example, if another process is already using http://127.0.0.1:8080, use --bind to serve the interface from a different local port.

Example:

tanzu management-cluster create --ui --bind 192.168.1.87:5555 --browser none

The Tanzu installer opens. Click the Deploy button for AWS. (The installer also offers VMware vSphere, Azure, and Docker.)

Note: If you are bootstrapping from a Windows machine and you encounter the following error, see this troubleshooting entry for a workaround.

Error: unable to ensure prerequisites: unable to ensure tkg BOM file: failed to download TKG compatibility file from the registry: failed to list TKG compatibility image tags: Get "https://projects.registry.vmware.com/v2/": x509: certificate signed by unknown authority

Complete the Installer steps as follows:

Step 1: IaaS Provider

  1. In the IaaS Provider section, enter credentials for your AWS account. You have two options:

    • In the AWS Credential Profile drop-down, select an existing AWS credential profile. If you select a profile, the access key and session token information configured for the profile are passed to the installer without displaying actual values in the UI.
    • Alternatively, enter AWS account credentials directly in the Access Key ID and Secret Access Key fields. For information about setting up credential profiles, see Prepare to Deploy a Management Cluster to AWS.
    • With either option, you can specify an AWS session token in Session Token if your AWS account is configured to require temporary credentials. For more information on acquiring session tokens, see Using temporary credentials with AWS resources.
  2. In Region, select the AWS region in which to deploy the cluster. If you intend to deploy a production management cluster, this region must have at least three availability zones. The SSH key that you enter in the next field must also be registered in this region. For a list of regions, see the AWS documentation.

  3. In SSH Key Name, specify the name of an SSH key that is already registered with both your Amazon EC2 account and the region where you are deploying the cluster. For information about creating and registering SSH keys, see Prepare to Deploy a Management Cluster to AWS.

  4. If this is the first time deploying a cluster, select the Automate creation of AWS CloudFormation Stack checkbox, and click Connect.

    The CloudFormation stack creates the AWS Identity and Access Management (IAM) resources that Tanzu Community Edition needs to deploy and run clusters on AWS. For more information, see Required IAM Resources.

  5. If the connection is successful, click Next.
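
Before you click Connect, you can optionally confirm from the bootstrap machine that the credentials you plan to use are valid. A minimal sketch using the AWS CLI (assumed to be installed; list-profiles requires AWS CLI v2, and my-profile is a placeholder name):

aws configure list-profiles                      # list the credential profiles the CLI knows about
aws sts get-caller-identity --profile my-profile # confirm that the profile authenticates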

Step 2: VPC for AWS

In the VPC for AWS section, do one of the following:

  • To create a new VPC, select Create new VPC on AWS, check that the pre-filled CIDR block is available, and click Next. The recommended VPC CIDR block is 10.0.0.0/16. If that block is not available, enter a different IP range in CIDR format for the management cluster to use.
  • To use an existing VPC, select Select an existing VPC and select the VPC ID from the drop-down menu. The VPC CIDR block is filled in automatically when you select the VPC.

For more information about VPCs, see Virtual Private Clouds and NAT Gateway Limits.
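
If you plan to use an existing VPC, you can list your VPCs and their CIDR blocks from the bootstrap machine before starting the installer. A minimal sketch, assuming the AWS CLI is installed and configured:

aws ec2 describe-vpcs --query 'Vpcs[].{ID:VpcId,CIDR:CidrBlock}' --output table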

Step 3: Management Cluster Settings

  1. In the Management Cluster Settings section, select an instance size for either Development or Production. If you select Development, the installer deploys a management cluster with a single control plane node. If you select Production, the installer deploys a highly available management cluster with three control plane nodes. Use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs. Choices are listed alphabetically, not by size, and the minimum configuration is 2 CPUs and 8 GB memory. The list of compatible instance types varies by region. For information about the configuration of the different instance sizes, see Amazon EC2 Instance Types. (A configuration-file excerpt showing how the settings in this section are recorded follows this list.)

  2. (Optional) Enter a name for your management cluster. If you do not specify a name, the installer generates a unique name. If you do specify a name, the name must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements as outlined in RFC 952 and amended in RFC 1123.

  3. Under Worker Node Instance Type, select the configuration for the worker node VM. If you selected an instance type in the Production tile, that instance type is automatically selected for Worker Node Instance Type. If necessary, you can change this.

  4. MachineHealthCheck is selected by default. MachineHealthCheck provides node health monitoring and node auto-repair on the clusters that you deploy with this management cluster. You can activate or deactivate MachineHealthCheck on clusters after deployment by using the CLI.

  5. (Optional) Clear the Bastion Host checkbox if a bastion host already exists in the availability zone(s) in which you are deploying the management cluster.

  6. Configure Availability Zones. From the Availability Zone 1 drop-down menu, select an availability zone for the management cluster. In the Development tile, you can select only one availability zone. If you selected the Production tile, use the Availability Zone 1, Availability Zone 2, and Availability Zone 3 drop-down menus to select three unique availability zones. When Tanzu deploys the production management cluster, it distributes its three control plane nodes across these availability zones.

  7. To complete the configuration of the Management Cluster Settings section, do one of the following:

    • If you created a new VPC in the VPC for AWS section, click Next.
    • If you selected an existing VPC in the VPC for AWS section, use the VPC public subnet and VPC private subnet drop-down menus to select existing subnets on the VPC and click Next.
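
The choices in this section are recorded as variables in the generated cluster configuration file. An illustrative excerpt with placeholder values (variable names follow the Tanzu cluster configuration conventions; the exact set written can vary by release):

CLUSTER_PLAN: prod
CONTROL_PLANE_MACHINE_TYPE: t3.large
NODE_MACHINE_TYPE: m5.large
AWS_NODE_AZ: us-west-2a
AWS_NODE_AZ_1: us-west-2b
AWS_NODE_AZ_2: us-west-2c
BASTION_HOST_ENABLED: "true"
ENABLE_MHC: "true"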

Step 4: Metadata

In the optional Metadata section, provide descriptive information about the cluster:

  • Location: The geographical location in which the clusters run.
  • Description: A description of this cluster. The description has a maximum length of 63 characters and must start and end with a letter. It can contain only lowercase letters, numbers, and hyphens, with no spaces.
  • Labels: Key/value pairs to help users identify clusters, for example release : beta, environment : staging, or environment : production. For more information, see Labels and Selectors.
    You can click Add to apply multiple labels to the clusters.

Any metadata that you specify here applies to the management cluster and workload clusters, and can be accessed by using the cluster management tool of your choice.
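
After you click Review Configuration in the final step, you can check how this metadata was recorded by searching the generated configuration file. For example (the exact variable names can vary by release):

grep -i -E 'location|description|labels' ~/.config/tanzu/tkg/clusterconfigs/cluster-config.yaml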

Step 5: Kubernetes Network

  1. Review the Cluster Service CIDR and Cluster Pod CIDR ranges. If the recommended CIDR ranges of 100.64.0.0/13 and 100.96.0.0/11 are unavailable, update the values under Cluster Service CIDR and Cluster Pod CIDR.

  2. (Optional) To send outgoing HTTP(S) traffic from the management cluster to a proxy, toggle Enable Proxy Settings and follow the instructions below to enter your proxy information. Tanzu applies these settings to kubelet, containerd, and the control plane. You can choose to use one proxy for HTTP traffic and another proxy for HTTPS traffic or to use the same proxy for both HTTP and HTTPS traffic.

    • To add your HTTP proxy information: Under HTTP Proxy URL, enter the URL of the proxy that handles HTTP requests. The URL must start with http://. For example, http://myproxy.com:1234. If the proxy requires authentication, under HTTP Proxy Username and HTTP Proxy Password, enter the username and password to use to connect to your HTTP proxy.

    • To add your HTTPS proxy information: If you want to use the same URL for both HTTP and HTTPS traffic, select Use the same configuration for https proxy. If you want to use a different URL for HTTPS traffic, enter the URL of the proxy that handles HTTPS requests. The URL must start with http://. For example, http://myproxy.com:1234. If the proxy requires authentication, under HTTPS Proxy Username and HTTPS Proxy Password, enter the username and password to use to connect to your HTTPS proxy.

    • Under No proxy, enter a comma-separated list of network CIDRs or hostnames that must bypass the HTTP(S) proxy. For example, noproxy.yourdomain.com,192.168.0.0/24. Internally, Tanzu appends localhost, 127.0.0.1, your VPC CIDR, the Cluster Pod CIDR and Cluster Service CIDR, .svc, .svc.cluster.local, and 169.254.0.0/16 to the list that you enter in this field.

    Important: If the management cluster VMs need to communicate with external services and infrastructure endpoints in your Tanzu environment, ensure that those endpoints are reachable by the proxies that you configured above or add them to No proxy. Depending on your environment configuration, this may include, but is not limited to, your OIDC or LDAP server, and Harbor.
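
These proxy settings are written to the cluster configuration file. An illustrative excerpt using the example values above (variable names follow the Tanzu cluster configuration conventions; verify against the generated file):

TKG_HTTP_PROXY_ENABLED: "true"
TKG_HTTP_PROXY: "http://myproxy.com:1234"
TKG_HTTPS_PROXY: "http://myproxy.com:1234"
TKG_NO_PROXY: "noproxy.yourdomain.com,192.168.0.0/24"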

Step 6: Identity Management

  1. In the Identity Management section, optionally uncheck Enable Identity Management Settings. You can deactivate identity management for proof-of-concept deployments, but it is strongly recommended to implement identity management in production deployments. If you deactivate identity management, you can activate it later.

  2. If you enabled identity management, select OIDC or LDAPS.

    OIDC:

    Provide details of your OIDC provider account, for example, Okta.

    • Issuer URL: The IP or DNS address of your OIDC server.
    • Client ID: The client_id value that you obtain from your OIDC provider. For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options to get a client_id and secret.
    • Client Secret: The secret value that you obtain from your OIDC provider.
    • Scopes: A comma-separated list of additional scopes to request in the token response. For example, openid,groups,email.
    • Username Claim: The name of your username claim. This is used to set a user’s username in the JSON Web Token (JWT) claim. Depending on your provider, enter claims such as user_name, email, or code.
    • Groups Claim: The name of your groups claim. This is used to set a user’s group in the JWT claim. For example, groups.
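
    These OIDC values map to variables in the cluster configuration file. An illustrative excerpt with placeholder values (names follow the Tanzu cluster configuration conventions):

    IDENTITY_MANAGEMENT_TYPE: oidc
    OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://my-oidc-issuer.example.com
    OIDC_IDENTITY_PROVIDER_CLIENT_ID: my-client-id
    OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: my-client-secret
    OIDC_IDENTITY_PROVIDER_SCOPES: openid,groups,email
    OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
    OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups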

LDAPS:

Provide details of your company’s LDAPS server. All settings except for LDAPS Endpoint are optional.

  • LDAPS Endpoint: The IP or DNS address of your LDAPS server, in the form host:port.
  • Bind DN: The DN for an application service account. The connector uses these credentials to search for users and groups. Not required if the LDAP server provides access for anonymous authentication.
  • Bind Password: The password for an application service account, if Bind DN is set.

Provide the user search attributes.

  • Base DN: The point from which to start the LDAP search. For example, OU=Users,OU=domain,DC=io.
  • Filter: A filter used by the LDAP search. For example, objectClass=person.
  • Username: The LDAP attribute that contains the user ID. For example, uid, sAMAccountName.

Provide the group search attributes.

  • Base DN: The point from which to start the LDAP search. For example, OU=Groups,OU=domain,DC=io.
  • Filter: A filter used by the LDAP search. For example, objectClass=group.
  • Name Attribute: The LDAP attribute that holds the name of the group. For example, cn.
  • User Attribute: The attribute of the user record that is used as the value of the membership attribute of the group record. For example, distinguishedName, DN.
  • Group Attribute: The attribute of the group record that holds the user/member information. For example, member.

Paste the contents of the LDAPS server CA certificate into the Root CA text box.
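
Similarly, the LDAPS settings map to variables in the cluster configuration file. An illustrative excerpt with placeholder values (names follow the Tanzu cluster configuration conventions):

IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldaps.example.com:636
LDAP_BIND_DN: cn=service-account,OU=Users,OU=domain,DC=io
LDAP_USER_SEARCH_BASE_DN: OU=Users,OU=domain,DC=io
LDAP_USER_SEARCH_USERNAME: uid
LDAP_GROUP_SEARCH_BASE_DN: OU=Groups,OU=domain,DC=io
LDAP_ROOT_CA_DATA_B64: <base64-encoded CA certificate>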

Step 7: OS Image

In the OS Image section, use the drop-down menu to select the OS and Kubernetes version image template to use for deploying Tanzu Community Edition VMs, and click Next.

Note: This list populates with known AMIs published by VMware. These AMIs are publicly accessible for your use. Choose based on your preferred Linux distribution.
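
Your selection is recorded in the cluster configuration file as OS variables. An illustrative excerpt (values are examples only):

OS_NAME: amazon
OS_VERSION: "2"
OS_ARCH: amd64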

Finalize the Deployment

  1. Click Review Configuration to see the details of the management cluster that you have configured. When you click Review Configuration, Tanzu populates the cluster configuration file, located in the ~/.config/tanzu/tkg/clusterconfigs subdirectory, with the settings that you specified in the interface. You can optionally copy the cluster configuration file without completing the deployment, for example to deploy the management cluster from a different bootstrap machine, such as one that does not have a web browser. (A sketch of deploying from a copied file follows this list.)

  2. (Optional) Under CLI Command Equivalent, click the Copy button to copy the CLI command for the configuration that you specified.

    Copying the CLI command allows you to reuse the command at the command line to deploy management clusters with the configuration that you specified in the interface. This can be useful if you want to automate management cluster deployment.

  3. (Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.

  4. Click Deploy Management Cluster.
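
If you copied the cluster configuration file to another bootstrap machine as described in step 1, you can start the same deployment from the CLI instead of the browser, for example (assuming the file was copied to the default location):

tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/cluster-config.yaml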

Deployment of the management cluster can take several minutes. The first run of tanzu management-cluster create takes longer than subsequent runs because it must pull the required Docker images into the image store on your bootstrap machine; subsequent runs skip this step and are faster. You can follow the progress of the deployment in the installer interface or in the terminal in which you ran tanzu management-cluster create --ui. If the machine on which you run tanzu management-cluster create shuts down or restarts before the local operations finish, the deployment fails. If you inadvertently close the browser or browser tab in which the deployment is running before it finishes, the deployment continues in the terminal.
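
When the deployment completes, you can verify the management cluster from the terminal:

tanzu management-cluster get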
