Tanzu Community Edition

Documentation

Getting Started with Tanzu Community Edition

This guide walks you through standing up a management and workload cluster using Tanzu Community Edition.

Management Clusters

Managed clusters follow a deployment model with one management cluster and N workload clusters. The management cluster provides management and operations for Tanzu. It runs Cluster API, which is used to manage workload clusters and multi-cluster services. The workload clusters are where developers' workloads run.

When you create a management cluster, a bootstrap cluster is first created on your local machine. This is a kind-based cluster that runs in Docker. The bootstrap cluster creates a management cluster on your specified provider, and the information for how to manage clusters in the target environment is then pivoted into the management cluster. At this point, the local bootstrap cluster is deleted, and the management cluster can create workload clusters.
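
If you want to observe this flow while it happens, an optional check (assuming Docker and, optionally, the kind CLI are installed on your machine) is to look for the short-lived bootstrap cluster while tanzu management-cluster create is running:

# The temporary bootstrap cluster appears as a local kind cluster backed by Docker.
kind get clusters   # lists the bootstrap cluster while it exists
docker ps           # shows the kind node container(s) backing it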

📋 Feedback

Thank you for trying Tanzu Community Edition! Please be sure to fill out our survey and leave feedback here after trying this guide!

Tanzu Community Edition Installation

Tanzu Community Edition consists of the Tanzu CLI and a select set of plugins. You will install Tanzu Community Edition on your local machine and then use the Tanzu CLI on your local machine to deploy a cluster to your chosen target platform.

Linux Local Bootstrap Machine Prerequisites

RAM: 6 GB
CPU: 2
Docker
In Docker, you must create the docker group and add your user before you attempt to create a standalone or management cluster. Complete steps 1 to 4 in the Manage Docker as a non-root user procedure in the Docker documentation (a condensed example follows this list).
Kubectl
Latest version of Chrome, Firefox, Safari, Internet Explorer, or Edge
System time is synchronized with a Network Time Protocol (NTP) server.
Ensure your Linux bootstrap machine is using cgroup v1. For more information, see Check and set the cgroup below.
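
The Docker non-root steps referenced above are condensed below as a sketch; refer to the Docker documentation for the authoritative procedure:

# Steps 1-2: create the docker group (it may already exist) and add your user to it.
sudo groupadd docker
sudo usermod -aG docker $USER
# Step 3: log out and back in, or run newgrp, so the group membership applies.
newgrp docker
# Step 4: verify you can run Docker without sudo.
docker run hello-world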

Check and set the cgroup

  1. Check the cgroup by running the following command:

    docker info | grep -i cgroup 
    

    You should see the following output:

    Cgroup Driver: cgroupfs
    Cgroup Version: 1
    
  2. If your Linux distribution is configured to use cgroups v2, you will need to set the systemd.unified_cgroup_hierarchy=0 kernel parameter to restore cgroups v1. See the instructions for setting kernel parameters for your Linux distribution (an example command sequence appears after the list below), including:

    Fedora 32+

    Arch Linux

    OpenSUSE
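
    As a hedged example for Fedora-like distributions that ship the grubby tool, the kernel parameter can be set as follows; consult your distribution's documentation before changing boot parameters:

    # Append the parameter to all installed kernels, then reboot.
    sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
    sudo reboot
    # After rebooting, re-check the cgroup version reported by Docker:
    docker info | grep -i cgroup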

Installation Procedure

  1. You must download and install the latest version of kubectl. For more information, see Install and Set Up kubectl on Linux in the Kubernetes documentation.

  2. You must download and install the latest version of docker. For more information, see Install Docker Engine in the Docker documentation.

Option 1: Homebrew

  1. Make sure you have the Homebrew package manager installed

  2. Run the following in your terminal:

    brew install vmware-tanzu/tanzu/tanzu-community-edition
    
  3. Run the post install configuration script. Note the output of the brew install step for the correct location of the configure script:

    {HOMEBREW-INSTALL-LOCATION}/configure-tce.sh
    

    This puts all the Tanzu plugins in the correct location. The first time you run the tanzu command the installed plugins and plugin repositories are initialized. This action might take a minute.

Option 2: Curl GitHub release

  1. Download the release for Linux via web browser.

  2. [Alternative] Download the release using the CLI. You may download a release using the provided remote script piped into bash.

    curl -H "Accept: application/vnd.github.v3.raw" \
        -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
        bash -s <RELEASE-VERSION> <RELEASE-OS-DISTRIBUTION>
    
    • Where <RELEASE-VERSION> is the Tanzu Community Edition release version. This is a required argument.
    • Where <RELEASE-OS-DISTRIBUTION> is the operating system distribution for the release (for example, linux). This is a required argument.
    • For example, to download v0.9.1 for Linux, provide:
      bash -s v0.9.1 linux
    • This script requires curl, grep, sed, tr, and jq in order to work.
    • The release will be downloaded to the local directory as tce-linux-amd64-v0.9.1.tar.gz.
    • Note: A GitHub personal access token may be provided to the script as the GITHUB_TOKEN environment variable. This bypasses GitHub API rate limiting but is not required. Follow the GitHub documentation to acquire and use a personal access token; see the example below.
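
    For example, to provide a token for the v0.9.1 Linux download (the token value is a placeholder you supply):

    export GITHUB_TOKEN=<YOUR-PERSONAL-ACCESS-TOKEN>
    curl -H "Accept: application/vnd.github.v3.raw" \
        -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
        bash -s v0.9.1 linux
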
  3. Unpack the release.

    tar xzvf ~/<DOWNLOAD-DIR>/tce-linux-amd64-v0.9.1.tar.gz
    
  4. Run the install script (make sure to use the appropriate directory for your platform).

    cd tce-linux-amd64-v0.9.1
    ./install.sh
    

    This installs the Tanzu CLI and puts all the plugins in the correct location. The first time you run the tanzu command the installed plugins and plugin repositories are initialized. This action might take a minute.

  5. You must download and install the latest version of kubectl.

    curl -LO https://dl.k8s.io/release/v1.20.1/bin/linux/amd64/kubectl
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    

    For more information, see Install and Set Up kubectl on Linux in the Kubernetes documentation.

Mac Local Bootstrap Machine Prerequisites

RAM: 6 GB
CPU: 2
Docker Desktop for Mac
Kubectl
Latest version of Chrome, Firefox, Safari, Internet Explorer, or Edge

Installation Procedure

  1. Make sure you have the Homebrew package manager installed

  2. You must download and install the latest version of kubectl. For more information, see Install and Set Up kubectl on MacOS in the Kubernetes documentation.

  3. You must download and install the latest version of docker. For more information, see Install Docker Desktop on MacOS in the Docker documentation.

  4. Run the following in your terminal:

    brew install vmware-tanzu/tanzu/tanzu-community-edition
    
  5. Run the post install configuration script. Note the output of the brew install step for the correct location of the configure script:

    {HOMEBREW-INSTALL-LOCATION}/v0.9.1/libexec/configure-tce.sh
    

    This puts all the Tanzu plugins in the correct location. The first time you run the tanzu command the installed plugins and plugin repositories are initialized. This action might take a minute.

Windows Local Bootstrap Machine Prerequisites

RAM: 8 GB
CPU: 2
Docker Desktop for Windows
Kubectl
Latest version of Chrome, Firefox, Safari, Internet Explorer, or Edge

Note: Bootstrapping a cluster to Docker from a Windows bootstrap machine is currently experimental.

Installation Procedure

  1. Make sure you have the Chocolatey package manager installed.

  2. You must download and install the latest version of kubectl. For more information, see Install and Set Up kubectl on Windows in the Kubernetes documentation.

  3. You must download and install the latest version of docker. For more information, see Install Docker Desktop on Windows in the Docker documentation.

  4. Open PowerShell as an administrator and run the following:

    choco install tanzu-community-edition
    

    Both docker and kubectl must be present on the system, but they are not explicit Chocolatey dependencies. Installing the Tanzu Community Edition package extracts the binaries and configures the plugin repositories. This might take a minute. Specifying an explicit package version is currently required; see the example below.
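
    For example, to pin a specific release (the version shown is only a placeholder; check the Chocolatey package page for available versions):

    choco install tanzu-community-edition --version <RELEASE-VERSION>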

  5. The tanzu command is added to your PATH environment variable automatically by Chocolatey.

Creating Clusters

Create Managed Clusters in AWS

This section describes setting up management and workload clusters in Amazon Web Services (AWS).

There are some prerequisites the installation process will assume. Refer to the Prepare to Deploy a Management or Standalone Cluster to AWS docs for instructions on deploying an SSH key-pair and preparing your AWS account.

  1. Initialize the Tanzu Community Edition installer interface.

    tanzu management-cluster create --ui
    
  2. Choose Amazon from the provider tiles.

    kickstart amazon tile

  3. Fill out the IaaS Provider section.

    kickstart amazon iaas

    • A: Whether to use AWS named profiles or provide static credentials. It is highly recommended that you use profiles. These can be set up by installing the AWS CLI on the bootstrap machine (see the example after this list).
    • B: If using profiles, the name of the profile (credentials) you’d like to use. By default, profiles are stored in ${HOME}/.aws/credentials.
    • C: The AWS region you’d like all networking, compute, etc. to be created within. A list of regions is available here in the AWS documentation.
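
    If you have not created a named profile yet, a minimal sketch using the AWS CLI follows; the profile name tce-user is only an example:

    # Prompts for access key ID, secret access key, default region, and output format,
    # and stores them under the named profile in ${HOME}/.aws/credentials and config.
    aws configure --profile tce-user
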
  4. Fill out the VPC settings.

    kickstart aws vpc

    • A: Whether to create a new Virtual Private Cloud in AWS or use an existing one. If using an existing one, you must provide its VPC ID. For initial deployments, it is recommended to create a new Virtual Private Cloud. This ensures the installer takes care of all networking creation and configuration.
    • B: If creating a new VPC, the CIDR range of IPs to use for hosts (EC2 VMs).
  5. Fill out the Management Cluster Settings.

    kickstart aws management cluster settings

    • A: Choose between the Development profile, with one control plane node, or Production, which features a highly-available three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
    • B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
    • C: Choose an SSH key to use for accessing control plane and workload nodes. This SSH key must be accessible in the AWS region chosen in a previous step. See the AWS documentation for instructions on creating a key pair.
    • D: Whether to enable Cluster API’s machine health checks.
    • E: Whether to create a bastion host in your VPC. This host will be publicly accessible via your SSH key. Kubernetes-related hosts will not be accessible except by SSHing through this host. If preferred, you can create a bastion host independent of the installation process.
    • F: Choose whether you’d like to enable Kubernetes API server auditing.
    • G: Choose whether you’d like to create the CloudFormation stack expected by Tanzu. Checking this box is recommended. If the stack already exists, this step will be skipped.
    • H: The AWS availability zone in your chosen region to create control plane node(s) in. If the Production profile was chosen, you’ll choose three zones, one for each control plane node.
    • I: The AWS EC2 instance type to be used for each node. See the instance types documentation to understand trade-offs between CPU, memory, pricing, and more.
  6. If you would like additional metadata to be tagged in your soon-to-be-created AWS infrastructure, fill out the Metadata section.

  7. Fill out the Kubernetes Network section.

    kickstart kubernetes networking

    • A: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • B: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • C: Set a network proxy that internal traffic should egress through to access external network(s).
  8. Fill out the Identity Management section.

    kickstart identity management

    • A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
    • B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
    • C: Fill out connection details for identity management.
  9. Fill out the OS Image section.

    kickstart aws os

    • A: The Amazon Machine Image (AMI) to use for Kubernetes host VMs. This list should populate based on known AMIs uploaded by VMware. These AMIs are publicly accessible for your use. Choose based on your preferred Linux distribution.
  10. Skip the TMC Registration section.

  11. Click the Review Configuration button.

    For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.
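
    If you want to confirm where the generated configuration landed, you can list that directory; the generated file name will differ on your system:

    ls -l ${HOME}/.config/tanzu/tkg/clusterconfigs/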

  12. Deploy the cluster.

    If you experience issues deploying your cluster, visit the Troubleshooting documentation.

  13. Validate the management cluster started successfully.

    tanzu management-cluster get
    

    The output will look similar to the following:

    NAME  NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
    mtce  tkg-system  running  1/1           1/1      v1.20.1+vmware.2  management
    

    Details:

    NAME                                                     READY  SEVERITY  REASON  SINCE  MESSAGE
    /mtce                                                    True                     113m
    ├─ClusterInfrastructure - AWSCluster/mtce                True                     113m
    ├─ControlPlane - KubeadmControlPlane/mtce-control-plane  True                     113m
    │ └─Machine/mtce-control-plane-r7k52                     True                     113m
    └─Workers
      └─MachineDeployment/mtce-md-0
        └─Machine/mtce-md-0-fdfc9f766-6n6lc                  True                     113m

    Providers:

    NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
    capa-system                        infrastructure-aws     InfrastructureProvider  aws           v0.6.4
    capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.14
    capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.14
    capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.14

  14. Capture the management cluster’s kubeconfig and take note of the command for accessing the cluster in the output message, as you will use this for setting the context in the next step.

    tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
    

    Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.

    For example, if your management cluster is called ‘mtce’, you will see a message similar to:

    Credentials of cluster 'mtce' have been saved.
    You can now access the cluster by running 'kubectl config use-context mtce-admin@mtce'
    
  15. Set your kubectl context to the management cluster.

    kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
    

    Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.

  16. Validate you can access the management cluster’s API server.

    kubectl get nodes
    

    The output will look similar to the following:

    NAME                                       STATUS   ROLES                  AGE    VERSION
    ip-10-0-1-133.us-west-2.compute.internal   Ready    <none>                 123m   v1.20.1+vmware.2
    ip-10-0-1-76.us-west-2.compute.internal    Ready    control-plane,master   125m   v1.20.1+vmware.2
    
  17. Next, you will create a workload cluster. First, create a workload cluster configuration file by taking a copy of the management cluster YAML configuration file that was created when you deployed your management cluster. This example names the workload cluster configuration file workload1.yaml.

    cp  ~/.config/tanzu/tkg/clusterconfigs/<MGMT-CONFIG-FILE> ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
    
    • Where <MGMT-CONFIG-FILE> is the name of the management cluster YAML configuration file. The management cluster YAML configuration file will either have the name you assigned to the management cluster, or if no name was assigned, it will be a randomly generated name.

    • The duplicated file (workload1.yaml) will be used as the configuration file for your workload cluster. You can edit the parameters in this new file as required. For an example of a workload cluster template, see AWS Workload Cluster Template.

    • In the next two steps you will edit the parameters in this new file (workload1.yaml) and then use the file to deploy a workload cluster.

  18. In the new workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the CLUSTER_NAME parameter to assign a name to your workload cluster. For example,

    CLUSTER_CIDR: 100.96.0.0/11
    CLUSTER_NAME: my-workload-cluster
    CLUSTER_PLAN: dev
    
    • If you did not specify a name for your management cluster, the installer generated a random unique name. In this case, you must manually add the CLUSTER_NAME parameter and assign a workload cluster name. The workload cluster name must be 42 characters or less and must comply with DNS hostname requirements as described in RFC 1123.
    • If you specified a name for your management cluster, the CLUSTER_NAME parameter is present and needs to be changed to the new workload cluster name.
    • The other parameters in workload1.yaml are likely fine as-is. However, you can change them as required. Validation is performed on the file prior to applying it, so the tanzu command will return a message if something necessary is omitted (see the optional dry-run sketch below). Reference an example configuration template here: AWS Workload Cluster Template.
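
    If you would like to sanity-check the configuration before anything is created, one optional approach, assuming your version of the Tanzu CLI supports the --dry-run flag on tanzu cluster create, is to render the cluster manifests without applying them:

    # Renders the cluster objects to stdout instead of creating them.
    # Verify the flag is available with: tanzu cluster create --help
    tanzu cluster create my-workload-cluster --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml --dry-run
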
  19. Create your workload cluster.

    tanzu cluster create <WORKLOAD-CLUSTER-NAME> --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
    
  20. Validate the cluster starts successfully.

    tanzu cluster list
    
  21. Capture the workload cluster’s kubeconfig.

    tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
    
  22. Set your kubectl context to the workload cluster.

    kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
    
  23. Verify you can see pods in the cluster.

    kubectl get pods --all-namespaces
    

    The output will look similar to the following:

    NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
    kube-system   antrea-agent-9d4db                                      2/2     Running   0          3m42s
    kube-system   antrea-agent-vkgt4                                      2/2     Running   1          5m48s
    kube-system   antrea-controller-5d594c5cc7-vn5gt                      1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-hs6vr                                 1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-xf6cl                                 1/1     Running   0          5m49s
    kube-system   etcd-tce-guest-control-plane-b2wsf                      1/1     Running   0          5m56s
    kube-system   kube-apiserver-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s
    kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf   1/1     Running   0          5m56s
    kube-system   kube-proxy-9825q                                        1/1     Running   0          5m48s
    kube-system   kube-proxy-wfktm                                        1/1     Running   0          3m42s
    kube-system   kube-scheduler-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s
    

Create Microsoft Azure Clusters

This section describes setting up management and workload clusters for Microsoft Azure.

There are some prerequisites this process will assume. Refer to the Prepare to Deploy a Cluster to Azure docs for instructions on accepting image licenses and preparing your Azure account.

  1. Initialize the Tanzu Community Edition installer interface.

    tanzu management-cluster create --ui
    
  2. Choose Azure from the provider tiles.

    kickstart azure tile

  3. Fill out the IaaS Provider section.

    kickstart azure iaas

    • A: Your account’s Tenant ID.
    • B: Your Client ID.
    • C: Your Client secret.
    • D: Your Subscription ID.
    • E: The Azure Cloud in which to deploy. For example, “Public Cloud”, “US Government Cloud”, etc.
    • F: The region of Azure you’d like all networking, compute, etc to be created within.
    • G: The public key you’d like to use for your VM instances. This is how you’ll SSH into control plane and worker nodes.
    • H: Whether to use an existing resource group or create a new one.
    • I: The existing resource group, or the name to provide the new resource group.
  4. Fill out the VNET settings.

    kickstart azure vnet

    • A: Whether to create a new Virtual Network in Azure or use an existing one. If using an existing one, you must provide its VNET name. For initial deployments, it is recommended to create a new Virtual Network. This ensures the installer takes care of all networking creation and configuration.
    • B: The Resource Group under which to create the VNET.
    • C: The name to use when creating a new VNET.
    • D: The CIDR block to use for this VNET.
    • E: The name for the control plane subnet.
    • F: The CIDR block to use for the control plane subnet. This range should be within the VNET CIDR.
    • G: The name for the worker node subnet.
    • H: The CIDR block to use for the worker node subnet. This range should be within the VNET CIDR and not overlap with the control plane CIDR.
    • I: Whether to deploy without a publicly accessible IP address. Access to the cluster will be limited to your Azure private network only. Various ways of connecting to your private cluster can be found in the Azure private cluster documentation.

  5. Fill out the Management Cluster Settings.

    kickstart azure management cluster settings

    • A: Choose between the Development profile, with one control plane node, or Production, which features a highly-available three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
    • B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
    • C: The instance type to be used for each node. See the instance types documentation to understand trade-offs between CPU, memory, pricing, and more.
    • D: Whether to enable Cluster API’s machine health checks.
    • E: Choose whether you’d like to enable Kubernetes API server auditing.
  6. If you would like additional metadata to be tagged in your soon-to-be-created Azure infrastructure, fill out the Metadata section.

  7. Fill out the Kubernetes Network section.

    kickstart kubernetes networking

    • A: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • B: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • C: Set a network proxy that internal traffic should egress through to access external network(s).
  8. Fill out the Identity Management section.

    kickstart identity management

    • A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
    • B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
    • C: Fill out connection details for identity management.
  9. Fill out the OS Image section.

    kickstart azure os

    • A: The Azure image to use for Kubernetes host VMs. This list should populate based on known images uploaded by VMware. These images are publicly accessible for your use. Choose based on your preferred Linux distribution.
  10. Skip the TMC Registration section.

  11. Click the Review Configuration button.

    For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.

  12. Deploy the cluster.

    If you experience issues deploying your cluster, visit the Troubleshooting documentation.

  13. Validate the management cluster started successfully.

    tanzu management-cluster get
    

    The output will look similar to the following:

    NAME         NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       
    mgmt         tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management
    

    Details:

    NAME                                                     READY  SEVERITY  REASON  SINCE  MESSAGE
    /mgmt                                                    True                     5m38s
    ├─ClusterInfrastructure - AzureCluster/mgmt              True                     5m42s
    ├─ControlPlane - KubeadmControlPlane/mgmt-control-plane  True                     5m38s
    │ └─Machine/mgmt-control-plane-d99g5                     True                     5m41s
    └─Workers
      └─MachineDeployment/mgmt-md-0
        └─Machine/mgmt-md-0-bc94f54b4-tgr9h                  True                     5m41s

    Providers:

    NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
    capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.23
    capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.23
    capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.23
    capz-system                        infrastructure-azure   InfrastructureProvider  azure         v0.4.15

  14. Capture the management cluster’s kubeconfig.

    tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
    

    Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get above.

    For example, if your management cluster is called ‘mgmt’, you will see a message similar to:

    Credentials of workload cluster 'mgmt' have been saved.
    You can now access the cluster by running 'kubectl config use-context mgmt-admin@mgmt'
    
  15. Set your kubectl context to the management cluster.

    kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
    

    Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.

  16. Validate you can access the management cluster’s API server.

    kubectl get nodes
    

    NAME                      STATUS   ROLES                  AGE    VERSION
    mgmt-control-plane-vkpsm  Ready    control-plane,master   111m   v1.21.2+vmware.1
    mgmt-md-0-qbbhk           Ready    <none>                 109m   v1.21.2+vmware.1

  17. Next you will create a workload cluster. First, setup a workload cluster configuration file.

    cp  ~/.config/tanzu/tkg/clusterconfigs/<MGMT-CONFIG-FILE> ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
    
    • Where <MGMT-CONFIG-FILE> is the name of the management cluster YAML configuration file

    • This step duplicates the configuration file that was created when you deployed your management cluster. The configuration file will either have the name you assigned to the management cluster, or if no name was assigned, it will be a randomly generated name.

    • This duplicated file will be used as the configuration file for your workload cluster. You can edit the parameters in this new file as required. For an example of a workload cluster template, see Azure Workload Cluster Template.

    • In the next two steps you will edit the parameters in this new file (workload1.yaml) and then use the file to deploy a workload cluster.

  18. In the new workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the CLUSTER_NAME parameter to assign a name to your workload cluster. For example,

    CLUSTER_CIDR: 100.96.0.0/11
    CLUSTER_NAME: my-workload-cluster
    CLUSTER_PLAN: dev
    
    • If you did not specify a name for your management cluster, the installer generated a random unique name. In this case, you must manually add the CLUSTER_NAME parameter and assign a workload cluster name. The workload cluster name must be 42 characters or less and must comply with DNS hostname requirements as described in RFC 1123.
    • If you specified a name for your management cluster, the CLUSTER_NAME parameter is present and needs to be changed to the new workload cluster name.
    • The other parameters in workload1.yaml are likely fine as-is. Validation is performed on the file prior to applying it, so the tanzu command will return a message if something necessary is omitted. However, you can change parameters as required. Reference an example configuration template here: Azure Workload Cluster Template.
  19. Create your workload cluster.

    tanzu cluster create <WORKLOAD-CLUSTER-NAME> --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
    
  20. Validate the cluster starts successfully.

    tanzu cluster list
    
  21. Capture the workload cluster’s kubeconfig.

    tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
    
  22. Set your kubectl context to the workload cluster.

    kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
    
  23. Verify you can see pods in the cluster.

    kubectl get pods --all-namespaces
    

    NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
    kube-system   antrea-agent-9d4db                                      2/2     Running   0          3m42s
    kube-system   antrea-agent-vkgt4                                      2/2     Running   1          5m48s
    kube-system   antrea-controller-5d594c5cc7-vn5gt                      1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-hs6vr                                 1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-xf6cl                                 1/1     Running   0          5m49s
    kube-system   etcd-tce-guest-control-plane-b2wsf                      1/1     Running   0          5m56s
    kube-system   kube-apiserver-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s
    kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf   1/1     Running   0          5m56s
    kube-system   kube-proxy-9825q                                        1/1     Running   0          5m48s
    kube-system   kube-proxy-wfktm                                        1/1     Running   0          3m42s
    kube-system   kube-scheduler-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s

⚠️ If bootstrapping docker-based clusters on Windows, see our Windows guide

Create Local Docker Clusters

This section describes setting up a management cluster on your local workstation using Docker.

⚠️: Tanzu Community Edition support for Docker is experimental and may require troubleshooting on your system.

Note: Bootstrapping a cluster to Docker from a Windows bootstrap machine is currently experimental.

Prerequisites

The following additional configuration is needed for the Docker engine on your local client machine (with no other containers running):

6 GB of RAM
15 GB of local machine disk storage for images
4 CPUs

⚠️ Warning on DockerHub Rate Limiting

When using the Docker (CAPD) provider, the load balancer image (HA Proxy) is pulled from DockerHub. DockerHub limits pulls per user and this can especially impact users who share a common IP, in the case of NAT or VPN. If DockerHub rate-limiting is an issue in your environment, you can pre-pull the load balancer image to your machine by running the following command.

docker pull kindest/haproxy:v20210715-a6da3463

This behavior will eventually be addressed in https://github.com/vmware-tanzu/community-edition/issues/897.

Before You Begin

To optimize your Docker system and ensure a successful deployment, you may wish to complete the next two optional steps.

  1. (Optional): Stop all existing containers.

    docker kill $(docker ps -q)
    
  2. (Optional): Run the following command to prune all existing containers, volumes, and images.

    Warning: Read the prompt carefully before running the command, as it erases the majority of what is cached in your Docker environment. While this ensures your environment is clean before starting, it also significantly increases bootstrapping time if you already had the Docker images downloaded.

     docker system prune -a --volumes
    

Local Docker Bootstrapping

  1. Initialize the Tanzu Community Edition installer interface.

    tanzu management-cluster create --ui
    
  2. Complete the configuration steps in the installer interface for Docker and create the management cluster. The following configuration settings are recommended:

    • The Kubernetes Network Settings are auto-filled with a default CNI Provider and Cluster Service CIDR.
    • Docker Proxy settings are experimental and are to be used at your own risk.
    • We will have more complete tanzu cluster bootstrapping documentation available here in the near future.
    • If you ran the prune command in the previous step, expect this to take some time, as it’ll download an image that is over 1GB.
  3. (Alternative method) It is also possible to use the command line to create a Docker based management cluster:

    tanzu management-cluster create -i docker --name <MGMT-CLUSTER-NAME> -v 10 --plan dev --ceip-participation=false
    
    • <MGMT-CLUSTER-NAME> must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements described here: RFC 1123.
  4. Validate the management cluster started:

    tanzu management-cluster get
    

    The output should look similar to the following:

    NAME                                               READY  SEVERITY  REASON  SINCE  MESSAGE
    /tkg-mgmt-docker-20210601125056                                                                 True                     28s
    ├─ClusterInfrastructure - DockerCluster/tkg-mgmt-docker-20210601125056                          True                     32s
    ├─ControlPlane - KubeadmControlPlane/tkg-mgmt-docker-20210601125056-control-plane               True                     28s
    │ └─Machine/tkg-mgmt-docker-20210601125056-control-plane-5pkcp                                  True                     24s
    │   └─MachineInfrastructure - DockerMachine/tkg-mgmt-docker-20210601125056-control-plane-9wlf2
      └─MachineDeployment/tkg-mgmt-docker-20210601125056-md-0
        └─Machine/tkg-mgmt-docker-20210601125056-md-0-5d895cbfd9-khj4s                              True                     24s
          └─MachineInfrastructure - DockerMachine/tkg-mgmt-docker-20210601125056-md-0-d544k
    

    Providers:

    NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
    capd-system                        infrastructure-docker  InfrastructureProvider  docker        v0.3.10
    capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.14
    capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.14
    capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.14

  5. Capture the management cluster’s kubeconfig and take note of the command for accessing the cluster in the message, as you will use this for setting the context in the next step.

    tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
    
    • Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.
    • For example, if your management cluster is called ‘mtce’, you will see a message similar to:
    Credentials of cluster 'mtce' have been saved.
    You can now access the cluster by running 'kubectl config use-context mtce-admin@mtce'
    
  6. Set your kubectl context to the management cluster.

    kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
    
  7. Validate you can access the management cluster’s API server.

    kubectl get nodes
    

    You will see output similar to:

    NAME                         STATUS   ROLES                  AGE   VERSION
    guest-control-plane-tcjk2    Ready    control-plane,master   59m   v1.20.4+vmware.1
    guest-md-0-f68799ffd-lpqsh   Ready    <none>                 59m   v1.20.4+vmware.1
    
  8. Create your workload cluster.

    tanzu cluster create <WORKLOAD-CLUSTER-NAME> --plan dev
    
  9. Validate the cluster starts successfully.

    tanzu cluster list
    
  10. Capture the workload cluster’s kubeconfig.

    tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
    
  11. Set your kubectl context accordingly.

    kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
    
  12. Verify you can see pods in the cluster.

    kubectl get pods --all-namespaces
    

    The output will look similar to the following:

    NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
    kube-system   antrea-agent-9d4db                                      2/2     Running   0          3m42s
    kube-system   antrea-agent-vkgt4                                      2/2     Running   1          5m48s
    kube-system   antrea-controller-5d594c5cc7-vn5gt                      1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-hs6vr                                 1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-xf6cl                                 1/1     Running   0          5m49s
    kube-system   etcd-tce-guest-control-plane-b2wsf                      1/1     Running   0          5m56s
    kube-system   kube-apiserver-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s
    kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf   1/1     Running   0          5m56s
    kube-system   kube-proxy-9825q                                        1/1     Running   0          5m48s
    kube-system   kube-proxy-wfktm                                        1/1     Running   0          3m42s
    kube-system   kube-scheduler-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s
    

You now have local clusters running on Docker. The nodes can be seen by running the following command:

docker ps

The output will be similar to the following:

CONTAINER ID   IMAGE                                                         COMMAND                  CREATED             STATUS             PORTS                                  NAMES
33e4e422e102   projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                          guest-md-0-f68799ffd-lpqsh
4ae2829ab6e1   projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1   "/usr/local/bin/entr…"   About an hour ago   Up About an hour   41637/tcp, 127.0.0.1:41637->6443/tcp   guest-control-plane-tcjk2
c0947823840b   kindest/haproxy:2.1.1-alpine                                  "/docker-entrypoint.…"   About an hour ago   Up About an hour   42385/tcp, 0.0.0.0:42385->6443/tcp     guest-lb
a2f156fe933d   projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                          mgmt-md-0-b8689788f-tlv68
128bf25b9ae9   projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1   "/usr/local/bin/entr…"   About an hour ago   Up About an hour   40753/tcp, 127.0.0.1:40753->6443/tcp   mgmt-control-plane-9rdcq
e59ca95c14d7   kindest/haproxy:2.1.1-alpine                                  "/docker-entrypoint.…"   About an hour ago   Up About an hour   35621/tcp, 0.0.0.0:35621->6443/tcp     mgmt-lb

The above reflects one management cluster and one workload cluster, each with one control plane node and one worker node. Each cluster gets an haproxy container fronting its control plane node(s). This enables scaling the control plane into a highly-available (HA) configuration.
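
If you later want to experiment with a highly-available control plane, a hedged example using the Tanzu CLI scale command follows; the flag names reflect the tanzu cluster scale help and should be verified against your CLI version:

# Scale an existing workload cluster to three control plane nodes and two workers.
tanzu cluster scale <WORKLOAD-CLUSTER-NAME> --controlplane-machine-count 3 --worker-machine-count 2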

🛠️: For troubleshooting failed bootstraps, you can exec into a container and use the kubeconfig at /etc/kubernetes/admin.conf to access the API server directly. For example:

$ docker exec -it 4ae /bin/bash

root@guest-control-plane-tcjk2:/# kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes

NAME                         STATUS   ROLES                  AGE   VERSION
guest-control-plane-tcjk2    Ready    control-plane,master   67m   v1.20.4+vmware.1
guest-md-0-f68799ffd-lpqsh   Ready    <none>                 67m   v1.20.4+vmware.1

In the example above, 4ae is the ID prefix of the control plane node container.

⚠️: If the Docker host machine is rebooted, the cluster will need to be re-created. Support for clusters surviving a host reboot is tracked in issue #832.

Create vSphere Clusters

This section describes setting up a management and workload cluster on vSphere.

  1. Open the Tanzu Community Edition product page on VMware Customer Connect.

    If you do not have a Customer Connect account, register here.

  2. Ensure you have the version selected corresponding to your installation.

    customer connect download page

  3. Locate and download the machine image (OVA) for your desired operating system and Kubernetes version.

    customer connect ova downloads

  4. Log in to your vCenter instance.

  5. In vCenter, right-click on your datacenter and choose Deploy OVF Template.

    vcenter deploy ovf

  6. Follow the prompts, browsing to the local file that is the .ova downloaded in a previous step.

  7. Allow the template deployment to complete.

    vcenter deploy ovf

  8. Right-click on the newly imported OVF template and choose Template > Convert to Template.

    vcenter convert to template

  9. Verify the template is added by selecting the VMs and Templates icon and locating it within your datacenter.

    vcenter template import

  10. Initialize the Tanzu Community Edition installer interface.

    tanzu management-cluster create --ui
    
  11. Choose VMware vSphere from the provider tiles.

    kickstart vsphere tile

  12. Fill out the IaaS Provider section.

    kickstart vsphere iaas

    • A: The IP or DNS name pointing at your vCenter instance. This is the same instance you uploaded the OVA to in previous steps.
    • B: The username, with elevated privileges, that can be used to create infrastructure in vSphere.
    • C: The password corresponding to that username.
    • D: With the above filled out, connect to the instance to continue. You may be prompted to verify the SSL fingerprint for vCenter.
    • E: The datacenter you’ll deploy Tanzu Community Edition into. This should be the same datacenter you uploaded the OVA to.
    • F: The public key you’d like to use for your VM instances. This is how you’ll SSH into control plane and worker nodes.
  13. Fill out the Management Cluster Settings.

    kickstart vsphere management cluster settings

    • A: Choose between the Development profile, with one control plane node, or Production, which features a highly-available three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
    • B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
    • C: Choose whether to enable Cluster API’s machine health checks.
    • D: Choose how to expose your control plane endpoint. If you have NSX available and would like to use it, choose NSX Advanced Load Balancer, otherwise choose Kube-vip, which will expose a virtual IP in your network.
    • E: Set the IP address your Kubernetes API server should be accessible from. This should be an IP that is routable in your network but excluded from your DHCP range.
    • F: Set the instance type you’d like to use for workload nodes.
    • G: Choose whether you’d like to enable Kubernetes API server auditing.
  14. If you choose NSX as your Control Plane Endpoint Provider in the above step, fill out the VMware NSX Advanced Load Balancer section.

  15. If you would like additional metadata to be tagged in your soon-to-be-created vSphere infrastructure, fill out the Metadata section.

  16. Fill out the Resources section.

    kickstart vsphere resources

    • A: Set the VM folder you’d like new virtual machines to be created in. By default, this will be ${DATACENTER_NAME}/vm/
    • B: Set the Datastore you’d like volumes to be created within.
    • C: Set the servers or resource pools within the data center you’d like VMs, networking, etc to be created in.
  17. Fill out the Kubernetes Network section.

    kickstart kubernetes networking

    • A: Select the vSphere network in which host/virtual machine networking will be set up.
    • B: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • C: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • D: Set a network proxy that internal traffic should egress through to access external network(s).
  18. Fill out the Identity Management section.

    kickstart identity management

    • A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
    • B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
    • C: Fill out connection details for identity management.
  19. Fill out the OS Image section.

    kickstart vsphere os

    • A: The OVA image to use for Kubernetes host VMs. This list should populate based on the OVA you uploaded in previous steps. If it’s missing, you may have uploaded an incompatible OVA.
  20. Skip the TMC Registration section.

  21. Click the Review Configuration button.

    For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.

  22. Deploy the cluster.

    If you experience issues deploying your cluster, visit the Troubleshooting documentation.

  23. Validate the management cluster started successfully.

    tanzu management-cluster get
    
  24. Capture the management cluster’s kubeconfig.

    tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
    

    Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get above.

    For example, if your management cluster is called ‘mtce’, you will see a message similar to:

    Credentials of cluster 'mtce' have been saved.
    You can now access the cluster by running 'kubectl config use-context mtce-admin@mtce'
    
  25. Set your kubectl context to the management cluster.

    kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
    

    Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.

  26. Validate you can access the management cluster’s API server.

    kubectl get nodes
    

    NAME         STATUS   ROLES                  AGE    VERSION
    10-0-1-133   Ready    <none>                 123m   v1.20.1+vmware.2
    10-0-1-76    Ready    control-plane,master   125m   v1.20.1+vmware.2

  27. Next you will create a workload cluster. First, create a workload cluster configuration file by taking a copy of the management cluster YAML configuration file that was created when you deployed your management cluster. This example names the workload cluster configuration file workload1.yaml.

    cp  ~/.config/tanzu/tkg/clusterconfigs/<MGMT-CONFIG-FILE> ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
    
    • Where <MGMT-CONFIG-FILE> is the name of the management cluster YAML config file. The management cluster YAML configuration file will either have the name you assigned to the management cluster, or if no name was assigned, it will be a randomly generated name.

    • The duplicated file (workload1.yaml) will be used as the configuration file for your workload cluster. You can edit the parameters in this new file as required. For an example of a workload cluster template, see vSphere Workload Cluster Template.

    • In the next two steps you will edit the parameters in this new file (workload1.yaml) and then use the file to deploy a workload cluster.
  28. In the new workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the CLUSTER_NAME parameter to assign a name to your workload cluster. For example,

    CLUSTER_CIDR: 100.96.0.0/11
    CLUSTER_NAME: my-workload-cluster
    CLUSTER_PLAN: dev
    
    • If you did not specify a name for your management cluster, the installer generated a random unique name. In this case, you must manually add the CLUSTER_NAME parameter and assign a workload cluster name. The workload cluster name must be 42 characters or less and must comply with DNS hostname requirements as described in RFC 1123.
    • If you specified a name for your management cluster, the CLUSTER_NAME parameter is present and needs to be changed to the new workload cluster name.
  29. In the workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the VSPHERE_CONTROL_PLANE_ENDPOINT parameter to apply a viable IP.

    • This will be the API Server IP for your workload cluster. You must choose an IP that is routable and not used elsewhere in your network, e.g., out of your DHCP range.

    • The other parameters in workload1.yaml are likely fine as-is. Validation is performed on the file prior to applying it, so the tanzu command will return a message if something necessary is omitted. However, you can change parameters as required (see the example snippet below). Reference an example configuration template here: vSphere Workload Cluster Template.
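
    For example, the relevant lines of workload1.yaml might look like the following, where 10.0.0.100 is only a placeholder for an unused, routable IP in your network:

    CLUSTER_NAME: my-workload-cluster
    CLUSTER_PLAN: dev
    VSPHERE_CONTROL_PLANE_ENDPOINT: 10.0.0.100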

  30. Create your workload cluster.

    tanzu cluster create <WORKLOAD-CLUSTER-NAME> --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
    
  31. Validate the cluster starts successfully.

    tanzu cluster list
    
  32. Capture the workload cluster’s kubeconfig.

    tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
    
  33. Set your kubectl context accordingly.

    kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
    
  34. Verify you can see pods in the cluster.

    kubectl get pods --all-namespaces
    

    NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
    kube-system   antrea-agent-9d4db                                       2/2     Running   0          3m42s
    kube-system   antrea-agent-vkgt4                                       2/2     Running   1          5m48s
    kube-system   antrea-controller-5d594c5cc7-vn5gt                       1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-hs6vr                                  1/1     Running   0          5m49s
    kube-system   coredns-5d6f7c958-xf6cl                                  1/1     Running   0          5m49s
    kube-system   etcd-tce-guest-control-plane-b2wsf                       1/1     Running   0          5m56s
    kube-system   kube-apiserver-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
    kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf    1/1     Running   0          5m56s
    kube-system   kube-proxy-9825q                                         1/1     Running   0          5m48s
    kube-system   kube-proxy-wfktm                                         1/1     Running   0          3m42s
    kube-system   kube-scheduler-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
    kube-system   kube-vip-tce-guest-control-plane-b2wsf                   1/1     Running   0          5m56s
    kube-system   vsphere-cloud-controller-manager-nwrg4                   1/1     Running   2          5m48s
    kube-system   vsphere-csi-controller-5b6f54ccc5-trgm4                  5/5     Running   0          5m49s
    kube-system   vsphere-csi-node-drnkz                                   3/3     Running   0          5m48s
    kube-system   vsphere-csi-node-flszf                                   3/3     Running   0          3m42s

Installing a Package

This section walks you through installing a package (cert-manager) in your cluster. For detailed instruction on package management, see Work with Packages.

  1. Make sure your kubectl context is set to either the workload cluster or standalone cluster.

    kubectl config use-context <CLUSTER-NAME>-admin@<CLUSTER-NAME>
    

    Where <CLUSTER-NAME> is the name of the workload or standalone cluster where you want to install the package.

  2. Install the Tanzu Community Edition package repository into the tanzu-package-repo-global namespace.

    tanzu package repository add tce-repo --url projects.registry.vmware.com/tce/main:0.9.1 --namespace tanzu-package-repo-global
    

    Package repositories installed into the tanzu-package-repo-global namespace are available to the entire cluster.
    Use the --namespace argument in the tanzu package repository add command to install a package repository into a specific namespace. If you install a package repository into another namespace, you must specify that namespace as an argument in the tanzu package install command when you install a package from that repository (see the sketch below).
    A tanzu-core repository is also installed in the tkg-system namespace on clusters. This repository holds lower-level components that are not meant to be installed by the user. These packages are used during cluster bootstrapping.
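
    As a hedged illustration of the namespace behavior described above (the repository name my-repo and namespace my-packages are placeholders):

    # Add a repository into a specific namespace...
    tanzu package repository add my-repo --url projects.registry.vmware.com/tce/main:0.9.1 --namespace my-packages

    # ...then install packages from that repository using the same namespace.
    tanzu package install cert-manager --package-name cert-manager.community.tanzu.vmware.com --version 1.5.3 --namespace my-packages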

  3. Verify the package repository has reconciled.

    tanzu package repository list --namespace tanzu-package-repo-global
    

    The output will look similar to the following:

    / Retrieving repositories...
      NAME      REPOSITORY                                    STATUS               DETAILS
      tce-repo  projects.registry.vmware.com/tce/main:0.9.1  Reconcile succeeded
    

    It may take some time to see Reconcile succeeded. Until then, packages won’t show up in the available list described in the next step.

  4. List the available packages.

    tanzu package available list
    

    The output will look similar to the following:

    - Retrieving available packages...
     NAME                                           DISPLAY-NAME        SHORT-DESCRIPTION
     cert-manager.community.tanzu.vmware.com        cert-manager        Certificate management
     contour.community.tanzu.vmware.com             Contour             An ingress controller
     external-dns.community.tanzu.vmware.com        external-dns        This package provides DNS...
     fluent-bit.community.tanzu.vmware.com          fluent-bit          Fluent Bit is a fast Log Processor and...
     gatekeeper.community.tanzu.vmware.com          gatekeeper          policy management
     grafana.community.tanzu.vmware.com             grafana             Visualization and analytics software
     harbor.community.tanzu.vmware.com              Harbor              OCI Registry
     knative-serving.community.tanzu.vmware.com     knative-serving     Knative Serving builds on Kubernetes to...
     local-path-storage.community.tanzu.vmware.com  local-path-storage  This package provides local path node...
     multus-cni.community.tanzu.vmware.com          multus-cni          This package provides the ability for...
     prometheus.community.tanzu.vmware.com          prometheus          A time series database for your metrics
     velero.community.tanzu.vmware.com              velero              Disaster recovery capabilities
    
  5. List the available versions for the cert-manager package.

    tanzu package available list cert-manager.community.tanzu.vmware.com
    

    The output will look similar to the following:

    / Retrieving package versions for cert-manager.community.tanzu.vmware.com...
    NAME                                     VERSION  RELEASED-AT
    cert-manager.community.tanzu.vmware.com  1.3.3    2021-08-06T12:31:21Z
    cert-manager.community.tanzu.vmware.com  1.4.4    2021-08-23T16:47:51Z
    cert-manager.community.tanzu.vmware.com  1.5.3    2021-08-23T17:22:51Z
    

    NOTE: The available versions of a package may have changed since this guide was written.

  6. Install the package to the cluster.

    tanzu package install cert-manager --package-name cert-manager.community.tanzu.vmware.com --version 1.5.3
    

    The output will look similar to the following:

    | Installing package 'cert-manager.community.tanzu.vmware.com'
    / Getting package metadata for cert-manager.community.tanzu.vmware.com
    - Creating service account 'cert-manager-default-sa'
    \ Creating cluster admin role 'cert-manager-default-cluster-role'
    

    Creating package resource
    / Package install status: Reconciling

    Added installed package 'cert-manager' in namespace 'default'

    NOTE: Use one of the available package versions, since the one described in this guide might no longer be available.

  7. Verify cert-manager is installed in the cluster.

    tanzu package installed list
    

    The output will look similar to the following:

    | Retrieving installed packages...
    NAME          PACKAGE-NAME                             PACKAGE-VERSION  STATUS
    cert-manager  cert-manager.community.tanzu.vmware.com  1.5.3            Reconcile succeeded
    
  8. To remove a package from the cluster, run the following command:

    tanzu package installed delete cert-manager
    

    The output will look similar to the following:

    | Uninstalling package 'cert-manager' from namespace 'default'
    | Getting package install for 'cert-manager'
    \ Deleting package install 'cert-manager' from namespace 'default'
    \ Package uninstall status: ReconcileSucceeded
    \ Package uninstall status: Reconciling
    \ Package uninstall status: Deleting
    | Deleting admin role 'cert-manager-default-cluster-role'
    

    / Deleting service account 'cert-manager-default-sa'
    Uninstalled package 'cert-manager' from namespace 'default'

For more information about package management, see Work with Packages. For details on installing a specific package, see the package’s documentation in the left navigation bar (Packages > ${PACKAGE_NAME}).

Installing a Local Dashboard (octant)

This section describes how to use Octant to visually navigate your cluster(s). Using Octant is not required for Tanzu Community Edition.

  1. Install octant using one of their documented methods.

  2. Ensure your context is pointed at the correct cluster you wish to monitor.

    kubectl config use-context ${CLUSTER_NAME}-admin@${CLUSTER_NAME}
    

    ${CLUSTER_NAME} should be replaced with the name of the cluster you wish to visually inspect.

  3. Run octant.

    octant
    

    In most environments, octant should be able to start without arguments or flags. For details on how to configure Octant, run octant --help.

  4. Navigate the Octant UI.

    image of octant UI

Cleaning up

After going through this guide, the following steps enable you to clean up resources.

  1. Delete any deployed workload clusters.

    tanzu cluster delete <WORKLOAD-CLUSTER-NAME>
    
  2. Once all workload clusters have been deleted, the management cluster can then be removed as well. Run the following commands to get the name of the management cluster and delete it.

    tanzu management-cluster get
    tanzu management-cluster delete <MGMT-CLUSTER-NAME>
    

    Note for AWS: If the cluster you are deleting is deployed on AWS, you must precede the delete command with the region. For example,

    AWS_REGION=us-west-2 tanzu management-cluster delete my-mgmt-cluster
    

    For more information on deleting clusters, see Delete Management Clusters, and Delete Workload Clusters.

Join us!

Our open community welcomes all users and contributors

Community