Tanzu Community Edition

Documentation

Getting Started with Tanzu Community Edition

This guide walks you through creating a standalone cluster using Tanzu Community Edition.

Standalone Clusters

A standalone cluster is a faster way to get a functioning workload cluster, using fewer resources than a managed cluster requires. These clusters do not require a long-running management cluster. A standalone cluster is created using a bootstrap cluster on your local machine with Kind. After the standalone cluster is created, the bootstrap cluster is destroyed. Any operations against the standalone cluster, e.g. deletion, will re-invoke the bootstrap cluster.

🚨 Warning 🚨

Standalone clusters are highly experimental and only partially implemented! Functions such as scaling clusters are not implemented. If you follow this guide, you may need to clean up resources created by these clusters manually!

📋 Feedback

Thank you for trying Tanzu Community Edition! Please be sure to fill out our survey and leave feedback here after trying this guide!

Tanzu Community Edition Installation

Tanzu Community Edition consists of the Tanzu CLI and a select set of plugins. You will install Tanzu Community Edition on your local machine and then use the Tanzu CLI on your local machine to deploy a cluster to your chosen target platform.

Linux Local Bootstrap Machine Prerequisites

RAM: 6 GB
CPU: 2
Docker
In Docker, you must create the docker group and add your user before you attempt to create a standalone or management cluster. Complete steps 1 to 4 in the Manage Docker as a non-root user procedure in the Docker documentation (a condensed sketch follows this list).
Kubectl
Latest version of Chrome, Firefox, Safari, Internet Explorer, or Edge
System time is synchronized with a Network Time Protocol (NTP) server.
Ensure your Linux bootstrap machine is using cgroup v1. For more information, see Check and set the cgroup below.
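
A condensed sketch of the Docker non-root procedure referenced above (command names per the Docker documentation; log out and back in, or run newgrp, for the group change to take effect):

sudo groupadd docker            # step 1: create the docker group
sudo usermod -aG docker $USER   # step 2: add your user to the group
newgrp docker                   # step 3: apply the group change in the current shell
docker run hello-world          # step 4: verify Docker runs without sudo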

Check and set the cgroup

  1. Check the cgroup by running the following command:

    docker info | grep -i cgroup 
    

    You should see the following output:

    Cgroup Driver: cgroupfs
    Cgroup Version: 1
    
  2. If your Linux distribution is configured to use cgroups v2, you will need to set the systemd.unified_cgroup_hierarchy=0 kernel parameter to restore cgroups v1. See the instructions for setting kernel parameters for your Linux distribution (an example follows this list), including:

    Fedora 32+

    Arch Linux

    OpenSUSE
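
    For example, on Fedora (an illustration of one distribution's tooling, not a universal recipe), you can restore cgroups v1 with grubby and a reboot:

    sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"   # apply the parameter to all installed kernels
    sudo reboot                                                                   # required for the change to take effect

    Other distributions typically involve adding the parameter to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerating the GRUB configuration; follow the links above for specifics.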

Installation Procedure

  1. You must download and install the latest version of kubectl. For more information, see Install and Set Up kubectl on Linux in the Kubernetes documentation.

  2. You must download and install the latest version of docker. For more information, see Install Docker Engine in the Docker documentation.

Option 1: Homebrew

  1. Make sure you have the Homebrew package manager installed.

  2. Run the following in your terminal:

    brew tap vmware-tanzu/tanzu
    brew install tanzu-community-edition
    
  3. Run the post install configuration script. Note the output of the brew install step for the correct location of the configure script:

    {HOMEBREW-INSTALL-LOCATION}/configure-tce.sh
    

    This puts all the Tanzu plugins in the correct location. The first time you run the tanzu command the installed plugins and plugin repositories are initialized. This action might take a minute.
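
    To confirm the CLI is on your PATH and trigger that one-time initialization, you can run a basic version check (tanzu version is a standard Tanzu CLI subcommand):

    tanzu version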

Option 2: Curl GitHub release

  1. Download the release for Linux via web browser.

  2. [Alternative] Download the release using the CLI. You may download a release using the provided remote script piped into bash.

    curl -H "Accept: application/vnd.github.v3.raw" \
        -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
        bash -s <RELEASE-VERSION> <RELEASE-OS-DISTRIBUTION>
    
    • Where <RELEASE-VERSION> is the Tanzu Community Edition release version. This is a required argument.
    • Where <RELEASE-OS-DISTRIBUTION> is the target operating system distribution for the release (for example, linux). This is a required argument.
    • For example, to download v0.9.1 for Linux, provide:
      bash -s v0.9.1 linux
    • This script requires curl, grep, sed, tr, and jq in order to work
    • The release will be downloaded to the local directory as tce-linux-amd64-v0.9.1.tar.gz
    • Note: A GitHub personal access token may be provided to the script as the GITHUB_TOKEN environment variable. This bypasses GitHub API rate limiting but is not required. Follow the GitHub documentation to acquire and use a personal access token (a usage sketch follows this list).
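
    For example, a sketch of providing a token (the token value is a placeholder you must supply) before invoking the download script:

    export GITHUB_TOKEN=<YOUR-PERSONAL-ACCESS-TOKEN>
    curl -H "Accept: application/vnd.github.v3.raw" \
        -L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
        bash -s v0.9.1 linux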
  3. Unpack the release.

    tar xzvf ~/<DOWNLOAD-DIR>/tce-linux-amd64-v0.9.1.tar.gz
    
  4. Run the install script (make sure to use the appropriate directory for your platform).

    cd tce-linux-amd64-v0.9.1
    ./install.sh
    

    This installs the Tanzu CLI and puts all the plugins in the correct location. The first time you run the tanzu command the installed plugins and plugin repositories are initialized. This action might take a minute.

  5. You must download and install the latest version of kubectl.

    curl -LO https://dl.k8s.io/release/v1.20.1/bin/linux/amd64/kubectl
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    

    For more information, see Install and Set Up kubectl on Linux in the Kubernetes documentation.
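
    To verify the installation, check the client version (kubectl version --client is a standard kubectl flag):

    kubectl version --client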

Mac Local Bootstrap Machine Prerequisites

RAM: 6 GB
CPU: 2
Docker Desktop for Mac
Kubectl
Latest version of Chrome, Firefox, Safari, Internet Explorer, or Edge

Installation Procedure

  1. Make sure you have the Homebrew package manager installed.

  2. You must download and install the latest version of kubectl. For more information, see Install and Set Up kubectl on MacOS in the Kubernetes documentation.

  3. You must download and install the latest version of docker. For more information, see Install Docker Desktop on MacOS in the Docker documentation.

  4. Run the following in your terminal:

    brew tap vmware-tanzu/tanzu
    brew install tanzu-community-edition
    
  5. Run the post install configuration script. Note the output of the brew install step for the correct location of the configure script:

    {HOMEBREW-INSTALL-LOCATION}/v0.9.1/libexec/configure-tce.sh
    

    This puts all the Tanzu plugins in the correct location. The first time you run the tanzu command the installed plugins and plugin repositories are initialized. This action might take a minute.

Windows Local Bootstrap Machine Prerequisites

RAM: 8 GB
CPU: 2
Docker Desktop for Windows
Kubectl
Latest version of Chrome, Firefox, Safari, Internet Explorer, or Edge

Note: Bootstrapping a cluster to Docker from a Windows bootstrap machine is currently experimental.

Installation Procedure

  1. Download the release zip for Windows.
  2. Unpack the release zip.
  3. Enter the directory of the unpacked release.
  4. Run the install script, install.bat, as Administrator.

Creating Clusters

Create Standalone Clusters in AWS

This section covers setting up a standalone cluster in Amazon Web Services (AWS). A standalone cluster provides a workload cluster that is not managed by a centralized management cluster.

Ensure that you have set up your AWS account to be ready to deploy Tanzu clusters. Refer to the Prepare to Deploy a Management or Standalone Cluster to AWS docs for instructions on deploying an SSH key-pair and preparing your AWS account.

  1. Initialize the Tanzu Community Edition Installer Interface.

    tanzu standalone-cluster create --ui
    
  2. Choose Amazon from the provider tiles.

    [Screenshot: Amazon provider tile]

  3. Fill out the IaaS Provider section.

    [Screenshot: AWS IaaS provider settings]

    • A: Whether to use AWS named profiles or provide static credentials. It is highly recommended you use profiles. These can be set up by installing the AWS CLI on the bootstrap machine (see the sketch after this list).
    • B: If using profiles, the name of the profile (credentials) you’d like to use. By default, profiles are stored in ${HOME}/.aws/credentials.
    • C: The region of AWS you’d like all networking, compute, etc to be created within. A list of regions is available here in the AWS documentation.
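
    If you have the AWS CLI installed, a minimal sketch of creating a named profile (the profile name tce-user is a placeholder; you will be prompted for your access key ID, secret access key, default region, and output format):

    aws configure --profile tce-user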
  4. Fill out the VPC settings.

    [Screenshot: AWS VPC settings]

    • A: Whether to create a new Virtual Private Cloud in AWS or use an existing one. If using an existing one, you must provide its VPC ID. For initial deployments, it is recommended to create a new Virtual Private Cloud. This will ensure the installer takes care of all networking creation and configuration.
    • B: If creating a new VPC, the CIDR range or IPs to use for hosts (EC2 VMs).
  5. Fill out the Standalone Cluster Settings.

    [Screenshot: AWS standalone cluster settings]

    • A: Choose between the Development profile, with one control plane node, or Production, which features a highly-available, three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
    • B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
    • C: Choose an SSH key to use for accessing control plane and workload nodes. This SSH key must be accessible in the AWS region chosen in a previous step. See the AWS documentation for instructions on creating a key pair (a CLI sketch follows this list).
    • D: Whether to enable Cluster API’s machine health checks.
    • E: Whether to create a bastion host in your VPC. This host will be publicly accessible via your SSH key. Kubernetes-related hosts will only be reachable by first SSHing into this bastion. If preferred, you can create a bastion host independent of the installation process.
    • F: Choose whether you’d like to enable Kubernetes API server auditing.
    • G: Choose whether you’d like to create the CloudFormation stack expected by Tanzu. Checking this box is recommended. If the stack pre-exists, this step will be skipped.
    • H: The AWS availability zone in your chosen region to create control plane node(s) in. If the Production profile was chosen, you’ll have 3 options of zones, one for each host.
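
    If you still need an SSH key pair in your chosen region, a sketch using the AWS CLI (the key name default and region us-west-2 are placeholder values; jq is assumed to be installed):

    aws ec2 create-key-pair --key-name default --region us-west-2 --output json | jq .KeyMaterial -r > default.pem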
  6. If you would like additional metadata to be tagged in your soon-to-be-created AWS infrastructure, fill out the Metadata section.

  7. Fill out the Kubernetes Network section.

    [Screenshot: Kubernetes network settings]

    • A: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • B: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • C: Set a network proxy that internal traffic should egress through to access external network(s).
  8. Fill out the Identity Management section.

    [Screenshot: identity management settings]

    • A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
    • B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
    • C: Fill out connection details for identity management.
  9. Fill out the OS Image section.

    [Screenshot: AWS OS image selection]

    • A: The Amazon Machine Image (AMI) to use for Kubernetes host VMs. This list should populate based on known AMIs uploaded by VMware. These AMIs are publicly accessible for your use. Choose based on your preferred Linux distribution.
  10. Click the Review Configuration button.

    For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.

  11. Deploy the cluster.

    If you experience issues deploying your cluster, visit the Troubleshooting documentation.

  12. Set your kubectl context to the cluster.

    kubectl config use-context <STANDALONE-CLUSTER-NAME>-admin@<STANDALONE-CLUSTER-NAME>
    

    Where <STANDALONE-CLUSTER-NAME> is the name of the standalone cluster that you specified or, if you didn’t specify a name, the randomly generated name.

  13. Validate you can access the cluster’s API server.

    kubectl get nodes
    

    The output will look similar to the following:

    NAME                                       STATUS   ROLES                  AGE    VERSION
    ip-10-0-1-133.us-west-2.compute.internal   Ready    <none>                 123m   v1.20.1+vmware.2
    ip-10-0-1-76.us-west-2.compute.internal    Ready    control-plane,master   125m   v1.20.1+vmware.2
    

Create Standalone Azure Clusters

This section covers setting up a standalone cluster in Azure. A standalone cluster provides a workload cluster that is not managed by a centralized management cluster.

This process assumes some prerequisites. Refer to the Prepare to Deploy a Cluster to Azure docs for instructions on accepting image licenses and preparing your Azure account.

  1. Initialize the Tanzu Community Edition installer interface.

    tanzu standalone-cluster create --ui
    
  2. Choose Azure from the provider tiles.

    [Screenshot: Azure provider tile]

  3. Fill out the IaaS Provider section.

    [Screenshot: Azure IaaS provider settings]

    • A: Your account’s Tenant ID.
    • B: Your Client ID.
    • C: Your Client secret.
    • D: Your Subscription ID.
    • E: The Azure Cloud in which to deploy. For example, “Public Cloud”, “US Government Cloud”, etc.
    • F: The region of Azure you’d like all networking, compute, etc to be created within.
    • G: The public key you’d like to use for your VM instances. This is how you’ll SSH into control plane and worker nodes.
    • H: Whether to use an existing resource group or create a new one.
    • I: The existing resource group, or the name to provide the new resource group.
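
    If you still need credentials for fields A through D, a sketch using the Azure CLI (the service principal name tce-sp is a placeholder; assumes you are logged in via az login with sufficient privileges):

    az ad sp create-for-rbac --role Contributor --name tce-sp   # tenant -> Tenant ID, appId -> Client ID, password -> Client secret
    az account show --query id -o tsv                           # your Subscription ID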
  4. Fill out the VNET settings.

    [Screenshot: Azure VNET settings]

    • A: Whether to create a new Virtual Network in Azure or use an existing one. If using an existing one, you must provide its VNET name. For initial deployments, it is recommended to create a new Virtual Network. This will ensure the installer takes care of all networking creation and configuration.
    • B: The Resource Group under which to create the VNET.
    • C: The name to use when creating a new VNET.
    • D: The CIDR block to use for this VNET.
    • E: The name for the control plane subnet.
    • F: The CIDR block to use for the control plane subnet.
    • G: Whether to deploy without a publicly accessible IP address. Access to the cluster will be limited to your Azure private network only. Various ways of connecting to your private cluster can be found in the Azure private cluster documentation.

  5. Fill out the Standalone Cluster Settings.

    [Screenshot: Azure standalone cluster settings]

    • A: Choose between the Development profile, with one control plane node, or Production, which features a highly-available, three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
    • B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
    • C: Whether to enable Cluster API’s machine health checks.
    • D: Choose whether you’d like to enable Kubernetes API server auditing.
  6. If you would like additional metadata to be tagged in your soon-to-be-created Azure infrastructure, fill out the Metadata section.

  7. Fill out the Kubernetes Network section.

    [Screenshot: Kubernetes network settings]

    • A: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • B: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • C: Set a network proxy that internal traffic should egress through to access external network(s).
  8. Fill out the Identity Management section.

    [Screenshot: identity management settings]

    • A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
    • B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
    • C: Fill out connection details for identity management.
  9. Fill out the OS Image section.

    [Screenshot: Azure OS image selection]

    • A: The Azure image to use for Kubernetes host VMs. This list should populate based on known images uploaded by VMware. These images are publicly accessible for your use. Choose based on your preferred Linux distribution.
  10. Skip the TMC Registration section.

  11. Click the Review Configuration button.

    For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.

  12. Deploy the cluster.

    If you experience issues deploying your cluster, visit the Troubleshooting documentation.

  13. Set your kubectl context to the cluster.

    kubectl config use-context <STANDALONE-CLUSTER-NAME>-admin@<STANDALONE-CLUSTER-NAME>
    
  14. Validate you can access the cluster’s API server.

    kubectl get nodes
    

    The output will look similar to the following:

    NAME                                       STATUS   ROLES                  AGE    VERSION
    ip-10-0-1-133.us-west-2.compute.internal   Ready    <none>                 123m   v1.20.1+vmware.2
    ip-10-0-1-76.us-west-2.compute.internal    Ready    control-plane,master   125m   v1.20.1+vmware.2
    

⚠️ If bootstrapping Docker-based clusters on Windows, see our Windows guide.

Create Standalone Docker Clusters

This section describes setting up a standalone cluster on your local workstation using Docker. This provides you a workload cluster that is not managed by a centralized management cluster.

⚠️: Tanzu Community Edition support for Docker is experimental and may require troubleshooting on your system.

Note: Bootstrapping a cluster to Docker from a Windows bootstrap machine is currently experimental.

Prerequisites

The following additional configuration is needed for the Docker engine on your local client machine (with no other containers running):

6 GB of RAM
15 GB of local machine disk storage for images
4 CPUs

⚠️ Warning on DockerHub Rate Limiting

When using the Docker (CAPD) provider, the load balancer image (HAProxy) is pulled from DockerHub. DockerHub limits pulls per user, and this can especially impact users who share a common IP, as in the case of NAT or a VPN. If DockerHub rate limiting is an issue in your environment, you can pre-pull the load balancer image to your machine by running the following command:

docker pull kindest/haproxy:v20210715-a6da3463

This behavior will eventually be addressed in https://github.com/vmware-tanzu/community-edition/issues/897.

Before You Begin

To optimize your Docker system and ensure a successful deployment, you may wish to complete the following two optional steps.

  1. (Optional): Stop all existing containers.

    docker kill $(docker ps -q)
    
  2. (Optional): Run the following command to prune all existing containers, volumes, and images.

    Warning: Read the prompt carefully before running the command, as it erases the majority of what is cached in your Docker environment. While this ensures your environment is clean before starting, it also significantly increases bootstrapping time if you already had the Docker images downloaded.

     docker system prune -a --volumes
    

Local Docker Bootstrapping

  1. Create the standalone cluster.

    tanzu standalone-cluster create -i docker <STANDALONE-CLUSTER-NAME>
    

    <STANDALONE-CLUSTER-NAME> must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements (RFC 952 and RFC 1123). For more verbose logging, you can append -v 10.

    If the deployment is successful, you should see the following output:

    Standalone cluster created!
    
  2. Set your kubectl context to the cluster.

    kubectl config use-context <STANDALONE-CLUSTER-NAME>-admin@<STANDALONE-CLUSTER-NAME>
    
  3. Validate you can access the cluster’s API server.

    kubectl get pod -A
    

    The output should look similar to the following:

    NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
    kube-system   antrea-agent-4wwq9                                       2/2     Running   0          3m28s
    kube-system   antrea-agent-s9gbb                                       2/2     Running   0          3m28s
    kube-system   antrea-controller-58cdb9dc6d-mdn56                       1/1     Running   0          3m28s
    kube-system   coredns-8dcb5c56b-7dltt                                  1/1     Running   0          4m43s
    kube-system   coredns-8dcb5c56b-cvkpx                                  1/1     Running   0          4m43s
    kube-system   etcd-testme-control-plane-2fcfs                          1/1     Running   0          4m44s
    kube-system   kube-apiserver-testme-control-plane-2fcfs                1/1     Running   0          4m44s
    kube-system   kube-controller-manager-testme-control-plane-2fcfs       1/1     Running   0          4m44s
    kube-system   kube-proxy-7wfs8                                         1/1     Running   0          4m8s
    kube-system   kube-proxy-bzr2d                                         1/1     Running   0          4m43s
    kube-system   kube-scheduler-testme-control-plane-2fcfs                1/1     Running   0          4m44s
    tkg-system    kapp-controller-764fc6c69f-lpvn6                         1/1     Running   0          3m49s
    tkg-system    tanzu-capabilities-controller-manager-69f58566d9-8ks8q   1/1     Running   0          4m28s
    tkr-system    tkr-controller-manager-cc88b6968-hv8zg                   1/1     Running   0          4m28s
    

⚠️: If the Docker host machine is rebooted, the cluster will need to be re-created. Support for clusters surviving a host reboot is tracked in issue #832.

Create vSphere Clusters

This section describes setting up standalone clusters on vSphere.

  1. Open the Tanzu Community Edition product page on VMware Customer Connect.

    If you do not have a Customer Connect account, register here.

  2. Ensure you have the version selected corresponding to your installation.

    [Screenshot: Customer Connect download page]

  3. Locate and download the machine image (OVA) for your desired operating system and Kubernetes version.

    [Screenshot: Customer Connect OVA downloads]

  4. Log in to your vCenter instance.

  5. In vCenter, right-click on your datacenter and choose Deploy OVF Template.

    [Screenshot: vCenter Deploy OVF Template]

  6. Follow the prompts, browsing to the local file that is the .ova downloaded in a previous step.

  7. Allow the template deployment to complete.

    [Screenshot: vCenter OVF template deployment]

  8. Right-click on the newly imported OVF template and choose Template > Convert to Template.

    [Screenshot: vCenter Convert to Template]

  9. Verify the template is added by selecting the VMs and Templates icon and locating it within your datacenter.

    [Screenshot: vCenter template in inventory]

  10. Initialize the Tanzu Community Edition installer interface.

    tanzu standalone-cluster create --ui
    
  11. Choose VMware vSphere from the provider tiles.

    [Screenshot: VMware vSphere provider tile]

  12. Fill out the IaaS Provider section.

    [Screenshot: vSphere IaaS provider settings]

    • A: The IP or DNS name pointing at your vCenter instance. This is the same instance you uploaded the OVA to in previous steps.
    • B: The username, with elevated privileges, that can be used to create infrastructure in vSphere.
    • C: The password corresponding to that username.
    • D: With the above filled out, connect to the instance to continue. You may be prompted to verify the SSL fingerprint for vCenter.
    • E: The datacenter you’ll deploy Tanzu Community Edition into. This should be the same datacenter you uploaded the OVA to.
    • F: The public key you’d like to use for your VM instances. This is how you’ll SSH into control plane and worker nodes (a key-generation sketch follows this list).
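
    If you need a key pair for field F, a standard OpenSSH sketch (the comment and output path are placeholders):

    ssh-keygen -t rsa -b 4096 -C "you@example.com" -f ~/.ssh/tce-vsphere
    cat ~/.ssh/tce-vsphere.pub   # paste this public key into the installer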
  13. Fill out the Management Cluster Settings.

    [Screenshot: vSphere management cluster settings]

    • A: Choose between the Development profile, with one control plane node, or Production, which features a highly-available, three-node control plane. Additionally, choose the instance type you’d like to use for control plane and workload nodes.
    • B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
    • C: Choose whether to enable Cluster API’s machine health checks.
    • D: Choose how to expose your control plane endpoint. If you have NSX available and would like to use it, choose NSX Advanced Load Balancer, otherwise choose Kube-vip, which will expose a virtual IP in your network.
    • E: Set the IP address your Kubernetes API server should be accessible from. This should be an IP that is routable in your network but excluded from your DHCP range.
    • F: Choose whether you’d like to enable Kubernetes API server auditing.
  14. If you chose NSX Advanced Load Balancer as your Control Plane Endpoint Provider in the above step, fill out the VMware NSX Advanced Load Balancer section.

  15. If you would like additional metadata to be tagged in your soon-to-be-created vSphere infrastructure, fill out the Metadata section.

  16. Fill out the Resources section.

    [Screenshot: vSphere resources settings]

    • A: Set the VM folder you’d like new virtual machines to be created in. By default, this will be ${DATACENTER_NAME}/vm/.
    • B: Set the Datastore you’d like volumes to be created within.
    • C: Set the servers or resource pools within the data center you’d like VMs, networking, etc to be created in.
  17. Fill out the Kubernetes Network section.

    [Screenshot: Kubernetes network settings]

    • A: Select the vSphere network where host/virtual machine networking will be set up.
    • B: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • C: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
    • D: Set up a proxy that cluster traffic should egress through to access external network(s).
  18. Fill out the Identity Management section.

    [Screenshot: identity management settings]

    • A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
    • B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
    • C: Fill out connection details for identity management.
  19. Fill out the OS Image section.

    [Screenshot: vSphere OS image selection]

    • A: The OVA image to use for Kubernetes host VMs. This list should populate based on the OVA you uploaded in previous steps. If it’s missing, you may have uploaded an incompatible OVA.
  20. Click the Review Configuration button.

    For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.

  21. Deploy the cluster.

    If you experience issues deploying your cluster, visit the Troubleshooting documentation.

  22. Once complete, set your kubectl context to the cluster.

    kubectl config use-context <STANDALONE-CLUSTER-NAME>-admin@<STANDALONE-CLUSTER-NAME>
    
  23. Validate you can access the cluster’s API server.

    kubectl get nodes
    

    The output will look similar to the following:

    NAME         STATUS   ROLES                  AGE    VERSION
    10-0-1-133   Ready    <none>                 123m   v1.20.1+vmware.2
    10-0-1-76    Ready    control-plane,master   125m   v1.20.1+vmware.2

Installing a Package

This section walks you through installing a package (cert-manager) in your cluster. For detailed instruction on package management, see Work with Packages.

  1. Make sure your kubectl context is set to either the workload cluster or standalone cluster.

    kubectl config use-context <CLUSTER-NAME>-admin@<CLUSTER-NAME>
    

    Where <CLUSTER-NAME> is the name of the workload or standalone cluster where you want to install the package.

  2. Install the Tanzu Community Edition package repository into the tanzu-package-repo-global namespace.

    tanzu package repository add tce-repo --url projects.registry.vmware.com/tce/main:0.9.1 --namespace tanzu-package-repo-global
    

    Repositories installed into the tanzu-package-repo-global namespace provide their packages to the entire cluster. It is possible to install package repositories into specific namespaces by using the --namespace argument. Installing a package from a repository in another namespace requires you to specify that namespace as an argument to the tanzu package install command (see the sketch below). A tanzu-core repository is also installed in the tkg-system namespace on clusters. This repository holds lower-level components that are not meant to be installed by the user! These packages are used during cluster bootstrapping.
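
    As a sketch of that namespaced flow (my-repo, my-namespace, and the angle-bracket placeholders are hypothetical values):

    tanzu package repository add my-repo --url <REPO-URL> --namespace my-namespace
    tanzu package install my-package --package-name <PACKAGE-NAME> --version <VERSION> --namespace my-namespace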

  3. Verify the package repository has reconciled.

    tanzu package repository list --namespace tanzu-package-repo-global
    

    The output will look similar to the following:

    / Retrieving repositories...
      NAME      REPOSITORY                                   STATUS               DETAILS
      tce-repo  projects.registry.vmware.com/tce/main:0.9.1  Reconcile succeeded
    

    It may take some time to see Reconcile succeeded. Until then, packages won’t show up in the available list described in the next step.

  4. List the available packages.

    tanzu package available list
    

    The output will look similar to the following:

    - Retrieving available packages...
     NAME                                           DISPLAY-NAME        SHORT-DESCRIPTION
     cert-manager.community.tanzu.vmware.com        cert-manager        Certificate management
     contour-operator.community.tanzu.vmware.com    contour-operator    Layer 7 Ingress
     contour.community.tanzu.vmware.com             Contour             An ingress controller
     external-dns.community.tanzu.vmware.com        external-dns        This package provides DNS...
     fluent-bit.community.tanzu.vmware.com          fluent-bit          Fluent Bit is a fast Log Processor and...
     gatekeeper.community.tanzu.vmware.com          gatekeeper          policy management
     grafana.community.tanzu.vmware.com             grafana             Visualization and analytics software
     harbor.community.tanzu.vmware.com              Harbor              OCI Registry
     knative-serving.community.tanzu.vmware.com     knative-serving     Knative Serving builds on Kubernetes to...
     local-path-storage.community.tanzu.vmware.com  local-path-storage  This package provides local path node...
     multus-cni.community.tanzu.vmware.com          multus-cni          This package provides the ability for...
     prometheus.community.tanzu.vmware.com          prometheus          A time series database for your metrics
     velero.community.tanzu.vmware.com              velero              Disaster recovery capabilities
    
  5. List the available versions for the cert-manager package.

    tanzu package available list cert-manager.community.tanzu.vmware.com
    

    The output will look similar to the following:

    / Retrieving package versions for cert-manager.community.tanzu.vmware.com...
    NAME                                     VERSION  RELEASED-AT
    cert-manager.community.tanzu.vmware.com  1.3.1    2021-04-14T18:00:00Z
    cert-manager.community.tanzu.vmware.com  1.4.0    2021-06-15T18:00:00Z
    

    NOTE: The available versions of a package may have changed since this guide was written.

  6. Install the package to the cluster.

    tanzu package install cert-manager \
      --package-name cert-manager.community.tanzu.vmware.com \
      --version 1.4.0
    

    The output will look similar to the following:

    | Installing package 'cert-manager.community.tanzu.vmware.com'
    / Getting package metadata for cert-manager.community.tanzu.vmware.com
    - Creating service account 'cert-manager-default-sa'
    \ Creating cluster admin role 'cert-manager-default-cluster-role'
    | Creating package resource
    / Package install status: Reconciling

    Added installed package 'cert-manager' in namespace 'default'

    NOTE: Use one of the available package versions, since the one described in this guide might no longer be available.

  7. Verify cert-manager is installed in the cluster.

    tanzu package installed list
    

    The output will look similar to the following:

    | Retrieving installed packages...
    NAME          PACKAGE-NAME                             PACKAGE-VERSION  STATUS
    cert-manager  cert-manager.community.tanzu.vmware.com  1.4.0            Reconcile succeeded
    
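    You can also confirm at the Kubernetes level. Assuming the package deploys its workloads into the cert-manager namespace (this package's default), the pods should report Running:

    kubectl get pods -n cert-manager
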
  8. To remove a package from the cluster, run the following command:

    tanzu package installed delete cert-manager
    

    The output will look similar to the following:

    | Uninstalling package 'cert-manager' from namespace 'default'
    | Getting package install for 'cert-manager'
    \ Deleting package install 'cert-manager' from namespace 'default'
    \ Package uninstall status: ReconcileSucceeded
    \ Package uninstall status: Reconciling
    \ Package uninstall status: Deleting
    | Deleting admin role 'cert-manager-default-cluster-role'
    / Deleting service account 'cert-manager-default-sa'

    Uninstalled package 'cert-manager' from namespace 'default'

For more information about package management, see Work with Packages. For details on installing a specific package, see the package’s documentation in the left navigation bar (Packages > ${PACKAGE_NAME}).

Installing a Local Dashboard (octant)

This section describes how to use Octant to visually navigate your cluster(s). Using Octant is not required for Tanzu Community Edition.

  1. Install Octant using one of their documented methods.

  2. Ensure your context is pointed at the correct cluster you wish to monitor.

    kubectl config use-context ${CLUSTER_NAME}-admin@${CLUSTER_NAME}
    

    ${CLUSTER_NAME} should be replaced with the name of the cluster you wish to visually inspect.

  3. Run octant.

    octant
    

    In most environments, octant should be able to start without arguments or flags. For details on how to configure Octant, run octant --help.

  4. Navigate the Octant UI.

    image of octant UI

Clean-up

  1. Run the delete command.

    tanzu standalone-cluster delete <STANDALONE-CLUSTER-NAME>
    

    This may take several minutes to complete!

  2. Note: If you configured a proxy, you may need to provide the following environment variables: TKG_NO_PROXY, TKG_HTTP_PROXY, and TKG_HTTPS_PROXY. For example:

    TKG_HTTP_PROXY="127.0.0.1" tanzu standalone-cluster delete <STANDALONE-CLUSTER-NAME>
    

Ready to dive in?

Our documentation is a great place to start!

Documentation