Getting Started with Managed Clusters ¶
This guide walks you through standing up a management cluster and a workload cluster using the Tanzu CLI.
Before You Begin ¶
Review Plan Your Deployment.
📋 Feedback ¶
Thank you for trying Tanzu Community Edition! Please be sure to fill out our survey and leave feedback here after trying this guide!
Install Tanzu CLI ¶
Linux ¶
Architecture | CPU | RAM | Required software |
---|---|---|---|
x86_64 / amd64 (ARM is currently unsupported) | 2 | 6 GB | Docker - You must create the docker group and add your user, see Manage Docker as a non-root user. - Ensure your bootstrap machine is using cgroup v1. For more information, see Check and set the cgroup. |
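For reference, on most Linux distributions the docker group can be created and your user added to it with the following standard commands (log out and back in afterward so the group membership takes effect):
sudo groupadd docker
sudo usermod -aG docker $USER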
Package Manager ¶
Install using Homebrew.
brew install vmware-tanzu/tanzu/tanzu-community-edition
Run the configure command displayed after Homebrew completes installation.
{HOMEBREW-INSTALL-LOCATION}/configure-tce.sh
This puts all the Tanzu plugins in the correct location. The first time you run the tanzu command, the installed plugins and plugin repositories are initialized. This action might take a minute.
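To confirm the CLI is on your PATH and the plugins were set up, you can run, for example:
tanzu version
tanzu plugin list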
Direct Download ¶
If you prefer to not use a package manager, you can download releases on GitHub. Download and unpack the release for your operating system. Then, run the script install.sh (Linux/Mac) or install.bat (Windows).
Mac ¶
Architecture | CPU | RAM | Required software |
---|---|---|---|
x86_64 / amd64 (ARM is currently unsupported) | 2 | 6 GB | Docker Desktop for Mac, version <= 4.2.0 - You must create the docker group and add your user, see Manage Docker as a non-root user. - Ensure your bootstrap machine is using cgroup v1 (Docker Desktop for Mac versions prior to 4.3.0 use cgroup v1). See Check and set the cgroup. |
Package Manager ¶
Install using Homebrew.
brew install vmware-tanzu/tanzu/tanzu-community-edition
Run the configure command displayed after Homebrew completes installation.
{HOMEBREW-INSTALL-LOCATION}/configure-tce.sh
This puts all the Tanzu plugins in the correct location. The first time you run the tanzu command, the installed plugins and plugin repositories are initialized. This action might take a minute.
Direct Download ¶
If you prefer to not use a package manager, you can download releases on GitHub. Download and unpack the release for your operating system. Then, run the script install.sh (Linux/Mac) or install.bat (Windows).
Windows ¶
CPU | RAM | Required software |
---|---|---|
2 | 8 GB | Docker Desktop for Windows - Ensure your bootstrap machine is using cgroup v1 (Docker Desktop for Windows versions prior to 4.3.0 use cgroup v1). For more information, see Check and set the cgroup. - Note: Bootstrapping a cluster to Docker from a Windows bootstrap machine is currently experimental, for more information, see Docker-based Clusters on Windows. |
Package Manager ¶
Install using Chocolatey, in PowerShell, as an administrator.
choco install tanzu-community-edition
Direct Download ¶
If you prefer to not use a package manager, you can download releases on GitHub. Download and unpack the release for your operating system. Then, run the script install.sh (Linux/Mac) or install.bat (Windows).
Deploy Clusters ¶
This section describes deploying a management and workload cluster in Amazon Web Services (AWS).
Deploy a Management Cluster ¶
The installation process assumes a few prerequisites are in place. Refer to the Prepare to Deploy a Management Cluster to AWS docs for instructions on deploying an SSH key pair and preparing your AWS account.
Initialize the Tanzu Community Edition installer interface.
tanzu management-cluster create --ui
Note: If you are bootstrapping from a Windows machine and encounter an unable to ensure prerequisites error, see the following troubleshooting topic.
Choose Amazon from the provider tiles.
Fill out the IaaS Provider section.
- A: Whether to use AWS named profiles or provide static credentials. It is highly recommended that you use profiles. These can be set up by installing the AWS CLI on the bootstrap machine; see the example after this list.
- B: If using profiles, the name of the profile (credentials) you’d like to use. By default, profiles are stored in ${HOME}/.aws/credentials.
- C: The AWS region in which all networking, compute, and other resources will be created. A list of regions is available in the AWS documentation.
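As a minimal sketch (the profile name tce-user is only an example), a named profile can be created with the AWS CLI:
aws configure --profile tce-user
# Prompts for the access key ID, secret access key, default region, and output format,
# then stores the profile under ${HOME}/.aws/credentials and ${HOME}/.aws/config.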
Fill out the VPC settings.
- A: Whether to create a new Virtual Private Cloud (VPC) in AWS or use an existing one. If using an existing one, you must provide its VPC ID. For initial deployments, it is recommended to create a new VPC so that the installer takes care of all networking creation and configuration.
- B: If creating a new VPC, the CIDR range of IPs to use for hosts (EC2 VMs).
Fill out the Management Cluster Settings.
- A: Choose between the Development profile, with one control plane node, or the Production profile, which features a highly-available three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
- B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
- C: Choose an SSH key to use for accessing control plane and workload nodes. This SSH key must be accessible in the AWS region chosen in a previous step. See the AWS documentation for instructions on creating a key pair; an example is shown after this list.
- D: Whether to enable Cluster API’s machine health checks.
- E: Whether to create a bastion host in your VPC. This host will be publicly accessible via your SSH key. The Kubernetes-related hosts will not be accessible without SSHing into this host. If preferred, you can create a bastion host independently of the installation process.
- F: Choose whether you’d like to enable Kubernetes API server auditing.
- G: Choose whether you’d like to create the CloudFormation stack expected by Tanzu. Checking this box is recommended. If the stack already exists, this step will be skipped.
- H: The AWS availability zone in your chosen region to create control plane node(s) in. If the Production profile was chosen, you’ll choose three zones, one for each control plane host.
- I: The AWS EC2 instance type to be used for each node. See the instance types documentation to understand trade-offs between CPU, memory, pricing, and more.
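For example, assuming the AWS CLI is already configured, a key pair could be created and registered in your chosen region with something like the following (the key name, region, and output path are illustrative):
aws ec2 create-key-pair --key-name tce-default --region us-west-2 --query 'KeyMaterial' --output text > tce-default.pem
chmod 400 tce-default.pem   # restrict permissions on the private key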
If you would like additional metadata to be tagged in your soon-to-be-created AWS infrastructure, fill out the Metadata section.
Fill out the Kubernetes Network section.
- A: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
- B: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
- C: Set a network proxy that internal traffic should egress through to access external network(s).
Fill out the Identity Management section.
- A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
- B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
- C: Fill out connection details for identity management.
Fill out the OS Image section.
- A: The Amazon Machine Image (AMI) to use for Kubernetes host VMs. This list should populate based on known AMIs uploaded by VMware. These AMIs are publicly accessible for your use. Choose based on your preferred Linux distribution.
Skip the TMC Registration section.
Click the Review Configuration button.
For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.
Deploy the cluster.
If you experience issues deploying your cluster, visit the Troubleshooting documentation.
Validate the management cluster started successfully.
tanzu management-cluster get
The output will look similar to the following:
  NAME  NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
  mtce  tkg-system  running  1/1           1/1      v1.20.1+vmware.2  management

Details:

NAME                                                     READY  SEVERITY  REASON  SINCE  MESSAGE
/mtce                                                    True                     113m
├─ClusterInfrastructure - AWSCluster/mtce                True                     113m
├─ControlPlane - KubeadmControlPlane/mtce-control-plane  True                     113m
│ └─Machine/mtce-control-plane-r7k52                     True                     113m
└─Workers
  └─MachineDeployment/mtce-md-0
    └─Machine/mtce-md-0-fdfc9f766-6n6lc                  True                     113m

Providers:

  NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
  capa-system                        infrastructure-aws     InfrastructureProvider  aws           v0.6.4
  capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.14
  capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.14
  capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.14
Capture the management cluster’s kubeconfig and take note of the command for accessing the cluster in the output message, as you will use this for setting the context in the next step.
tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.
For example, if your management cluster is called ‘mtce’, you will see a message similar to:
Credentials of cluster 'mtce' have been saved. You can now access the cluster by running 'kubectl config use-context mtce-admin@mtce'
Set your kubectl context to the management cluster.
kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.
Validate you can access the management cluster’s API server.
kubectl get nodes
The output will look similar to the following:
NAME                                       STATUS   ROLES                  AGE    VERSION
ip-10-0-1-133.us-west-2.compute.internal   Ready    <none>                 123m   v1.20.1+vmware.2
ip-10-0-1-76.us-west-2.compute.internal    Ready    control-plane,master   125m   v1.20.1+vmware.2
Deploy a Workload Cluster ¶
Next, you will create a workload cluster. First, create a workload cluster configuration file by taking a copy of the management cluster YAML configuration file that was created when you deployed your management cluster. This example names the workload cluster configuration file workload1.yaml.
cp ~/.config/tanzu/tkg/clusterconfigs/<MGMT-CONFIG-FILE> ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
Where <MGMT-CONFIG-FILE> is the name of the management cluster YAML configuration file. The management cluster YAML configuration file will either have the name you assigned to the management cluster or, if no name was assigned, a randomly generated name.
The duplicated file (workload1.yaml) will be used as the configuration file for your workload cluster. You can edit the parameters in this new file as required. For an example of a workload cluster template, see AWS Workload Cluster Template.
In the next two steps you will edit the parameters in this new file (workload1.yaml) and then use the file to deploy a workload cluster.
In the new workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the CLUSTER_NAME parameter to assign a name to your workload cluster. For example,
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: my-workload-cluster
CLUSTER_PLAN: dev
- If you did not specify a name for your management cluster, the installer generated a random unique name. In this case, you must manually add the CLUSTER_NAME parameter and assign a workload cluster name. Workload cluster names must be 42 characters or less and must comply with DNS hostname requirements as described here: RFC 1123
- If you specified a name for your management cluster, the CLUSTER_NAME parameter is present and needs to be changed to the new workload cluster name.
- The other parameters in workload1.yaml are likely fine as-is. However, you can change them as required. Validation is performed on the file prior to applying it, so the tanzu command will return a message if something necessary is omitted. Reference an example configuration template here: AWS Workload Cluster Template. A quick way to review your edits is shown after this list.
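To double-check the values you just edited, you can grep the configuration file, for example:
grep -E '^(CLUSTER_NAME|CLUSTER_PLAN|CLUSTER_CIDR):' ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml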
Create your workload cluster.
tanzu cluster create <WORKLOAD-CLUSTER-NAME> --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
Validate the cluster starts successfully.
tanzu cluster list
Capture the workload cluster’s kubeconfig.
tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
Set your kubectl context to the workload cluster.
kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
Verify you can see pods in the cluster.
kubectl get pods --all-namespaces
The output will look similar to the following:
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-9d4db                                       2/2     Running   0          3m42s
kube-system   antrea-agent-vkgt4                                       2/2     Running   1          5m48s
kube-system   antrea-controller-5d594c5cc7-vn5gt                       1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-hs6vr                                  1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-xf6cl                                  1/1     Running   0          5m49s
kube-system   etcd-tce-guest-control-plane-b2wsf                       1/1     Running   0          5m56s
kube-system   kube-apiserver-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf    1/1     Running   0          5m56s
kube-system   kube-proxy-9825q                                         1/1     Running   0          5m48s
kube-system   kube-proxy-wfktm                                         1/1     Running   0          3m42s
kube-system   kube-scheduler-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
Create Microsoft Azure Clusters ¶
This section describes setting up management and workload clusters for Microsoft Azure.
Deploy a Management Cluster ¶
This process assumes a few prerequisites are in place. Refer to the Prepare to Deploy a Cluster to Azure docs for instructions on accepting image licenses and preparing your Azure account.
Initialize the Tanzu Community Edition installer interface.
tanzu management-cluster create --ui
Note: If you are bootstrapping from a Windows machine and encounter an unable to ensure prerequisites error, see the following troubleshooting topic.
Choose Azure from the provider tiles.
Fill out the IaaS Provider section.
- A: Your account’s Tenant ID.
- B: Your Client ID.
- C: Your Client secret.
- D: Your Subscription ID. If you still need to generate these credentials, see the sketch after this list.
- E: The Azure Cloud in which to deploy. For example, “Public Cloud” or “US Government Cloud”.
- F: The Azure region in which all networking, compute, and other resources will be created.
- G: The public key you’d like to use for your VM instances. This is how you’ll SSH into control plane and worker nodes.
- H: Whether to use an existing resource group or create a new one.
- I: The existing resource group, or the name to give the new resource group.
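The Tenant ID, Client ID, Client secret, and Subscription ID typically come from an Azure service principal. As a rough sketch using the Azure CLI (the role and scope shown are assumptions; adjust them to your organization’s policies):
az login
az account show --query id --output tsv                                    # Subscription ID
az ad sp create-for-rbac --role Contributor --scopes /subscriptions/<SUBSCRIPTION-ID>
# In the JSON output: appId = Client ID, password = Client secret, tenant = Tenant ID.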
Fill out the VNET settings.
- A: Whether to create a new Virtual Network in Azure or use an existing one. If using an existing one, you must provide its VNET name. For initial deployments, it is recommended to create a new Virtual Network so that the installer takes care of all networking creation and configuration.
- B: The Resource Group under which to create the VNET.
- C: The name to use when creating a new VNET.
- D: The CIDR block to use for this VNET.
- E: The name for the control plane subnet.
- F: The CIDR block to use for the control plane subnet. This range should be within the VNET CIDR.
- G: The name for the worker node subnet.
- H: The CIDR block to use for the worker node subnet. This range should be within the VNET CIDR and not overlap with the control plane CIDR.
- I: Whether to deploy without a publicly accessible IP address. Access to the cluster will be limited to your Azure private network only; see the documentation on the various ways of connecting to your private cluster.
Fill out the Management Cluster Settings.
- A: Choose between the Development profile, with one control plane node, or the Production profile, which features a highly-available three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
- B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
- C: The instance type to be used for each node. See the instance types documentation to understand trade-offs between CPU, memory, pricing, and more.
- D: Whether to enable Cluster API’s machine health checks.
- E: Choose whether you’d like to enable Kubernetes API server auditing.
If you would like additional metadata to be tagged in your soon-to-be-created Azure infrastructure, fill out the Metadata section.
Fill out the Kubernetes Network section.
- A: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
- B: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
- C: Set a network proxy that internal traffic should egress through to access external network(s).
Fill out the Identity Management section.
- A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
- B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
- C: Fill out connection details for identity management.
Fill out the OS Image section.
- A: The Azure image to use for Kubernetes host VMs. This list should populate based on known images uploaded by VMware. These images are publicly accessible for your use. Choose based on your preferred Linux distribution.
Skip the TMC Registration section.
Click the Review Configuration button.
For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.
Deploy the cluster.
If you experience issues deploying your cluster, visit the Troubleshooting documentation.
Validate the management cluster started successfully.
tanzu management-cluster get
The output will look similar to the following:
  NAME  NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
  mgmt  tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management

Details:

NAME                                                     READY  SEVERITY  REASON  SINCE  MESSAGE
/mgmt                                                    True                     5m38s
├─ClusterInfrastructure - AzureCluster/mgmt              True                     5m42s
├─ControlPlane - KubeadmControlPlane/mgmt-control-plane  True                     5m38s
│ └─Machine/mgmt-control-plane-d99g5                     True                     5m41s
└─Workers
  └─MachineDeployment/mgmt-md-0
    └─Machine/mgmt-md-0-bc94f54b4-tgr9h                  True                     5m41s

Providers:

  NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
  capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.23
  capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.23
  capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.23
  capz-system                        infrastructure-azure   InfrastructureProvider  azure         v0.4.15
Capture the management cluster’s kubeconfig.
tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get above.
For example, if your management cluster is called ‘mgmt’, you will see a message similar to:
Credentials of workload cluster 'mgmt' have been saved. You can now access the cluster by running 'kubectl config use-context mgmt-admin@mgmt'
Set your kubectl context to the management cluster.
kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.
Validate you can access the management cluster’s API server.
kubectl get nodes
NAME                       STATUS   ROLES                  AGE    VERSION
mgmt-control-plane-vkpsm   Ready    control-plane,master   111m   v1.21.2+vmware.1
mgmt-md-0-qbbhk            Ready    <none>                 109m   v1.21.2+vmware.1
Deploy a Workload Cluster ¶
Next, you will create a workload cluster. First, set up a workload cluster configuration file.
cp ~/.config/tanzu/tkg/clusterconfigs/<MGMT-CONFIG-FILE> ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
Where <MGMT-CONFIG-FILE> is the name of the management cluster YAML configuration file.
This step duplicates the configuration file that was created when you deployed your management cluster. The configuration file will either have the name you assigned to the management cluster or, if no name was assigned, a randomly generated name.
This duplicated file will be used as the configuration file for your workload cluster. You can edit the parameters in this new file as required. For an example of a workload cluster template, see Azure Workload Cluster Template.
In the next two steps you will edit the parameters in this new file (workload1.yaml) and then use the file to deploy a workload cluster.
In the new workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the CLUSTER_NAME parameter to assign a name to your workload cluster. For example,
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: my-workload-cluster
CLUSTER_PLAN: dev
- If you did not specify a name for your management cluster, the installer generated a random unique name. In this case, you must manually add the CLUSTER_NAME parameter and assign a workload cluster name. Workload cluster names must be 42 characters or less and must comply with DNS hostname requirements as described here: RFC 1123
- If you specified a name for your management cluster, the CLUSTER_NAME parameter is present and needs to be changed to the new workload cluster name.
- The other parameters in workload1.yaml are likely fine as-is. Validation is performed on the file prior to applying it, so the tanzu command will return a message if something necessary is omitted. However, you can change parameters as required. Reference an example configuration template here: Azure Workload Cluster Template.
Create your workload cluster.
tanzu cluster create <WORKLOAD-CLUSTER-NAME> --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
Validate the cluster starts successfully.
tanzu cluster list
Capture the workload cluster’s kubeconfig.
tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
Set your kubectl context to the workload cluster.
kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
Verify you can see pods in the cluster.
kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-9d4db                                       2/2     Running   0          3m42s
kube-system   antrea-agent-vkgt4                                       2/2     Running   1          5m48s
kube-system   antrea-controller-5d594c5cc7-vn5gt                       1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-hs6vr                                  1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-xf6cl                                  1/1     Running   0          5m49s
kube-system   etcd-tce-guest-control-plane-b2wsf                       1/1     Running   0          5m56s
kube-system   kube-apiserver-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf    1/1     Running   0          5m56s
kube-system   kube-proxy-9825q                                         1/1     Running   0          5m48s
kube-system   kube-proxy-wfktm                                         1/1     Running   0          3m42s
kube-system   kube-scheduler-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
⚠️ If bootstrapping docker-based clusters on Windows, see our Windows guide ¶
Create Local Docker Clusters ¶
This section describes setting up a management and workload cluster on your local workstation using Docker.
⚠️: Tanzu Community Edition support for Docker is experimental and may require troubleshooting on your system.
Note: Bootstrapping a cluster to Docker from a Windows bootstrap machine is currently experimental.
Prerequisites ¶
The following additional configuration is needed for the Docker engine on your local client machine (with no other containers running):
- 6 GB of RAM
- 15 GB of local machine disk storage for images
- 4 CPUs
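On recent Docker versions you can check what the engine actually has available, and which cgroup version it is using, with a formatted docker info query; the format string below is just one way to pull these fields:
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes  Cgroup: {{.CgroupVersion}}'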
⚠️ Warning on DockerHub Rate Limiting ¶
When using the Docker (CAPD) provider, the load balancer image (HA Proxy) is pulled from DockerHub. DockerHub limits pulls per user and this can especially impact users who share a common IP, in the case of NAT or VPN. If DockerHub rate-limiting is an issue in your environment, you can pre-pull the load balancer image to your machine by running the following command.
docker pull kindest/haproxy:v20210715-a6da3463
This behavior will eventually be addressed in https://github.com/vmware-tanzu/community-edition/issues/897.
Before You Begin ¶
To optimize your Docker system and ensure a successful deployment, you may wish to complete the next two optional steps.
(Optional): Stop all existing containers.
docker kill $(docker ps -q)
(Optional): Run the following command to prune all existing containers, volumes, and images.
Warning: Read the prompt carefully before running the command, as it erases the majority of what is cached in your Docker environment. While this ensures your environment is clean before starting, it also significantly increases bootstrapping time if you already had the Docker images downloaded.
docker system prune -a --volumes
Deploy a Management Cluster ¶
Initialize the Tanzu Community Edition installer interface.
tanzu management-cluster create --ui
Note: If you are bootstrapping from a Windows machine and encounter an unable to ensure prerequisites error, see the following troubleshooting topic.
Complete the configuration steps in the installer interface for Docker and create the management cluster. The following configuration settings are recommended:
- The Kubernetes Network Settings are auto-filled with a default CNI Provider and Cluster Service CIDR.
- Docker Proxy settings are experimental and are to be used at your own risk.
- We will have more complete tanzu cluster bootstrapping documentation available here in the near future.
- If you ran the prune command in the previous step, expect this to take some time, as it’ll download an image that is over 1 GB.
(Alternative method) It is also possible to use the command line to create a Docker based management cluster:
tanzu management-cluster create -i docker --name <MGMT-CLUSTER-NAME> -v 10 --plan dev --ceip-participation=false
<MGMT-CLUSTER-NAME> must end with a letter, not a numeric character, and must be compliant with DNS hostname requirements described here: RFC 1123.
Validate the management cluster started:
tanzu management-cluster get
The output should look similar to the following:
NAME                                                                                READY  SEVERITY  REASON  SINCE  MESSAGE
/tkg-mgmt-docker-20210601125056                                                     True                     28s
├─ClusterInfrastructure - DockerCluster/tkg-mgmt-docker-20210601125056              True                     32s
├─ControlPlane - KubeadmControlPlane/tkg-mgmt-docker-20210601125056-control-plane   True                     28s
│ └─Machine/tkg-mgmt-docker-20210601125056-control-plane-5pkcp                      True                     24s
│   └─MachineInfrastructure - DockerMachine/tkg-mgmt-docker-20210601125056-control-plane-9wlf2
└─MachineDeployment/tkg-mgmt-docker-20210601125056-md-0
  └─Machine/tkg-mgmt-docker-20210601125056-md-0-5d895cbfd9-khj4s                    True                     24s
    └─MachineInfrastructure - DockerMachine/tkg-mgmt-docker-20210601125056-md-0-d544k

Providers:

  NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
  capd-system                        infrastructure-docker  InfrastructureProvider  docker        v0.3.10
  capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.14
  capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.14
  capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.14
Capture the management cluster’s kubeconfig and take note of the command for accessing the cluster in the message, as you will use this for setting the context in the next step.
tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
- Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.
- For example, if your management cluster is called ‘mtce’, you will see a message similar to:
Credentials of cluster 'mtce' have been saved. You can now access the cluster by running 'kubectl config use-context mtce-admin@mtce'
Set your kubectl context to the management cluster.
kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
Validate you can access the management cluster’s API server.
kubectl get nodes
You will see output similar to:
NAME                         STATUS   ROLES                  AGE   VERSION
guest-control-plane-tcjk2    Ready    control-plane,master   59m   v1.20.4+vmware.1
guest-md-0-f68799ffd-lpqsh   Ready    <none>                 59m   v1.20.4+vmware.1
Deploy a Workload Cluster ¶
Create your workload cluster.
tanzu cluster create <WORKLOAD-CLUSTER-NAME> --plan dev
Validate the cluster starts successfully.
tanzu cluster list
Capture the workload cluster’s kubeconfig.
tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
Set your kubectl context accordingly.
kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
Verify you can see pods in the cluster.
kubectl get pods --all-namespaces
The output will look similar to the following:
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-9d4db                                       2/2     Running   0          3m42s
kube-system   antrea-agent-vkgt4                                       2/2     Running   1          5m48s
kube-system   antrea-controller-5d594c5cc7-vn5gt                       1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-hs6vr                                  1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-xf6cl                                  1/1     Running   0          5m49s
kube-system   etcd-tce-guest-control-plane-b2wsf                       1/1     Running   0          5m56s
kube-system   kube-apiserver-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf    1/1     Running   0          5m56s
kube-system   kube-proxy-9825q                                         1/1     Running   0          5m48s
kube-system   kube-proxy-wfktm                                         1/1     Running   0          3m42s
kube-system   kube-scheduler-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
You now have local clusters running on Docker. The nodes can be seen by running the following command:
docker ps
The output will be similar to the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33e4e422e102 projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour guest-md-0-f68799ffd-lpqsh
4ae2829ab6e1 projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour 41637/tcp, 127.0.0.1:41637->6443/tcp guest-control-plane-tcjk2
c0947823840b kindest/haproxy:2.1.1-alpine "/docker-entrypoint.…" About an hour ago Up About an hour 42385/tcp, 0.0.0.0:42385->6443/tcp guest-lb
a2f156fe933d projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour mgmt-md-0-b8689788f-tlv68
128bf25b9ae9 projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour 40753/tcp, 127.0.0.1:40753->6443/tcp mgmt-control-plane-9rdcq
e59ca95c14d7 kindest/haproxy:2.1.1-alpine "/docker-entrypoint.…" About an hour ago Up About an hour 35621/tcp, 0.0.0.0:35621->6443/tcp mgmt-lb
The above reflects 1 management cluster and 1 workload cluster, both featuring 1 control plane node and 1 worker node.
Each cluster gets an haproxy container fronting the control plane node(s). This enables scaling the control plane into an HA configuration.
🛠️: For troubleshooting failed bootstraps, you can exec into a container and use the kubeconfig at /etc/kubernetes/admin.conf to access the API server directly. For example:
$ docker exec -it 4ae /bin/bash
root@guest-control-plane-tcjk2:/# kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
NAME STATUS ROLES AGE VERSION
guest-control-plane-tcjk2 Ready control-plane,master 67m v1.20.4+vmware.1
guest-md-0-f68799ffd-lpqsh Ready <none> 67m v1.20.4+vmware.1
In the above, 4ae is a control plane node.
⚠️: If the Docker host machine is rebooted, the cluster will need to be re-created. Support for clusters surviving a host reboot is tracked in issue #832.
Create vSphere Clusters ¶
This section describes setting up a management and workload cluster on vSphere.
Import a Base Image Template into vSphere ¶
Before you can deploy a cluster to vSphere, you must import a base image template into vSphere that contains the OS and Kubernetes versions that the cluster nodes run on. These are available in VMware Customer Connect. For each supported pair of OS and Kubernetes versions, VMware publishes a base image template in OVA format, for deploying clusters to vSphere. After you import the OVA into vSphere, you must convert the resulting VM into a VM template.
Open the Tanzu Community Edition product page on VMware Customer Connect.
If you do not have a Customer Connect account, register here.
Ensure you have selected the version corresponding to your installation.
Locate and download the machine image (OVA) for your desired operating system and Kubernetes version.
Log in to your vCenter instance.
In vCenter, right-click on your datacenter and choose Deploy OVF Template.
Follow the prompts, browsing to the local .ova file downloaded in a previous step.
Allow the template deployment to complete.
Right-click on the newly imported OVF template and choose Template > Convert to Template.
Verify the template is added by selecting the VMs and Templates icon and locating it within your datacenter.
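If you prefer to script the import instead of using the vCenter UI, the open-source govc CLI can do the same thing; the connection details, datastore, and file names below are placeholders for your environment:
export GOVC_URL='https://<VCENTER-ADDRESS>' GOVC_USERNAME='<USERNAME>' GOVC_PASSWORD='<PASSWORD>'
# GOVC_INSECURE=1 may also be needed if vCenter uses a self-signed certificate.
govc import.ova -ds=<DATASTORE> ./<DOWNLOADED-IMAGE>.ova
govc vm.markastemplate <IMPORTED-VM-NAME>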
Deploy a Management Cluster ¶
Initialize the Tanzu Community Edition installer interface.
tanzu management-cluster create --ui
Note: If you are bootstrapping from a Windows machine and encounter an unable to ensure prerequisites error, see the following troubleshooting topic.
Choose VMware vSphere from the provider tiles.
Fill out the IaaS Provider section.
- A: The IP or DNS name pointing at your vCenter instance. This is the same instance you uploaded the OVA to in previous steps.
- B: The username, with elevated privileges, that can be used to create infrastructure in vSphere.
- C: The password corresponding to that username.
- D: With the above filled out, connect to the instance to continue. You may be prompted to verify the SSL fingerprint for vCenter.
- E: The datacenter you’ll deploy Tanzu Community Edition into. This should be the same datacenter you uploaded the OVA to.
- F: The public key you’d like to use for your VM instances. This is how you’ll SSH into control plane and worker nodes.
Fill out the Management Cluster Settings.
- A: Choose between the Development profile, with one control plane node, or the Production profile, which features a highly-available three-node control plane. Additionally, choose the instance type you’d like to use for control plane nodes.
- B: Name the cluster. This is a friendly name that will be used to reference your cluster in the Tanzu CLI and kubectl.
- C: Choose whether to enable Cluster API’s machine health checks.
- D: Choose how to expose your control plane endpoint. If you have NSX available and would like to use it, choose NSX Advanced Load Balancer; otherwise choose Kube-vip, which will expose a virtual IP in your network.
- E: Set the IP address your Kubernetes API server should be accessible from. This should be an IP that is routable in your network but excluded from your DHCP range.
- F: Set the instance type you’d like to use for workload nodes.
- G: Choose whether you’d like to enable Kubernetes API server auditing.
If you choose NSX as your Control Plane Endpoint Provider in the above step, fill out the VMware NSX Advanced Load Balancer section.
If you would like additional metadata to be tagged in your soon-to-be-created vSphere infrastructure, fill out the Metadata section.
Fill out the Resources section.
Fill out the Kubernetes Network section.
- A: Select the vSphere network where host/virtual machine networking will be set up.
- B: Set the CIDR for Kubernetes Services (Cluster IPs). These are internal IPs that, by default, are only exposed and routable within Kubernetes.
- C: Set the CIDR range for Kubernetes Pods. These are internal IPs that, by default, are only exposed and routable within Kubernetes.
- D: Set a network proxy that internal traffic should egress through to access external network(s).
Fill out the Identity Management section.
- A: Select whether you want to enable identity management. If this is off, certificates (via kubeconfig) are used to authenticate users. For most development scenarios, it is preferred to keep this off.
- B: If identity management is on, choose whether to authenticate using OIDC or LDAPS.
- C: Fill out connection details for identity management.
Fill out the OS Image section.
- A: The OVA image to use for Kubernetes host VMs. This list should populate based on the OVA you uploaded in previous steps. If it’s missing, you may have uploaded an incompatible OVA.
Skip the TMC Registration section.
Click the Review Configuration button.
For your record, the configuration settings have been saved to ${HOME}/.config/tanzu/tkg/clusterconfigs.
Deploy the cluster.
If you experience issues deploying your cluster, visit the Troubleshooting documentation.
Validate the management cluster started successfully.
tanzu management-cluster get
Capture the management cluster’s kubeconfig.
tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get above.
For example, if your management cluster is called ‘mtce’, you will see a message similar to:
Credentials of cluster 'mtce' have been saved. You can now access the cluster by running 'kubectl config use-context mtce-admin@mtce'
Set your kubectl context to the management cluster.
kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
Where <MGMT-CLUSTER-NAME> should be set to the name returned by tanzu management-cluster get.
Validate you can access the management cluster’s API server.
kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
10-0-1-133   Ready    <none>                 123m   v1.20.1+vmware.2
10-0-1-76    Ready    control-plane,master   125m   v1.20.1+vmware.2
Deploy a Workload Cluster ¶
Next, you will create a workload cluster. First, create a workload cluster configuration file by taking a copy of the management cluster YAML configuration file that was created when you deployed your management cluster. This example names the workload cluster configuration file workload1.yaml.
cp ~/.config/tanzu/tkg/clusterconfigs/<MGMT-CONFIG-FILE> ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
Where <MGMT-CONFIG-FILE> is the name of the management cluster YAML configuration file. The management cluster YAML configuration file will either have the name you assigned to the management cluster or, if no name was assigned, a randomly generated name.
The duplicated file (workload1.yaml) will be used as the configuration file for your workload cluster. You can edit the parameters in this new file as required. For an example of a workload cluster template, see vSphere Workload Cluster Template.
In the next two steps you will edit the parameters in this new file (workload1.yaml) and then use the file to deploy a workload cluster.
In the new workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the CLUSTER_NAME parameter to assign a name to your workload cluster. For example,
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: my-workload-cluster
CLUSTER_PLAN: dev
- If you did not specify a name for your management cluster, the installer generated a random unique name. In this case, you must manually add the CLUSTER_NAME parameter and assign a workload cluster name. Workload cluster names must be 42 characters or less and must comply with DNS hostname requirements as described here: RFC 1123
- If you specified a name for your management cluster, the CLUSTER_NAME parameter is present and needs to be changed to the new workload cluster name.
In the workload cluster file (~/.config/tanzu/tkg/clusterconfigs/workload1.yaml), edit the VSPHERE_CONTROL_PLANE_ENDPOINT parameter to apply a viable IP.
This will be the API Server IP for your workload cluster. You must choose an IP that is routable and not used elsewhere in your network, e.g., out of your DHCP range.
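The relevant line in workload1.yaml would then look something like the following; the address shown is purely illustrative, so substitute an unused, routable IP from your own network:
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.10.10.100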
The other parameters in workload1.yaml are likely fine as-is. Validation is performed on the file prior to applying it, so the tanzu command will return a message if something necessary is omitted. However, you can change parameters as required. Reference an example configuration template here: vSphere Workload Cluster Template.
Create your workload cluster.
tanzu cluster create <WORKLOAD-CLUSTER-NAME> --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
Validate the cluster starts successfully.
tanzu cluster list
Capture the workload cluster’s kubeconfig.
tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
Set your kubectl context accordingly.
kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
Verify you can see pods in the cluster.
kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-9d4db                                       2/2     Running   0          3m42s
kube-system   antrea-agent-vkgt4                                       2/2     Running   1          5m48s
kube-system   antrea-controller-5d594c5cc7-vn5gt                       1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-hs6vr                                  1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-xf6cl                                  1/1     Running   0          5m49s
kube-system   etcd-tce-guest-control-plane-b2wsf                       1/1     Running   0          5m56s
kube-system   kube-apiserver-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf    1/1     Running   0          5m56s
kube-system   kube-proxy-9825q                                         1/1     Running   0          5m48s
kube-system   kube-proxy-wfktm                                         1/1     Running   0          3m42s
kube-system   kube-scheduler-tce-guest-control-plane-b2wsf             1/1     Running   0          5m56s
kube-system   kube-vip-tce-guest-control-plane-b2wsf                   1/1     Running   0          5m56s
kube-system   vsphere-cloud-controller-manager-nwrg4                   1/1     Running   2          5m48s
kube-system   vsphere-csi-controller-5b6f54ccc5-trgm4                  5/5     Running   0          5m49s
kube-system   vsphere-csi-node-drnkz                                   3/3     Running   0          5m48s
kube-system   vsphere-csi-node-flszf                                   3/3     Running   0          3m42s
Deploy a Package ¶
Installing a Package ¶
This section walks you through installing the cert-manager package in your cluster as an example of package installation.
For detailed instruction on package management, see Work with Packages.
Prerequisites ¶
Before you install packages, you should have the following cluster configurations running:
- A management cluster and a workload cluster.
For more information, see Planning Your Installation.
Procedure ¶
Make sure your kubectl context is set to the workload cluster.
kubectl config use-context <CLUSTER-NAME>-admin@<CLUSTER-NAME>
Where <CLUSTER-NAME> is the name of the workload cluster where you want to install a package.
Install the Tanzu Community Edition package repository into the tanzu-package-repo-global namespace.
tanzu package repository add tce-repo --url projects.registry.vmware.com/tce/main:0.10.0 --namespace tanzu-package-repo-global
- Package repositories are installed into the default namespace by default.
- Packages are installed in the same namespace where the PackageRepository is installed. If you install a package repository into another non-default namespace, you must specify that same namespace as an argument in the tanzu package install command when you install a package from that repository; an example follows this list.
- Package repositories installed into the tanzu-package-repo-global namespace are available to the entire cluster. In this case, packages can be installed in a different namespace from the PackageRepository; they don’t need to be installed into the tanzu-package-repo-global namespace.
- A tanzu-core repository is also installed in the tkg-system namespace of clusters. This repository holds lower-level components that are not meant to be installed by the user. These packages are used during cluster bootstrapping.
Verify the package repository has reconciled.
tanzu package repository list --namespace tanzu-package-repo-global
The output will look similar to the following:
/ Retrieving repositories...
  NAME      REPOSITORY                                      STATUS               DETAILS
  tce-repo  projects.registry.vmware.com/tce/main:0.10.0    Reconcile succeeded
It may take some time to see Reconcile succeeded. Until then, packages won’t show up in the available list described in the next step.
List the available packages.
tanzu package available list
The output will look similar to the following:
- Retrieving available packages...
  NAME                                            DISPLAY-NAME         SHORT-DESCRIPTION
  cert-manager.community.tanzu.vmware.com         cert-manager         Certificate management
  contour.community.tanzu.vmware.com              Contour              An ingress controller
  external-dns.community.tanzu.vmware.com         external-dns         This package provides DNS...
  fluent-bit.community.tanzu.vmware.com           fluent-bit           Fluent Bit is a fast Log Processor and...
  gatekeeper.community.tanzu.vmware.com           gatekeeper           policy management
  grafana.community.tanzu.vmware.com              grafana              Visualization and analytics software
  harbor.community.tanzu.vmware.com               Harbor               OCI Registry
  knative-serving.community.tanzu.vmware.com      knative-serving      Knative Serving builds on Kubernetes to...
  local-path-storage.community.tanzu.vmware.com   local-path-storage   This package provides local path node...
  multus-cni.community.tanzu.vmware.com           multus-cni           This package provides the ability for...
  prometheus.community.tanzu.vmware.com           prometheus           A time series database for your metrics
  velero.community.tanzu.vmware.com               velero               Disaster recovery capabilities
List the available versions for the cert-manager package.
tanzu package available list cert-manager.community.tanzu.vmware.com
The output will look similar to the following:
/ Retrieving package versions for cert-manager.community.tanzu.vmware.com...
  NAME                                      VERSION   RELEASED-AT
  cert-manager.community.tanzu.vmware.com   1.3.3     2021-08-06T12:31:21Z
  cert-manager.community.tanzu.vmware.com   1.4.4     2021-08-23T16:47:51Z
  cert-manager.community.tanzu.vmware.com   1.5.3     2021-08-23T17:22:51Z
NOTE: The available versions of a package may have changed since this guide was written.
Install the package to the cluster.
tanzu package install cert-manager --package-name cert-manager.community.tanzu.vmware.com --version 1.5.3
The output will look similar to the following:
| Installing package 'cert-manager.community.tanzu.vmware.com'
/ Getting package metadata for cert-manager.community.tanzu.vmware.com
- Creating service account 'cert-manager-default-sa'
\ Creating cluster admin role 'cert-manager-default-cluster-role'
  Creating package resource
/ Package install status: Reconciling
Added installed package 'cert-manager' in namespace 'default'
Note: Use one of the available package versions, since the one described in this guide might no longer be available.
Note: While the underlying resources associated with cert-manager are installed in the cert-manager namespace, the actual cert-manager package is installed to the default namespace, as per the installation output message. For an explanation of this behavior, see the namespace notes earlier in this section and the Package Repositories topic.
Verify cert-manager is installed in the cluster.
tanzu package installed list
The output will look similar to the following:
| Retrieving installed packages...
  NAME           PACKAGE-NAME                               PACKAGE-VERSION   STATUS
  cert-manager   cert-manager.community.tanzu.vmware.com    1.5.3             Reconcile succeeded
To remove a package from the cluster, run the following command:
tanzu package installed delete cert-manager
The output will look similar to the following:
| Uninstalling package 'cert-manager' from namespace 'default'
| Getting package install for 'cert-manager'
\ Deleting package install 'cert-manager' from namespace 'default'
\ Package uninstall status: ReconcileSucceeded
\ Package uninstall status: Reconciling
\ Package uninstall status: Deleting
| Deleting admin role 'cert-manager-default-cluster-role'
/ Deleting service account 'cert-manager-default-sa'
Uninstalled package 'cert-manager' from namespace 'default'
For more information about package management, see Work with Packages. For details on installing a specific package, see the package’s documentation in the left navigation bar (Packages > ${PACKAGE_NAME}).
Install a Local Dashboard (octant) ¶
This section describes how to use octant to visually navigate cluster(s). Using Octant is not required for Tanzu Community Edition.
Install octant using one of their documented methods.
Ensure your context is pointed at the correct cluster you wish to monitor.
kubectl config use-context ${CLUSTER_NAME}-admin@${CLUSTER_NAME}
${CLUSTER_NAME} should be replaced with the name of the cluster you wish to visually inspect.
Run octant.
octant
In most environments, octant should be able to start without arguments or flags. For details on how to configure Octant, run octant --help.
Navigate the Octant UI.
Delete Clusters ¶
After going through this guide, use the following steps to clean up resources.
Delete any deployed workload clusters.
tanzu cluster delete <WORKLOAD-CLUSTER-NAME>
Once all workload clusters have been deleted, the management cluster can then be removed as well. Run the following commands to get the name of the management cluster and delete it.
tanzu management-cluster get
tanzu management-cluster delete <MGMT-CLUSTER-NAME>
Note for AWS: If the cluster you are deleting is deployed on AWS, you must precede the delete command with the region. For example,
AWS_REGION=us-west-2 tanzu management-cluster delete my-mgmt-cluster
For more information on deleting clusters, see Delete Management Clusters, and Delete Workload Clusters.
Next Steps ¶
Now that you have deployed a management cluster and a workload cluster, you can take the next steps to deploy and manage application workloads. Here are some next steps you can take:
Learn more about how to work with packages or furthermore learn how to create your own packages. For more information see:
Learn how to enable access to your applications in an easy and secure way. For more information, see:
Learn how you can package up multiple packages and configurations into an opinionated package so that you then only need one single package installation and configuration. For more information, see:
Learn how to deploy a monitoring stack using the packages that are available with Tanzu Community Edition. For more information, see: