Docker-based Clusters on Windows
Running Docker-based clusters on Windows requires several additional steps. At this time, we don’t recommend deploying Tanzu Community Edition clusters onto Docker for Windows unless you’re willing to tinker with lower-level details of Windows Subsystem for Linux. If you wish to continue, the following steps will take you through deploying Docker-based clusters on Windows.
Compile the WSL Kernel
⚠️: These steps build a custom kernel that will be used for all your WSL-based VMs.
The CNI used by Tanzu Community Edition (Antrea) requires kernel configuration that is not enabled in the default WSL kernel. In future versions of Antrea, this kernel configuration will not be required (tracked in antrea#2635). This section covers compiling a kernel that will work with Antrea.
Thanks to the kind project for hosting these instructions, which we were able to build atop.
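Before compiling, you can check whether your current WSL kernel already ships the options Antrea needs; if both lines print `=y`, you may be able to skip this section. A quick pre-check, assuming your WSL distribution provides `zgrep`:

```
wsl zgrep -E 'CONFIG_NETFILTER_XT_(TARGET_CT|MATCH_RECENT)' /proc/config.gz
```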
Run and enter an Ubuntu container to build the kernel
docker run --name wsl-kernel-builder --rm -it ubuntu@sha256:9d6a8699fb5c9c39cf08a0871bd6219f0400981c570894cd8cbea30d3424a31f bash
From inside the container, run the following
```
WSL_COMMIT_REF=linux-msft-5.4.72 # change this line to the version you want to build
apt update
apt install -y git build-essential flex bison libssl-dev libelf-dev bc
mkdir src
cd src
git init
git remote add origin https://github.com/microsoft/WSL2-Linux-Kernel.git
git config --local gc.auto 0
git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules --depth=1 origin +${WSL_COMMIT_REF}:refs/remotes/origin/build/linux-msft-wsl-5.4.y
git checkout --progress --force -B build/linux-msft-wsl-5.4.y refs/remotes/origin/build/linux-msft-wsl-5.4.y

# adds support for clientIP-based session affinity
sed -i 's/# CONFIG_NETFILTER_XT_MATCH_RECENT is not set/CONFIG_NETFILTER_XT_MATCH_RECENT=y/' Microsoft/config-wsl

# required module for antrea
sed -i 's/# CONFIG_NETFILTER_XT_TARGET_CT is not set/CONFIG_NETFILTER_XT_TARGET_CT=y/' Microsoft/config-wsl

# build the kernel
make -j2 KCONFIG_CONFIG=Microsoft/config-wsl
```
Once the above completes, run the following in a new PowerShell session:
docker cp wsl-kernel-builder:/src/arch/x86/boot/bzImage .
Set the `.wslconfig` file to point at the new kernel (`bzImage`):

```
[wsl2]
kernel=C:\\Users\\<your_user>\\bzImage
```

As seen above, you should escape each `\` by writing `\\`. The path may differ for you depending on where you compiled/saved the kernel.

Shut down WSL:
wsl --shutdown
Restart the WSL VMs. This can be done via Docker Desktop or using `wsl`. You may need to restart Docker Desktop even after restarting WSL.

Verify the kernel version run by WSL is consistent with what you compiled above:
```
wsl uname -a
Linux DESKTOP-4T1VL4L 5.4.72-microsoft-standard-WSL2+ #1 SMP Sat Sep 11 16:50:20 UTC 2021 x86_64 Linux
```
In a WSL VM with appropriate tools (e.g. Ubuntu), verify the kernel configuration required by Antrea is present:
```
wsl zgrep CONFIG_NETFILTER_XT_TARGET_CT /proc/config.gz
CONFIG_NETFILTER_XT_TARGET_CT=y
```
Create Local Docker Clusters
This section describes setting up a management cluster on your local workstation using Docker.
⚠️: Tanzu Community Edition support for Docker is experimental and may require troubleshooting on your system.
⚠️ Warning on DockerHub Rate Limiting
When using the Docker (CAPD) provider, the load balancer image (HAProxy) is pulled from DockerHub. DockerHub limits pulls per user, which can especially impact users who share a common IP, as with NAT or a VPN. If DockerHub rate limiting is an issue in your environment, you can pre-pull the load balancer image to your machine by running the following command:
docker pull kindest/haproxy:v20210715-a6da3463
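Alternatively, authenticated pulls get a higher DockerHub rate limit than anonymous ones; if you have a DockerHub account, you can log in before bootstrapping:

```
docker login
```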
This behavior will eventually be addressed in https://github.com/vmware-tanzu/community-edition/issues/897.
Local Docker Bootstrapping
Ensure your Docker engine has adequate resources. The minimum requirements with no other containers running are: 6 GB of RAM and 4 CPUs.
- Linux: Run `docker system info`
- Mac: Select Preferences > Resources > Advanced
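On Windows, Docker Desktop’s WSL 2 backend draws its CPU and memory from the WSL VM, so the limits live in the same `.wslconfig` used earlier. A sketch (the `memory` and `processors` values are illustrative; size them for your machine):

```
[wsl2]
kernel=C:\\Users\\<your_user>\\bzImage
memory=8GB
processors=4
```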
Create the management cluster.
CLUSTER_PLAN=dev tanzu management-cluster create -i docker <MANAGEMENT-CLUSTER-NAME>
For increased logging, you can append `-v 10`.

⚠️ Capture the name of your cluster; it will be referenced as ${CLUSTER_NAME} going forward.
⚠️ The deployment will fail because the CLI client cannot reach the API server running in the WSL VM. This is expected.
Let the deployment report failure.
Retrieve the address of the WSL VM.
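One way to retrieve it (a sketch, assuming the default WSL 2 networking setup) is to ask the VM for its address directly:

```
wsl hostname -I
```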
⚠️ Capture the VM IP of your cluster, it will be referenced as ${WSL_VM_IP} going forward.
Query the Docker daemon to get the forwarded port for HAProxy. In the following example, the port is `44393`:

```
docker ps | grep -i ha
44c0a71735ef   kindest/haproxy:v20210715-a6da3463   "haproxy -sf 7 -W -d…"   2 days ago   Up 2 days   35093/tcp, 0.0.0.0:44393->6443/tcp   muuhmuh-lb
```
⚠️ Capture the port mentioned above, it will be referenced as ${HA_PROXY_PORT} going forward.
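If you would rather query the port directly than scan `docker ps` output, `docker port` can report the forwarded port for the load balancer container; the `${CLUSTER_NAME}-lb` naming here is an assumption based on the `muuhmuh-lb` example above:

```
docker port ${CLUSTER_NAME}-lb 6443
```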
Edit your `~/.kube/config` file:

- Locate the YAML entry for your ${CLUSTER_NAME}.
- In that YAML entry, replace `certificate-authority-data: < BASE64 DATA >` with `insecure-skip-tls-verify: true`.
- In the YAML entry, replace `server: < api server value >` with `https://${WSL_VM_IP}:${HA_PROXY_PORT}`.

Assuming the ${CLUSTER_NAME} was test, the entry would now look as follows:

```
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.0.1.1:44393
  name: test
```
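If you prefer to script these edits rather than hand-editing the file, a minimal sketch using `kubectl config` (assuming the cluster entry in your kubeconfig is named exactly ${CLUSTER_NAME}):

```
# drop the embedded CA data so TLS verification can be skipped
kubectl config unset clusters.${CLUSTER_NAME}.certificate-authority-data
# point the entry at the WSL VM and the forwarded HAProxy port
kubectl config set-cluster ${CLUSTER_NAME} --server=https://${WSL_VM_IP}:${HA_PROXY_PORT} --insecure-skip-tls-verify=true
```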
Save the file and exit. The `kubectl` and `tanzu` CLIs should now be able to interact with your cluster.

Validate the management cluster started:
tanzu management-cluster get
The output should look similar to the following:
```
  NAME                                                                                READY  SEVERITY  REASON  SINCE  MESSAGE
  /tkg-mgmt-docker-20210601125056                                                     True                     28s
  ├─ClusterInfrastructure - DockerCluster/tkg-mgmt-docker-20210601125056              True                     32s
  ├─ControlPlane - KubeadmControlPlane/tkg-mgmt-docker-20210601125056-control-plane   True                     28s
  │ └─Machine/tkg-mgmt-docker-20210601125056-control-plane-5pkcp                      True                     24s
  │   └─MachineInfrastructure - DockerMachine/tkg-mgmt-docker-20210601125056-control-plane-9wlf2
  └─MachineDeployment/tkg-mgmt-docker-20210601125056-md-0
    └─Machine/tkg-mgmt-docker-20210601125056-md-0-5d895cbfd9-khj4s                    True                     24s
      └─MachineInfrastructure - DockerMachine/tkg-mgmt-docker-20210601125056-md-0-d544k

Providers:

  NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
  capd-system                        infrastructure-docker  InfrastructureProvider  docker        v0.3.10
  capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.14
  capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.14
  capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.14
```
Capture the management cluster’s kubeconfig and take note of the command for accessing the cluster in the message, as you will use this for setting the context in the next step.
tanzu management-cluster kubeconfig get <MGMT-CLUSTER-NAME> --admin
- Where `<MGMT-CLUSTER-NAME>` should be set to the name returned by `tanzu management-cluster get`.
- For example, if your management cluster is called ‘mtce’, you will see a message similar to:

```
Credentials of cluster 'mtce' have been saved.
You can now access the cluster by running 'kubectl config use-context mtce-admin@mtce'
```
Set your kubectl context to the management cluster.
kubectl config use-context <MGMT-CLUSTER-NAME>-admin@<MGMT-CLUSTER-NAME>
Validate you can access the management cluster’s API server.
kubectl get nodes
You will see output similar to:
```
NAME                         STATUS   ROLES                  AGE   VERSION
guest-control-plane-tcjk2    Ready    control-plane,master   59m   v1.20.4+vmware.1
guest-md-0-f68799ffd-lpqsh   Ready    <none>                 59m   v1.20.4+vmware.1
```
Create your workload cluster.
tanzu cluster create <WORKLOAD-CLUSTER-NAME> --plan dev
Validate the cluster starts successfully.
tanzu cluster list
Capture the workload cluster’s kubeconfig.
tanzu cluster kubeconfig get <WORKLOAD-CLUSTER-NAME> --admin
Set your `kubectl` context accordingly:

kubectl config use-context <WORKLOAD-CLUSTER-NAME>-admin@<WORKLOAD-CLUSTER-NAME>
Verify you can see pods in the cluster.
kubectl get pods --all-namespaces
The output will look similar to the following:
```
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-9d4db                                      2/2     Running   0          3m42s
kube-system   antrea-agent-vkgt4                                      2/2     Running   1          5m48s
kube-system   antrea-controller-5d594c5cc7-vn5gt                      1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-hs6vr                                 1/1     Running   0          5m49s
kube-system   coredns-5d6f7c958-xf6cl                                 1/1     Running   0          5m49s
kube-system   etcd-tce-guest-control-plane-b2wsf                      1/1     Running   0          5m56s
kube-system   kube-apiserver-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s
kube-system   kube-controller-manager-tce-guest-control-plane-b2wsf   1/1     Running   0          5m56s
kube-system   kube-proxy-9825q                                        1/1     Running   0          5m48s
kube-system   kube-proxy-wfktm                                        1/1     Running   0          3m42s
kube-system   kube-scheduler-tce-guest-control-plane-b2wsf            1/1     Running   0          5m56s
```
You now have local clusters running on Docker. The nodes can be seen by running the following command:
docker ps
The output will be similar to the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33e4e422e102 projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour guest-md-0-f68799ffd-lpqsh
4ae2829ab6e1 projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour 41637/tcp, 127.0.0.1:41637->6443/tcp guest-control-plane-tcjk2
c0947823840b kindest/haproxy:2.1.1-alpine "/docker-entrypoint.…" About an hour ago Up About an hour 42385/tcp, 0.0.0.0:42385->6443/tcp guest-lb
a2f156fe933d projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour mgmt-md-0-b8689788f-tlv68
128bf25b9ae9 projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 "/usr/local/bin/entr…" About an hour ago Up About an hour 40753/tcp, 127.0.0.1:40753->6443/tcp mgmt-control-plane-9rdcq
e59ca95c14d7 kindest/haproxy:2.1.1-alpine "/docker-entrypoint.…" About an hour ago Up About an hour 35621/tcp, 0.0.0.0:35621->6443/tcp mgmt-lb
The above reflects 1 management cluster and 1 workload cluster, both featuring 1 control plane node and 1 worker node.
Each cluster gets an `haproxy` container fronting the control plane node(s). This enables scaling the control plane into an HA configuration.
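Because the load balancer is already in place, the control plane can later be scaled out. A sketch using the `tanzu` CLI’s scale command (the node count is illustrative):

```
tanzu cluster scale <WORKLOAD-CLUSTER-NAME> --controlplane-machine-count 3
```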
🛠️: For troubleshooting failed bootstraps, you can exec into a container and use the kubeconfig at `/etc/kubernetes/admin.conf` to access the API server directly. For example:
docker exec -it 4ae /bin/bash
root@guest-control-plane-tcjk2:/# kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
NAME STATUS ROLES AGE VERSION
guest-control-plane-tcjk2 Ready control-plane,master 67m v1.20.4+vmware.1
guest-md-0-f68799ffd-lpqsh Ready <none> 67m v1.20.4+vmware.1
In the above, `4ae` is a control plane node.
This section describes installing the cert-manager package in your cluster as an example of package installation. For detailed instructions on package management, see Work with Packages.
Procedure
Make sure your `kubectl` context is set to the workload cluster:

kubectl config use-context <CLUSTER-NAME>-admin@<CLUSTER-NAME>

Where `<CLUSTER-NAME>` is the name of the workload cluster where you want to install a package.

Install the Tanzu Community Edition package repository into the `tanzu-package-repo-global` namespace:

tanzu package repository add tce-repo --url projects.registry.vmware.com/tce/main:0.11.0 --namespace tanzu-package-repo-global
- Package repositories are installed into the `default` namespace by default.
- Packages are installed in the same namespace where the PackageRepository is installed. If you install a package repository into another, non-default namespace, you must specify that same namespace as an argument in the `tanzu package install` command when you install a package from that repository (see the sketch after this list).
- Package repositories installed into the `tanzu-package-repo-global` namespace are available to the entire cluster. In this case, packages can be installed in a different namespace from the PackageRepository; they don’t need to be installed into the `tanzu-package-repo-global` namespace.
- A `tanzu-core` repository is also installed in the `tkg-system` namespace of clusters. This repository holds lower-level components that are not meant to be installed by the user. These packages are used during cluster bootstrapping.
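As referenced in the list above, a sketch of the non-default-namespace flow (the `my-repo` name and `my-packages` namespace are illustrative):

```
# add a repository into a custom namespace (the namespace must already exist)
tanzu package repository add my-repo --url projects.registry.vmware.com/tce/main:0.11.0 --namespace my-packages
# packages from that repository must then be installed with the same --namespace flag
tanzu package install cert-manager --package-name cert-manager.community.tanzu.vmware.com --version 1.5.3 --namespace my-packages
```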
Verify the package repository has reconciled.
tanzu package repository list --namespace tanzu-package-repo-global
The output will look similar to the following:
```
/ Retrieving repositories...
  NAME      REPOSITORY                                     STATUS               DETAILS
  tce-repo  projects.registry.vmware.com/tce/main:0.11.0   Reconcile succeeded
```
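If the repository seems stuck reconciling, you can inspect the underlying PackageRepository resource directly (a sketch; the resource kind comes from kapp-controller, and the names match the earlier step):

```
kubectl get packagerepository tce-repo --namespace tanzu-package-repo-global -o yaml
```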
It may take some time to see `Reconcile succeeded`. Until then, packages won’t show up in the available list described in the next step.

List the available packages:
tanzu package available list
The output will look similar to the following:
```
- Retrieving available packages...
  NAME                                           DISPLAY-NAME        SHORT-DESCRIPTION
  cert-manager.community.tanzu.vmware.com        cert-manager        Certificate management
  contour.community.tanzu.vmware.com             Contour             An ingress controller
  external-dns.community.tanzu.vmware.com        external-dns        This package provides DNS...
  fluent-bit.community.tanzu.vmware.com          fluent-bit          Fluent Bit is a fast Log Processor and...
  gatekeeper.community.tanzu.vmware.com          gatekeeper          policy management
  grafana.community.tanzu.vmware.com             grafana             Visualization and analytics software
  harbor.community.tanzu.vmware.com              Harbor              OCI Registry
  knative-serving.community.tanzu.vmware.com     knative-serving     Knative Serving builds on Kubernetes to...
  local-path-storage.community.tanzu.vmware.com  local-path-storage  This package provides local path node...
  multus-cni.community.tanzu.vmware.com          multus-cni          This package provides the ability for...
  prometheus.community.tanzu.vmware.com          prometheus          A time series database for your metrics
  velero.community.tanzu.vmware.com              velero              Disaster recovery capabilities
```
List the available versions for the `cert-manager` package:

tanzu package available list cert-manager.community.tanzu.vmware.com
The output will look similar to the following:
```
/ Retrieving package versions for cert-manager.community.tanzu.vmware.com...
  NAME                                     VERSION  RELEASED-AT
  cert-manager.community.tanzu.vmware.com  1.3.3    2021-08-06T12:31:21Z
  cert-manager.community.tanzu.vmware.com  1.4.4    2021-08-23T16:47:51Z
  cert-manager.community.tanzu.vmware.com  1.5.3    2021-08-23T17:22:51Z
```
NOTE: The available versions of a package may have changed since this guide was written.
Install the package to the cluster.
tanzu package install cert-manager --package-name cert-manager.community.tanzu.vmware.com --version 1.5.3
The output will look similar to the following:
```
| Installing package 'cert-manager.community.tanzu.vmware.com'
/ Getting package metadata for cert-manager.community.tanzu.vmware.com
- Creating service account 'cert-manager-default-sa'
\ Creating cluster admin role 'cert-manager-default-cluster-role'
  Creating package resource
/ Package install status: Reconciling

 Added installed package 'cert-manager' in namespace 'default'
```
Note: Use one of the available package versions, since the one described in this guide might no longer be available.

Note: While the underlying resources associated with cert-manager are installed in the cert-manager namespace, the actual cert-manager package is installed to the `default` namespace, as stated in the installation output message. For an explanation of this behavior, see the second bullet in the list above and the Package Repositories topic.

Verify cert-manager is installed in the cluster:
tanzu package installed list
The output will look similar to the following:
```
| Retrieving installed packages...
  NAME          PACKAGE-NAME                              PACKAGE-VERSION  STATUS
  cert-manager  cert-manager.community.tanzu.vmware.com   1.5.3            Reconcile succeeded
```
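You can also confirm the package’s workloads came up; per the note above, cert-manager’s underlying resources land in the `cert-manager` namespace:

```
kubectl get pods --namespace cert-manager
```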
To remove a package from the cluster, run the following command:
tanzu package installed delete cert-manager
The output will look similar to the following:
```
| Uninstalling package 'cert-manager' from namespace 'default'
| Getting package install for 'cert-manager'
\ Deleting package install 'cert-manager' from namespace 'default'
\ Package uninstall status: ReconcileSucceeded
\ Package uninstall status: Reconciling
\ Package uninstall status: Deleting
| Deleting admin role 'cert-manager-default-cluster-role'
/ Deleting service account 'cert-manager-default-sa'
 Uninstalled package 'cert-manager' from namespace 'default'
```
For more information about package management, see Work with Packages. For details on installing a specific package, see the package’s documentation in the left navigation bar (Packages > ${PACKAGE_NAME}).