Documentation
Package Process ¶
This document covers the creation of packages for use in Tanzu Community Edition. This is a working design doc that will evolve over time as packages are implemented. Along with being a design doc, this asset walks you through the packaging process.
Terminology ¶
For definitions of extensions, add-ons, core add-ons, user-managed add-ons and more, see the Glossary. The packaging details in most of this document are relevant to both core and user-managed packages. However, the details around discovery, repositories, and CLI interaction are relevant only to user-managed packages.
Packages ¶
Packaging of external, third party software and functionality is done with the
Carvel toolkit.
The end result is an OCI bundle stored in a container registry. For discovery,
deployment, and management operations, the tanzu
CLI is used, as shown below.
$ tanzu package install gatekeeper --package-name gatekeeper.community.tanzu.vmware.com --version 3.2.3 --namespace default
| Installing package 'gatekeeper.community.tanzu.vmware.com'
/ Getting package metadata for gatekeeper.community.tanzu.vmware.com
- Creating service account 'gatekeeper-default-sa'
\ Creating cluster admin role 'gatekeeper-default-cluster-role'
- Creating package resource
/ Package install status: Reconciling
Added installed package 'gatekeeper' in namespace 'default'
This experience is specific to user-managed packages.
For details on how these packages are discovered, deployed, and managed, see Package Management.
Packaging Workflow ¶
The following flow describes how we package user-managed packages. These steps are described in detail in the subsequent sections.
1. Create Directory Structure ¶
Each package lives in a separate directory, named after the package. The
create-package make target will construct the directories and default files. You
can run it by setting NAME
and VERSION
variables.
make create-package NAME=gatekeeper VERSION=3.2.3
mkdir: created directory 'addons/packages/gatekeeper/3.2.3/bundle/.imgpkg'
mkdir: created directory 'addons/packages/gatekeeper/3.2.3/bundle/config/overlay'
mkdir: created directory 'addons/packages/gatekeeper/3.2.3/bundle/config/upstream'
package bootstrapped at addons/packages/gatekeeper/3.2.3
The above script creates the following files and directory structure.
addons/packages/gatekeeper
├── 3.2.3
│ ├── README.md
│ ├── bundle
│ │ ├── .imgpkg
│ │ ├── config
│ │ │ ├── overlay
│ │ │ ├── upstream
│ │ │ └── values.yaml
│ │ └── vendir.yml
│ └── package.yaml
└── metadata.yaml
The files and directories are used for the following.
- README: Contains the package’s documentation.
- bundle: Contains the package’s imgpkg bundle.
- bundle/.imgpkg: Contains metadata for the bundle.
- bundle/config/upstream: Contains the package’s deployment manifests, typically sourced from upstream.
- bundle/config/overlay: Contains the package’s overlay applied atop the upstream manifest.
- bundle/config/values.yaml: User configurable values
- bundle/vendir.yml: Defines the location of the upstream resources
- package.yaml: Descriptive metadata for the specific version of the package
- metadata.yaml: Descriptive metadata for the package
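The scaffolding performed by the create-package target can be approximated with a few shell commands. This is a sketch only; the actual make target may create additional files and content.

```shell
# Sketch of the directory scaffolding `make create-package` performs.
# Paths mirror the tree above; the real target may do more than this.
NAME=gatekeeper
VERSION=3.2.3
PKG_DIR="addons/packages/${NAME}/${VERSION}"

mkdir -p "${PKG_DIR}/bundle/.imgpkg" \
         "${PKG_DIR}/bundle/config/overlay" \
         "${PKG_DIR}/bundle/config/upstream"

touch "${PKG_DIR}/README.md" \
      "${PKG_DIR}/bundle/config/values.yaml" \
      "${PKG_DIR}/bundle/vendir.yml" \
      "${PKG_DIR}/package.yaml" \
      "addons/packages/${NAME}/metadata.yaml"
```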
2. Add Manifest(s) ¶
In order to stay aligned with upstream, store unmodified manifests. For example, gatekeeper’s upstream manifest is located here. By storing the configuration of the upstream manifest, you can easily update the manifest and have customizations applied via overlays.
To ensure integrity of the sourced upstream manifests, vendir is used. It will download and create a lock file that ensures the manifest matches a specific commit.
In the bundle
directory, create/modify the vendir.yml
file. The following
demonstrates the configuration for gatekeeper.
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
minimumRequiredVersion: 0.12.0
directories:
- path: config/upstream
  contents:
  - path: .
    git:
      url: https://github.com/open-policy-agent/gatekeeper
      ref: v3.2.3
    newRootPath: deploy
There are multiple sources you can use. Ideally, packages use either git or githubReleases such that we can lock in the version. Using the http source does not give us the same guarantee as the aforementioned sources.
This configuration means vendir will manage the config/upstream
directory. To
download the assets and produce a lock file, run the following.
vendir sync
There is also a make task for this.
make vendir-sync-package PACKAGE=gatekeeper VERSION=3.2.3
A lock file will be created at bundle/vendir.lock.yml
. It will contain the
following lock metadata.
apiVersion: vendir.k14s.io/v1alpha1
directories:
- contents:
  - git:
      commitTitle: Prepare v3.2.3 release (#1084)...
      sha: 15def468c9cbfffc79c6d8e29c484b71713303ae
      tags:
      - v3.2.3
    path: .
  path: config/upstream
kind: LockConfig
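A quick way to sanity-check a lock file is to confirm it pins a full 40-character commit SHA rather than a mutable ref. The sketch below recreates the example lock content above so the check is self-contained; in practice you would run the check against the file vendir produced.

```shell
# Recreate the example lock file content from above so the check is runnable.
cat > vendir.lock.yml <<'EOF'
apiVersion: vendir.k14s.io/v1alpha1
directories:
- contents:
  - git:
      commitTitle: Prepare v3.2.3 release (#1084)...
      sha: 15def468c9cbfffc79c6d8e29c484b71713303ae
      tags:
      - v3.2.3
    path: .
  path: config/upstream
kind: LockConfig
EOF

# A pinned SHA is 40 hex characters; a branch name or short SHA fails this check.
grep -Eq 'sha: [0-9a-f]{40}$' vendir.lock.yml && echo "lock file pins a full commit SHA"
```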
With the above in place, the directories and files will appear as follows.
addons/packages/gatekeeper
├── 3.2.3
│ ├── README.md
│ ├── bundle
│ │ ├── .imgpkg
│ │ ├── config
│ │ │ ├── overlays
│ │ │ ├── upstream
│ │ │ │ └── gatekeeper.yaml
│ │ │ └── values.yaml
│ │ ├── vendir.lock.yml
│ │ └── vendir.yml
│ └── package.yaml
└── metadata.yaml
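As an aside, when the upstream project publishes release artifacts, vendir's githubRelease source can be used instead of git. The following is a hypothetical sketch for gatekeeper; the release asset layout and the disableAutoChecksumValidation setting are illustrative assumptions, not gatekeeper's verified release structure.

```yaml
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
minimumRequiredVersion: 0.12.0
directories:
- path: config/upstream
  contents:
  - path: .
    githubRelease:
      slug: open-policy-agent/gatekeeper
      tag: v3.2.3
      disableAutoChecksumValidation: true
```

Running vendir sync against this configuration would likewise produce a vendir.lock.yml pinning the release.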
3. Create Overlay(s) ¶
For each object (e.g. Deployment
) you need to modify from upstream, an overlay
file should be created. Overlays are used to ensure we import
unmodified-upstream manifests and apply specific configuration on top.
Consider the following gatekeeper.yaml
added in the previous step.
---
#! upstream.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper-deployment
  labels:
    app: gatekeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gatekeeper
  template:
    metadata:
      labels:
        app: gatekeeper
    spec:
      containers:
      - name: gatekeeper
        image: gatekeeper:1.14.2
        ports:
        - containerPort: 80
Assume you want to modify metadata.labels
to a static value and
spec.replicas
to a user-configurable value.
Create a file named overlay-deployment.yaml
in the bundle/config/overlays
directory.
---
#! overlay-deployment.yaml
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#@overlay/match by=overlay.subset({"kind":"Deployment", "metadata":{"name":"gatekeeper-deployment"}})
---
metadata:
  labels:
    #@overlay/match missing_ok=True
    class: gatekeeper
    #@overlay/match missing_ok=True
    owned-by: tanzu

#@overlay/match by=overlay.subset({"kind":"Deployment", "metadata": {"name": "gatekeeper-deployment"}})
---
spec:
  #@overlay/match missing_ok=True
  replicas: #@ data.values.runtime.replicas
⚠️: Do not templatize or overlay container image fields.
kbld
will be used to create and/or reference image digest SHAs.
Detailed overlay documentation is available in the Carvel site.
4. Default Values ¶
For every user-configurable value defined above, a values.yaml
file should
contain defaults and documentation for what the parameter impacts. If a value
is overriding an upstream value, prefer to use that upstream value. For example,
if the upstream default namespace is foo-ns
, prefer to use foo-ns
as the
default setting for the namespace in the values.yaml file.
Create/modify a values.yaml
file in bundle/config
.
#@data/values
---
#! The namespace in which to deploy gatekeeper.
namespace: gatekeeper
#! The amount of replicas that should exist in gatekeeper.
runtime:
  replicas: 3
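At deployment time, a user can override these defaults with their own values file. The following is a hypothetical user-provided file; the overridden values shown are examples only.

```yaml
#! my-values.yaml -- hypothetical user overrides for the defaults above
namespace: custom-gatekeeper
runtime:
  replicas: 5
```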
[Optional]: Validate Templating ¶
With the above in place, you can validate that overlays and templating are working as expected. The conceptual flow is as follows.
To run the above, you can use ytt
as follows. If successful, the transformed manifest will be displayed,
otherwise, an error message is displayed.
ytt -f addons/packages/gatekeeper/3.2.3/bundle/config
5. Resolve and reference image digests ¶
To ensure integrity of packages, it is important to reference an image digest rather than a tag. A tag’s underlying image can change arbitrarily, whereas referencing a SHA (via digest) ensures consistency on every pull.
kbld is used to create a lock file, which we name images.yml
. This file contains an
ImagesLock
resource. ImagesLock
is similar to a
go.sum. The image fields in the source manifests
are not mutated. Instead, the tag will be swapped out for the digest upon
deployment. The relationship is as follows.
To find all container image references, create an ImagesLock
, and ensure the
digest’s SHA is referenced, you can run kbld
as follows.
kbld --file addons/packages/gatekeeper/3.2.3/bundle \
--imgpkg-lock-output addons/packages/gatekeeper/3.2.3/bundle/.imgpkg/images.yml
There is also a make task for this.
make lock-package-images PACKAGE=gatekeeper VERSION=3.2.3
This will produce the following file bundle/.imgpkg/images.yml
.
---
apiVersion: imgpkg.carvel.dev/v1alpha1
images:
- annotations:
    kbld.carvel.dev/id: openpolicyagent/gatekeeper:v3.2.3
  image: index.docker.io/openpolicyagent/gatekeeper@sha256:9cd6e864...
kind: ImagesLock
By placing this file in bundle/.imgpkg
, it will not pollute the
bundle/config
directory and risk being deployed into Kubernetes
clusters. At this point, the following directories and files should be in place.
addons/packages/gatekeeper
├── 3.2.3
│ ├── README.md
│ ├── bundle
│ │ ├── .imgpkg
│ │ │ └── images.yml
│ │ ├── config
│ │ │ ├── overlays
│ │ │ │ ├── overlay-deployment.yaml
│ │ │ ├── upstream
│ │ │ │ └── gatekeeper.yaml
│ │ │ └── values.yaml
│ │ ├── vendir.lock.yml
│ │ └── vendir.yml
│ └── package.yaml
└── metadata.yaml
6. Bundle configuration and deploy to registry ¶
All the manifests and configuration are bundled in an OCI-compliant package. This ensures immutability of configuration upon a release. The bundles are stored in a container registry.
imgpkg
is used to create the bundle and push it to the container registry. It
leverages your underlying container registry, so you must set up authentication
on the system you’ll create the bundle from (e.g. docker login
).
To ensure metadata about the package is captured, add the following Bundle
file
at bundle/.imgpkg/bundle.yml
.
apiVersion: imgpkg.carvel.dev/v1alpha1
kind: Bundle
metadata:
  name: gatekeeper
authors:
- name: Joe Engineer
  email: engineerj@example.com
websites:
- url: github.com/open-policy-agent/gatekeeper
The following packages and pushes the bundle.
imgpkg push \
--bundle $(OCI_REGISTRY)/gatekeeper/3.2.3:$(BUNDLE_TAG) \
--file addons/packages/gatekeeper/3.2.3/bundle
There is also a make task for this.
make push-package PACKAGE=gatekeeper VERSION=3.2.3 TAG=3.2.3-beta.1
The results of this look as follows. Notice at the end of a successful push, imgpkg reports the URL and digest of the package. This information will be used in the next step.
===> pushing gatekeeper/3.2.3
dir: .
dir: .imgpkg
file: .imgpkg/bundle.yml
file: .imgpkg/images.yml
dir: config
dir: config/overlays
file: config/overlays/overlay-deployment.yaml
dir: config/upstream
file: config/upstream/gatekeeper.yaml
file: config/values.yaml
file: vendir.lock.yml
file: vendir.yml
Pushed 'projects.registry.vmware.com/tce/gatekeeper@sha256:b7a21027...'
Succeeded
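Since the digest on the final Pushed line is needed in the next step, it can be captured programmatically instead of copied by hand. This is a sketch: the sed pattern assumes the exact output format shown above, and in practice you would pipe the real imgpkg push output through it.

```shell
# Capture the bundle reference from imgpkg's final "Pushed '...'" line.
# The sample line below is copied from the example output above.
PUSH_OUTPUT="Pushed 'projects.registry.vmware.com/tce/gatekeeper@sha256:b7a21027...'"

BUNDLE_REF=$(printf '%s\n' "${PUSH_OUTPUT}" | sed -n "s/^Pushed '\(.*\)'$/\1/p")
echo "${BUNDLE_REF}"
```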
7. Create/Modify a Package CR ¶
A Package
CR is used to define metadata and templating information about a piece
of software. A Package
CR is created for every package and points to the OCI
registry where the imgpkg
bundle can be found. This file also captures version-specific
information about the package, such as the version, license, and release notes. The Package
CR is put into
a directory structure with other packages to eventually form a
PackageRepository
. The Package
CR is not deployed to the cluster directly;
instead, the PackageRepository
bundle, containing many Package
s, is. Once
the PackageRepository
is in place, kapp-controller
will make the Package
CRs
available in the cluster. This relationship can be seen as follows.
An example Package
for gatekeeper
would read as follows.
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: Package
metadata:
  name: gatekeeper.community.tanzu.vmware.com.3.2.3
  namespace: gatekeeper
spec:
  refName: gatekeeper.community.tanzu.vmware.com
  version: 3.2.3
  releaseNotes: "gatekeeper 3.2.3 https://github.com/open-policy-agent/gatekeeper/releases/tag/v3.2.3"
  licenses:
  - "Apache 2.0"
  template:
    spec:
      fetch:
      - imgpkgBundle:
          image: projects.registry.vmware.com/tce/gatekeeper@sha256:b7a21027...
      template:
      - ytt:
          paths:
          - config/
      - kbld:
          paths:
          - "-"
          - .imgpkg/images.yml
      deploy:
      - kapp: {}
- metadata.name: Concatenation of spec.refName and version (see below).
- spec.refName: Name that will show up to consumers. Must be unique across packages.
- version: Version number of this package instance; must use semver. The version used should reflect the version of the packaged software. For example, if gatekeeper’s main container image is version 3.2.3, this package should be the same.
- spec.template.spec.fetch[0].imgpkgBundle.image: The URL of the location of this package in an OCI registry. This value is obtained from the result of the imgpkg push command.
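The naming rule for metadata.name can be checked mechanically. A small sketch using the gatekeeper values from the example above:

```shell
# metadata.name must be the concatenation of spec.refName, a dot, and the version.
REF_NAME="gatekeeper.community.tanzu.vmware.com"
VERSION="3.2.3"
PACKAGE_NAME="gatekeeper.community.tanzu.vmware.com.3.2.3"

if [ "${PACKAGE_NAME}" = "${REF_NAME}.${VERSION}" ]; then
  echo "metadata.name matches refName.version"
fi
```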
8. Generate openAPIv3 schema and embed it in a package ¶
Follow the steps below to generate an openAPIv3 schema and specify it in a package. This process works for both kinds of packages: those which have sample values defined, like csi and cpi, and those which don’t, like secretgen-controller and kapp-controller. For packages which have sample values, the assumption is that a sample-values directory exists under the version directory.
1. Create a schema file (schema.yaml) for the given data values file. In ytt, before a Data Value can be used in a template, it must be declared. This is typically done via a Data Values Schema. Check out How to write Schema to explore the different annotations that can be used when writing a schema.
2. Generate the OpenAPI v3 schema using the following make target.
Let’s use the secretgen-controller package as an example to generate the schema and embed it in the package yaml file.
cd ~/community-edition/
make generate-openapischema-package PACKAGE=secretgen-controller VERSION=0.7.1
This performs 2 important steps:
- Checks that the values abide by the declared schema. If not, you will get an error.
- If the schema is successfully validated, it generates the openAPIv3 schema and embeds it in package.yaml.
The generated package.yaml for secretgen-controller is pasted below as an example. You can see that the openAPIv3 schema has been embedded in the Package for secretgen-controller.
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: Package
metadata:
  name: secretgen-controller.community.tanzu.vmware.com.0.7.1
spec:
  refName: secretgen-controller.community.tanzu.vmware.com
  version: 0.7.1
  releaseNotes: secretgen-controller 0.7.1 https://github.com/vmware-tanzu/carvel-secretgen-controller
  licenses:
  - Apache 2.0
  template:
    spec:
      fetch:
      - imgpkgBundle:
          image: projects.registry.vmware.com/tce/secretgen-controller@sha256:4d47a1ece799e3b47428e015804e4c822b58adf8afdcf67175e56245b09fbcd2
      template:
      - ytt:
          paths:
          - config/
      - kbld:
          paths:
          - '-'
          - .imgpkg/images.yml
      deploy:
      - kapp: {}
  valuesSchema:
    openAPIv3:
      type: object
      additionalProperties: false
      description: OpenAPIv3 Schema for secretgen-controller
      properties:
        secretgenController:
          type: object
          additionalProperties: false
          description: Configuration for secretgen-controller
          properties:
            namespace:
              type: string
              default: secretgen-controller
              description: The namespace in which to deploy secretgen-controller
            createNamespace:
              type: boolean
              default: true
              description: Whether to create namespace specified for secretgen-controller
You can now use make push-package for your package. This performs 2 steps:
- Verifies that the openAPIv3 schema embedded in the package matches exactly the openAPIv3 schema generated. It also prevents pushing the package’s imgpkg bundle without the openAPI schema embedded.
- If the correct schema has been embedded, it builds and pushes the package’s imgpkg bundle.
Example output of running make push-package for the secretgen-controller package, after the openAPIv3 schema has been embedded:
===> pushing secretgen-controller/0.7.1
dir: .
dir: .imgpkg
file: .imgpkg/bundle.yml
file: .imgpkg/images.yml
dir: config
dir: config/overlays
file: config/overlays/change-namespace.yaml
file: config/schema.yaml
dir: config/upstream
file: config/upstream/secretgen-controller.yaml
file: config/values.star
file: config/values.yaml
file: vendir.lock.yml
file: vendir.yml
Pushed 'projects.registry.vmware.com/tce/secretgen-controller@sha256:4d47a1ece799e3b47428e015804e4c822b58adf8afdcf67175e56245b09fbcd2'
Succeeded
Now copy the SHA from the above output and paste it into the package.yaml file at the appropriate location.
You are now ready to create a PR with the generated files for your package.
9. Package Metadata ¶
The final step in creating a package is to update the metadata.yaml
file. This file contains general
information about the package overall, not specific to a version. Here is an overview of the types of metadata captured
in the file.
- Display friendly name
- Short and long descriptions.
- Authoring organization
- Maintainers
- Descriptive categories
- SVG logo
At this point, the package has been created, pushed and documented. The package is ready to be deployed to a cluster as part of a package repository.
10. Creating a Package Repository ¶
Tanzu Community Edition maintains an addons/repos
directory where the main repository definition file, main.yaml
, is kept.
This file is a simple, custom yaml file defining the specific versions of packages to be included in the repository.
An example of this file is as follows:
---
packages:
- name: gatekeeper
  versions:
  - 3.2.3
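Multiple packages, and multiple versions of each, can be listed in this file. For example (the cert-manager entry and its version numbers below are hypothetical, added only to illustrate the shape):

```yaml
---
packages:
- name: gatekeeper
  versions:
  - 3.2.3
- name: cert-manager
  versions:
  - 1.3.1
  - 1.4.0
```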
There is a makefile task, generate-package-repo
, that generates the package repository from this file. kapp-controller
currently expects the package repositories to be in the format of an imgpkgBundle. This task will generate that bundle.
When the task is executed, make generate-package-repo CHANNEL=main TAG=0.11.0
, the following steps are performed:
- Create the addons/repos/generated/main directory
- Create addons/repos/generated/main/.imgpkg for imgpkg
- Create addons/repos/generated/main/packages/packages.yaml
- Iterate over the packages and concatenate each package’s metadata.yaml and the specific package version’s package.yaml into the repository’s packages.yaml file
- Create an imgpkg images.yml lock file
- Push the bundle to the OCI registry
The package repository has immutable tags enabled, so a unique tag must be provided.
Upon successful completion, instructions for installing the package repository to your cluster are shown.
tanzu package repository add repo-name --namespace default --url projects.registry...
Tanzu Community Edition will maintain a main
repo, but a beta
or package-foo
repo could be created for development work or to provide
multiple versions of the foo
software.
Common Packaging Considerations ¶
Preventing kapp-controller from Mutating Resources After Deploy ¶
At times, a resource deployed and managed by kapp-controller may be intentionally mutated by another process. For example, a configmap may be deployed alongside an operator. When the operator mutates the configmap, kapp-controller will eventually trigger an update and revert the configmap back to its original state.
To prevent this behavior, add an annotation named kapp.k14s.io/update-strategy set to the value skip. It’s likely you’ll do this via an overlay. Below is an example of how you’d set this up for an upstream configmap.
Upstream Configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
  labels:
    serving.knative.dev/release: "v0.20.0"
  annotations:
    knative.dev/example-checksum: "74c3fc6a"
data:
  _example: |
    ################################
    #                              #
    #    EXAMPLE CONFIGURATION     #
    #                              #
    ################################
Overlay
---
#! overlay-configmap-configdomain.yaml
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#@overlay/match by=overlay.subset({"kind":"ConfigMap", "metadata":{"name":"config-domain"}})
---
metadata:
  annotations:
    #@overlay/match missing_ok=True
    kapp.k14s.io/update-strategy: skip
With the above in place, updates will not cause the config-domain
ConfigMap to
be mutated.
For more details on this annotation, see the kapp Apply Ordering documentation.
Ensuring Order of Deploying Assets ¶
It may be important that your package deploys specific components before others. For example, you may wish for a Deployment that satisfies a validating webhook to be up before applying a ValidatingWebhookConfiguration. This would ensure the service that does validation is up and healthy before blocking API traffic to its endpoint.
To enforce this ordering, the annotations kapp.k14s.io/change-group
and
kapp.k14s.io/change-rule
are used. It’s likely you’ll do this via an
overlay. Below is an example of how you’d set this up for
an upstream Deployment and ValidatingWebhookConfiguration.
Upstream
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    control-plane: controller-manager
    gatekeeper.sh/operation: audit
    gatekeeper.sh/system: "yes"
  name: gatekeeper-controller-manager
  namespace: gatekeeper-system
  annotations:
spec:
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  creationTimestamp: null
  labels:
    gatekeeper.sh/system: "yes"
  name: gatekeeper-validating-webhook-configuration
  annotations:
    # it is very important this resource (ValidatingWebhookConfiguration) is applied
    # last. Otherwise, it can wire up the admission request before components required
    # to satisfy it are deployed.
    kapp.k14s.io/change-group: "tce.gatekeeper/vwc"
    kapp.k14s.io/change-rule: "upsert after upserting tce.gatekeeper/deployment"
webhooks:
Overlays
---
#! overlay-deployment-gatekeeperaudit.yaml
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#@overlay/match by=overlay.subset({"kind":"Deployment", "metadata":{"name":"gatekeeper-controller-manager"}})
---
metadata:
  annotations:
    #@overlay/match missing_ok=True
    kapp.k14s.io/change-group: "tce.gatekeeper/deployment"

---
#! overlay-validatingwebhookconfiguration-gatekeeper.yaml
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#@overlay/match by=overlay.subset({"kind":"ValidatingWebhookConfiguration", "metadata":{"name":"gatekeeper-validating-webhook-configuration"}})
---
metadata:
  annotations:
    #@overlay/match missing_ok=True
    kapp.k14s.io/change-group: "tce.gatekeeper/vwc"
    #@overlay/match missing_ok=True
    kapp.k14s.io/change-rule: "upsert after upserting tce.gatekeeper/deployment"
With the above overlays applied, the ValidatingWebhookConfiguration will not be applied until the Deployment is healthy.
For more details on this annotation, see the kapp Apply Ordering documentation.
Designs Pending Details ¶
This section covers concerns that need design work.
Versioning of Multiple PackageRepository Instances ¶
With the introduction of the PackageRepository
, we need to determine how we
are going to handle the ever-growing number of package instances
(package+version) over time.
- Do we maintain a default repo with all the latest packages?
- How do we offer older packages?