Addons
1 - Cluster Autoscaler
By leveraging CAPI cluster lifecycle hooks, this handler deploys Cluster Autoscaler on the management cluster for every Cluster at the AfterControlPlaneInitialized phase.
Unlike other addons, Cluster Autoscaler is deployed on the management cluster because it also interacts with the CAPI resources to scale the number of Machines. The Cluster Autoscaler Pod will not start on the management cluster until the CAPI resources are pivoted to that management cluster.
Note that the Cluster Autoscaler controller needs to be running for any scaling operations to occur; updating the min and max size annotations on the Cluster object alone is not enough. You can, however, manually change the number of replicas by modifying the MachineDeployment object directly, as shown below.
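For example, assuming the generated MachineDeployment is named my-cluster-md-0 (the actual name is derived from your cluster and is illustrative here), a manual scale could look like this:

# Scale the MachineDeployment directly; the autoscaler annotations only set the bounds.
kubectl scale machinedeployment my-cluster-md-0 --replicas=2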
Deployment of Cluster Autoscaler is opt-in via the provider-specific cluster configuration.
The hook uses either the Cluster API Add-on Provider for Helm or ClusterResourceSet
to deploy the cluster-autoscaler
resources depending on the selected deployment strategy.
Example
To enable deployment of Cluster Autoscaler on a cluster, specify the following values:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          addons:
            clusterAutoscaler:
              strategy: HelmAddon
    workers:
      machineDeployments:
        - class: default-worker
          metadata:
            annotations:
              # Set the following annotations to configure the Cluster Autoscaler.
              # The initial MachineDeployment will have 1 Machine.
              cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "3"
              cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
          name: md-0
          # Do not set the replicas field, otherwise the topology controller will revert the autoscaler's changes.
To deploy the addon via ClusterResourceSet, replace the value of strategy with ClusterResourceSet.
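For example, the relevant portion of the clusterConfig variable would then read as follows (a minimal sketch of the example above, with only the strategy changed):

addons:
  clusterAutoscaler:
    strategy: ClusterResourceSet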
2 - CNI
When deploying a cluster with CAPI, deployment and configuration of CNI is up to the user. By leveraging CAPI cluster
lifecycle hooks, this handler deploys a requested CNI provider on the new cluster at the AfterControlPlaneInitialized
phase.
The hook uses either the Cluster API Add-on Provider for Helm or ClusterResourceSet
to deploy the CNI resources
depending on the selected deployment strategy.
Currently, the hook supports the Cilium and Calico CNI providers.
Cilium
Deployment of Cilium is opt-in via the provider-specific cluster configuration.
Cilium Example
To enable deployment of Cilium on a cluster, specify the following values:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          addons:
            cni:
              provider: Cilium
              strategy: HelmAddon
To deploy the addon via ClusterResourceSet, replace the value of strategy with ClusterResourceSet.
Calico
Deployment of Calico is opt-in via the provider-specific cluster configuration.
Calico Example
To enable deployment of Calico on a cluster, specify the following values:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          addons:
            cni:
              provider: Calico
              strategy: HelmAddon
ClusterResourceSet strategy
To deploy the addon via ClusterResourceSet, replace the value of strategy with ClusterResourceSet.
When using the ClusterResourceSet strategy, the hook creates two ClusterResourceSets: one to deploy the Tigera Operator, and one to deploy Calico via the Tigera Installation CRD. The Tigera Operator CRS is shared between all clusters, whereas the Calico installation CRS is unique per cluster.
As ClusterResourceSets must exist in the same namespace as the cluster they apply to, the lifecycle hook copies the default ConfigMaps from the namespace in which the CAPI runtime extensions hook Pod is running. This enables users to configure defaults specific to their environment rather than compiling the defaults into the binary.
The Helm chart comes with default configurations for the Calico Installation CRS per supported provider, but overriding is possible. For example, to change the Docker provider's Calico configuration, specify the following Helm argument when deploying the cluster-api-runtime-extensions-nutanix chart:
--set-file hooks.cni.calico.crsStrategy.defaultInstallationConfigMaps.DockerCluster.configMap.content=<file>
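A complete invocation might look like the following sketch; the chart reference, release name, and namespace are illustrative, and custom-installation.yaml stands in for <file> above:

# Deploy the runtime extensions chart with a custom Calico Installation
# for DockerCluster-based clusters.
helm upgrade --install cluster-api-runtime-extensions-nutanix \
  ./charts/cluster-api-runtime-extensions-nutanix \
  --namespace caren-system --create-namespace \
  --set-file hooks.cni.calico.crsStrategy.defaultInstallationConfigMaps.DockerCluster.configMap.content=custom-installation.yaml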
3 - Node Feature Discovery
By leveraging CAPI cluster lifecycle hooks, this handler deploys Node Feature Discovery (NFD) on the new cluster at
the AfterControlPlaneInitialized
phase.
Deployment of NFD is opt-in via the provider-specific cluster configuration.
The hook uses either the Cluster API Add-on Provider for Helm or ClusterResourceSet
to deploy the NFD resources
depending on the selected deployment strategy.
Example
To enable deployment of NFD on a cluster, specify the following values:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          addons:
            nfd:
              strategy: HelmAddon
To deploy the addon via ClusterResourceSet, replace the value of strategy with ClusterResourceSet.
4 - Service LoadBalancer
When an application running in a cluster needs to be exposed outside of the cluster, one option is to use an external load balancer, by creating a Kubernetes Service of type LoadBalancer.
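For example, a minimal Service of this type might look like the following (the name, selector, and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080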
The Service Load Balancer is the component that backs this Kubernetes Service, either by creating a Virtual IP, by creating a machine that runs load balancer software, or by delegating to APIs, such as those of the underlying infrastructure or of a hardware load balancer.
The Service Load Balancer can choose the Virtual IP from a pre-defined address range. You can use CAREN to configure one or more IPv4 ranges. For additional options, configure the Service Load Balancer yourself after it is deployed.
CAREN currently supports the following Service Load Balancers:
- MetalLB
Examples
To enable deployment of MetalLB on a cluster, specify the following values:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          addons:
            serviceLoadBalancer:
              provider: MetalLB
To enable MetalLB and configure two IPv4 address ranges, specify the following values:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          addons:
            serviceLoadBalancer:
              provider: MetalLB
              configuration:
                addressRanges:
                  - start: 10.100.1.1
                    end: 10.100.1.20
                  - start: 10.100.1.51
                    end: 10.100.1.70
See the MetalLB documentation for more configuration details.
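For reference, an addressRanges configuration like the one above corresponds conceptually to a MetalLB IPAddressPool such as the following sketch; CAREN manages the actual resources for you, and the pool name here is illustrative:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
    # Each CAREN addressRange maps to a start-end range entry.
    - 10.100.1.1-10.100.1.20
    - 10.100.1.51-10.100.1.70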