Cluster customizations
The cluster configuration handlers wrap all the other mutation handlers in a convenient single patch for inclusion in
your ClusterClasses, allowing for a single configuration variable with nested values. This provides the most flexibility
with the least configuration.
To enable the handler, add the provider-specific `clusterconfigvars` and `clusterconfigpatch` external patches on the `ClusterClass`. This will enable all of the generic cluster customizations, along with the relevant provider-specific variables.
Regardless of provider, a single variable called `clusterConfig` will be available for use on the `ClusterClass`. The schema (and therefore the configuration options) will be customized for each provider. To use the exposed configuration options, specify the desired values on the `Cluster` resource:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          kubernetesImageRepository: "my-registry.io/my-org/my-repo"
          etcd:
            image:
              repository: my-registry.io/my-org/my-repo
              tag: "v3.5.99_custom.0"
          extraAPIServerCertSANs:
            - a.b.c.example.com
            - d.e.f.example.com
          proxy:
            http: http://example.com
            https: https://example.com
            additionalNo:
              - no-proxy-1.example.com
              - no-proxy-2.example.com
          imageRegistries:
            credentials:
              - url: https://my-registry.io
                secretRef:
                  name: my-registry-credentials
          cni:
            provider: calico
```
AWS
See AWS customizations for the AWS-specific customizations.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: <NAME>
spec:
  patches:
    - name: cluster-config
      external:
        generateExtension: "awsclusterconfigpatch.cluster-api-runtime-extensions-nutanix"
        discoverVariablesExtension: "awsclusterconfigvars.cluster-api-runtime-extensions-nutanix"
```
Docker
See generic customizations for the Docker-specific customizations.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: <NAME>
spec:
  patches:
    - name: cluster-config
      external:
        generateExtension: "dockerclusterconfigpatch.cluster-api-runtime-extensions-nutanix"
        discoverVariablesExtension: "dockerclusterconfigvars.cluster-api-runtime-extensions-nutanix"
```
1 - Generic
The customizations in this section are applicable to all providers.
1.1 - Audit policy
Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a
cluster. The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the
control plane itself.
There are currently no configuration options for the Audit Policy customization, and this customization will be automatically applied when the provider-specific cluster configuration patch is included in the `ClusterClass`.
1.2 - Auto-renewal of control plane certificates
The `autoRenewCertificates` variable enables automatic renewal of control plane certificates by triggering a rollout of the control plane nodes when the certificates on the control plane machines are about to expire.

For more information about certificate renewal, see Automatically rotating certificates using Kubeadm Control Plane provider.
Example
To enable automatic certificate renewal, use the following configuration, applicable to all CAPI providers supported by CAREN:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            autoRenewCertificates:
              daysBeforeExpiry: 30
```
Applying this configuration will result in the following configuration being applied:
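A minimal sketch of what that could look like, assuming the Kubeadm Control Plane provider's `rolloutBefore` field is what triggers the rollout:

```yaml
# Hypothetical resulting KubeadmControlPlane patch (sketch only):
spec:
  rolloutBefore:
    # Roll out new control plane machines when the certificates
    # are within 30 days of expiry.
    certificatesExpiryDays: 30
```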
1.3 - Containerd metrics
Containerd exports metrics to a Prometheus endpoint. The metrics cover
containerd itself, its plugins, e.g. CRI, and information about the
containers managed by containerd.
There are currently no configuration options for metrics, and this customization will be automatically applied when the provider-specific cluster configuration patch is included in the `ClusterClass`.
1.4 - Encryption At Rest
The `encryptionAtRest` variable enables encrypting Kubernetes resources at rest using the provided encryption provider. When this variable is set, Kubernetes `secrets` and `configmaps` are encrypted before being written to etcd.

If the `encryptionAtRest` property is not specified, the customization will be skipped, and `secrets` and `configmaps` will not be stored encrypted in etcd.

We support the following encryption providers.

For more information about encryption at rest, see Encrypting Confidential Data at Rest.
Example
To encrypt `configmaps` and `secrets` using the `aescbc` encryption provider:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          encryptionAtRest:
            providers:
              - aescbc: {}
```
Applying this configuration will result in a `<CLUSTER_NAME>-encryption-config` secret being generated:

- A secret key for the encryption provider is generated and stored in the `<CLUSTER_NAME>-encryption-config` secret.
- The API server is configured to use the secret key to encrypt `secrets` and `configmaps` before writing them to etcd.
- When reading resources from etcd, the encryption providers that match the stored data are attempted, in order, to decrypt the data.
- CAREN currently does not rotate the key once it is generated.
- The API server is configured with the encryption configuration:
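A minimal sketch of what that encryption configuration could look like, assuming the standard Kubernetes `EncryptionConfiguration` API; the key below is a placeholder, not real output:

```yaml
# Hypothetical generated encryption configuration (sketch only):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
      - configmaps
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-GENERATED-KEY>
      # identity allows reading resources that were written before
      # encryption was enabled.
      - identity: {}
```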
1.5 - DNS
This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.

The DNS configuration can then be manipulated via the cluster variables. If the `dns` property is not specified, then the customization will be skipped.

CoreDNS

The CoreDNS configuration can then be manipulated via the cluster variables. If the `dns.coreDNS` property is not specified, then the customization will be skipped.
Example
The CoreDNS version can be updated automatically. To do this, set `coreDNS` to an empty object:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          dns:
            coreDNS: {}
```
Applying this configuration will result in the following value being set, with the version of the CoreDNS image determined by the cluster's Kubernetes version:
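A sketch of the resulting patch, assuming kubeadm's `ClusterConfiguration.dns` field carries the image version; the tag shown is illustrative:

```yaml
# Hypothetical KubeadmControlPlaneTemplate patch (sketch only):
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          dns:
            # Tag chosen to match the cluster's Kubernetes version.
            imageTag: "v1.11.1"
```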
To change the repository and tag for the container image for the CoreDNS pod, specify the following configuration:
Note: do not include "coredns" in the repository; kubeadm already appends it.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          dns:
            coreDNS:
              image:
                repository: my-registry.io/my-org/my-repo
                tag: "v1.11.3_custom.0"
```
Applying this configuration will result in the following value being set:
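A sketch of the resulting patch, again assuming kubeadm's `ClusterConfiguration.dns` image fields:

```yaml
# Hypothetical KubeadmControlPlaneTemplate patch (sketch only):
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          dns:
            imageRepository: my-registry.io/my-org/my-repo
            imageTag: "v1.11.3_custom.0"
```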
1.6 - etcd
This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.

The etcd configuration can then be manipulated via the cluster variables. If the `etcd` property is not specified, then the customization will be skipped.
Example
To change the repository and tag for the container image for the etcd pod, specify the following configuration:
Note: do not include "etcd" in the repository; kubeadm already appends it.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          etcd:
            image:
              repository: my-registry.io/my-org/my-repo
              tag: "v3.5.99_custom.0"
```
Applying this configuration will result in the following value being set:
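A sketch of the resulting patch, assuming kubeadm's `ClusterConfiguration.etcd.local` image fields:

```yaml
# Hypothetical KubeadmControlPlaneTemplate patch (sketch only):
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          etcd:
            local:
              imageRepository: my-registry.io/my-org/my-repo
              imageTag: "v3.5.99_custom.0"
```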
1.7 - Extra API Server Certificate SANs
If the API server can be accessed by alternative DNS addresses, then setting additional SANs on the API server certificate is necessary in order for clients to successfully validate the API server certificate.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To add extra SANs to the API server certificate, specify the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          extraAPIServerCertSANs:
            - a.b.c.example.com
            - d.e.f.example.com
```
Applying this configuration will result in the following value being set:
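A sketch of the resulting patch, assuming kubeadm's `ClusterConfiguration.apiServer.certSANs` field:

```yaml
# Hypothetical KubeadmControlPlaneTemplate patch (sketch only):
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            certSANs:
              - a.b.c.example.com
              - d.e.f.example.com
```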
1.8 - Global Image Registry Mirror
Add containerd image registry mirror configuration to all Nodes in the cluster. When the `globalImageRegistryMirror` variable is set, files with configuration for the containerd default mirror will be added.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To provide an image registry mirror with a CA certificate: if the registry mirror requires a private or self-signed CA certificate, first create a Kubernetes Secret with the `ca.crt` key populated with the CA certificate in PEM format:
```shell
kubectl create secret generic my-mirror-ca-cert \
  --from-file=ca.crt=registry-ca.crt
```
Then specify the following configuration:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          globalImageRegistryMirror:
            url: https://example.com
            credentials:
              secretRef:
                name: my-mirror-ca-cert
```
Applying this configuration will result in the following new files on the `KubeadmControlPlaneTemplate` and `KubeadmConfigTemplate` resources:

- `/etc/containerd/certs.d/_default/hosts.toml`
- `/etc/certs/mirror.pem`
To use a public hosted image registry (e.g. ECR) as a registry mirror, specify the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          globalImageRegistryMirror:
            url: https://123456789.dkr.ecr.us-east-1.amazonaws.com
```
Applying this configuration will result in the following new file on the `KubeadmControlPlaneTemplate` and `KubeadmConfigTemplate` resources:

- `/etc/containerd/certs.d/_default/hosts.toml`
1.9 - HTTP proxy
In some network environments it is necessary to use an HTTP proxy to successfully execute HTTP requests. This customization will configure Kubernetes components (`containerd`, `kubelet`) with the appropriate configuration for control plane and worker nodes, utilising systemd drop-ins to configure the necessary environment variables.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To configure HTTP proxy values, specify the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          proxy:
            http: http://example.com
            https: http://example.com
            additionalNo:
              - no-proxy-1.example.com
              - no-proxy-2.example.com
```
The `additionalNo` list will be added to the default pre-calculated values that apply to Kubernetes networking (`localhost,127.0.0.1,<POD CIDRS>,<SERVICE CIDRS>,kubernetes,kubernetes.default,.svc,.svc.cluster.local`), plus provider-specific addresses as required.
Applying this configuration will result in new bootstrap files on the `KubeadmControlPlaneTemplate` and `KubeadmConfigTemplate`.
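A sketch of what one of those bootstrap files could look like, assuming the proxy settings are delivered as a systemd drop-in written via the kubeadm `files` list; the path and values are illustrative:

```yaml
# Hypothetical KubeadmConfigTemplate snippet (sketch only):
spec:
  template:
    spec:
      files:
        - path: /etc/systemd/system/containerd.service.d/http-proxy.conf
          owner: root:root
          permissions: "0644"
          content: |
            [Service]
            Environment="HTTP_PROXY=http://example.com"
            Environment="HTTPS_PROXY=http://example.com"
            Environment="NO_PROXY=localhost,127.0.0.1,no-proxy-1.example.com,no-proxy-2.example.com"
```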
1.10 - Image registries
Add image registry configuration to all Nodes in the cluster. When the `credentials` variable is set, files and `preKubeadmCommands` with configurations for the Kubelet image credential provider and dynamic credential provider will be added.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
If your registry requires static credentials, create a Kubernetes Secret with keys for `username` and `password`:
```shell
kubectl create secret generic my-registry-credentials \
  --from-literal username=${REGISTRY_USERNAME} --from-literal password=${REGISTRY_PASSWORD}
```
If your registry requires a private or self-signed CA certificate, create a Kubernetes Secret with the `ca.crt` key populated with the CA certificate in PEM format:
```shell
kubectl create secret generic my-mirror-ca-cert \
  --from-file=ca.crt=registry-ca.crt
```
To set both image registry credentials and a CA certificate, create a Kubernetes Secret with keys for `username`, `password`, and `ca.crt`:
```shell
kubectl create secret generic my-registry-credentials \
  --from-literal username=${REGISTRY_USERNAME} --from-literal password=${REGISTRY_PASSWORD} \
  --from-file=ca.crt=registry-ca.crt
```
To add image registry credentials and/or CA certificate, specify the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          imageRegistries:
            - url: https://my-registry.io
              credentials:
                secretRef:
                  name: my-registry-credentials
```
Applying this configuration will result in new files and `preKubeadmCommands` on the `KubeadmControlPlaneTemplate` and `KubeadmConfigTemplate`.
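For a sense of what those files configure, here is a minimal sketch of a kubelet `CredentialProviderConfig`; the provider name and match pattern are assumptions for illustration, not CAREN's exact output:

```yaml
# Hypothetical kubelet credential provider configuration (sketch only):
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  # Assumed provider binary name; the real name may differ.
  - name: dynamic-credential-provider
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    matchImages:
      - "my-registry.io"
    defaultCacheDuration: "12h"
```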
1.11 - Kubernetes Image Repository
Override the container image repository used when pulling Kubernetes images.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To configure the Kubernetes image repository, specify the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          kubernetesImageRepository: "my-registry.io/my-org/my-repo"
```
Applying this configuration will result in the following value being set:

- `KubeadmControlPlaneTemplate`:
  - `/spec/template/spec/kubeadmConfigSpec/clusterConfiguration/imageRepository: my-registry.io/my-org/my-repo`
1.12 - Tainting nodes
Tainting nodes prevents pods from being scheduled on them unless they explicitly tolerate the taints applied to the nodes. See the Kubernetes Taints and Tolerations documentation for more details.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
Control plane taints
To configure taints for the control plane nodes, specify the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            taints:
              - key: some-key
                effect: NoSchedule
                value: some-value
```
Applying this configuration will result in the following value being set:
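A sketch of the resulting patch, assuming the taints are applied via kubeadm's `nodeRegistration` in both the init and join configurations:

```yaml
# Hypothetical KubeadmControlPlaneTemplate patch (sketch only):
spec:
  template:
    spec:
      kubeadmConfigSpec:
        initConfiguration:
          nodeRegistration:
            taints:
              - key: some-key
                effect: NoSchedule
                value: some-value
        joinConfiguration:
          nodeRegistration:
            taints:
              - key: some-key
                effect: NoSchedule
                value: some-value
```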
Default control-plane taint applied by kubeadm

When using this customization, the default taint added by kubeadm to the control plane nodes will not be added unless explicitly specified as well. To add the default taint back to the control plane, include the following taint along with any custom taints you wish to add:

```yaml
- key: node-role.kubernetes.io/control-plane
  effect: NoSchedule
```
Removing all taints from control-plane nodes

To remove the default control plane taints set by kubeadm (and therefore allow scheduling on control plane nodes without adding explicit tolerations to your pod manifests), set `controlPlane.taints` to an empty array:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            taints: []
```
Worker node taints
Taints for individual nodepools can be configured similarly:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  taints:
                    - key: some-key
                      effect: NoSchedule
                      value: some-value
```
Applying this configuration will result in the following value being set:
`KubeadmConfigTemplate`:

```yaml
spec:
  joinConfiguration:
    nodeRegistration:
      taints:
        - key: some-key
          effect: NoSchedule
          value: some-value
```
1.13 - Users
Configure users for all machines in the cluster: the user's superuser capabilities via `sudo` user specifications, and the login authentication mechanism.

SSH authorized keys are simply public SSH keys that are used to authenticate a login. See the SSH man page for more information. For information on sudo user specifications, see the `sudo` documentation.

Local password authentication is disabled for the user by default. It is enabled only when a hashed password is provided.
Examples
Admin user with SSH public key login
Creates a user, grants the user the ability to run any command as the superuser, and allows you to log in via SSH using the configured username and the private key corresponding to the authorized public key.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          users:
            - name: username
              sshAuthorizedKeys:
                - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAua0lo8BiGWgvIiDCKnQDKL5uERHfnehm0ns5CEJpJw optionalcomment"
              sudo: "ALL=(ALL) NOPASSWD:ALL"
```
Admin user with serial console password login

Creates a user with the name `admin`, grants the user the ability to run any command as the superuser, and allows you to log in via the serial console using the username and password. Note that this does not allow you to log in via SSH using the username and password; in most cases, you must also configure the SSH server to allow password authentication.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          users:
            - name: admin
              hashedPassword: "$y$j9T$UraH8eN4XvapXBmmSaUrP0$Nyxdf1cJDGZcp0WDKu.CFHprrkPG4ubirqSqiD43Ix3"
              sudo: "ALL=(ALL) NOPASSWD:ALL"
```
2 - AWS
The customizations in this section are applicable only to AWS clusters. They will only be applied to clusters that use the AWS infrastructure provider, i.e. a CAPI `Cluster` that references an `AWSCluster`.
2.1 - AWS Additional Security Group Spec
The AWS additional security group customization allows the user to attach additional security groups to the created machines. The customization can be applied to both control plane and nodepool machines.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To specify additional security groups for all control plane and nodepool machines, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              additionalSecurityGroups:
                - id: "sg-0fcfece738d3211b8"
      - name: workerConfig
        value:
          aws:
            additionalSecurityGroups:
              - id: "sg-0fcfece738d3211b8"
```
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
```yaml
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  aws:
                    additionalSecurityGroups:
                      - id: "sg-0fcfece738d3211b8"
```
Applying this configuration will result in the following value being set:
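A sketch of the resulting patch, assuming the value lands on the `AWSMachineTemplate`'s `additionalSecurityGroups` field:

```yaml
# Hypothetical AWSMachineTemplate patch (sketch only):
spec:
  template:
    spec:
      additionalSecurityGroups:
        - id: "sg-0fcfece738d3211b8"
```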
2.2 - AWS AMI ID and Format spec
The AWS AMI customization allows the user to specify the AMI or AMI lookup arguments for an AWS machine. The AMI customization can be applied to both control plane and nodepool machines.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To specify the AMI ID or lookup format for all control plane and nodepool machines, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              ami:
                # Specify one of id or lookup.
                id: "ami-controlplane"
                # lookup:
                #   format: "my-cp-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
                #   org: "123456789"
                #   baseOS: "ubuntu-20.04"
      - name: workerConfig
        value:
          aws:
            ami:
              # Specify one of id or lookup.
              id: "ami-allWorkers"
              # lookup:
              #   format: "my-default-workers-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
              #   org: "123456789"
              #   baseOS: "ubuntu-20.04"
```
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
```yaml
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  ami:
                    # Specify one of id or lookup.
                    id: "ami-customWorker"
                    # lookup:
                    #   format: "gpu-workers-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
                    #   org: "123456789"
                    #   baseOS: "ubuntu-20.04"
```
Applying this configuration will result in the following value being set:
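A sketch of the resulting patch, assuming the value lands on the `AWSMachineTemplate`'s `ami` field:

```yaml
# Hypothetical AWSMachineTemplate patch (sketch only):
spec:
  template:
    spec:
      ami:
        id: "ami-controlplane"
```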
2.3 - Control Plane Load Balancer
The control-plane load balancer customization allows the user to modify the load balancer configuration for the control-plane's API server.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To use an internal ELB scheme, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            controlPlaneLoadBalancer:
              scheme: internal
```
Applying this configuration will result in the following value being set:

`AWSClusterTemplate`:

```yaml
spec:
  controlPlaneLoadBalancer:
    scheme: internal
```
2.4 - IAM Instance Profile
The IAM instance profile customization allows the user to specify the profile to use for control-plane and worker Machines.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To specify the IAM instance profile, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              iamInstanceProfile: custom-control-plane.cluster-api-provider-aws.sigs.k8s.io
      - name: workerConfig
        value:
          aws:
            iamInstanceProfile: custom-nodes.cluster-api-provider-aws.sigs.k8s.io
```
Applying this configuration will result in the following value being set:
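A sketch of the resulting patch, assuming the value lands on the `AWSMachineTemplate`'s `iamInstanceProfile` field:

```yaml
# Hypothetical AWSMachineTemplate patch (sketch only):
spec:
  template:
    spec:
      iamInstanceProfile: custom-control-plane.cluster-api-provider-aws.sigs.k8s.io
```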
2.5 - Instance type
The instance type customization allows the user to specify the instance type to use for control-plane and worker Machines.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To specify the instance type, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              instanceType: m5.xlarge
      - name: workerConfig
        value:
          aws:
            instanceType: m5.2xlarge
```
Applying this configuration will result in the following value being set:
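A sketch of the resulting patch, assuming the value lands on the `AWSMachineTemplate`'s `instanceType` field:

```yaml
# Hypothetical AWSMachineTemplate patch (sketch only):
spec:
  template:
    spec:
      instanceType: m5.xlarge
```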
2.6 - Network
The network customization allows the user to specify existing infrastructure to use for the cluster.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To specify an existing AWS VPC, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            network:
              vpc:
                id: vpc-1234567890
```
To also specify existing AWS Subnets, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            network:
              vpc:
                id: vpc-1234567890
              subnets:
                - id: subnet-1
                - id: subnet-2
                - id: subnet-3
```
Applying this configuration will result in the following value being set:

`AWSClusterTemplate`:

```yaml
spec:
  network:
    subnets:
      - id: subnet-1
      - id: subnet-2
      - id: subnet-3
    vpc:
      id: vpc-1234567890
```
2.7 - Region
The region customization allows the user to specify the region to deploy a cluster into.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To specify the AWS region to deploy into, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            region: us-west-2
```
Applying this configuration will result in the following value being set:

`AWSClusterTemplate`:

```yaml
spec:
  template:
    spec:
      region: us-west-2
```
3 - Docker
The customizations in this section are applicable only to Docker clusters. They will only be applied to clusters that use the Docker infrastructure provider, i.e. a CAPI `Cluster` that references a `DockerCluster`.
3.1 - Custom image
The custom image customization allows the user to specify the OCI image to use for control-plane and worker Machines.

This customization will be available when the provider-specific cluster configuration patch is included in the `ClusterClass`.
Example
To specify the custom image, use the following configuration:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            docker:
              customImage: ghcr.io/mesosphere/kind-node:v1.2.3-cp
      - name: workerConfig
        value:
          docker:
            customImage: ghcr.io/mesosphere/kind-node:v1.2.3-worker
```
The configuration above will apply `customImage` to all workers. You can further customize individual MachineDeployments by using the `overrides` field with the following configuration:
```yaml
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  docker:
                    customImage: ghcr.io/mesosphere/kind-node:v1.2.3-custom
```
Applying this configuration will result in the following value being set:
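A sketch of the resulting patch, assuming the value lands on the `DockerMachineTemplate`'s `customImage` field:

```yaml
# Hypothetical DockerMachineTemplate patch (sketch only):
spec:
  template:
    spec:
      customImage: ghcr.io/mesosphere/kind-node:v1.2.3-custom
```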
4 - Nutanix
The customizations in this section are applicable only to Nutanix clusters. They will only be applied to clusters that use the Nutanix infrastructure provider, i.e. a CAPI `Cluster` that references a `NutanixCluster`.
4.1 - Control Plane Endpoint
Configure the Control Plane Endpoint. This defines the host IP and port of the CAPX Kubernetes cluster.
Examples
Set Control Plane Endpoint
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          nutanix:
            controlPlaneEndpoint:
              host: x.x.x.x
              port: 6443
              virtualIP: {}
```
Applying this configuration will result in the following values being set:

`NutanixClusterTemplate`:

```yaml
spec:
  template:
    spec:
      controlPlaneEndpoint:
        host: x.x.x.x
        port: 6443
```

`KubeadmControlPlaneTemplate`:

```yaml
spec:
  kubeadmConfigSpec:
    files:
      - content: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-vip
            namespace: kube-system
          spec:
            ...
        owner: root:root
        path: /etc/kubernetes/manifests/kube-vip.yaml
        permissions: "0600"
    postKubeadmCommands:
      # Only added for clusters version >=v1.29.0
      - |-
        if [ -f /run/kubeadm/kubeadm.yaml ]; then
          sed -i 's#path: /etc/kubernetes/super-admin.conf#path: ...
        fi
    preKubeadmCommands:
      # Only added for clusters version >=v1.29.0
      - |-
        if [ -f /run/kubeadm/kubeadm.yaml ]; then
          sed -i 's#path: /etc/kubernetes/admin.conf#path: ...
        fi
```
4.2 - Machine Details
Configure the Machine Details of Control Plane and Worker nodes.
Examples
(Required) Set Machine details for Control Plane and Worker nodes
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            nutanix:
              machineDetails:
                bootType: legacy
                cluster:
                  name: pe-cluster-name
                  type: name
                image:
                  name: os-image-name
                  type: name
                memorySize: 4Gi
                subnets:
                  - name: subnet-name
                    type: name
                systemDiskSize: 40Gi
                vcpuSockets: 2
                vcpusPerSocket: 1
      - name: workerConfig
        value:
          nutanix:
            machineDetails:
              bootType: legacy
              cluster:
                name: pe-cluster-name
                type: name
              image:
                name: os-image-name
                type: name
              memorySize: 4Gi
              subnets:
                - name: subnet-name
                  type: name
              systemDiskSize: 40Gi
              vcpuSockets: 2
              vcpusPerSocket: 1
```
Applying this configuration will result in the following value being set:
- control-plane `NutanixMachineTemplate`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: nutanix-quick-start-cp-nmt
spec:
  template:
    spec:
      bootType: legacy
      cluster:
        name: pe-cluster-name
        type: name
      image:
        name: os-image-name
        type: name
      memorySize: 4Gi
      providerID: nutanix://vm-uuid
      subnet:
        - name: subnet-name
          type: name
      systemDiskSize: 40Gi
      vcpuSockets: 2
      vcpusPerSocket: 1
```
- worker `NutanixMachineTemplate`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: nutanix-quick-start-md-nmt
spec:
  template:
    spec:
      bootType: legacy
      cluster:
        name: pe-cluster-name
        type: name
      image:
        name: os-image-name
        type: name
      memorySize: 4Gi
      providerID: nutanix://vm-uuid
      subnet:
        - name: subnet-name
          type: name
      systemDiskSize: 40Gi
      vcpuSockets: 2
      vcpusPerSocket: 1
```
(Optional) Set Additional Categories for Control Plane and Worker nodes
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            nutanix:
              machineDetails:
                additionalCategories:
                  - key: example-key
                    value: example-value
      - name: workerConfig
        value:
          nutanix:
            machineDetails:
              additionalCategories:
                - key: example-key
                  value: example-value
```
Applying this configuration will result in the following value being set:
- control-plane `NutanixMachineTemplate`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: nutanix-quick-start-cp-nmt
spec:
  template:
    spec:
      additionalCategories:
        - key: example-key
          value: example-value
```
- worker `NutanixMachineTemplate`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: nutanix-quick-start-md-nmt
spec:
  template:
    spec:
      additionalCategories:
        - key: example-key
          value: example-value
```
(Optional) Set Project for Control Plane and Worker nodes
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            nutanix:
              machineDetails:
                project:
                  type: name
                  name: project-name
      - name: workerConfig
        value:
          nutanix:
            machineDetails:
              project:
                type: name
                name: project-name
```
Applying this configuration will result in the following value being set:
- control-plane `NutanixMachineTemplate`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: nutanix-quick-start-cp-nmt
spec:
  template:
    spec:
      project:
        type: name
        name: project-name
```
- worker `NutanixMachineTemplate`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: nutanix-quick-start-md-nmt
spec:
  template:
    spec:
      project:
        type: name
        name: project-name
```
(Optional) Add a GPU to a machine deployment
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: workerConfig
        value:
          nutanix:
            machineDetails:
              gpus:
                - type: name
                  name: "Ampere 40"
    workers:
      machineDeployments:
        - class: nutanix-quick-start-worker
          metadata:
            annotations:
              cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "1"
              cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
          name: gpu-0
```
Applying this configuration will result in the following value being set:
- worker `NutanixMachineTemplate`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: nutanix-quick-start-gpu-nmt
spec:
  template:
    spec:
      gpus:
        - type: name
          name: "Ampere 40"
```
4.3 - Prism Central Endpoint
Configure Prism Central Endpoint to create machines on.
Examples
Set Prism Central Endpoint
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          nutanix:
            prismCentralEndpoint:
              credentials:
                secretRef:
                  name: secret-name
              url: https://x.x.x.x:9440
              insecure: false
```
Applying this configuration will result in the following value being set:
`NutanixClusterTemplate`:

```yaml
spec:
  template:
    spec:
      prismCentral:
        address: x.x.x.x
        insecure: false
        port: 9440
        credentialRef:
          kind: Secret
          name: secret-name
```
Provide an Optional Trusted CA Bundle

If the Prism Central endpoint uses a self-signed certificate, you can provide an additional trust bundle to be used by the Nutanix provider. This is a base64-encoded, PEM-encoded x509 certificate of the root CA that was used to create the Prism Central certificate. See the Nutanix Security Guide for more information.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          nutanix:
            prismCentralEndpoint:
              # ...
              additionalTrustBundle: "LS0...="
```
Applying this configuration will result in the following value being set:
```yaml
spec:
  template:
    spec:
      prismCentral:
        # ...
        additionalTrustBundle:
          kind: String
          data: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
```