The customizations in this section are applicable only to AWS clusters. They will only be applied to clusters that
use the AWS infrastructure provider, i.e. a CAPI Cluster that references an AWSCluster.
AWS
- 1: AWS Additional Security Group Spec
- 2: AWS Additional Tags
- 3: AWS AMI ID and Format spec
- 4: AWS Placement Group
- 5: AWS Placement Group Node Feature Discovery
- 6: AWS Volumes Configuration
- 7: Control Plane Load Balancer
- 8: IAM Instance Profile
- 9: Identity Reference
- 10: Instance type
- 11: Network
- 12: Region
1 - AWS Additional Security Group Spec
The AWS additional security group customization allows the user to attach additional security groups to the created machines.
The customization can be applied to both control plane and nodepool machines.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify additional security groups for all control plane and nodepools, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              additionalSecurityGroups:
                - id: "sg-0fcfece738d3211b8"
      - name: workerConfig
        value:
          aws:
            additionalSecurityGroups:
              - id: "sg-0fcfece738d3211b8"
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  aws:
                    additionalSecurityGroups:
                      - id: "sg-0fcfece738d3211b8"
Applying this configuration will result in the following value being set:
control-plane
AWSMachineTemplate:
spec:
  template:
    spec:
      additionalSecurityGroups:
        - id: sg-0fcfece738d3211b8
worker
AWSMachineTemplate:
spec:
  template:
    spec:
      additionalSecurityGroups:
        - id: sg-0fcfece738d3211b8
2 - AWS Additional Tags
The AWS additional tags customization allows the user to specify custom tags to be applied to AWS resources created by the cluster.
The customization can be applied at the cluster level, control plane level, and worker node level.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify additional tags for all AWS resources, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            additionalTags:
              Environment: production
              Team: platform
              CostCenter: "12345"
          controlPlane:
            aws:
              additionalTags:
                NodeType: control-plane
      - name: workerConfig
        value:
          aws:
            additionalTags:
              NodeType: worker
              Workload: general
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  aws:
                    additionalTags:
                      NodeType: worker
                      Workload: database
                      Environment: production
Tag Precedence
When tags are specified at multiple levels, the following precedence applies (higher precedence overrides lower):
- Worker level tags and Control plane level tags (highest precedence)
- Cluster level tags (lowest precedence)
This means that if the same tag key is specified at multiple levels, the worker and control plane level values will take precedence over the cluster level values, as illustrated below.
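As a hypothetical illustration of this precedence (the tag values below are made up for the example), specifying the same key at both levels would look like this:

- name: clusterConfig
  value:
    aws:
      additionalTags:
        Environment: production   # cluster level (lowest precedence)
- name: workerConfig
  value:
    aws:
      additionalTags:
        Environment: staging      # worker level wins for worker machines

In this case the worker machines (and the AWS resources created for them) would be tagged Environment: staging, while cluster-scoped resources and the control plane machines would keep Environment: production.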
Applying the configuration from the example at the top of this section will result in the following values being set:
AWSCluster:
spec:
  template:
    spec:
      additionalTags:
        Environment: production
        Team: platform
        CostCenter: "12345"
control-plane
AWSMachineTemplate:
spec:
  template:
    spec:
      additionalTags:
        Environment: production
        Team: platform
        CostCenter: "12345"
        NodeType: control-plane
worker
AWSMachineTemplate:
spec:
  template:
    spec:
      additionalTags:
        Environment: production
        Team: platform
        CostCenter: "12345"
        NodeType: worker
        Workload: general
3 - AWS AMI ID and Format spec
The AWS AMI customization allows the user to specify the AMI ID or AMI lookup arguments for an AWS machine.
The AMI customization can be applied to both control plane and nodepool machines.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the AMI ID or format for all control plane and nodepools, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              ami:
                # Specify one of id or lookup.
                id: "ami-controlplane"
                # lookup:
                #   format: "my-cp-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
                #   org: "123456789"
                #   baseOS: "ubuntu-20.04"
      - name: workerConfig
        value:
          aws:
            ami:
              # Specify one of id or lookup.
              id: "ami-allWorkers"
              # lookup:
              #   format: "my-default-workers-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
              #   org: "123456789"
              #   baseOS: "ubuntu-20.04"
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  aws:
                    ami:
                      # Specify one of id or lookup.
                      id: "ami-customWorker"
                      # lookup:
                      #   format: "gpu-workers-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
                      #   org: "123456789"
                      #   baseOS: "ubuntu-20.04"
Applying this configuration will result in the following value being set:
control-plane
AWSMachineTemplate:
spec:
  template:
    spec:
      ami: ami-controlplane
      # lookupFormat: "my-cp-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
      # lookupOrg: "123456789"
      # lookupBaseOS: "ubuntu-20.04"
worker
AWSMachineTemplate:
spec:
  template:
    spec:
      ami: ami-customWorker
      # lookupFormat: "gpu-workers-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
      # lookupOrg: "123456789"
      # lookupBaseOS: "ubuntu-20.04"
4 - AWS Placement Group
The AWS placement group customization allows the user to specify placement groups for control-plane and worker machines to control their placement strategy within AWS.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
What are Placement Groups?
AWS placement groups are logical groupings of instances within a single Availability Zone that influence how instances are placed on underlying hardware. They are useful for:
- Cluster Placement Groups: For applications that benefit from low network latency, high network throughput, or both
- Partition Placement Groups: For large distributed and replicated workloads, such as HDFS, HBase, and Cassandra
- Spread Placement Groups: For applications that have a small number of critical instances that should be kept separate
Configuration
The placement group configuration supports the following field:
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | The name of the placement group (1-255 characters) |
Examples
Control Plane and Worker Placement Groups
To specify placement groups for both control plane and worker machines:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              placementGroup:
                name: "control-plane-pg"
      - name: workerConfig
        value:
          aws:
            placementGroup:
              name: "worker-pg"
Control Plane Only
To specify a placement group for control plane machines only:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              placementGroup:
                name: "control-plane-pg"
MachineDeployment Overrides
You can customize individual MachineDeployments by using the overrides field:
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  aws:
                    placementGroup:
                      name: "special-worker-pg"
Resulting CAPA Configuration
Applying the placement group configuration will result in the following value being set:
control-plane
AWSMachineTemplate:
spec:
  template:
    spec:
      placementGroupName: control-plane-pg
worker
AWSMachineTemplate:
spec:
  template:
    spec:
      placementGroupName: worker-pg
Best Practices
- Placement Group Types: Choose the appropriate placement group type based on your workload:
  - Cluster: For applications requiring low latency and high throughput
  - Partition: For large distributed workloads that need fault isolation
  - Spread: For critical instances that need maximum availability
- Naming Convention: Use descriptive names that indicate the purpose and type of the placement group
- Availability Zone: Placement groups are constrained to a single Availability Zone, so plan your cluster topology accordingly
- Instance Types: Some instance types have restrictions on placement groups (e.g., some bare metal instances)
- Capacity Planning: Consider the placement group capacity limits when designing your cluster
Important Notes
- Placement groups must be created in AWS before they can be referenced
- Placement groups are constrained to a single Availability Zone
- You cannot move an existing instance into a placement group
- Some instance types cannot be launched in placement groups
- Placement groups have capacity limits that vary by type and instance family
5 - AWS Placement Group Node Feature Discovery
The AWS placement group NFD (Node Feature Discovery) customization automatically discovers and labels nodes with their placement group information, enabling workload scheduling based on placement group characteristics.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
What is Placement Group NFD?
Placement Group NFD automatically discovers the placement group information for each node and creates node labels that can be used for workload scheduling. This enables:
- Workload Affinity: Schedule pods on nodes within the same placement group for low latency
- Fault Isolation: Schedule critical workloads on nodes in different placement groups
- Resource Optimization: Use placement group labels for advanced scheduling strategies
How it Works
The NFD customization:
- Deploys a Discovery Script: Automatically installs a script on each node that queries AWS metadata
- Queries AWS Metadata: Uses EC2 instance metadata to discover placement group information
- Creates Node Labels: Generates Kubernetes node labels with placement group details
- Updates Continuously: Refreshes labels as nodes are added or moved
Generated Node Labels
The NFD customization creates the following node labels:
| Label | Description | Example |
|---|---|---|
| feature.node.kubernetes.io/aws-placement-group | The name of the placement group | my-cluster-pg |
| feature.node.kubernetes.io/partition | The partition number (for partition placement groups) | 0, 1, 2 |
Configuration
The placement group NFD customization is automatically enabled when a placement group is configured. No additional configuration is required.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              placementGroup:
                name: "control-plane-pg"
      - name: workerConfig
        value:
          aws:
            placementGroup:
              name: "worker-pg"
Usage Examples
Workload Affinity
Schedule pods on nodes within the same placement group for low latency:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: high-performance-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: high-performance-app
  template:
    metadata:
      labels:
        app: high-performance-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: feature.node.kubernetes.io/aws-placement-group
                    operator: In
                    values: ["worker-pg"]
      containers:
        - name: app
          image: my-app:latest
Fault Isolation
Distribute critical workloads across different placement groups:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: critical-app
  template:
    metadata:
      labels:
        app: critical-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values: ["critical-app"]
              topologyKey: feature.node.kubernetes.io/aws-placement-group
      containers:
        - name: app
          image: critical-app:latest
Partition-Aware Scheduling
For partition placement groups, schedule workloads on specific partitions:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: distributed-database
spec:
  replicas: 3
  selector:
    matchLabels:
      app: distributed-database
  template:
    metadata:
      labels:
        app: distributed-database
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: feature.node.kubernetes.io/partition
                    operator: In
                    values: ["0", "1", "2"]
      containers:
        - name: database
          image: my-database:latest
Verification
You can verify that the NFD labels are working by checking the node labels:
# Check all nodes and their placement group labels
kubectl get nodes --show-labels | grep placement-group
# Check specific node labels
kubectl describe node <node-name> | grep placement-group
# Check partition labels
kubectl get nodes --show-labels | grep partition
Troubleshooting
Check NFD Script Status
Verify that the discovery script is running:
# Check if the script exists on nodes
kubectl debug node/<node-name> -it --image=busybox -- chroot /host ls -la /etc/kubernetes/node-feature-discovery/source.d/
# Check script execution
kubectl debug node/<node-name> -it --image=busybox -- chroot /host cat /etc/kubernetes/node-feature-discovery/features.d/placementgroup
Integration with Other Features
Placement Group NFD works seamlessly with:
- Pod Affinity/Anti-Affinity: Use placement group labels for advanced scheduling
- Topology Spread Constraints: Distribute workloads across placement groups
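As a sketch of the Topology Spread Constraints integration above (the Deployment name and image are placeholders, and this assumes the placement group label is populated as described in this section), replicas can be spread evenly across placement groups by using the label as a topology key:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-app                  # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: spread-app
  template:
    metadata:
      labels:
        app: spread-app
    spec:
      topologySpreadConstraints:
        # Keep the number of replicas per placement group within 1 of each other.
        - maxSkew: 1
          topologyKey: feature.node.kubernetes.io/aws-placement-group
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: spread-app
      containers:
        - name: app
          image: spread-app:latest  # placeholder image

With whenUnsatisfiable: DoNotSchedule the scheduler refuses to place a replica that would violate the skew; use ScheduleAnyway for a best-effort spread instead.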
Security Considerations
- The discovery script queries AWS instance metadata (IMDSv2)
- No additional IAM permissions are required beyond standard node permissions
- Labels are automatically managed and do not require manual intervention
- The script runs with appropriate permissions and security context
6 - AWS Volumes Configuration
The AWS volumes customization allows the user to specify configuration for both root and non-root storage volumes for AWS machines.
The volumes customization can be applied to both control plane and worker machines.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Configuration Options
The volumes configuration supports two types of volumes:
- Root Volume: The primary storage volume for the instance (typically /dev/sda1)
- Non-Root Volumes: Additional storage volumes that can be attached to the instance
Volume Configuration Fields
Each volume can be configured with the following fields:
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| deviceName | string | No | Device name for the volume (e.g., /dev/sda1, /dev/sdf) | - |
| size | int64 | No | Size in GiB (minimum 8) | Based on AMI, usually 20 GiB |
| type | string | No | EBS volume type (gp2, gp3, io1, io2) | - |
| iops | int64 | No | IOPS for provisioned volumes (io1, io2, gp3) | - |
| throughput | int64 | No | Throughput in MiB/s (gp3 only) | - |
| encrypted | bool | No | Whether the volume should be encrypted | false |
| encryptionKey | string | No | KMS key ID or ARN for encryption | AWS default key |
Supported Volume Types
- gp2: General Purpose SSD (up to 16,000 IOPS)
- gp3: General Purpose SSD with configurable IOPS and throughput
- io1: Provisioned IOPS SSD (up to 64,000 IOPS)
- io2: Provisioned IOPS SSD with higher durability (up to 64,000 IOPS)
Examples
Root Volume Only
To specify only a root volume configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              volumes:
                root:
                  deviceName: "/dev/sda1"
                  size: 100
                  type: "gp3"
                  iops: 3000
                  throughput: 125
                  encrypted: true
                  encryptionKey: "arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012"
      - name: workerConfig
        value:
          aws:
            volumes:
              root:
                size: 200
                type: "gp3"
                iops: 4000
                throughput: 250
                encrypted: true
Non-Root Volumes Only
To specify only additional non-root volumes:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              volumes:
                nonroot:
                  - deviceName: "/dev/sdf"
                    size: 500
                    type: "gp3"
                    iops: 4000
                    throughput: 250
                    encrypted: true
                  - deviceName: "/dev/sdg"
                    size: 1000
                    type: "gp2"
                    encrypted: false
      - name: workerConfig
        value:
          aws:
            volumes:
              nonroot:
                - deviceName: "/dev/sdf"
                  size: 200
                  type: "io1"
                  iops: 10000
                  encrypted: true
Both Root and Non-Root Volumes
To specify both root and non-root volumes:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              volumes:
                root:
                  size: 100
                  type: "gp3"
                  iops: 3000
                  throughput: 125
                  encrypted: true
                nonroot:
                  - deviceName: "/dev/sdf"
                    size: 500
                    type: "gp3"
                    iops: 4000
                    throughput: 250
                    encrypted: true
                  - deviceName: "/dev/sdg"
                    size: 1000
                    type: "gp2"
                    encrypted: false
      - name: workerConfig
        value:
          aws:
            volumes:
              root:
                size: 200
                type: "gp3"
                iops: 4000
                throughput: 250
                encrypted: true
              nonroot:
                - deviceName: "/dev/sdf"
                  size: 100
                  type: "io1"
                  iops: 10000
                  encrypted: true
MachineDeployment Overrides
You can customize individual MachineDeployments by using the overrides field:
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  aws:
                    volumes:
                      root:
                        size: 500
                        type: "gp3"
                        iops: 10000
                        throughput: 500
                        encrypted: true
                      nonroot:
                        - deviceName: "/dev/sdf"
                          size: 1000
                          type: "io2"
                          iops: 20000
                          encrypted: true
Resulting CAPA Configuration
Applying the volumes configuration will result in the following values being set in the AWSMachineTemplate:
Root Volume Configuration
When a root volume is specified, it will be set in the rootVolume field:
spec:
  template:
    spec:
      rootVolume:
        deviceName: "/dev/sda1"
        size: 100
        type: "gp3"
        iops: 3000
        throughput: 125
        encrypted: true
        encryptionKey: "arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012"
Non-Root Volumes Configuration
When non-root volumes are specified, they will be set in the nonRootVolumes field:
spec:
  template:
    spec:
      nonRootVolumes:
        - deviceName: "/dev/sdf"
          size: 500
          type: "gp3"
          iops: 4000
          throughput: 250
          encrypted: true
        - deviceName: "/dev/sdg"
          size: 1000
          type: "gp2"
          encrypted: false
EKS Configuration
For EKS clusters, the volumes configuration follows the same structure but is specified under the EKS worker configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: workerConfig
        value:
          eks:
            volumes:
              root:
                size: 200
                type: "gp3"
                iops: 4000
                throughput: 250
                encrypted: true
              nonroot:
                - deviceName: "/dev/sdf"
                  size: 500
                  type: "gp3"
                  iops: 4000
                  throughput: 250
                  encrypted: true
Best Practices
- Root Volume: Always specify a root volume for consistent boot disk configuration
- Encryption: Enable encryption for sensitive workloads using either AWS default keys or customer-managed KMS keys
- IOPS and Throughput: Use gp3 volumes for better price/performance ratio with configurable IOPS and throughput
- Device Names: Use standard device naming conventions (/dev/sda1 for root, /dev/sdf onwards for additional volumes)
- Size Planning: Consider future growth when sizing volumes, as resizing EBS volumes requires downtime
- Volume Types: Choose appropriate volume types based on workload requirements:
  - gp2/gp3: General purpose workloads
  - io1/io2: High-performance database workloads requiring consistent IOPS
7 - Control Plane Load Balancer
The control-plane load balancer customization allows the user to modify the load balancer configuration for the control-plane's API server.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To use an internal ELB scheme, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            controlPlaneLoadBalancer:
              scheme: internal
Applying this configuration will result in the following value being set:
AWSCluster:
spec:
  controlPlaneLoadBalancer:
    scheme: internal
8 - IAM Instance Profile
The IAM instance profile customization allows the user to specify the profile to use for control-plane and worker Machines.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the IAM instance profile, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              iamInstanceProfile: custom-control-plane.cluster-api-provider-aws.sigs.k8s.io
      - name: workerConfig
        value:
          aws:
            iamInstanceProfile: custom-nodes.cluster-api-provider-aws.sigs.k8s.io
Applying this configuration will result in the following value being set:
control-plane
AWSMachineTemplate:
spec:
  template:
    spec:
      iamInstanceProfile: custom-control-plane.cluster-api-provider-aws.sigs.k8s.io
worker
AWSMachineTemplate:
spec:
  template:
    spec:
      iamInstanceProfile: custom-nodes.cluster-api-provider-aws.sigs.k8s.io
9 - Identity Reference
The identity reference customization allows the user to specify the AWS identity to use when reconciling the cluster. This identity reference can be used to authenticate with AWS services using different identity types such as AWSClusterControllerIdentity, AWSClusterRoleIdentity, or AWSClusterStaticIdentity.
This customization is available for AWS clusters when the
provider-specific cluster configuration patch is included in the ClusterClass.
For detailed information about AWS multi-tenancy and identity management, see the Cluster API AWS Multi-tenancy documentation.
Example
To specify the AWS identity reference for an AWS cluster, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            identityRef:
              kind: AWSClusterStaticIdentity
              name: my-aws-identity
Identity Types
The following identity types are supported:
- AWSClusterControllerIdentity: Uses the default identity for the controller
- AWSClusterRoleIdentity: Assumes a role using the provided source reference
- AWSClusterStaticIdentity: Uses static credentials stored in a secret
Example with Different Identity Types
Using AWSClusterRoleIdentity
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            identityRef:
              kind: AWSClusterRoleIdentity
              name: my-role-identity
Using AWSClusterStaticIdentity
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            identityRef:
              kind: AWSClusterStaticIdentity
              name: my-static-identity
Applying this configuration will result in the following value being set:
AWSCluster:
spec:
  template:
    spec:
      identityRef:
        kind: AWSClusterStaticIdentity
        name: my-aws-identity
Notes
- If no identity is specified, the default identity for the controller will be used
- The identity reference must exist in the cluster before creating the cluster
- For AWSClusterStaticIdentity, the referenced secret must contain the required AWS credentials
- For AWSClusterRoleIdentity, the role must be properly configured with the necessary permissions
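The identity objects referenced above are defined by Cluster API Provider AWS rather than by this customization. The following is a hedged sketch based on the upstream CAPA multi-tenancy documentation; the resource names, role ARN, and credential values are placeholders, and the apiVersion and field names should be verified against the CAPA version installed in your management cluster:

# Static credentials: CAPA expects the secret in the namespace where its controller runs
# (capa-system by default), with the credential keys shown below.
apiVersion: v1
kind: Secret
metadata:
  name: my-static-identity-secret           # placeholder
  namespace: capa-system
type: Opaque
stringData:
  AccessKeyID: AKIAIOSFODNN7EXAMPLE         # placeholder
  SecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY  # placeholder
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterStaticIdentity
metadata:
  name: my-static-identity
spec:
  secretRef: my-static-identity-secret      # name of the secret above
  allowedNamespaces: {}                     # empty selector allows use from any namespace
---
# Role-based identity: assumes the given role using another identity as the source.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterRoleIdentity
metadata:
  name: my-role-identity
spec:
  roleARN: arn:aws:iam::123456789012:role/capa-manager   # placeholder ARN
  sourceIdentityRef:
    kind: AWSClusterControllerIdentity
    name: default
  allowedNamespaces: {}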
10 - Instance type
The instance type customization allows the user to specify the instance type to use for control-plane and worker Machines.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the instance type, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          controlPlane:
            aws:
              instanceType: m5.xlarge
      - name: workerConfig
        value:
          aws:
            instanceType: m5.2xlarge
Applying this configuration will result in the following value being set:
control-plane
AWSMachineTemplate:
spec:
  template:
    spec:
      instanceType: m5.xlarge
worker
AWSMachineTemplate:
spec:
  template:
    spec:
      instanceType: m5.2xlarge
11 - Network
The network customization allows the user to specify existing infrastructure to use for the cluster.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify an existing AWS VPC, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            network:
              vpc:
                id: vpc-1234567890
To also specify existing AWS Subnets, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            network:
              vpc:
                id: vpc-1234567890
              subnets:
                - id: subnet-1
                - id: subnet-2
                - id: subnet-3
Applying this configuration will result in the following value being set:
AWSCluster:
spec:
  network:
    subnets:
      - id: subnet-1
      - id: subnet-2
      - id: subnet-3
    vpc:
      id: vpc-1234567890
12 - Region
The region customization allows the user to specify the region to deploy a cluster into.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the AWS region to deploy into, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          aws:
            region: us-west-2
Applying this configuration will result in the following value being set:
AWSCluster:
spec:
  template:
    spec:
      region: us-west-2