The customizations in this section are applicable only to EKS clusters. They will only be applied to clusters that
use the EKS infrastructure provider, i.e. a CAPI Cluster that references an AWSManagedControlPlane.
EKS
- 1: EKS Additional Tags
- 2: EKS Placement Group
- 3: EKS Placement Group Node Feature Discovery
- 4: Identity Reference
1 - EKS Additional Tags
The EKS additional tags customization allows the user to specify custom tags to be applied to AWS resources created by the EKS cluster.
The customization can be applied at the cluster level and worker node level.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify additional tags for EKS resources, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          eks:
            additionalTags:
              Environment: production
              Team: platform
              CostCenter: "12345"
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
spec:
  topology:
    # ...
    workers:
      machineDeployments:
        - class: default-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  eks:
                    additionalTags:
                      NodeType: worker
                      Workload: database
                      Environment: production
Tag Precedence
When tags are specified at multiple levels, the following precedence applies (higher precedence overrides lower):
- Worker level tags (highest precedence)
- Cluster level tags (lowest precedence)
This means that if the same tag key is specified at multiple levels, the worker level values will take precedence over the cluster level values.
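For example, if the same tag key is set at both levels, the worker value is what ends up on that MachineDeployment's instances. The values below are illustrative and not part of the examples above:

# Cluster level
clusterConfig:
  eks:
    additionalTags:
      Environment: production
# Worker level override for one MachineDeployment
workerConfig:
  eks:
    additionalTags:
      Environment: staging
# Resulting tag on that MachineDeployment's instances: Environment: staging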
Applying the configuration from the examples above will result in the following values being set:

AWSManagedControlPlane:

spec:
  template:
    spec:
      additionalTags:
        Environment: production
        Team: platform
        CostCenter: "12345"

worker AWSMachineTemplate:

spec:
  template:
    spec:
      additionalTags:
        Environment: production
        Team: platform
        CostCenter: "12345"
        NodeType: worker
        Workload: database
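Once the machines have been created, the tags can be spot-checked from the AWS side. A minimal example using the AWS CLI (the filter values are illustrative and assume the CLI is configured for the cluster's account and region):

# List EC2 instances carrying one of the cluster-level tags
aws ec2 describe-tags \
  --filters "Name=resource-type,Values=instance" "Name=key,Values=Team" "Name=value,Values=platform"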
2 - EKS Placement Group
The EKS placement group customization allows the user to specify placement groups for EKS worker nodes to control their placement strategy within AWS.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
What are Placement Groups?
AWS placement groups are logical groupings of instances within a single Availability Zone that influence how instances are placed on underlying hardware. They are useful for:
- Cluster Placement Groups: For applications that benefit from low network latency, high network throughput, or both
- Partition Placement Groups: For large distributed and replicated workloads, such as HDFS, HBase, and Cassandra
- Spread Placement Groups: For applications that have a small number of critical instances that should be kept separate
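Placement groups are regular EC2 resources and must exist before they are referenced (see the Important Notes below). As a point of reference, they can be created with the AWS CLI; the group names here are illustrative and the commands assume the CLI is configured for the target account and region:

# Cluster placement group for low-latency, high-throughput workloads
aws ec2 create-placement-group --group-name eks-worker-pg --strategy cluster

# Partition placement group with three partitions for distributed workloads
aws ec2 create-placement-group --group-name eks-data-pg --strategy partition --partition-count 3

# Spread placement group for a small number of critical instances
aws ec2 create-placement-group --group-name eks-critical-pg --strategy spread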
Configuration
The placement group configuration supports the following field:
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | The name of the placement group (1-255 characters) |
Examples
EKS Worker Placement Groups
To specify placement groups for EKS worker nodes:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: workerConfig
        value:
          eks:
            placementGroup:
              name: "eks-worker-pg"
Multiple Node Groups with Different Placement Groups
You can configure different placement groups for different node groups:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: workerConfig
        value:
          eks:
            placementGroup:
              name: "general-worker-pg"
    workers:
      machineDeployments:
        - class: high-performance-worker
          name: md-0
          variables:
            overrides:
              - name: workerConfig
                value:
                  eks:
                    placementGroup:
                      name: "high-performance-pg"
        - class: general-worker
          name: md-1
          variables:
            overrides:
              - name: workerConfig
                value:
                  eks:
                    placementGroup:
                      name: "general-worker-pg"
Resulting EKS Configuration
Applying the placement group configuration will result in the following value being set in the EKS worker node configuration:
worker AWSMachineTemplate:

spec:
  template:
    spec:
      placementGroupName: eks-worker-pg
Best Practices
- Placement Group Types: Choose the appropriate placement group type based on your workload:
  - Cluster: For applications requiring low latency and high throughput
  - Partition: For large distributed workloads that need fault isolation
  - Spread: For critical instances that need maximum availability
- Naming Convention: Use descriptive names that indicate the purpose and type of the placement group
- Availability Zone: Placement groups are constrained to a single Availability Zone, so plan your cluster topology accordingly
- Instance Types: Some instance types have restrictions on placement groups (e.g., some bare metal instances)
- Capacity Planning: Consider the placement group capacity limits when designing your cluster
- EKS Node Groups: Consider using different placement groups for different node groups based on workload requirements
Important Notes
- Placement groups must be created in AWS before they can be referenced
- Placement groups are constrained to a single Availability Zone
- You cannot move an existing instance into a placement group
- Some instance types cannot be launched in placement groups
- Placement groups have capacity limits that vary by type and instance family
- EKS managed node groups support placement groups for enhanced networking performance
3 - EKS Placement Group Node Feature Discovery
The EKS placement group NFD (Node Feature Discovery) customization automatically discovers and labels EKS worker nodes with their placement group information, enabling workload scheduling based on placement group characteristics.
This customization will be available when the
provider-specific cluster configuration patch is included in the ClusterClass.
What is Placement Group NFD?
Placement Group NFD automatically discovers the placement group information for each EKS worker node and creates node labels that can be used for workload scheduling. This enables:
- Workload Affinity: Schedule pods on nodes within the same placement group for low latency
- Fault Isolation: Schedule critical workloads on nodes in different placement groups
- Resource Optimization: Use placement group labels for advanced scheduling strategies
How it Works
The NFD customization:
- Deploys a Discovery Script: Automatically installs a script on each EKS worker node that queries AWS metadata
- Queries AWS Metadata: Uses EC2 instance metadata to discover placement group information
- Creates Node Labels: Generates Kubernetes node labels with placement group details
- Updates Continuously: Refreshes labels as nodes are added or moved
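As a rough illustration of how such a hook works (this is a sketch, not the exact script shipped by the customization), the script only needs to read the instance metadata and write a feature file that Node Feature Discovery turns into the labels listed below:

#!/bin/bash
# Sketch of a placement group discovery hook; the actual script installed by
# the customization may differ in its details.

# IMDSv2: request a short-lived token, then query the placement metadata.
TOKEN=$(curl -sf -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
PG=$(curl -sf -H "X-aws-ec2-metadata-token: ${TOKEN}" \
  "http://169.254.169.254/latest/meta-data/placement/group-name")
PARTITION=$(curl -sf -H "X-aws-ec2-metadata-token: ${TOKEN}" \
  "http://169.254.169.254/latest/meta-data/placement/partition-number")

# Write an NFD feature file; NFD turns each key=value line into a node label
# prefixed with feature.node.kubernetes.io/.
OUT=/etc/kubernetes/node-feature-discovery/features.d/placementgroup
: > "${OUT}"
[ -n "${PG}" ] && echo "aws-placement-group=${PG}" >> "${OUT}"
[ -n "${PARTITION}" ] && echo "partition=${PARTITION}" >> "${OUT}"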
Generated Node Labels
The NFD customization creates the following node labels:
| Label | Description | Example |
|---|---|---|
| feature.node.kubernetes.io/aws-placement-group | The name of the placement group | my-eks-worker-pg |
| feature.node.kubernetes.io/partition | The partition number (for partition placement groups) | 0, 1, 2 |
Configuration
The placement group NFD customization is automatically enabled when a placement group is configured for EKS workers. No additional configuration is required.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: workerConfig
        value:
          eks:
            placementGroup:
              name: "eks-worker-pg"
Usage Examples
Workload Affinity
Schedule pods on nodes within the same placement group for low latency:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: high-performance-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: high-performance-app
  template:
    metadata:
      labels:
        app: high-performance-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: feature.node.kubernetes.io/aws-placement-group
                    operator: In
                    values: ["eks-worker-pg"]
      containers:
        - name: app
          image: my-app:latest
Fault Isolation
Distribute critical workloads across different placement groups:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: critical-app
  template:
    metadata:
      labels:
        app: critical-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values: ["critical-app"]
              topologyKey: feature.node.kubernetes.io/aws-placement-group
      containers:
        - name: app
          image: critical-app:latest
Partition-Aware Scheduling
For partition placement groups, schedule workloads on specific partitions:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: distributed-database
spec:
  serviceName: distributed-database
  replicas: 3
  selector:
    matchLabels:
      app: distributed-database
  template:
    metadata:
      labels:
        app: distributed-database
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: feature.node.kubernetes.io/partition
                    operator: In
                    values: ["0", "1", "2"]
      containers:
        - name: database
          image: my-database:latest
Verification
You can verify that the NFD labels are working by checking the node labels:
# Check all nodes and their placement group labels
kubectl get nodes --show-labels | grep placement-group
# Check specific node labels
kubectl describe node <node-name> | grep placement-group
# Check partition labels
kubectl get nodes --show-labels | grep partition
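The labels can also be displayed as dedicated columns using kubectl's -L (--label-columns) flag; the label keys match the table in Generated Node Labels above:

# Show the placement group labels as columns
kubectl get nodes \
  -L feature.node.kubernetes.io/aws-placement-group \
  -L feature.node.kubernetes.io/partition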
Troubleshooting
Check NFD Script Status
Verify that the discovery script is running:
# Check if the script exists on nodes
kubectl debug node/<node-name> -it --image=busybox -- chroot /host ls -la /etc/kubernetes/node-feature-discovery/source.d/
# Check script execution
kubectl debug node/<node-name> -it --image=busybox -- chroot /host cat /etc/kubernetes/node-feature-discovery/features.d/placementgroup
Integration with Other Features
Placement Group NFD works seamlessly with:
- Pod Affinity/Anti-Affinity: Use placement group labels for advanced scheduling
- Topology Spread Constraints: Distribute workloads across placement groups
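As an illustration of the topology spread integration, a spread constraint can use the placement group label as its topology key to spread replicas evenly across placement groups. The Deployment name, app label, and skew values below are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: spread-app
  template:
    metadata:
      labels:
        app: spread-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: feature.node.kubernetes.io/aws-placement-group
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: spread-app
      containers:
        - name: app
          image: my-app:latest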
Security Considerations
- The discovery script queries AWS instance metadata (IMDSv2)
- No additional IAM permissions are required beyond standard EKS node permissions
- Labels are automatically managed and do not require manual intervention
- The script runs with appropriate permissions and security context
4 - Identity Reference
The identity reference customization allows the user to specify the AWS identity to use when reconciling the EKS cluster. This identity reference can be used to authenticate with AWS services using different identity types such as AWSClusterControllerIdentity, AWSClusterRoleIdentity, or AWSClusterStaticIdentity.
This customization is available for EKS clusters when the
provider-specific cluster configuration patch is included in the ClusterClass.
For detailed information about AWS multi-tenancy and identity management, see the Cluster API AWS Multi-tenancy documentation.
Example
To specify the AWS identity reference for an EKS cluster, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          eks:
            identityRef:
              kind: AWSClusterStaticIdentity
              name: my-aws-identity
Identity Types
The following identity types are supported:
- AWSClusterControllerIdentity: Uses the default identity for the controller
- AWSClusterRoleIdentity: Assumes a role using the provided source reference
- AWSClusterStaticIdentity: Uses static credentials stored in a secret
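The identity objects themselves are defined by Cluster API Provider AWS rather than by this customization. As a sketch based on the CAPA multi-tenancy documentation linked above (names, namespace, and API version may need adjusting for your installation), an AWSClusterStaticIdentity and its credentials secret look roughly like this:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterStaticIdentity
metadata:
  name: my-aws-identity
spec:
  # Name of the secret holding the credentials; CAPA expects it in the
  # namespace where the CAPA controllers run (capa-system by default).
  secretRef: my-aws-identity-creds
  # Controls which namespaces may reference this identity; an empty object
  # allows all namespaces.
  allowedNamespaces: {}
---
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-identity-creds
  namespace: capa-system
stringData:
  AccessKeyID: REPLACE_ME
  SecretAccessKey: REPLACE_ME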
Example with Different Identity Types
Using AWSClusterRoleIdentity
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          eks:
            identityRef:
              kind: AWSClusterRoleIdentity
              name: my-role-identity
Using AWSClusterStaticIdentity
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <NAME>
spec:
  topology:
    variables:
      - name: clusterConfig
        value:
          eks:
            identityRef:
              kind: AWSClusterStaticIdentity
              name: my-static-identity
Applying the configuration from the first example will result in the following value being set:

AWSManagedControlPlane:

spec:
  template:
    spec:
      identityRef:
        kind: AWSClusterStaticIdentity
        name: my-aws-identity
Notes
- If no identity is specified, the default identity for the controller will be used
- The referenced identity resource must exist in the management cluster before the cluster is created
- For AWSClusterStaticIdentity, the referenced secret must contain the required AWS credentials
- For AWSClusterRoleIdentity, the role must be properly configured with the necessary permissions