Kubernetes: How many pods are available per node on AWS EKS?
When getting started with AWS EKS for building Kubernetes clusters, we should keep in mind that the number of IP addresses available on each node is limited, and that the limit depends on the instance size. I didn't know about this because I simply overlooked it in the official documentation. It was my mistake, but it can be a blind spot for others too, so I'll share my experience.
A few months after we started using our cluster, I noticed kubelet error logs like this:
```
RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "xxxxxx" network for pod "hoge-1602061920-snvck": NetworkPlugin cni failed to set up pod "hoge-1602061920-snvck_apps" network: add cmd: failed to assign an IP address to container
```
I investigated this problem and found out that the number of pods per node is limited by EC2 instance restrictions. Each EC2 instance type has a limit on how many ENIs (Elastic Network Interfaces) it can attach and how many IP addresses each ENI can hold, and these limits depend on the instance size. For example, an m5.large instance can have 3 ENIs with 10 IP addresses each, so 30 addresses in total. However, the primary address of each ENI cannot be used for pods, so 27 addresses are available for pods. As another example, an m5.4xlarge instance can have 8 ENIs with 30 IP addresses each, so 232 addresses are available for pods.
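To make the arithmetic concrete, here is a minimal Python sketch that reproduces the two examples above. The `ENI_LIMITS` table and the helper name `available_pod_ips` are made up for illustration, and the table only contains the two instance types mentioned in this post; the authoritative per-type ENI and IP limits are listed in the AWS documentation.

```python
# (max ENIs, IPv4 addresses per ENI) for the instance types discussed in this post.
# Values for other types should be looked up in the AWS documentation.
ENI_LIMITS = {
    "m5.large": (3, 10),
    "m5.4xlarge": (8, 30),
}

def available_pod_ips(instance_type: str) -> int:
    """IP addresses that can be handed out to pods on one node."""
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    # The primary IP of each ENI is reserved for the node itself,
    # so only the secondary IPs can be assigned to pods.
    return enis * (ips_per_eni - 1)

for instance_type in ENI_LIMITS:
    print(instance_type, available_pod_ips(instance_type))
# m5.large    -> 27
# m5.4xlarge  -> 232
```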
I'm not sure whether this limit is a problem for most people compared to resources like CPU and memory. However, it should be considered before using EKS, especially while the cluster is still small. In my case, after hitting the limit, I've been operating as follows:
- Use the NodeAffinity option to avoid running multiple pods on the same node, and don't increase the replica count beyond the number of nodes (one way to express this constraint is sketched after this list)
- When running batch jobs as CronJobs, adjust their schedules so they don't overlap
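The first bullet mentions NodeAffinity; a closely related way to keep replicas of the same workload off the same node is pod anti-affinity keyed on `kubernetes.io/hostname`. The sketch below is not my exact configuration but shows the idea using the official Kubernetes Python client; the `app=hoge` label and the container definition are hypothetical placeholders.

```python
from kubernetes import client

# Require that no two pods with the label app=hoge are scheduled
# onto the same node (topology key = the node's hostname).
anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(match_labels={"app": "hoge"}),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)

# Attach it to the pod template, e.g. when building a Deployment spec.
pod_spec = client.V1PodSpec(
    affinity=anti_affinity,
    containers=[client.V1Container(name="hoge", image="nginx")],
)
```

With a hard (`required...`) rule like this, extra replicas stay Pending instead of piling onto a node that has no free IP addresses, which matches the policy of not raising the replica count above the node count.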
References