Amazon Elastic Kubernetes Service (EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
EKS is certified Kubernetes-conformant, so it is guaranteed to run any standard Kubernetes application, tool, or plugin without the need to modify your code.
EKS significantly simplifies Kubernetes deployment on AWS. It provides a managed control plane (including automatic security patches to the control plane nodes), control plane nodes spread across multiple AWS Availability Zones to reduce single points of failure, and Pod networking out of the box. As you would expect, EKS is also well integrated with the wider AWS ecosystem: the AWS Load Balancer Controller manages AWS Elastic Load Balancers, and the Amazon EFS CSI driver and Amazon EBS CSI driver manage the lifecycle of storage file systems and volumes.
Key workload considerations in EKS
If you plan to run a large number of small pods on an EKS worker node, you may find that you are limited by IP availability for the pods even if the instance has spare resources such as CPU and memory. In that case, you will have to spin up another EKS worker node to accommodate your pods even though the current worker node is underutilized.

This happens because, by default, Amazon EKS supports native VPC networking through the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes, which gives each Kubernetes pod the same IP address inside the pod as it has on the VPC network. This works well for security use cases where you want pods inside the cluster to receive IPs from the VPC CIDR range, but it inadvertently caps the number of pods per node at the number of IP addresses the ENIs of your EC2 instance type can support (and, cluster-wide, at the IPs available in the VPC). Please check the maximum number of pods an EKS worker node can contain here.
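The per-node pod cap can be estimated from the instance's ENI limits. Below is a rough sketch using the formula EKS applies; the ENI count and IPs-per-ENI figures are the published values for an m5.large and should be checked against the EC2 documentation for your instance type:

```shell
# Formula: max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# (one address per ENI is reserved as the ENI's primary IP; the +2
# accounts for host-networking pods such as aws-node and kube-proxy)
enis=3          # m5.large supports 3 ENIs
ips_per_eni=10  # each ENI supports 10 IPv4 addresses
echo $(( enis * (ips_per_eni - 1) + 2 ))   # prints 29
```

So an m5.large tops out at 29 pods regardless of how much CPU and memory remains free, which is why a second node can become necessary while the first is underutilized.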
Amazon is working on this proposal to increase pod density, but there is no timeline for when it will be implemented.
EKS worker nodes also maintain a default number of IPs in a warm pool, ready to attach to pods, based on the configuration of the aws-node daemonset. Under heavy load, these default parameters might not provide adequate performance: the scheduler may assign pods to a worker node that does not have enough IPs left in the warm pool. Those pods will fail due to IP unavailability until EKS successfully attaches another ENI to the worker node, which can take up to 10 seconds. It is therefore important to set appropriate values for these configuration parameters based on how your workload behaves under load. Please see the code for more details.
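The warm pool is tuned through environment variables that the VPC CNI plugin reads from the aws-node daemonset, such as WARM_IP_TARGET, MINIMUM_IP_TARGET, and WARM_ENI_TARGET. As a sketch (the value below is illustrative, not a recommendation for your workload):

```shell
# Keep 10 spare IPs warm on each node so bursts of pod scheduling
# do not have to wait for a new ENI attachment (illustrative value;
# tune against your own workload's scaling behaviour).
kubectl set env daemonset aws-node -n kube-system WARM_IP_TARGET=10

# Inspect the CNI environment variables currently set on the daemonset.
kubectl describe daemonset aws-node -n kube-system | grep -A 5 Environment
```

Raising WARM_IP_TARGET trades a larger standing reservation of VPC IPs for lower pod-start latency under bursty scheduling, so the right value depends on both your scaling pattern and how tight your VPC CIDR is.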