Many developers have found value in running Kubernetes on AWS. A CNCF survey found that over 60% of Kubernetes workloads run on AWS. To support this vast ecosystem of developers' needs, AWS released a new service called Amazon EKS (Amazon Elastic Container Service for Kubernetes) at re:Invent 2017. The new service helps run the Kubernetes open-source container-management framework, at scale, on AWS.
This article will cover the ‘Kubernetes on AWS’ world before and after EKS was released.
The Love for Kubernetes and AWS' Move to Support It
It all started with the developer community extensively using Kubernetes. This was because Kubernetes has the tools required to solve modern application problems. It can run anywhere: on a laptop in the form of Minikube, on-premise, or in the cloud. The Kubernetes API is extremely powerful. It can be thought of as a single abstraction layer that covers resources both within AWS and on-premise.
The Case: Kubernetes on AWS Before EKS
While using Kubernetes on AWS, users can take advantage of the underlying platform: they get all the performance, reliability, and breadth of features that come with AWS via Kubernetes cloud integrations. But they can also use that same familiar API on-premise or on a laptop. Thus, Kubernetes makes it easy to develop locally and move things to the cloud when ready. Above all, it has a really amazing and enthusiastic developer community.
The capability to deploy cloud-native applications: the functionality that ships with the Kubernetes package is really a set of building blocks for cloud-native applications, with first-class support for microservices.
And where Kubernetes clusters run is always critical.
Factors like the quality of the underlying platform, along with dynamics like speed, stability, scalability, and the integrations within the platform, all affect how much effort a user has to put in to build an application on Kubernetes.
If there's something nice off the shelf in AWS that a user wants to use, they can do that from Kubernetes. They need not implement a load balancer every time; they can just use the native version available in AWS. Here's a quick walkthrough of how to deploy a Kubernetes cluster on AWS.
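For example, exposing an application through the native AWS load balancer integration is a one-liner. A minimal sketch, assuming a Deployment named `my-app` (a hypothetical name) already exists in the cluster:

```shell
# Expose the existing "my-app" Deployment on port 80.
# On AWS, --type=LoadBalancer provisions an Elastic Load Balancer automatically.
kubectl expose deployment my-app --port=80 --target-port=8080 --type=LoadBalancer

# Watch for the ELB's DNS name to appear in the EXTERNAL-IP column.
kubectl get service my-app --watch
```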
Below is the typical architecture diagram of Kubernetes Cluster on AWS.
This is a natural deployment pattern, where a user runs masters and etcd across three AZs for a highly available control plane. Each Kubernetes master essentially runs a copy of the same components. Alongside the masters, a user also runs etcd, the core persistence layer for Kubernetes.
It is here that all the critical data of a cluster lives, so guarding these clusters is essential. Finally, a user needs to run the actual worker nodes where the applications run. These nodes can be deployed in Auto Scaling groups across multiple Availability Zones. This gives a lot of control over instance types: a user can use on-demand or reserved instances, and whatever instance type is pertinent.
The Case: Kubernetes on AWS After EKS
Before EKS, developers had to run all of this themselves, and failures were hard to handle. Hence, AWS built EKS. With this new service, developers need not worry about deploying the Kubernetes cluster or think about configuration management, etcd clusters, and master high availability.
What’s in store for users?
Developers using AWS always expect top-notch AWS integration, and today's developers want to take advantage of the breadth of the AWS platform. Some AWS resources are supported well in Kubernetes, but not everything.
Hence, EKS. According to AWS, EKS is a platform for enterprises to run production-grade workloads.
To elucidate further, EKS provides features and management capabilities that allow enterprises to run real workloads at real scale. It provides a native, upstream Kubernetes experience, so any modifications or improvements AWS makes at the back end of the service are transparent to the Kubernetes end user.
If EKS customers want to use additional AWS services, the integrations are seamless. Moreover, it eliminates undifferentiated heavy lifting.
Further, with EKS, the masters and etcd are managed by AWS, while the worker nodes are taken care of by the users. So, in the end, it will look like this.
What's noticeable is that the entire control plane is radically simplified. Instead of running the Kubernetes control plane in their own account, users connect to a managed Kubernetes endpoint in the AWS cloud.
This endpoint abstracts away the complexity of the Kubernetes control plane: the worker nodes check in to it, and a user can point kubectl at it, replacing all the complexity of running their own control plane.
A Step-by-Step Guide to Creating a Kubernetes cluster in AWS Console using EKS
Here’s a quick walk through on how to create a Kubernetes cluster in AWS Console using EKS.
1. Open the EKS preview page as shown below:
A page opens where the user needs to fill in the cluster name, Kubernetes version, VPC ID, and Role ARN.
Note: Currently, EKS supports Kubernetes 1.7.
2. Key in the VPC ID. The VPC determines where the worker nodes will run and where EKS needs to create resources in the user's account.
3. Key in the Role ARN. This is the IAM role that EKS assumes to manage resources in the user's account on behalf of the Kubernetes masters.
4. Click on Create to launch a Kubernetes cluster. It takes approximately 6 to 7 minutes to set up the cluster. The master endpoint is ready once the cluster is up and running.
5. Copy the master endpoint and add it to the kubeconfig file (~/.kube/config) in the home directory as shown below:
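One way to wire the endpoint into the kubeconfig is with `kubectl config`. A sketch, where the endpoint URL, certificate file, and the cluster name `eks-demo` are hypothetical placeholders:

```shell
# Register the EKS master endpoint as a cluster entry in ~/.kube/config.
kubectl config set-cluster eks-demo \
  --server=https://EXAMPLE1234567890.yl4.us-west-2.eks.amazonaws.com \
  --certificate-authority=./eks-demo-ca.crt

# Create a context that uses this cluster, then switch to it.
kubectl config set-context eks-demo --cluster=eks-demo
kubectl config use-context eks-demo
```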
6. Deploy pods to the cluster.
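With the context in place, a quick smoke test could deploy nginx and verify the pods schedule onto the worker nodes (a sketch; the names are arbitrary):

```shell
# Create a two-replica nginx Deployment.
kubectl run nginx --image=nginx --replicas=2

# Confirm the pods are running and see which worker nodes they landed on.
kubectl get pods -o wide
```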
Command Line Support
Amazon EKS also provides AWS CLI commands to perform operations on EKS clusters.
If a user wants to create a cluster, they can use the command line below:
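In the AWS CLI this looks roughly like the following. A sketch: the subnet IDs, security group, and role ARN are hypothetical placeholders that must already exist in the user's account:

```shell
# Create an EKS cluster. The role ARN lets EKS manage resources on the
# user's behalf; the VPC config says where the worker nodes will live.
aws eks create-cluster \
  --name eks-demo \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222,securityGroupIds=sg-cccc3333
```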
If a user wants to describe cluster details and fetch the metadata, they can use the command line below:
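Assuming a cluster named `eks-demo` (a hypothetical name), the describe call is:

```shell
# Fetch metadata (status, endpoint, VPC config) for the cluster.
aws eks describe-cluster --name eks-demo
```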
The cluster metadata will look like this:
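The JSON returned is along these lines (illustrative values, not real output; ARNs, endpoint, and subnet IDs are placeholders):

```json
{
  "cluster": {
    "name": "eks-demo",
    "arn": "arn:aws:eks:us-west-2:111122223333:cluster/eks-demo",
    "version": "1.7",
    "endpoint": "https://EXAMPLE1234567890.yl4.us-west-2.eks.amazonaws.com",
    "roleArn": "arn:aws:iam::111122223333:role/eks-service-role",
    "resourcesVpcConfig": {
      "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
      "securityGroupIds": ["sg-cccc3333"]
    },
    "status": "ACTIVE"
  }
}
```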
To list the clusters, use the command line below:
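A sketch:

```shell
# List all EKS clusters in the current region.
aws eks list-clusters
```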
To delete a cluster, use the command line below:
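A sketch, again assuming the hypothetical `eks-demo` cluster:

```shell
# Delete the cluster when it is no longer needed.
aws eks delete-cluster --name eks-demo
```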
EKS Master Visibility
EKS delivers all API logs to CloudTrail. When a user calls eks create-cluster, the call is recorded in CloudTrail; just like any other AWS service, all logs at the API layer land in CloudTrail. But there is a bunch of other activity that happens on the masters: the kube-apiserver logs, the kube-scheduler logs, the kube-controller-manager logs, and so on. These are available in CloudWatch Logs, so a user can aggregate them (for example, into an ELK stack) and monitor them.
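To spot-check that those API calls are being recorded, a user could query CloudTrail directly. A sketch, assuming EKS records the call under the event name `CreateCluster`:

```shell
# Look up recent CreateCluster API events recorded by CloudTrail.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
  --max-results 5
```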
Additionally, a user can also install Kubernetes add-ons on the worker nodes.
EKS also provides an additional layer of authentication with AWS IAM. The diagram below explains it all.
Even though Kubernetes has been around for a while now, it has taken the developer world by storm. As an open-source container-management framework, it helps run containers at scale. Now, with AWS support via EKS, developers will be able to build and run microservices and distributed applications seamlessly on Amazon Web Services.
— END —
P.S. : TotalCloud.io now supports EKS! Sign-up today, and see how easy it is to monitor EKS service in context with other AWS services in a visual console.
If you are more of a Docker person and trying to wrap your head around Docker Swarm versus Kubernetes, we have got you covered. Read here.
Originally published at blog.totalcloud.io.