Setting up an EKS Cluster and Managing Applications with kubectl

A step-by-step guide to deploying and managing Kubernetes applications on AWS

In this blog post, we'll walk through setting up an Amazon Elastic Kubernetes Service (EKS) cluster and using kubectl from your laptop to deploy and manage applications on it. This guide will help you get started with containerized applications on AWS using Kubernetes.

Prerequisites

Before we begin, make sure you have the following:

  • An AWS account with appropriate permissions
  • AWS CLI installed and configured on your laptop
  • kubectl installed on your laptop
  • eksctl installed on your laptop
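A quick way to confirm these tools are in place before proceeding (version numbers will vary, and the last command requires valid AWS credentials):

```shell
# Confirm each prerequisite is installed and on your PATH
aws --version
kubectl version --client
eksctl version

# Confirm the AWS CLI is configured with working credentials
aws sts get-caller-identity
```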

Setting up the EKS Cluster

Let's start by creating an EKS cluster using eksctl, which simplifies the process significantly.

  1. Open your terminal and run the following command to create an EKS cluster:
    eksctl create cluster --name my-eks-cluster --region us-west-2 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4 --managed
  2. Wait for the cluster creation to complete. This process typically takes about 15-20 minutes.
  3. Once the cluster is ready, eksctl will automatically update your kubectl configuration file, allowing you to interact with your new EKS cluster.
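Once creation finishes, you can confirm the cluster and its node group exist. These commands use the cluster name and region from the example above:

```shell
# List the clusters eksctl knows about in this region
eksctl get cluster --region us-west-2

# Show the managed node group created alongside the cluster
eksctl get nodegroup --cluster my-eks-cluster --region us-west-2

# Confirm the worker nodes have registered with the control plane
kubectl get nodes
```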

Configuring kubectl

To ensure kubectl is properly configured to work with your new EKS cluster:

  1. Run the following command to update your kubeconfig:
    aws eks update-kubeconfig --name my-eks-cluster --region us-west-2
  2. Verify the connection by checking the cluster info:
    kubectl cluster-info

You should see information about your Kubernetes control plane and CoreDNS.
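To dig a little deeper than cluster-info, you can check which context kubectl is using and list the system pods that EKS runs for you:

```shell
# Show which cluster context kubectl is currently pointed at
kubectl config current-context

# System components (CoreDNS, kube-proxy, aws-node) run in kube-system
kubectl get pods -n kube-system
```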

Deploying an Application

Now that we have our EKS cluster set up and kubectl configured, let's deploy a simple application:

  1. Create a file named deployment.yaml with the following content:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
            - containerPort: 80
  2. Apply the deployment using kubectl:
    kubectl apply -f deployment.yaml
  3. Verify the deployment:
    kubectl get deployments

You should see your nginx-deployment with 3 replicas.
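A few follow-up checks are useful to confirm the rollout actually completed; these assume the deployment name and labels from the manifest above:

```shell
# Block until all 3 replicas are available
kubectl rollout status deployment/nginx-deployment

# List the pods the deployment created, using its label selector
kubectl get pods -l app=nginx

# Show events and replica details for troubleshooting
kubectl describe deployment nginx-deployment
```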

Exposing the Application

To make our application accessible, we need to create a Kubernetes Service:

  1. Create a file named service.yaml with the following content:
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: LoadBalancer
  2. Apply the service:
    kubectl apply -f service.yaml
  3. Check the service status:
    kubectl get services

Wait for the EXTERNAL-IP column to show an address. On EKS this will be an Elastic Load Balancer DNS name rather than an IP; this is the address where you can access your application.
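One way to retrieve the external address and test it from the command line (the jsonpath field assumes the load balancer exposes a hostname, as it does on EKS):

```shell
# Extract the load balancer hostname from the service status
EXTERNAL=$(kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# The ELB can take a few minutes to pass health checks before it responds
curl "http://${EXTERNAL}"
```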

Managing Your Application

Here are some useful kubectl commands for managing your application:

  • Scale the deployment:
    kubectl scale deployment nginx-deployment --replicas=5
  • Update the image:
    kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
  • View logs:
    kubectl logs deployment/nginx-deployment
  • Execute a command in a container:
    kubectl exec -it deployment/nginx-deployment -- /bin/bash
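After an image update like the one above, the rollout subcommands let you monitor the change and back it out if needed:

```shell
# Watch the rolling update progress
kubectl rollout status deployment/nginx-deployment

# Show the revision history of the deployment
kubectl rollout history deployment/nginx-deployment

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/nginx-deployment
```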

Cleaning Up

When you're done experimenting, don't forget to delete your EKS cluster to avoid unnecessary charges:

eksctl delete cluster --name my-eks-cluster --region us-west-2
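One caveat: the LoadBalancer service provisioned an ELB outside the cluster, and cluster deletion can stall or leave it orphaned if it still exists. Deleting the Kubernetes resources first avoids this (names match the examples above):

```shell
# Remove the service (and the ELB it created) before tearing down the cluster
kubectl delete service nginx-service
kubectl delete deployment nginx-deployment

# Then delete the cluster itself
eksctl delete cluster --name my-eks-cluster --region us-west-2
```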

Conclusion

In this guide, we've walked through setting up an EKS cluster, configuring kubectl, deploying an application, and managing it using various kubectl commands. This setup provides a solid foundation for running containerized applications on AWS using Kubernetes.

Remember to always follow AWS best practices and consider factors like security, scalability, and cost optimization when deploying production workloads.