
Creating task definitions – Containers as a Service (CaaS) and Serverless Computing for Containers

ECS tasks are similar to Kubernetes pods. They are the basic building blocks of ECS and comprise one or more related containers. Task definitions are the blueprints for ECS tasks and define what an ECS task should look like. They are very similar to docker-compose files and are written in YAML format; the ECS CLI supports docker-compose file versions 1, 2, and 3 for defining tasks. They help you define containers and their images, resource requirements, where they should run (EC2 or Fargate), volume and port mappings, and other networking requirements.

Tip

Using the docker-compose manifest to spin up tasks and services is a great idea, as it will help you align your configuration with an open standard.

A task is a finite process and runs only once. Even if it’s a long-running process, such as a web server, the task still runs once, in the sense that it waits for the long-running process (which, in theory, runs indefinitely) to end. A task’s life cycle follows the Pending -> Running -> Stopped states. When you schedule a task, it enters the Pending state and attempts to pull the image from the container registry. Then, it tries to start the container. Once the container has started, the task enters the Running state. When the container has completed executing or errored out, the task ends up in the Stopped state. A container with startup errors transitions directly from the Pending state to the Stopped state.

Now, let’s go ahead and deploy an nginx web server task within the ECS cluster we just created.

To access the resources for this section, cd into the following directory:

$ cd ~/modern-devops/ch7/ECS/tasks/EC2/

We’ll use docker-compose task definitions here. So, let’s start by defining the following docker-compose.yml file:

version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    logging:
      driver: awslogs
      options:
        awslogs-group: /aws/webserver
        awslogs-region: us-east-1
        awslogs-stream-prefix: ecs

The YAML file defines a web container with an nginx image with host port 80 mapped to container port 80. It uses the awslogs logging driver, which streams logs into Amazon CloudWatch. It will stream the logs to the /aws/webserver log group in the us-east-1 region with the ecs stream prefix.
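One caveat: the awslogs driver does not create the log group on your behalf by default, so if /aws/webserver does not exist yet, create it up front (the ECS CLI alternatively offers a --create-log-groups flag on its compose commands); for example:

$ aws logs create-log-group --log-group-name /aws/webserver \
  --region us-east-1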

The task definition also includes the resource definition—that is, the amount of resources we want to reserve for our task. Therefore, we will have to define the following ecs-params.yaml file:

version: 1
task_definition:
  services:
    web:
      cpu_shares: 100
      mem_limit: 524288000

This YAML file defines cpu_shares in CPU units (where 1,024 units correspond to one vCPU) and mem_limit in bytes for the container we plan to run. Now, let’s look at scheduling this task as an EC2 task.
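As a preview, and assuming the cluster-1 cluster from the Spinning up an ECS cluster section is up, a sketch of launching this task on EC2 with the ECS CLI might look like this:

$ ecs-cli compose --file docker-compose.yml \
  --ecs-params ecs-params.yaml up \
  --cluster cluster-1 --launch-type EC2

You can then watch the task move through the Pending, Running, and Stopped states with ecs-cli ps --cluster cluster-1.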

Installing the AWS and ECS CLIs – Containers as a Service (CaaS) and Serverless Computing for Containers

The AWS CLI is available as a deb package within the public apt repositories. To install it, run the following commands:

$ sudo apt update && sudo apt install awscli -y

$ aws --version

aws-cli/1.22.34 Python/3.10.6 Linux/5.19.0-1028-aws botocore/1.23.34

Installing the ECS CLI in the Linux ecosystem is simple. We need to download the binary and move it to the system path using the following commands:

$ sudo curl -Lo /usr/local/bin/ecs-cli \
  https://amazon-ecs-cli.s3.amazonaws.com/ecs-cli-linux-amd64-latest
$ sudo chmod +x /usr/local/bin/ecs-cli

Run the following command to check whether ecs-cli has been installed correctly:

$ ecs-cli --version

ecs-cli version 1.21.0 (bb0b8f0)

As we can see, ecs-cli has been successfully installed on our system.

The next step is to allow ecs-cli to connect with your AWS API. You need to export your AWS CLI environment variables for this. Run the following commands to do so:

$ export AWS_SECRET_ACCESS_KEY=…

$ export AWS_ACCESS_KEY_ID=…

$ export AWS_DEFAULT_REGION=…

Once we’ve set the environment variables, ecs-cli will use them to authenticate with the AWS API.
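Alternatively, the ECS CLI can persist these credentials in a named profile so that you don’t need to export them in every session; a minimal sketch, reusing the variables exported above:

$ ecs-cli configure profile --profile-name default \
  --access-key $AWS_ACCESS_KEY_ID \
  --secret-key $AWS_SECRET_ACCESS_KEY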

In the next section, we’ll spin up an ECS cluster using the ECS CLI.

Spinning up an ECS cluster

We can use the ECS CLI commands to spin up an ECS cluster. You can run your containers on EC2 or Fargate, so first, we will create a cluster that runs EC2 instances. Then, we will add Fargate tasks within the cluster.

To connect with your EC2 instances, you need to generate a key pair within AWS. To do so, run the following command:

$ aws ec2 create-key-pair --key-name ecs-keypair

The output of this command provides the key pair as JSON. Extract the KeyMaterial value from the JSON output and save it in a separate file called ecs-keypair.pem. Remember to replace the \n characters with new lines when you save the file.
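Alternatively, the AWS CLI can extract the key material for you with a JMESPath query, skipping the manual cleanup; run this instead of the previous command:

$ aws ec2 create-key-pair --key-name ecs-keypair \
  --query 'KeyMaterial' --output text > ecs-keypair.pem
$ chmod 400 ecs-keypair.pem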

Once we’ve generated the key pair, we can use the following command to create an ECS cluster using the ECS CLI:

$ ecs-cli up --keypair ecs-keypair --instance-type t2.micro \
  --size 2 --cluster cluster-1 --capability-iam

INFO[0002] Using recommended Amazon Linux 2 AMI with ECS Agent 1.72.0 and Docker version 20.10.23

INFO[0003] Created cluster cluster=cluster-1 region=us-east-1

INFO[0004] Waiting for your cluster resources to be created...

INFO[0130] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS

VPC created: vpc-0448321d209bf75e2

Security Group created: sg-0e30839477f1c9881

Subnet created: subnet-02200afa6716866fa

Subnet created: subnet-099582f6b0d04e419

Cluster creation succeeded.

When we issue this command, in the background, AWS spins up a stack of resources using CloudFormation. CloudFormation is AWS’s Infrastructure-as-Code (IaC) solution that helps you deploy infrastructure on AWS through reusable templates. The CloudFormation template consists of several resources such as a VPC, a security group, a subnet within the VPC, a route table, a route, a subnet route table association, an internet gateway, an IAM role, an instance profile, a launch configuration, an ASG, a VPC gateway attachment, and the cluster itself. The ASG contains two EC2 instances running and serving the cluster. Keep a copy of the output; we will need the details later during the exercises.
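If you want to dig into what was provisioned, you can query the stack directly; the ECS CLI typically names it amazon-ecs-cli-setup-<cluster-name>, so, assuming that convention, the following lists its resources:

$ aws cloudformation describe-stack-resources \
  --stack-name amazon-ecs-cli-setup-cluster-1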

Now that our cluster is up, we will spin up our first task.

Amazon ECS with EC2 and Fargate – Containers as a Service (CaaS) and Serverless Computing for Containers

Amazon ECS is a container orchestration platform that AWS offers. It is simple to use and manage, uses Docker behind the scenes, and can deploy your workloads to Amazon EC2, a virtual machine (VM)-based solution, or AWS Fargate, a serverless offering.

It is a highly scalable solution that deploys containers in seconds. It makes hosting, running, stopping, and starting your containers easy. Just as Kubernetes offers pods, ECS offers tasks that help you run your container workloads. A task can contain one or more containers grouped according to a logical relationship. You can also group one or more tasks into services. Services are similar to Kubernetes controllers, which manage tasks and can ensure that the required number of replicas of your tasks are running in the right place at the right time. ECS uses simple API calls to provide many functionalities, such as creating, updating, reading, and deleting tasks and services.
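For example, once a task definition is registered, a single API call creates a service that keeps two replicas of it running; a hedged sketch with the AWS CLI, where the web task definition name is an assumption:

$ aws ecs create-service --cluster cluster-1 \
  --service-name web --task-definition web \
  --desired-count 2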

ECS also allows you to place your containers according to multiple placement strategies while keeping high availability (HA) and resource optimization in mind. You can tweak the placement algorithm according to your priority—cost, availability, or a mix of both. So, you can use ECS to run one-time batch workloads or long-running microservices, all using a simple-to-use API interface.
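As an illustration, placement strategies can be passed at run time; the following hedged sketch spreads tasks across AZs and then bin-packs them by memory (the web task definition name is again an assumption):

$ aws ecs run-task --cluster cluster-1 --task-definition web --count 2 \
  --placement-strategy type=spread,field=attribute:ecs.availability-zone \
  type=binpack,field=memory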

ECS architecture

Before we explore the ECS architecture, it is important to understand some common AWS terminology. Let’s look at some AWS resources:

  • AWS Regions: An AWS Region is a geographical region where AWS provides its services. It is normally a city or a metropolitan region but can sometimes span multiple cities. It comprises multiple Availability Zones (AZs). Some examples of AWS Regions are us-east-1, us-west-1, ap-southeast-1, eu-central-1, and so on.
  • AWS AZs: AWS AZs are data centers within an AWS Region connected with low-latency, high-bandwidth networks. Most resources run within AZs. Examples of AZs are us-east-1a, us-east-1b, and so on.
  • AWS virtual private cloud (VPC): An AWS VPC is an isolated network resource you create within AWS. You associate a dedicated private IP address range to it from which the rest of your resources, such as EC2 instances, can derive their IP addresses. An AWS VPC spans an AWS Region.
  • Subnet: A subnet, as the name suggests, is a subnetwork within the VPC. You must subdivide the IP address ranges you provided to the VPC and associate them with subnets. Resources normally reside within subnets, and each subnet resides within a single AZ.
  • Route table: An AWS route table routes traffic within the VPC subnets and to the internet. Every AWS subnet is associated with a route table through subnet route table associations.
  • Internet gateways: An internet gateway allows connections to and from the internet to your AWS subnets.
  • Identity Access Management (IAM): AWS IAM helps you control access to resources by users and other AWS resources. They help you implement role-based access control (RBAC) and the principle of least privilege (PoLP).
  • Amazon EC2: EC2 allows you to spin up VMs, known as instances, within subnets.
  • AWS Auto Scaling groups (ASGs): An AWS ASG works with Amazon EC2 to provide HA and scalability to your instances. It monitors your EC2 instances and ensures that a defined number of healthy instances are always running. It also autoscales your instances as load increases, allowing you to handle more traffic. It uses the instance profile and launch configuration to decide on the properties of the new EC2 instances it spins up.
  • Amazon CloudWatch: Amazon CloudWatch is a monitoring and observability service. It allows you to collect, track, and monitor metrics and log files, and to set alarms that take automated actions on specific conditions. CloudWatch helps you understand application performance, health, and resource utilization.

Troubleshooting containers with busybox using an alias – Managing Advanced Kubernetes Resources

We use the following command to open a busybox session:

$ kubectl run busybox-test --image=busybox -it --rm --restart=Never -- <cmd>

Now, opening several busybox sessions during the day can be tiring. How about minimizing the overhead by using the following alias?

$ alias kbb='kubectl run busybox-test --image=busybox -it --rm --restart=Never --'

We can then open a shell session to a new busybox pod using the following command:

$ kbb sh

/ #
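Since the alias ends with --, anything that follows kbb runs as a one-off command inside the pod. For example, to test DNS resolution for a Service (my-service here is a hypothetical name):

$ kbb nslookup my-service.default.svc.cluster.local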

Now, that is much cleaner and easier. Likewise, you can also create aliases of other commands that you use frequently. Here’s an example:

$ alias kgp='kubectl get pods'

$ alias kgn='kubectl get nodes'

$ alias kgs='kubectl get svc'

$ alias kdb='kubectl describe'

$ alias kl='kubectl logs'

$ alias ke='kubectl exec -it'
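Remember that aliases declared interactively last only for the current session. To persist them across sessions, append them to your ~/.bashrc file; a minimal sketch covering the aliases above:

$ cat >> ~/.bashrc <<'EOF'
alias k='kubectl'
alias kgp='kubectl get pods'
alias kgn='kubectl get nodes'
alias kgs='kubectl get svc'
alias kdb='kubectl describe'
alias kl='kubectl logs'
alias ke='kubectl exec -it'
EOF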

You can define more aliases like these according to your needs. You may also be used to autocompletion within bash, where your commands autocomplete when you press Tab after typing a few characters. kubectl also provides autocompletion of commands, but not by default. Let’s now look at how to enable kubectl autocompletion within bash.

Using kubectl bash autocompletion

To enable kubectl bash autocompletion, use the following command:

$ echo "source <(kubectl completion bash)" >> ~/.bashrc

The command adds the kubectl completion bash command as a source to your .bashrc file. So, the next time you log in to your shell, you should be able to use kubectl autocomplete. That will save you a ton of time when typing commands.
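If you use the k alias from earlier, you can extend completion to it as well. The completion script defines a __start_kubectl helper, and the Kubernetes documentation suggests wiring the alias to it like so:

$ echo "complete -o default -F __start_kubectl k" >> ~/.bashrc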

Summary

We began this chapter by managing pods with Deployment and ReplicaSet resources and discussed some critical Kubernetes deployment strategies. We then looked into Kubernetes service discovery models and understood why we require a separate entity to expose containers to the internal or external world. We then looked at different Service resources and where to use them. We talked about Ingress resources and how to use them to create reverse proxies for our container workloads. We then delved into horizontal pod autoscaling and used multiple metrics to scale our pods automatically.

We looked at state considerations and learned about static and dynamic storage provisioning using PersistentVolume, PersistentVolumeClaim, and StorageClass resources, and talked about some best practices surrounding them. We looked at StatefulSet resources as essential resources that help you schedule and manage stateful containers. Finally, we looked at some best practices, tips, and tricks surrounding the kubectl command line and how to use them effectively.

The topics covered in this and the previous chapter are just the core of Kubernetes. Kubernetes is a vast tool with enough functionality to write an entire book, so these chapters only give you the gist of what it is all about. Please feel free to read about the resources in detail in the Kubernetes official documentation at https://kubernetes.io.

In the next chapter, we will delve into the world of the cloud and look at Container-as-a-Service (CaaS) and serverless offerings for containers.

Kubernetes command-line best practices, tips, and tricks – Managing Advanced Kubernetes Resources

For seasoned Kubernetes developers and administrators, kubectl is a command they run most of the time. The following steps will simplify your life, save you a ton of time, let you focus on more essential activities, and set you apart from the rest.

Using aliases

Most system administrators use aliases for an excellent reason—they save valuable time. Aliases in Linux are different names for commands, and they are mostly used to shorten the most frequently used commands; for example, ls -l becomes ll.

You can use the following aliases with kubectl to make your life easier.

k for kubectl

Yes—that’s right. By using the following alias, you can use k instead of typing kubectl:

$ alias k='kubectl'
$ k get node
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   5m7s    v1.26.1
kind-worker          Ready    <none>   4m33s   v1.26.1

That will save a lot of time and hassle.

Using kubectl --dry-run

kubectl --dry-run helps you generate YAML manifests from imperative commands and saves you a lot of typing time. You can write an imperative command to generate a resource and append the --dry-run=client -o yaml string to it to generate a YAML manifest from the imperative command. The command does not create the resource within the cluster; instead, it just outputs the manifest. The following command will generate a Pod manifest using --dry-run:

$ kubectl run nginx --image=nginx --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

And you now have the skeleton YAML file that you can edit according to your liking.

Now, imagine typing this command multiple times during the day! At some point, it becomes tiring. Why not shorten it by using the following alias?

$ alias kdr='kubectl --dry-run=client -o yaml'

You can then use the alias to generate other manifests.

To generate a Deployment resource manifest, use the following command:

$ kdr create deployment nginx --image=nginx
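In practice, you will often redirect the generated manifest to a file, tweak it, and then apply it; for example:

$ kdr create deployment nginx --image=nginx > nginx-deployment.yaml
$ kubectl apply -f nginx-deployment.yaml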

You can use the dry run to generate almost all resources from imperative commands. However, some resources, such as the DaemonSet resource, do not have an imperative command. For such resources, you can generate a manifest for the closest resource and modify it. A DaemonSet manifest is very similar to a Deployment manifest, so you can generate a Deployment manifest and change it into a DaemonSet manifest.
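A minimal sketch of that edit: starting from the Deployment manifest generated above, change kind to DaemonSet and drop the Deployment-only fields (replicas and strategy), since a DaemonSet runs one pod per eligible node:

apiVersion: apps/v1
kind: DaemonSet  # changed from Deployment; replicas and strategy removed
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}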

Now, let’s look at some of the most frequently used kubectl commands and their possible aliases.

kubectl apply and delete aliases

If you use manifests, you will use the kubectl apply and kubectl delete commands most of the time within your cluster, so it makes sense to use the following aliases:

$ alias kap='kubectl apply -f'

$ alias kad='kubectl delete -f'

You can then use them to apply or delete resources using the following commands:

$ kap nginx-deployment.yaml

$ kad nginx-deployment.yaml

While troubleshooting containers, most of us use busybox. Let’s see how to optimize it.