Cloud Certifications

terraform plan – Infrastructure as Code (IaC) with Terraform

To run a Terraform plan, use the following command:

$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_resource_group.rg will be created
  + resource "azurerm_resource_group" "rg" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "terraform-exercise"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Note: You didn’t use the -out option to save this plan, so Terraform can’t guarantee to take exactly these actions if you run terraform apply now.

The plan output tells us that if we run terraform apply immediately, it will create a single terraform-exercise resource group. It also notes that since we did not save this plan, a subsequent apply is not guaranteed to perform precisely these actions. Things might change in the meantime, so Terraform will rerun the plan and prompt us for a yes when applying. Thus, you should save the plan to a file if you don’t want surprises.
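For scripted workflows, a saved plan can also be rendered as machine-readable JSON with terraform show -json. As a rough sketch, using a hand-abridged sample of that JSON (a real plan contains many more fields), the planned actions can be counted from the resource_changes list:

```shell
# Abridged sample of what `terraform show -json <plan-file>` emits;
# only the resource_changes/actions structure is kept here.
plan_json='{"resource_changes":[{"address":"azurerm_resource_group.rg","change":{"actions":["create"]}}]}'

# Count the "create" actions in the plan
creates=$(printf '%s' "$plan_json" | grep -o '"create"' | wc -l | tr -d ' ')
echo "to add: $creates"   # prints: to add: 1
```

This kind of check is handy in CI pipelines, where a non-empty plan can be used to gate a manual approval step.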

Tip

Always save terraform plan output to a file and use that file to apply the changes. This avoids last-minute surprises from things that changed in the background and ensures apply does exactly what was reviewed, which matters especially when the plan is reviewed as part of your process.

So, let’s go ahead and save the plan to a file first using the following command:

$ terraform plan -out rg_terraform_exercise.tfplan

This time, the plan is saved to a file called rg_terraform_exercise.tfplan. We can use this file to apply the changes subsequently.

terraform apply

To apply the changes using the plan file, run the following command:

$ terraform apply "rg_terraform_exercise.tfplan"

azurerm_resource_group.rg: Creating…

azurerm_resource_group.rg: Creation complete after 2s [id=/subscriptions/id/resourceGroups/terraform-exercise]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

And that’s it! Terraform has applied the configuration. Let’s use the Azure CLI to verify whether the resource group is created.

Run the following command to list all resource groups within your subscription:

$ az group list
[
  {
    "id": "/subscriptions/id/resourceGroups/terraform-exercise",
    "location": "westeurope",
    "name": "terraform-exercise",
    ...
  }
]

We can see that our resource group has been created and appears in the list.

There might be instances when apply is partially successful. In that case, Terraform will automatically taint resources it believes weren’t created successfully. Such resources will be recreated automatically in the next run. If you want to taint a resource for recreation manually, you can use the terraform taint command:

$ terraform taint <resource>

Suppose we want to destroy the resource group as we no longer need it. We can use terraform destroy for that:

$ terraform destroy

terraform init – Infrastructure as Code (IaC) with Terraform

To initialize a Terraform workspace, run the following command:

$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "3.63.0"...
- Installing hashicorp/azurerm v3.63.0...
- Installed hashicorp/azurerm v3.63.0 (signed by HashiCorp)

Terraform has created a lock file, .terraform.lock.hcl, to record the provider selections it made previously. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run terraform init in the future.

Terraform has been successfully initialized!

As the Terraform workspace has been initialized, we can create an Azure resource group to start working with the cloud.
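Incidentally, the .terraform.lock.hcl file mentioned in the init output is plain HCL and easy to inspect. The following is a sketch with an abridged sample lock file (the real file also records provider checksums), showing that the pinned provider version can be read straight out of it:

```shell
# Abridged sample .terraform.lock.hcl; a real one also contains a hashes list
cat > sample.lock.hcl <<'EOF'
provider "registry.terraform.io/hashicorp/azurerm" {
  version     = "3.63.0"
  constraints = "3.63.0"
}
EOF

# Read the pinned provider version out of the lock file
pinned=$(grep 'version' sample.lock.hcl)
echo "$pinned"

rm sample.lock.hcl
```

Because the file pins exact versions, committing it to version control is what lets teammates and CI get identical provider selections.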

Creating the first resource – Azure resource group

We must use the azurerm_resource_group resource within the main.tf file to create an Azure resource group. Add the following to your main.tf file to do so:

resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.rg_location
}

As we’ve used two variables, we’ve got to declare those, so add the following to the vars.tf file:

variable "rg_name" {
  type        = string
  description = "The resource group name"
}

variable "rg_location" {
  type        = string
  description = "The resource group location"
}

Then, we need to add the resource group name and location to the terraform.tfvars file.

Therefore, add the following to the terraform.tfvars file:

rg_name     = "terraform-exercise"
rg_location = "West Europe"

So, now we’re ready to run a plan, but before we do so, let’s use terraform fmt to format our files into the canonical standard.

terraform fmt

The terraform fmt command formats the .tf files into a canonical standard. Use the following command to format your files:

$ terraform fmt
terraform.tfvars
vars.tf

The command lists the files that it formatted. The next step is to validate your configuration.

terraform validate

The terraform validate command validates the current configuration and checks whether there are any syntax errors. To validate your configuration, run the following:

$ terraform validate
Success! The configuration is valid.

The success output denotes that our configuration is valid. If there were any errors, they would have been highlighted in the validation output.

Tip

Always run fmt and validate before every Terraform plan. It saves you a ton of planning time and helps you keep your configuration in good shape.

As the configuration is valid, we are ready to run a plan.

Authentication and authorization with Azure – Infrastructure as Code (IaC) with Terraform

The simplest way to authenticate and authorize with Azure is to log in to your account using the Azure CLI. When you use the Azure provider within your Terraform file, it will automatically act as your account and do whatever it needs to. Now, this sounds dangerous. Admins generally have a lot of access, and having a tool that acts as an admin might not be a great idea. What if you want to plug Terraform into your CI/CD pipelines? Well, there is another way to do it – by using Azure service principals. Azure service principals allow you to access the required features without using a named user account. You can then apply the principle of least privilege to the service principal and provide only the necessary access.

Before configuring the service principal, let’s install the Azure CLI on our machine. To do so, run the following command:

$ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

The preceding command will download a shell script and execute it using bash. The script will then automatically download and configure the Azure CLI. To confirm whether the Azure CLI is installed successfully, run the following command:

$ az --version
azure-cli    2.49.0

We see that the Azure CLI is correctly installed on the system. Now, let’s go ahead and configure the service principal.

To configure the Azure service principal, follow these steps.

Log in to Azure using the following command and follow the steps it prompts you through. You must browse to the specified URL and enter the given code. Once you’ve logged in, you will get a JSON response that includes some details, something like the following:

$ az login
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXXXX to authenticate:
[
  {
    "id": "00000000-0000-0000-0000-0000000000000",
    ...
  }
]

Make a note of the id attribute, which is the subscription ID, and if you have more than one subscription, you can use the following to set it to the correct one:

$ export SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
$ az account set --subscription="$SUBSCRIPTION_ID"
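If you would rather script this step than copy the id by hand, the subscription ID can be pulled out of the JSON with a little sed. The following sketch runs against a placeholder response shaped like the abridged az login output above; on a real machine, az account show --query id -o tsv is the more direct route:

```shell
# Placeholder response shaped like the abridged `az login` output
login_json='[ { "id": "00000000-0000-0000-0000-000000000000" } ]'

# Extract the first "id" value from the JSON
subscription_id=$(printf '%s\n' "$login_json" | sed -n 's/.*"id": "\([^"]*\)".*/\1/p')
echo "$subscription_id"
```

The extracted value can then be exported as SUBSCRIPTION_ID as shown above.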

Use the following command to create a service principal with the contributor role to allow Terraform to manage the subscription’s infrastructure.

Tip

Follow the principle of least privilege while granting access to the service principal. Do not give privileges thinking you might need them in the future. If any future access is required, you can grant it later.

We use contributor access for simplicity, but finer-grained access is possible and should be used:

$ az ad sp create-for-rbac --role="Contributor" \
  --scopes="/subscriptions/$SUBSCRIPTION_ID"
Creating 'Contributor' role assignment under scope '/subscriptions/<SUBSCRIPTION_ID>'
The output includes credentials that you must protect. Ensure you do not include these credentials in your code or check the credentials into your source control (for more information, see https://aka.ms/azadsp-cli):
{
  "appId": "00000000-0000-0000-0000-0000000000000",
  "displayName": "azure-cli-2023-07-02-09-13-40",
  "password": "00000000000.xx-00000000000000000",
  "tenant": "00000000-0000-0000-0000-0000000000000"
}

We’ve successfully created the service principal. The response JSON consists of appId, password, and tenant. We will need these to configure Terraform to use the service principal. In the next section, let’s define the Azure Terraform provider with the details.
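One common way to hand these values to Terraform without hardcoding them is via the azurerm provider's environment variables, which the provider reads automatically. A minimal sketch with placeholder values (never commit real credentials anywhere):

```shell
# The azurerm provider picks up service principal credentials from these
# environment variables; the values below are placeholders, not real secrets.
export ARM_CLIENT_ID="00000000-0000-0000-0000-0000000000000"        # appId
export ARM_CLIENT_SECRET="00000000000.xx-00000000000000000"         # password
export ARM_TENANT_ID="00000000-0000-0000-0000-0000000000000"        # tenant
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-0000000000000"  # subscription

# Sanity check that the variables are set (without printing the secret)
[ -n "$ARM_CLIENT_SECRET" ] && echo "credentials exported"
```

This keeps secrets out of your .tf files and your source control, which is especially useful in CI/CD pipelines.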

Installing Terraform – Infrastructure as Code (IaC) with Terraform

Installing Terraform is simple; go to https://www.terraform.io/downloads.html and follow the instructions for your platform. Most of it will require you to download a binary and move it to your system path.

Since we’ve been using Ubuntu throughout this book, I will show the installation on Ubuntu. Use the following commands to use the apt package manager to install Terraform:

$ wget -O- https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list
$ sudo apt update && sudo apt install terraform

Check whether Terraform has been installed successfully with the following command:

$ terraform version

Terraform v1.5.2

It shows that Terraform has been installed successfully. Terraform uses Terraform providers to interact with cloud providers, so let’s look at those in the next section.

Terraform providers

Terraform has a decentralized architecture. While the Terraform CLI contains Terraform’s core functionality and provides all functionalities not related to any specific cloud provider, Terraform providers provide the interface between the Terraform CLI and the cloud providers themselves. This decentralized approach has allowed public cloud vendors to offer their Terraform providers so that their customers can use Terraform to manage infrastructure in their cloud. Such is Terraform’s popularity that it has now become an essential requirement for every public cloud provider to offer a Terraform provider.

We will interact with Azure for this chapter’s entirety and use the Azure Terraform provider for our activity.

To access the resources for this section, cd into the following:

$ cd ~/modern-devops/ch8/terraform-exercise/

Before we go ahead and configure the provider, we need to understand how Terraform needs to authenticate and authorize with the Azure APIs.

Introduction to IaC – Infrastructure as Code (IaC) with Terraform

IaC is the concept of using code to define infrastructure. While most people can visualize infrastructure as something tangible, virtual infrastructure is already commonplace and has existed for around two decades. Cloud providers provide a web-based console through which you can manage your infrastructure intuitively. But the process is not repeatable or recorded.

If you spin up a set of infrastructure components using the console in one environment and want to replicate it in another, it is a duplication of effort. To solve this problem, cloud platforms provide APIs to manipulate resources within the cloud and some command-line tools that can help trigger the APIs. You can start writing scripts using commands to create the infrastructure and parameterize them to use the same scripts in another environment. Well, that solves the problem, right?

Not really! Writing scripts is an imperative way of managing infrastructure. Though you can still call it IaC, its problem is that it does not effectively manage infrastructure changes. Let me give you a few examples:

  • What would happen if you needed to modify something already in the script? Changing the script somewhere in the middle and rerunning the entire thing may create havoc with your infrastructure. Imperative management of infrastructure is not idempotent. So, managing changes becomes a problem.
  • What if someone manually changes the script-managed infrastructure using the console? Will your script be able to detect it correctly? What if you want to change the same thing using a script? It will soon start to get messy.
  • With the advent of hybrid cloud architecture, most organizations use multiple cloud platforms for their needs. When you are in such a situation, managing multiple clouds with imperative scripts soon becomes a problem. Different clouds have different ways of interacting with their APIs and have distinct command-line tools.

The solution to all these problems is a declarative IaC solution such as Terraform. HashiCorp’s Terraform is the most popular IaC tool available on the market. It helps you automate and manage your infrastructure using code and can run on various platforms. As it is declarative, you just need to define what you need (the desired end state) instead of describing how to achieve it. It has the following features:

  • It supports multiple cloud platforms via providers and exposes a single declarative HashiCorp Configuration Language (HCL)-based interface to interact with it. Therefore, it allows you to manage various cloud platforms using a similar language and syntax. So, having a few Terraform experts within your team can handle all your IaC needs.
  • It tracks the state of the resources it manages using state files and supports local and remote backends to store and manage them. That helps in making the Terraform configuration idempotent. So, if someone manually changes a Terraform-managed resource, Terraform can detect the difference in the next run and prompt corrective action to bring it to the defined configuration. The admin can then absorb the change or resolve any conflicts before applying it.
  • It enables GitOps in infrastructure management. With Terraform, you can have the infrastructure configuration alongside application code, making versioning, managing, and releasing infrastructure the same as managing code. You can also include code scanning and gating using pull requests so that someone can review and approve the changes to higher environments before you apply them. A great power indeed!
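As a toy illustration of the reconciliation idea from the second point (this is not how Terraform is implemented; real state lives in a JSON state file and the comparison is per-attribute), drift detection boils down to comparing the desired configuration against the recorded/actual state:

```shell
# Desired configuration (from the .tf files) vs. actual state of the resource
desired="location=westeurope"
actual="location=eastus"   # e.g., someone changed it manually in the console

# A declarative tool proposes whatever change closes the gap
if [ "$desired" = "$actual" ]; then
  result="no changes"
else
  result="drift detected: update location"
fi
echo "$result"
```

Running the same comparison twice against an already-corrected resource yields "no changes", which is the idempotency property described above.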

Terraform has multiple offerings – open source, cloud, and enterprise. The open source offering is a simple command-line interface (CLI)-based tool that you can download on any supported operating system (OS) and use. The cloud and enterprise offerings are more of a wrapper on top of the open source one. They provide a web-based GUI and advanced features such as policy as code with Sentinel, cost analysis, private modules, GitOps, and CI/CD pipelines.

This chapter will discuss the open source offering and its core functions.

Terraform open source is divided into two main parts – Terraform Core and Terraform providers, as seen in the following diagram:

Figure 8.1 – Terraform architecture

Let’s look at the functions of both components:

  • Terraform Core is the CLI that we will use to interact with Terraform. It takes two main inputs – your Terraform configuration files and the existing state. It then takes the difference in configuration and applies it.
  • Terraform providers are plugins that Terraform uses to interact with cloud providers. The providers translate the Terraform configuration into the respective cloud’s REST API calls so that Terraform can manage its associated infrastructure. For example, if you want Terraform to manage AWS infrastructure, you must use the Terraform AWS provider.

Now let’s see how we can install open source Terraform.

Technical requirements – Infrastructure as Code (IaC) with Terraform

Cloud computing is one of the primary factors of DevOps enablement today. The initial apprehensions about the cloud are a thing of the past. With an army of security and compliance experts manning cloud platforms 24×7, organizations are now trusting the public cloud like never before. Along with cloud computing, another buzzword has taken the industry by storm – Infrastructure as Code (IaC). This chapter will focus on IaC with Terraform, and by the end of this chapter, you will understand the concept and have enough hands-on experience with Terraform to get you started on your journey.

In this chapter, we’re going to cover the following main topics:

  • Introduction to IaC
  • Setting up Terraform and Azure providers
  • Understanding Terraform workflows and creating your first resource using Terraform
  • Terraform modules
  • Terraform state and backends
  • Terraform workspaces
  • Terraform outputs, state, console, and graphs

Technical requirements

For this chapter, you can use any machine to run Terraform. Terraform supports many platforms, including Windows, Linux, and macOS.

You will need an active Azure subscription to follow the exercises. Currently, Azure is offering a free trial for 30 days with $200 worth of free credits; you can sign up at https://azure.microsoft.com/en-in/free.

You will also need to clone the following GitHub repository for some of the exercises: https://github.com/PacktPublishing/Modern-DevOps-Practices-2e

Run the following command to clone the repository into your home directory, and cd into the ch8 directory to access the required resources:

$ git clone https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.git \ modern-devops

$ cd modern-devops/ch8

So, let’s get started!

Spinning up GKE – Containers as a Service (CaaS) and Serverless Computing for Containers

Once you’ve signed up and are on your console, you can open the Google Cloud Shell CLI to run the following commands.

You need to enable the GKE API first using the following command:

$ gcloud services enable container.googleapis.com

To create a two-node autoscaling GKE cluster that scales from 1 node to 5 nodes, run the following command:

$ gcloud container clusters create cluster-1 --num-nodes 2 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 --zone us-central1-a

And that’s it! The cluster is up and running.

You will also need to clone the following GitHub repository for some of the exercises:

https://github.com/PacktPublishing/Modern-DevOps-Practices-2e

Run the following command to clone the repository into your home directory. Then, cd into the ch7 directory to access the required resources:

$ git clone https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.git \ modern-devops

Now that the cluster is up and running, let’s go ahead and install Knative.

Installing Knative

We will install the CRDs that define Knative resources as Kubernetes API resources.

To access the resources for this section, cd into the following directory:

$ cd ~/modern-devops/ch7/knative/

Run the following command to install the CRDs:

$ kubectl apply -f \

https://github.com/knative/serving/releases/download/knative-v1.10.2/serving-crds.yaml

As we can see, Kubernetes has installed some CRDs. Next, we must install the core components of the Knative serving module. Use the following command to do so:

$ kubectl apply -f \

https://github.com/knative/serving/releases/download/knative-v1.10.2/serving-core.yaml

Now that the core serving components have been installed, the next step is installing Istio within the

Kubernetes cluster. To do so, run the following commands:

$ curl -L https://istio.io/downloadIstio | sh -
$ sudo mv istio-*/bin/istioctl /usr/local/bin
$ istioctl install --set profile=demo -y

Now that Istio has been installed, we will wait for the Istio Ingress Gateway component to be assigned an external IP address. Run the following command to check this until you get an external IP in the response:

$ kubectl -n istio-system get service istio-ingressgateway

NAME                   TYPE           EXTERNAL-IP     PORT(S)
istio-ingressgateway   LoadBalancer   35.226.198.46   15021,80,443

As we can see, we’ve been assigned an external IP—35.226.198.46. We will use this IP for the rest of this exercise.

Now, we will install the Knative Istio controller by using the following command:

$ kubectl apply -f \

https://github.com/knative/net-istio/releases/download/knative-v1.10.1/net-istio.yaml

Now that the controller has been installed, we must configure the DNS so that Knative can provide custom endpoints. To do so, we can use the MagicDNS solution known as sslip.io, which you can use for experimentation. The MagicDNS solution resolves any endpoint to the IP address present in the subdomain. For example, 35.226.198.46.sslip.io resolves to 35.226.198.46.
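The mapping is purely mechanical: the hostname carries the IP address, so stripping the sslip.io suffix recovers the address. A quick sketch using the external IP from our exercise:

```shell
# sslip.io resolves <ip>.sslip.io to <ip>, so the IP is
# recoverable from the hostname itself
host="35.226.198.46.sslip.io"
ip="${host%.sslip.io}"   # drop the .sslip.io suffix
echo "$ip"               # prints: 35.226.198.46
```

This is why no DNS records need to be created for experimentation: every possible hostname already "exists".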

Note

Do not use MagicDNS in production. It is an experimental DNS service and should only be used for evaluating Knative.

Run the following command to configure the DNS:

$ kubectl apply -f \
https://github.com/knative/serving/releases/download/knative-v1.10.2/serving-default-domain.yaml

As you can see, it deploys a batch job that configures sslip.io as the default domain for Knative services.

Now, let’s install the HorizontalPodAutoscaler (HPA) add-on to automatically autoscale pods on the cluster with traffic. To do so, run the following command:

$ kubectl apply -f \

https://github.com/knative/serving/releases/download/knative-v1.10.2/serving-hpa.yaml

That completes our Knative installation.

Now, we need to install and configure the kn command-line utility. Use the following commands to do so:

$ sudo curl -Lo /usr/local/bin/kn \

https://github.com/knative/client/releases/download/knative-v1.10.0/kn-linux-amd64

$ sudo chmod +x /usr/local/bin/kn

In the next section, we’ll deploy our first application on Knative.

Knative architecture – Containers as a Service (CaaS) and Serverless Computing for Containers

The Knative project combines elements of existing CNCF projects such as Kubernetes, Istio, Prometheus, and Grafana and eventing engines such as Kafka and Google Pub/Sub. Knative runs as a Kubernetes operator using Kubernetes Custom Resource Definitions (CRDs), which help operators administer Knative using the kubectl command line. Knative provides its own API for developers, which the kn command-line utility can use. Users are provided access through Istio, which, with its traffic management features, is a crucial component of Knative. The following diagram describes this graphically:

Figure 7.2 – Knative architecture

Knative consists of two main modules—serving and eventing. While the serving module helps us maintain stateless applications using HTTP/S endpoints, the eventing module integrates with eventing engines such as Kafka and Google Pub/Sub. As we’ve discussed mostly HTTP/S traffic, we will scope our discussion to Knative serving for this book.

Knative maintains serving pods, which help route traffic within workload pods and act as proxies using the Istio Ingress Gateway component. It provides a virtual endpoint for your service and listens on it. When it discovers a hit on the endpoint, it creates the required Kubernetes components to serve that traffic. Therefore, Knative can scale from zero workload pods, as it spins up a pod only when traffic arrives for it. The following diagram shows how:

Figure 7.3 – Knative serving architecture

Knative endpoints are made up of three basic parts—<app-name>, <namespace>, and <custom-domain>. While name and namespace are similar to Kubernetes Services, custom-domain is defined by us. It can be a legitimate domain for your organization or a MagicDNS solution, such as sslip.io, which we will use in our hands-on exercises. If you are using your organization domain, you must create your DNS configuration to resolve the domain to the Istio Ingress Gateway IP addresses.
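Putting the three parts together, and using an illustrative app name and namespace with the sslip.io domain from our exercise, an endpoint can be assembled like so:

```shell
app_name="hello"                          # illustrative service name
namespace="default"                       # illustrative namespace
custom_domain="35.226.198.46.sslip.io"    # the MagicDNS domain from this exercise

# Knative endpoints follow <app-name>.<namespace>.<custom-domain>
endpoint="${app_name}.${namespace}.${custom_domain}"
echo "$endpoint"   # prints: hello.default.35.226.198.46.sslip.io
```

With sslip.io, that hostname resolves straight back to the Istio Ingress Gateway IP, so no extra DNS setup is needed.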

Now, let’s go ahead and install Knative.

For the exercises, we will use GKE. Since GKE is a highly robust Kubernetes cluster, it is a great choice for integrating with Knative. As mentioned previously, Google Cloud provides a free trial of $300 for 90 days. You can sign up at https://cloud.google.com/free if you’ve not done so already.

Open source CaaS with Knative – Containers as a Service (CaaS) and Serverless Computing for Containers

As we’ve seen, several vendor-specific CaaS services are available on the market. Still, the problem with most of them is that they are tied up to a single cloud provider. Our container deployment specification then becomes vendor-specific and results in vendor lock-in. As modern DevOps engineers, we must ensure that the proposed solution best fits the architecture’s needs, and avoiding vendor lock-in is one of the most important requirements.

However, Kubernetes in itself is not serverless. You must have infrastructure defined, and long-running services should have at least a single instance running at a particular time. This makes managing microservices applications a pain and resource-intensive.

But wait! We said that microservices help optimize infrastructure consumption. Yes, that’s correct, but they do so within the container space. Imagine that you have a shared cluster of VMs where parts of the application scale with traffic, and each part of the application has its peaks and troughs. This simple multi-tenancy already saves a lot of infrastructure.

However, it also means that you must have at least one instance of each microservice running every time—even if there is zero traffic! Well, that’s not the best utilization we have. How about creating instances when you get the first hit and not having any when you don’t have traffic? This would save a lot of resources, especially when things are silent. You can have hundreds of microservices making up the application that would not have any instances during an idle period. If you combine it with a managed service that runs Kubernetes and then autoscale your VM instances with traffic, you can have minimal instances during the silent period.

There have been attempts within the open source and cloud-native space to develop an open source, vendor-agnostic, serverless framework for containers. We have Knative for this, which the Cloud Native Computing Foundation (CNCF) has adopted.

Tip

The Cloud Run service uses Knative behind the scenes. So, if you use Google Cloud, you can use Cloud Run to use a fully managed serverless offering.

To understand how Knative works, let’s look at the Knative architecture.

Scheduling EC2 tasks on ECS – Containers as a Service (CaaS) and Serverless Computing for Containers

Let’s use ecs-cli to apply the configuration and schedule our task using the following command:

$ ecs-cli compose up --create-log-groups --cluster cluster-1 --launch-type EC2

Now that the task has been scheduled and the container is running, let’s list all the tasks to get the container’s details and find out where it is running. To do so, run the following command:

$ ecs-cli ps --cluster cluster-1
Name                    State    Ports                TaskDefinition
cluster-1/fee1cf28/web  RUNNING  34.237.218.7:80->80  EC2:1

As we can see, the web container is running on cluster-1 on 34.237.218.7:80. Now, use the following command to curl this endpoint to see what we get:

$ curl 34.237.218.7:80

<html>

<head>

<title>Welcome to nginx!</title>

</html>

Here, we get the default nginx home page! We’ve successfully scheduled a container on ECS using the EC2 launch type. You might want to duplicate this task to handle more traffic. This is known as horizontal scaling. We’ll see how in the next section.

Scaling tasks

We can easily scale tasks using ecs-cli. Use the following command to scale the tasks to 2:

$ ecs-cli compose scale 2 --cluster cluster-1 --launch-type EC2

Now, use the following command to check whether two containers are running on the cluster:

$ ecs-cli ps --cluster cluster-1
Name                    State    Ports                 TaskDefinition
cluster-1/b43bdec7/web  RUNNING  54.90.208.183:80->80  EC2:1
cluster-1/fee1cf28/web  RUNNING  34.237.218.7:80->80   EC2:1

As we can see, two containers are running on the cluster. Now, let’s query CloudWatch to get the logs of the containers.

Querying container logs from CloudWatch

To query logs from CloudWatch, we must list the log streams using the following command:

$ aws logs describe-log-streams --log-group-name /aws/webserver \
  --log-stream-name-prefix ecs | grep logStreamName
"logStreamName": "ecs/web/b43bdec7",
"logStreamName": "ecs/web/fee1cf28",

As we can see, there are two log streams for this – one for each task. logStreamName follows the convention <log_stream_prefix>/<task_name>/<task_id>. So, to get the logs for ecs/web/b43bdec7, run the following command:

$ aws logs get-log-events --log-group-name /aws/webserver \
  --log-stream-name ecs/web/b43bdec7

Here, you will see a stream of logs in JSON format in the response. Now, let’s look at how we can stop running tasks.
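As an aside, the log stream naming convention can be taken apart with plain shell parameter expansion, which is handy when scripting against many tasks. Using the stream name from our output:

```shell
stream="ecs/web/b43bdec7"

prefix=${stream%%/*}      # log stream prefix: ecs
task_id=${stream##*/}     # task ID: b43bdec7
rest=${stream#*/}
name=${rest%%/*}          # task (container) name: web

echo "prefix=$prefix name=$name task=$task_id"
```

Looping such a split over the describe-log-streams output would let you fetch logs for every task in one go.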

Stopping tasks

ecs-cli uses the friendly docker-compose syntax for everything. Use the following command to stop the tasks in the cluster:

$ ecs-cli compose down --cluster cluster-1

Let’s list the containers to see whether the tasks have stopped by using the following command:

$ ecs-cli ps --cluster cluster-1
INFO[0001] Stopping container... container=cluster-1/b43bdec7/web
INFO[0001] Stopping container... container=cluster-1/fee1cf28/web
INFO[0008] Stopped container... container=cluster-1/b43bdec7/web desiredStatus=STOPPED lastStatus=STOPPED taskDefinition="EC2:1"
INFO[0008] Stopped container... container=cluster-1/fee1cf28/web desiredStatus=STOPPED lastStatus=STOPPED taskDefinition="EC2:1"

As we can see, both containers have stopped.

Running tasks on EC2 is not a serverless way of doing things. You still have to provision and manage the EC2 instances, and although ECS manages workloads on the cluster, you still have to pay for the amount of resources you’ve provisioned in the form of EC2 instances. AWS offers Fargate as a serverless solution where you pay per resource consumption. Let’s look at how we can create the same task as a Fargate task.