Serverless and AWS ECS Fargate
AWS Fargate is a serverless, pay-as-you-go compute engine for Amazon Elastic Container Service (ECS) that runs Docker containers without requiring you to manage servers or clusters. With Fargate, you containerize your application and specify the OS, CPU and memory, networking, and IAM policies needed for launch, and AWS provisions the underlying infrastructure. AWS Fargate can also be used with Elastic Kubernetes Service (EKS) in a similar manner.
Although server provisioning is handled by a third party, understanding the health and performance of containers within your serverless environment becomes even more vital for identifying the root causes of system interruptions. Serverless still requires observability. Elastic Observability can provide observability not only for AWS ECS with Fargate, as we will discuss in this blog, but also for a number of other AWS services (EC2, RDS, ELB, etc.). See our previous blog on managing an EC2-based application with Elastic Observability.
Gaining full visibility with Elastic Observability
Elastic Observability is built on the three pillars of full system visibility: logs, metrics, and traces. Logs record the events that have taken place in the system. Metrics track signals of system health, such as response time, CPU usage, memory usage, and latency. Traces follow the execution of individual requests through the system, showing where performance suffers.
These pillars by themselves offer some insight, but combining them allows you to see the full scope of your system and how it handles increases in load or traffic over time. Connecting Elastic Observability to your serverless environment will help you resolve outages more quickly and perform root cause analysis to prevent future problems.
In this article, we’ll guide you through how to install the Elastic Agent with the AWS Fargate integration as a sidecar container to send host metrics and logs to Elastic Observability.
Prerequisites:
- AWS account with AWS CLI configured
- GitHub account
- Elastic Cloud account
- An app running on a container in AWS
This tutorial is divided into two parts:
- Set up the Fleet server to be used by the sidecar container in AWS.
- Create the sidecar container in AWS Fargate to send data back to Elastic Observability.
Part I: Set up the Fleet server
First, let’s log in to Elastic Cloud.
You can either create a new deployment or use an existing one.
From the Home page, use the side panel to navigate to Management > Fleet > Agent policies.
Click Create agent policy. Here we'll create a policy to attach to the Elastic Agent.
Give the policy a name (we'll use AWS Fargate) and save your changes.
Click Create agent policy. You should see the AWS Fargate agent policy in the list of policies.
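If you prefer to script this step, the same policy can be created through Kibana's Fleet API. Here's a minimal sketch, where KIBANA_URL and API_KEY are placeholders for your deployment's Kibana endpoint and an API key with Fleet privileges:

```bash
# Create the "AWS Fargate" agent policy via Kibana's Fleet API.
# KIBANA_URL and API_KEY are placeholders for your own deployment.
curl -X POST "${KIBANA_URL}/api/fleet/agent_policies" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -d '{"name": "AWS Fargate", "namespace": "default"}'
```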
Now that we have an agent policy, let's add the integrations that will collect logs and metrics from the host. Click on AWS Fargate -> Add integration.
We'll add two integrations to the policy: AWS, to collect overall AWS metrics, and AWS Fargate, to collect Fargate-specific metrics. You can find each one by typing its name in the search bar.
Once you click on the integration, it will take you to its landing page, where you can add it to the policy.
For the AWS integration, the only collection settings we will enable are Collect billing metrics, Collect logs from CloudWatch, Collect metrics from CloudWatch, Collect ECS metrics, and Collect Usage metrics. Everything else can be left disabled.
Another thing to keep in mind when using this integration is the set of permissions required to collect data from AWS. These are listed on the AWS integration page under AWS permissions. Take note of them, as we will use them to create an IAM policy.
Next, we will add the AWS Fargate integration, which doesn’t require further configuration settings.
Now that we have created the agent policy and attached the proper integrations, let’s create the agent that will implement the policy. Navigate back to the main Fleet page and click Add agent.
Since we'll be connecting to AWS Fargate through ECS, set the host type accordingly. All of the other default values can stay the same.
Lastly, let's create the enrollment token and attach the agent policy. The token is what allows the Elastic Agent running in AWS ECS Fargate to enroll with Fleet and send data to Elastic.
Once created, you should see the token name, secret, and associated agent policy listed.
We’ll be using our Fleet credentials in the next step to send data to Elastic from AWS Fargate.
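If you want to sanity-check these credentials before wiring them into ECS, you can run the same Elastic Agent image locally with the Fleet enrollment variables (the token and URL values are placeholders for the ones generated above):

```bash
# Enroll a local Elastic Agent container in Fleet using the
# enrollment token and Fleet Server URL from the previous step.
docker run --rm \
  -e FLEET_ENROLL=yes \
  -e FLEET_URL="<fleet-server-url>" \
  -e FLEET_ENROLLMENT_TOKEN="<enrollment-token>" \
  docker.elastic.co/beats/elastic-agent:8.16.1
```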
Part II: Send data to Elastic Observability
It’s time to create our ECS Cluster, Service, and task definition in order to start running the container.
Log in to your AWS account and navigate to ECS.
We’ll start by creating the cluster.
Give the cluster a name. For subnets, select only the first two, us-east-1a and us-east-1b.
For the sake of the demo, we’ll keep the rest of the options set to default. Click Create.
We should see the cluster we created listed below.
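For reference, the same cluster can be created with a single AWS CLI call (the cluster name here is a hypothetical placeholder):

```bash
# Create an ECS cluster; Fargate tasks can run on it using the
# FARGATE launch type without any further capacity setup.
aws ecs create-cluster --cluster-name fargate-demo-cluster
```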
Now that we've created our cluster to host our container, we want to create a task definition that will be used to set up our container. But before we do this, we need to create a task role with an associated policy. This task role will allow the Elastic Agent to collect metrics from AWS.
Navigate to IAM in AWS.
Go to Policies -> Create policy.
Now we will reference the AWS permissions from the Fleet AWS integration page and use them to configure the policy. In addition to these permissions, we will also add the GetAuthorizationToken action for ECR.
You can configure each one using the visual editor.
Or, use the JSON option. Don't forget to replace <account_id> with your own. (Note that ecr:GetAuthorizationToken doesn't support resource-level permissions, so its Resource is set to *.)
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ChangeMessageVisibility",
        "sqs:ReceiveMessage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:UploadLayerPart",
        "ecr:PutImage",
        "sts:AssumeRole",
        "rds:ListTagsForResource",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "rds:DescribeDBInstances",
        "logs:FilterLogEvents",
        "ecr:InitiateLayerUpload",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": [
        "arn:aws:iam::<account_id>:role/*",
        "arn:aws:logs:*:<account_id>:log-group:*",
        "arn:aws:sqs:*:<account_id>:*",
        "arn:aws:ecr:*:<account_id>:repository/*",
        "arn:aws:rds:*:<account_id>:target-group:*",
        "arn:aws:rds:*:<account_id>:subgrp:*",
        "arn:aws:rds:*:<account_id>:pg:*",
        "arn:aws:rds:*:<account_id>:ri:*",
        "arn:aws:rds:*:<account_id>:cluster-snapshot:*",
        "arn:aws:rds:*:<account_id>:cev:*/*/*",
        "arn:aws:rds:*:<account_id>:og:*",
        "arn:aws:rds:*:<account_id>:db:*",
        "arn:aws:rds:*:<account_id>:es:*",
        "arn:aws:rds:*:<account_id>:db-proxy-endpoint:*",
        "arn:aws:rds:*:<account_id>:secgrp:*",
        "arn:aws:rds:*:<account_id>:cluster:*",
        "arn:aws:rds:*:<account_id>:cluster-pg:*",
        "arn:aws:rds:*:<account_id>:cluster-endpoint:*",
        "arn:aws:rds:*:<account_id>:db-proxy:*",
        "arn:aws:rds:*:<account_id>:snapshot:*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "sqs:ListQueues",
        "organizations:ListAccounts",
        "ec2:DescribeInstances",
        "tag:GetResources",
        "cloudwatch:GetMetricData",
        "ec2:DescribeRegions",
        "iam:ListAccountAliases",
        "sns:ListTopics",
        "sts:GetCallerIdentity",
        "cloudwatch:ListMetrics"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}
```
Review your changes.
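If you save the JSON above to a file, the policy can also be created from the CLI (the file and policy names below are hypothetical):

```bash
# Create the IAM policy from the JSON document above.
aws iam create-policy \
  --policy-name elastic-agent-fargate-policy \
  --policy-document file://elastic-agent-policy.json
```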
Now let’s attach this policy to a role. Navigate to IAM -> Roles. Click Create role.
Select AWS service as the Trusted entity type and choose Elastic Container Service Task as the use case, so that ECS tasks can assume the role. Click Next.
Under permissions policies, select the policy we just created, as well as CloudWatchLogsFullAccess and AmazonEC2ContainerRegistryFullAccess. Click Next.
Give the task role a name and description.
Click Create role.
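The console steps above map to the following CLI sketch. ECS task roles are assumed by the ecs-tasks.amazonaws.com service principal, which is what the trust policy names (the role and policy names are hypothetical and match the earlier sketch):

```bash
# Trust policy: allow ECS tasks to assume this role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the task role and attach the custom and managed policies.
aws iam create-role \
  --role-name elastic-agent-task-role \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
  --role-name elastic-agent-task-role \
  --policy-arn "arn:aws:iam::<account_id>:policy/elastic-agent-fargate-policy"
aws iam attach-role-policy \
  --role-name elastic-agent-task-role \
  --policy-arn "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
aws iam attach-role-policy \
  --role-name elastic-agent-task-role \
  --policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess"
```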
Now it’s time to create the task definition. Navigate to ECS -> Task definitions. Click Create new task definition.
Let’s give this task definition a name.
After giving the task definition a name, you'll add the Fleet credentials to the container section; you can obtain them from the Enrollment tokens tab in Fleet in Elastic Cloud. This lets us run the Elastic Agent on the ECS task as a sidecar container and send data to Elastic using the Fleet credentials.
- Container name: elastic-agent-container
- Image: docker.elastic.co/beats/elastic-agent:8.16.1
Now let's add the environment variables:
- FLEET_ENROLL: yes
- FLEET_ENROLLMENT_TOKEN: <enrollment-token>
- FLEET_URL: <fleet-server-url>
For the sake of the demo, leave Environment, Monitoring, Storage, and Tags at their default values. Now we need to create a second container to run the image for the Go app stored in ECR. Click Add more containers.
For Environment, we will reserve 1 vCPU and 3 GB of memory. Under Task role, search for the role we created with the IAM policy attached.
Review the changes, then click Create.
You should see your new task definition included in the list.
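For reference, registering an equivalent task definition from the CLI looks roughly like this. The app image URI, account ID, and names are placeholders consistent with the earlier sketches, and the executionRoleArn assumes the conventional ecsTaskExecutionRole that lets Fargate pull images and write logs:

```bash
# Register a Fargate task definition with the Elastic Agent sidecar
# and the application container. Replace all placeholders.
cat > taskdef.json <<'EOF'
{
  "family": "elastic-agent-fargate",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "3072",
  "taskRoleArn": "arn:aws:iam::<account_id>:role/elastic-agent-task-role",
  "executionRoleArn": "arn:aws:iam::<account_id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "elastic-agent-container",
      "image": "docker.elastic.co/beats/elastic-agent:8.16.1",
      "essential": true,
      "environment": [
        { "name": "FLEET_ENROLL", "value": "yes" },
        { "name": "FLEET_ENROLLMENT_TOKEN", "value": "<enrollment-token>" },
        { "name": "FLEET_URL", "value": "<fleet-server-url>" }
      ]
    },
    {
      "name": "app-container",
      "image": "<account_id>.dkr.ecr.us-east-1.amazonaws.com/<app-repo>:latest",
      "essential": true
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json
```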
The final step is to create the service that will run our tasks and connect back to the Fleet Server.
Navigate to the cluster you created and click Create under the Services tab.
Let’s get our service environment configured.
Set up the deployment configuration. Here you should provide the name of the task definition you created in the previous step. Also, provide the service with a unique name. Set the number of desired tasks to 2 instead of 1.
Click Create. Now your service is running two tasks in your cluster using the task definition you provided.
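The equivalent CLI call looks like this (the subnet, security group, and name values are placeholders):

```bash
# Create a service running two copies of the task on Fargate.
aws ecs create-service \
  --cluster fargate-demo-cluster \
  --service-name elastic-agent-service \
  --task-definition elastic-agent-fargate \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-<id1>,subnet-<id2>],securityGroups=[sg-<id>],assignPublicIp=ENABLED}"
```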
To recap, we set up a Fleet Server in Elastic Cloud to receive AWS Fargate data. We then created our AWS Fargate cluster and a task definition with the Fleet credentials embedded in the container. Lastly, we created the service that sends data about our host to Elastic.
Now let’s verify our Elastic Agent is healthy and properly receiving data from AWS Fargate.
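On the AWS side, you can confirm that the tasks reached a running state with a quick query (names match the earlier sketches):

```bash
# Compare desired vs. running task counts for the service.
aws ecs describe-services \
  --cluster fargate-demo-cluster \
  --services elastic-agent-service \
  --query "services[0].{desired:desiredCount,running:runningCount}"
```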
We can also view a better breakdown of our agent on the Observability Overview page.
If we drill down to Hosts and click on a host name, we should be able to see more granular data. For instance, we can see the CPU usage of the Elastic Agent itself that is deployed in our AWS Fargate environment.
Lastly, we can view the AWS Fargate dashboard generated using the data collected by our Elastic Agent. This is an out-of-the-box dashboard that can also be customized based on the data you would like to visualize.
As you can see in the dashboard, we're able to filter based on running tasks, as well as see a list of containers running in our environment. Another useful view is the CPU usage per cluster, shown under CPU Utilization per Cluster.
The dashboard can pull data from different sources and in this case shows data for both AWS Fargate and the greater ECS cluster. The two containers at the bottom display the CPU and memory usage directly from ECS.
Conclusion
In this article, we showed how to send data from AWS Fargate to Elastic Observability using the Elastic Agent and Fleet. Serverless architectures are quickly becoming an industry standard for offloading server management to third parties. However, this does not relieve operations engineers of the responsibility for the data these environments generate. Elastic Observability provides a way not only to ingest data from serverless architectures, but also to establish a roadmap for addressing future problems.
Start your own 7-day free trial by signing up via AWS Marketplace and quickly spin up a deployment in minutes on any of the Elastic Cloud regions on AWS around the world. Your AWS Marketplace purchase of Elastic will be included in your monthly consolidated billing statement and will draw against your committed spend with AWS.
More resources on serverless and observability and AWS:
- Analyze your AWS application’s service metrics on Elastic Observability (EC2, ELB, RDS, and NAT)
- Get visibility into AWS Lambda serverless functions with Elastic Observability
- Trace-based testing with Elastic APM and Tracetest
- Sending AWS logs into Elastic via AWS Firehose
The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.