Have You Considered AWS Fargate? A practical use case scenario

Fargate is an AWS service, part of the container services offered on the platform. It stands out as a simple way to deploy containers wherever you want in your AWS cloud, with all the embedded AWS security, such as IAM roles. It avoids the hassle of keeping EC2 instances around merely for this purpose, or of managing the Docker service and its dangling images. Simple, right?

Why would you use this? You may want a way to execute code in the cloud for maintenance purposes, such as running scripts against databases and infrastructure. However, those tasks are sometimes time-consuming. A simple task like “Create automatic database dump files” could have many dependencies that you would have to tackle somehow, and it might take a long time (especially with large amounts of data).

The challenge is to avoid adding instances or services that will demand attention and, most importantly, will contribute to your monthly bill. If you are thinking about adding EC2 instances to your solution, you also have to consider startup times and operational efficiency; dedicating them to not-so-critical maintenance tasks wouldn't be a good idea.

The same rationale applies to adding an ECS cluster backed by EC2, or an EKS cluster: both would prove time-consuming down the road in terms of maintenance and monitoring.

AWS Lambda could be a good option to consider. However, Lambda has an execution time limit of 15 minutes, so long-running flows could turn into an investment in task orchestration: anything beyond that limit requires extra work just to split up the process and orchestrate the pieces so they execute in the right order, with the correct parameters.

Hence, containerizing your scripts and running them with Fargate is not such a bad idea. Fargate wouldn’t require administration time in the future. Moreover, there wouldn’t be EC2 instances running unnecessarily, awaiting security patches and monitoring, wasting resources. The Fargate solution is quite simple: a container with the desired scripts, a task in ECS with the correct parameters; hit run and go.

 

The Fargate Approach

Base Architecture And Security

Fargate has proven to be a quick way to deploy your containers in a serverless manner. It not only removes the overhead of maintaining infrastructure for the tasks executed by your container services, but also delegates security management to AWS and its toolset for securing infrastructure and services.

Fargate can operate anywhere in your cloud, as long as it has internet access (it usually needs a public IP to pull images from the ECR service). Since these scripts don’t need any exposed ports, as a running service would, a security group with no inbound ports enabled is ideal.

In case the task needs to connect to a service running in your cloud, such as a MySQL database or an API, make sure to set up the correct VPC peering and add the corresponding routes to your route tables if you are using a multi-VPC setup.
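If you manage that wiring with the AWS CLI, a minimal sketch could look like the following; the VPC, peering connection, and route table IDs, as well as the CIDR block, are hypothetical placeholders:

------------------------------------------------------------vpc-peering.sh
# Request and accept a peering connection between the two VPCs
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaa1111 --peer-vpc-id vpc-bbbb2222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-12345678

# Route traffic destined to the peer VPC through the peering connection
aws ec2 create-route --route-table-id rtb-aaaa1111 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-12345678
---------------------------------------------------------------------------------------------------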

[Figure: a general diagram of how the stack looks] [1]

Security is a concern everywhere now, and this proves true when working with Fargate. AWS offers secure parameter passing options to avoid exposing sensitive data, such as DB passwords:

  1. Environment variables set when describing your task.
  2. The SSM (AWS Systems Manager) Parameter Store.

If you choose to use the SSM service, your scripts would be in charge of pulling those credentials out of the AWS service, like this:


SECURE_STRING=$(aws ssm get-parameter --name secure-string-variable --with-decryption --output text --query Parameter.Value --region us-east-1)
[2]
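For completeness, the SecureString parameter read above has to exist first; a minimal sketch of creating it, where the value is a placeholder:

aws ssm put-parameter --name secure-string-variable --type SecureString --value 'your-db-password' --region us-east-1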

 

In order for the running task to be able to pull credentials from the SSM service, AWS ECS allows us to select Task Execution Roles and Task Roles:

  • The Task Execution Role lets the AWS-managed container service pull images from your ECR and put logs into CloudWatch, so this one won’t change much.
  • The Task Role is the one that allows your container to access AWS services like S3, RDS, or SSM.
    • For example, if the task needs to put a file in S3, you should go to your Task Role definition and enable the AWS s3:PutObject action for your role (see the sketch after this list).
    • The documentation on how to create Task Roles for the ECS service can be found here
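As a sketch of that last point, the permission could be granted through an inline policy attached to the Task Role; the role name, policy name, and bucket are hypothetical placeholders:

------------------------------------------------------------task-role-policy.sh
# Allow the task to upload objects to a specific bucket
aws iam put-role-policy --role-name toolbox-task-role \
  --policy-name allow-put-object \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }]
  }'
---------------------------------------------------------------------------------------------------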

Your Docker Image

The very first thing to do is to set up a new image repository on AWS ECR service. The process is pretty straightforward and is documented here.
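If you prefer the CLI over the console, the same step looks like this (the repository name is a placeholder):

aws ecr create-repository --repository-name your-repo-name --region us-east-1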

You will also need to create a Fargate cluster here: select “Networking only”, give it a name, and that’s it; your Fargate cluster is ready. You can select the VPC at container launch time.
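A “Networking only” cluster is essentially an empty ECS cluster, so from the CLI this boils down to one command; the cluster name is a placeholder reused in later examples:

aws ecs create-cluster --cluster-name toolbox-cluster --region us-east-1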

Once your repository is ready to receive the Docker image, let’s work on the image itself.

This image consists of three elements: the Dockerfile, the entrypoint, and the scripts representing the task to run in the cloud.

A useful Dockerfile for this case may look like this:



------------------------------------------------------------Dockerfile
FROM alpine:3.8

# bash is required by the entrypoint wrapper
RUN apk add --no-cache bash

# Install Python and pip
RUN apk add --no-cache python && \
    python -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip install --upgrade pip setuptools && \
    rm -r /root/.cache

# AWS CLI for the shell scripts, boto3 for the Python scripts
RUN pip install awscli

RUN pip install boto3

WORKDIR /usr/src/app

# The maintenance scripts and the wrapper that dispatches to them
COPY ./scripts ./scripts

COPY entrypoint.sh ./

ENTRYPOINT ["./entrypoint.sh"]

CMD []
---------------------------------------------------------------------------------------------------
[3]

Then you create an entrypoint: a script executed on container startup, to which you pass arguments. With this entrypoint we are able to select which script to run and which parameters to use for the container execution.


------------------------------------------------------------entrypoint.sh
#!/bin/bash
# $1: interpreter (bash or python), $2: script file name,
# $3: single-quoted parameter list, e.g. 'param1 param2'
# Strip literal single quotes so each parameter is passed individually
parameters=$(echo "$3" | tr "'" " ")
$1 /usr/src/app/scripts/$2 $parameters
echo "Finished execution!"
---------------------------------------------------------------------------------------------------
[4]
  

This simple entrypoint will work as a wrapper, capable of calling any bash or Python script we put in /usr/src/app/scripts/ with the desired parameters. To invoke it, just execute the following commands:


$./entrypoint.sh python test.py 'param1 param2'
$./entrypoint.sh bash test.sh 'param1 param2'
[5]
 

Finally, you create the scripts that describe the task to be executed.

Let’s assume we have the scripts “test.sh” and “test.py” copied into the container (as described in the Dockerfile definition above). In a real scenario, these scripts would be in charge of executing a more complex task, such as creating a cluster. These script files must be executable, as shown below.
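One way to do that before building the image; the file names match the examples in this post:

chmod +x entrypoint.sh scripts/test.sh scripts/test.py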



------------------------------------------------------scripts/test.sh
#!/bin/bash
 
echo Hello World From a Container
 
echo $# parameters passed
--------------------------------------------------------------------------------------------
[6]
 

------------------------------------------------------scripts/test.py
import sys

# Python 2 syntax, matching the "python" package installed in the Dockerfile
print "Hello World From a Container"

print len(sys.argv) - 1, "parameters passed"
--------------------------------------------------------------------------------------------
[7]

The example so far has described the process as if you were running everything from the console. Moving the scenario to the Docker context, first you need to build your image with:

docker build . -t toolbox

Once your image is finished, you can run your container / containerized script like this:



$docker run toolbox [what we would have passed to the entrypoint]
$docker run toolbox bash test.sh 'param1 param2'
$docker run toolbox python test.py 'param1 param2'
[8]
 

 

Once tested, you are ready to put the image on ECR.

Follow the process here to log in to your previously created ECR repo, and you will be able to push the image there. Since a real example would probably use some AWS services (RDS, S3, etc.), you would likely need to configure your access and secret keys in the container in order to execute it locally.
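A minimal sketch of the log-in, tag, and push flow with a recent AWS CLI (older CLI versions use aws ecr get-login instead of get-login-password), reusing the placeholder account ID and repo name from the next section:

------------------------------------------------------------push-to-ecr.sh
# Authenticate Docker against the ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 1111111111111.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the repo URL and push it
docker tag toolbox:latest 1111111111111.dkr.ecr.us-east-1.amazonaws.com/your-repo-name:latest
docker push 1111111111111.dkr.ecr.us-east-1.amazonaws.com/your-repo-name:latest
---------------------------------------------------------------------------------------------------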

Once the image has been pushed, you need to define the task in ECS to execute the desired script in AWS. This can be done here.

Change your region accordingly and build up your task from there:

  • Select “Fargate” as the launch type and add the remaining parameters for your task.
    • Here is where the Task Role that we created above will be used.
  • At the “Add Container” section:
    • Name your container and add the repo URL, which looks like 1111111111111.dkr.ecr.us-east-1.amazonaws.com/your-repo-name:latest
    • You can select the environment variables, health checks, etc., if needed.
    • Add the entrypoint and working directory
    • Set ECS to pass your script + parameters, separated by commas, like this:

bash, test.sh, 'param1 param2'

    • You can also map volumes there if you need to

Keep in mind that Fargate charges based on the CPU and memory you allocate, so choose those wisely. You can also set up hard and soft limits to restrict your task's resource boundaries.

You will see the created task in the tasks dashboard. A task can have multiple revisions, and editing the task bumps the revision number. If you choose to run a task, more execution options will be shown, such as VPC and subnets.

Finally, you can execute your script, and the logs will automatically flow to CloudWatch as well as to the task execution page. You can also set up scheduled executions with Fargate for cron-like maintenance tasks.
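Running the task can also be scripted; a sketch under assumed names, where the cluster, task definition, container name, subnet, and security group IDs are all hypothetical:

------------------------------------------------------------run-task.sh
aws ecs run-task --cluster toolbox-cluster \
  --launch-type FARGATE \
  --task-definition toolbox-task:1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222],assignPublicIp=ENABLED}" \
  --overrides '{"containerOverrides":[{"name":"toolbox","command":["bash","test.sh","param1 param2"]}]}'
---------------------------------------------------------------------------------------------------

Note that assignPublicIp=ENABLED matches the earlier remark about the task needing a public IP to pull the image from ECR.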

Takeaways

The described approach has multiple benefits:

  • You have full control over your scripts, for a local-like experience in the cloud: from parameter passing and execution flow management to binary packages and their dependencies.
  • Hard and soft limits will definitely help keep the monthly bill for these executions under control.
  • No one will have to be in charge of maintaining a Docker service on instances, an EKS cluster, or orchestrated Lambda executions.
  • Once the solution is built up, the developers will only have to focus on creating the script, not on deployments.
  • It’s secure, as it uses AWS embedded security services.

On the downside, we find the AWS logging system. It does not give immediate feedback of the container output, and sometimes the logs even show up out of order. To get proper logging, you will need to go to CloudWatch and look for the logs of the execution. Also, testing your scripts outside the AWS cloud is handled differently in terms of permissions: inside the cloud it uses roles, while outside it uses IAM users.

To wrap everything up: if you find yourself needing to run time-consuming scripts against your secured infrastructure and services, but you don’t want to add and maintain new instances just for that purpose, and Lambda is not the best fit, Fargate could be a good option to consider.
