ECS is a good product. Sadly, it seems to have been authored by the same UX designer behind every other AWS product, so plenty of people can't even manage to start a simple hello-world container.
Some months ago @fusillicode wrote a two-part tutorial on how to dockerize a WordPress app and deploy it on ECS (you can find them here: part 1 and part 2). Of course, since we're talking about Docker, the specific technology you're deploying doesn't matter much.
What’s missing in those posts is how to do a painless deploy.
Normally you would:
- push the new image to your Docker repo
- open your task definition
- update it without changing anything, which creates a new revision of it
- open the app's service in the ECS cluster
- update the service to point at the new task definition revision
And ECS would take care of launching a new EC2 instance, starting a container with the updated image, and stopping the old container.
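As an aside, the same rollover can be triggered in one step from the AWS CLI. This is a sketch, not part of the original workflow: the cluster and service names are hypothetical, and it assumes an awscli recent enough to support the --force-new-deployment flag.

```shell
# Hypothetical names; adjust to your own cluster and service.
CLUSTER=my-cluster
SERVICE=my-service

# Re-deploys the service's current task definition, pulling the image
# again (useful when the task definition points at a :latest tag).
CMD="aws ecs update-service --cluster $CLUSTER --service $SERVICE --force-new-deployment"
echo "$CMD"  # echoed here; run the command itself against your own cluster
```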
So, this is great news: ECS offers zero-downtime deployments out of the box.
Nonetheless, as the checklist above shows, a deploy can be tedious and error-prone. A good way to avoid all of this is a continuous delivery pipeline.
For this post I used a Rails app tested on Travis CI. What I want is this: when I push something to the master branch and Travis is green, a deploy to production happens automatically.
I already have a running ECS Cluster, with a running service, and Travis is already running.
Continuous docker building & pushing
The first thing I need is to have Travis build and push Docker images when the specs are green. Add the following to your .travis.yml:

```yaml
sudo: required

services:
  - docker

after_success:
  - bin/docker_push.sh
```
We’re telling Travis that:
- we need a privileged container
- we need access to the docker executable
- it has to call the script bin/docker_push.sh if tests are green
The content of docker_push.sh is the following:

```bash
#!/bin/bash
# Push only if it's not a pull request
if [ -z "$TRAVIS_PULL_REQUEST" ] || [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
  # Push only if we're testing the master branch
  if [ "$TRAVIS_BRANCH" == "master" ]; then
    # This is needed to log in on AWS and push the image to ECR.
    # Change it according to your docker repo
    pip install --user awscli
    export PATH=$PATH:$HOME/.local/bin
    eval $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)

    # Build and push
    docker build -t $IMAGE_NAME .
    echo "Pushing $IMAGE_NAME:latest"
    docker tag $IMAGE_NAME:latest "$REMOTE_IMAGE_URL:latest"
    docker push "$REMOTE_IMAGE_URL:latest"
    echo "Pushed $IMAGE_NAME:latest"
  else
    echo "Skipping deploy because branch is not 'master'"
  fi
else
  echo "Skipping deploy because it's a pull request"
fi
```
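The two nested if's at the top are just a guard: act only on non-PR builds of the master branch. As a minimal standalone sketch (the function name is mine, not part of the script; the arguments mirror Travis' TRAVIS_PULL_REQUEST and TRAVIS_BRANCH variables):

```shell
#!/bin/bash
# should_deploy: succeeds only for non-PR builds of master.
should_deploy() {
  local pr="$1" branch="$2"
  { [ -z "$pr" ] || [ "$pr" = "false" ]; } && [ "$branch" = "master" ]
}

should_deploy "false" "master"  && echo "push image"    # guard passes
should_deploy "1234"  "master"  || echo "skip (PR)"     # guard trips on PRs
should_deploy "false" "develop" || echo "skip (branch)" # guard trips on branches
```

On Travis, TRAVIS_PULL_REQUEST is the string "false" on branch builds and the PR number on pull request builds, which is why the script checks both empty and "false".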
In order for this script to work as expected, you also need to add the following ENV variables on Travis:
- AWS_DEFAULT_REGION: default region for your AWS account (e.g. eu-west-1)
- AWS_ACCESS_KEY_ID: your AWS access key
- AWS_SECRET_ACCESS_KEY: your AWS secret access key
- IMAGE_NAME: Docker image name for your app (e.g. foo)
- REMOTE_IMAGE_URL: Docker repo image url (e.g. your.repo.url/foo)
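For ECR specifically, REMOTE_IMAGE_URL usually follows a fixed pattern. A sketch with made-up values, assuming ECR's standard registry naming:

```shell
# All values below are hypothetical examples.
AWS_ACCOUNT_ID=123456789012
AWS_DEFAULT_REGION=eu-west-1
IMAGE_NAME=foo

# Standard ECR registry naming: <account>.dkr.ecr.<region>.amazonaws.com/<repo>
REMOTE_IMAGE_URL="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_NAME"
echo "$REMOTE_IMAGE_URL:latest"
# → 123456789012.dkr.ecr.eu-west-1.amazonaws.com/foo:latest
```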
Push to your master branch, and if everything is OK you will see Travis pushing your image.
In order to trigger the deploy on ECS we’ll use the ecs-deploy script.
I downloaded it into my bin folder. In addition, I added a new script, bin/ecs_deploy.sh:

```bash
#!/bin/bash
# Deploy only if it's not a pull request
if [ -z "$TRAVIS_PULL_REQUEST" ] || [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
  # Deploy only if we're testing the master branch
  if [ "$TRAVIS_BRANCH" == "master" ]; then
    echo "Deploying $TRAVIS_BRANCH on $TASK_DEFINITION"
    # The tag must match the one pushed by docker_push.sh (:latest)
    ./bin/ecs-deploy -c $TASK_DEFINITION -n $SERVICE -i "$REMOTE_IMAGE_URL:latest"
  else
    echo "Skipping deploy because it's not an allowed branch"
  fi
else
  echo "Skipping deploy because it's a PR"
fi
```
We need two more ENV variables on Travis:
- TASK_DEFINITION: the name of your ECS task definition
- SERVICE: the name of the service in your ECS cluster
Finally, update the after_success block of your .travis.yml to look like the following:

```yaml
after_success:
  - bin/docker_push.sh
  - bin/ecs_deploy.sh
```
Push it, and voilà! Deploy in progress.
Helpful post, thanks!
There is a small but significant typo in docker_push.sh on line 6: need a space before ].
Thank you. Just fixed it.
Also, AWS_ACCESS_KEY should be AWS_ACCESS_KEY_ID.
Firstly – thank you for the GREAT guide.
docker push was returning a no basic auth credentials error until I added the --no-include-email flag. Maybe it's due to a newer version of the awscli?
The push instructions in the AWS web console also include this flag.
Thanks again – this post saved me a ton of time.
Yes, it's related to docker login. With version 17.06.0, Docker stopped using the email in the login, so you have to tell awscli not to include it. Thank you for reporting it, I've just updated the post.