Elastic IP Address (EIP) and ECS (EC2 Container Service) cluster, a naive solution

Recently I had the opportunity to set up another ECS cluster for a Ruby on Rails application that exposes a few API endpoints and a backend to manage content such as images, videos and so on.

Considering our previous experience, we decided to automate the provisioning of the infrastructure with Ansible, and after a while we ended up with a few playbooks that allowed us to bring up everything we needed: the DB, the instances, the ELB, the task definitions and the services.
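
To give an idea of what such a playbook looks like, here is a minimal, hypothetical fragment that creates an ECS cluster with Ansible’s ecs_cluster module; the cluster name is made up and our real playbooks obviously do a lot more than this.

# Hypothetical excerpt of a provisioning playbook, not our actual one
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    # Create the ECS cluster that will host the Rails application
    - name: Ensure the ECS cluster exists
      ecs_cluster:
        name: rails-api   # made-up cluster name
        state: present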

Everything was working quite well until we were asked to provide a static IP that could be used to access the aforementioned API endpoints.

Continue reading “Elastic IP Address (EIP) and ECS (EC2 Container Service) cluster, a naive solution”

Continuous delivery with Travis and ECS

ECS is a good product. Sadly, it’s authored by the same UX designer who authored all the other AWS products, so a lot of people couldn’t even manage to start a simple hello world container.

Some months ago @fusillicode wrote a two-part tutorial on how to dockerize a WordPress app and deploy it on ECS (you can find it here: part 1 and part 2). Of course, since we’re talking about Docker, the technology you’re deploying isn’t all that important.

What’s missing from those posts is how to do a painless deploy.
Continue reading “Continuous delivery with Travis and ECS”

ECS and KISS dockerization of WordPress (Part 2)

‘Two articles ago’ I wrote about my initial experience with Docker and ECS, the container service built on top of EC2 and offered by Amazon. Here it is if you want to take a look.

Today I want to continue in that direction by describing the configuration of the containers (or rather, the container) I chose to serve the application selected as a guinea pig to try ECS. Just as a reminder, the app was an almost standard WordPress blog with a custom theme and a few plugins.

Continue reading “ECS and KISS dockerization of WordPress (Part 2)”

Bash quirk in Dockerfile

In my last article I wrote about my initial experience with ECS, Docker and WordPress.

While already familiar with the core concepts of Docker, after learning the basics of ECS and creating a working Dockerfile I felt the need to dive deeper into many aspects of the “platform” and the technology under discussion.

Continue reading “Bash quirk in Dockerfile”

ECS and KISS dockerization of WordPress

ECS (EC2 Container Service) is one of the latest Web services released by Amazon, and it is among the cool kids around. Why? Well, it lets you deploy and administer Docker containers while integrating deeply with the other Web services offered by Amazon. To name a few: ELB (Elastic Load Balancer), Launch Configurations and Auto Scaling Groups (ASG).

At the base of ECS reside two fundamental concepts: tasks and services.
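
To make that a bit more concrete: a task is described by a task definition, a JSON blueprint that tells ECS which container images to run and with what resources, while a service keeps a given number of copies of that task running. A minimal, hypothetical task definition could look like this (names and values are placeholders):

{
  "family": "hello-world",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 128,
      "memory": 256,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]
}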

Continue reading “ECS and KISS dockerization of WordPress”

Zero Downtime deployments with Docker on Opsworks

Updated on 2015-02-19: changed the stack JSON to the new interpolation syntax.

When I was younger I always wanted to be a cool kid. Growing up as a computer enthusiast, that meant using the latest window manager and the most cutting-edge versions of all software. Who hasn’t gone through a recompile-the-kernel-several-times-a-day phase in their life?

Luckily I was able to survive that phase (except for a lost 120GB RAID0 setup, but that’s another story), and maturity gave me the usual old-guy-esque fear of change.

That’s the main reason I waited a bit before approaching Docker. All the cool kids were raving about it, and I am a cool kid no more, just a calm, collected, young-ish man.

You might have guessed it from the title of this article, but Docker blew my mind. It solves in a simple way lots of common development and deployment problems.

After learning how to rebuild my application to run from a Docker container (more about that in the coming weeks), my focus moved to deploying Docker. We run our application on the AWS cloud, and our Amazon service of choice for that is Opsworks. Easy to set up, easy to maintain, easy to scale.

Unfortunately Opsworks does not support Docker, and ECS, Amazon’s own container running solution, is not production ready. I needed to migrate my complicated application to Docker, and I wanted to do it now. The only solution was convincing Opsworks to support Docker.

Googling around I found a tutorial and some GitHub repos, but none of them were up to the task for me.

My application needs several containers to work, and I was aiming for something that allowed me to orchestrate the following:

  • 1 load balancer (but it might be ELB!)
  • 1 frontend webserver, like nginx
  • 1 haproxy to juggle between app servers, to allow for zero downtime deployments
  • 2 application containers, running their own app server
  • 1 Redis for caching (but it might be Elasticache!)
  • 1 PostgreSQL as DB (but it might be Amazon RDS)
  • 1 or more cron tasks
  • 1 or more sidekiq instances for async processing

In addition to that, I wanted everything to scale horizontally based on load. It seemed a daunting task at first, but as I started writing my Chef recipes everything began to make sense. My inspiration was fig. I like the way fig lets you specify relations between containers, and I wanted to do something like that on Opsworks.
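
For reference, this is the kind of fig.yml I had in mind, where relations between containers are declared through links and volumes_from (the images and names below are just placeholders, not the actual application):

web:
  image: quay.io/example/awesome-app   # placeholder image
  command: bundle exec unicorn -c config/unicorn.rb
  links:
    - redis
  volumes_from:
    - static
redis:
  image: redis
static:
  image: busybox
  volumes:
    - /var/static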

The result of my work can be found here. At the time of this writing the README.md file is still blank, but I promise some documentation should appear there sooner or later… for now use this article as a reference 🙂

The first thing we’ll do to deploy our dockerized app is log in to AWS, access Opsworks and click on Create stack. Fill in the form like this (a rough CLI equivalent follows the list):

  • Name: whatever you like
  • Region: whatever you prefer
  • Default root device type: EBS backed
  • Default SSH key: choose your key if you want to access your instances
  • Advanced and all remaining fields: leave the defaults, or set them to your liking if you know what you’re doing
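
If you prefer the command line, roughly the same stack can be created with the AWS CLI; take this as a sketch rather than a recipe, since the ARNs and names below are placeholders and you may need to adapt the flags to your setup:

aws opsworks create-stack \
  --name "docker-stack" \
  --stack-region us-east-1 \
  --default-root-device-type ebs \
  --default-ssh-key-name my-key \
  --service-role-arn arn:aws:iam::123456789012:role/aws-opsworks-service-role \
  --default-instance-profile-arn arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role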

Once you have your stack you have to add a layer. Click on add a layer and fill in the form:

  • Layer type: Custom
  • Name: Docker
  • Short name: docker

After you create the layer, go edit its configuration and click the EBS volumes tab. We’ll need to add a 120GB volume for each instance we add to the stack. Why, you ask? Unfortunately, on Amazon Linux/EC2 Docker uses devicemapper to manage your containers, and devicemapper creates a data file that with normal use will grow to up to 100GB. The extra 20GB are used for your images. You can go with less than that, or even with no EBS volume at all, but know that sooner or later you’ll hit that limit (you can check this yourself, see the snippet after the list below).

  • Mount point: /var/lib/docker
  • Size total: 120GB
  • Volume type: General Purpose (SSD)
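
To see where that space actually goes, you can SSH into one of the instances and ask Docker directly; something along these lines should report devicemapper as the storage driver and show its loopback data files under the mount point we just configured:

# Show the storage driver the daemon is using (devicemapper on Amazon Linux)
docker info | grep -i 'storage driver'

# The loopback data/metadata files that grow over time live under the EBS mount
sudo ls -lh /var/lib/docker/devicemapper/devicemapper/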

After that let’s edit our layer to add our custom recipes:

  • Setup
    • docker::install, docker::registries, logrotate::default, docker::logrotate
  • Deploy
    • docker::data_volumes, docker::deploy

What do our custom recipes do?

  • docker::install is easy: it just installs Docker on our Opsworks instances (a rough sketch of the idea follows this list)
  • docker::registries is used to log in to private Docker registries. It should work with several types of registries, but I have personally tested it only with quay.io
  • logrotate::default and docker::logrotate manage the setup of logrotate to avoid ever-growing Docker logs. This setup assumes you’re actually sending logs to a remote syslog; we use Papertrail for that
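
To give an idea of what the install step boils down to, a bare-bones version could look like the snippet below; this is only a sketch of the concept, not the actual recipe from the repository:

# Sketch of a minimal docker::install recipe (the real one in the repo may differ)
package 'docker'

service 'docker' do
  action [:enable, :start]
end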

Now let’s add an application. From the Opsworks menu on the left click Apps and add a new one.

  • Name: amazing application
  • Type: Other
  • Data Source Type: here I chose RDS, but feel free to use OpsWorks, or no DB at all and pass the data to your app via Docker containers or other sources
  • Repository type: Other

Now add just one Env variable to the app:

  • APP_TYPE: docker

Everything else will be configured via the (enormous) Stack JSON. Go to your stack settings and edit them. You will need to write a Stack JSON for your containers. Here’s an example:

{
  "logrotate": {
    "forwarder": "logspout0"
  },
  "deploy": {
    "amazing application": {
      "data_volumes": [
      {
        "socks": {
          "volumes": ["/var/run", "/var/lib/haproxy/socks"]
        },
        "static": {
          "volumes": ["/var/static"]
        }
      }
      ],
      "containers": [
        {
          "app": {
            "deploy": "auto",
            "image": "quay.io/mikamai/awesome-app",
            "database": true,
            "containers": 2,
            "volumes_from": ["socks", "static"],
            "entrypoint": "/app/bin/entrypoint.sh",
            "command": "bundle exec unicorn -c config/unicorn.rb -l /var/lib/haproxy/socks/%{app_name}.sock",
            "migration": "bundle exec rake db:migrate",
            "startup_time": 60,
            "env": {
              "RANDOM_VAR": "foo"
            },
            "notifications": {
              "rollbar" : {
                "access_token": "",
                "env_var": "RACK_ENV",
                "rev_var": "GIT_REVISION"
              }
            }
          }
        },
        {
          "cron": {
            "deploy": "cron",
            "image": "quay.io/mikamai/awesome-app",
            "database": true,
            "containers": 1,
            "env_from": "app",
            "command": "bundle exec rake cron:run",
            "cron": {"minute": 59}
          }
        },
        {
          "sidekiq": {
            "deploy": "auto",
            "image": "quay.io/mikamai/awesome-app",
            "database": true,
            "containers": 1,
            "env_from": "app",
            "command": "bundle exec sidekiq"
          }
        },
        {
          "haproxy": {
            "deploy": "manual",
            "hostname": "opsworks",
            "image": "quay.io/mikamai/configured-haproxy",
            "volumes_from": ["socks"],
            "env": {
              "REMOTE_SYSLOG": "logs.papertrailapp.com:1234"
            }
          }
        },
        {
          "nginx": {
            "deploy": "manual",
            "image": "quay.io/mikamai/configured-nginx",
            "ports": [
              "80:80",
              "443:443"
            ],
            "volumes_from": [
              "socks",
              "static"
            ]
          }
        },
        {
          "logspout": {
            "deploy": "manual",
            "hostname": "%{opsworks}",
            "image": "progrium/logspout",
            "volumes": [
              "/var/run/docker.sock:/tmp/docker.sock"
            ],
            "command": "syslog://logs.papertrailapp.com:1234"
          }
        }
      ]
    }
  },
  "docker": {
    "registries": {
      "quay.io": {
        "password": "",
        "username": ""
      }
    }
  }
}

WOW! That’s a lot to digest, isn’t it? In the next article we’ll go through the Stack JSON and see what each of the keys means and what it enables you to do.

Thanks for reading through this, see you soon!

A modern workflow for WordPress using Docker and Dokku

Dokku + WordPress

Every developer, sooner or later, has had to deal with WordPress, given it is one of the most popular blog/CMS platforms, if not the most popular.
According to Wikipedia, roughly 22% of websites run on it (that’s one website in five), it is widely known by users, it has a large community (over 30 thousand contributed plugins) and it is easily supported by designers.

Unfortunately WP was targeted at non-developers and had great success as a hosted platform, but working with it from a developer’s perspective, especially when it comes to the workflow, feels clunky and outdated.

Continue reading “A modern workflow for WordPress using Docker and Dokku”