Atom, 18 days in

I wanted to give you a quick update on my Atom test drive, now 18 days in.

First things first: IT IS SLOW. It's so bad that even if I like everything else (and I am liking it), I might drop Atom because of that alone.

Apart from the abysmal performance, Atom looks nice, and, while I have to be careful not to hit Emacs chords, the keyboard defaults are sensible.

Here’s a list of plugins I am using for now. It will probably grow as I use it more, and I will probably remove something, but here it is (there’s an apm one-liner to install the whole lot after the lists):

  • git-blame-plus (but I have no idea what it does, I liked the name)
  • git-plus (nice git integration, but definitely not magit)
  • language-docker (you know, I love Docker)
  • markdown-pdf (after all pdf is the 4th most popular religion)
  • minimap
  • tree-view-git-status

For visual pleasure, which is one of the most important reasons to switch to Atom:

  • seti-ui
  • seti-syntax
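
If you want to replicate this setup, apm (the package manager that ships with Atom) can install everything in one shot; the package names are exactly the ones listed above:

apm install git-blame-plus git-plus language-docker markdown-pdf minimap \
            tree-view-git-status seti-ui seti-syntax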

That’s it for today!

A month-long Atom test drive from an Emacs fanatic – day 0

It’s no secret I am an Emacs fanatic. I previously talked about my Emacs setup on this blog, and I try to convert my coworkers to Emacs every time I can, as Emanuel wrote in his Emacs love/hate piece.

Nonetheless I am writing this article in Atom, the newest kid on the block, an open-source editor by GitHub that recently reached the 1.0 milestone.

Why Atom you ask? The reason why I eventually ended up in the Emacs world is that as a programmer I think the editor is the most intimate piece of software you work with. The editor is your tool. The editor is to the programmer what the brush is to the painter, the scalpel to the sculptor, the sword to the swordsman, … you get it.

This strong relationship with the text editor is what makes me want to find the best possible one. I have tried many editors, most of them probably, and eventually mustered the strength to learn Emacs, and I haven’t looked back since.

Atom, though, shares a lot of principles with Emacs, so many that I decided to give it a test drive. This is the inaugural piece: a month from now I’ll tell you how the experience went.

I use Emacs to write posts, work with git, write code in several languages and organize my work. In the past I used it for IRC and email too, but I don’t anymore.

The first thing I noticed when firing up Atom to write this article is that Atom is visibly slower than Emacs. I don’t know if I’ll be able to survive one whole month with a slower editor, so beware, my concluding article might come out sooner.

I am also undecided whether I like it visually more than I like Emacs. I know people say Emacs sucks visually, but whoever says that has never configured it to their own taste.

There are several things going for Atom though. CoffeeScript is a more popular language than Lisp (and its hackers are for sure friendlier than the Lisp ones); I also know it better, so writing my own Atom extensions won’t be as hard as writing my Emacs extensions.

Everything is integrated with GitHub, which is a good thing for me. I know Richard Stallman won’t approve (god bless him!), but I am more pragmatic than he is about open source at all costs.

Just a note: while writing I have already changed themes three times. I really can’t find a way to make it look exactly the way it should, that is, to my liking 🙂

I have already installed several community packages I read about somewhere. I like minimap and hope to try linter very soon.

I am waiting to discover how, and if, I can integrate it with RSpec and Docker and watch my projects build in the background in a tab. I gotta say, once you get used to navigating files and buffers with helm, project navigation with the tree on the left feels a bit antiquated. I guess I can hide it; I’ll have to see how.

This is basically it. I hope to like Atom. Emacs has a lot of legacy: that’s its power and also its curse. Atom, on the other hand, is a clean slate, and sometimes everyone needs a reboot.

Talk to you in a month!

Fast Volume Sharing on Boot2Docker

When discussing Docker with fellow developers on Macs, I often hear complaints about how boot2docker doesn’t work as expected, or how the performance of shared volumes is slow.

I hear their complaints; after all, I hit my head more than once against the same problems. Eventually I found a sweet solution that works for me and has been battle-tested by my coworkers too: a setup based on Parallels, Vagrant and boot2docker.

Using Parallels gives us a less bug-ridden NFS implementation, which is needed to avoid the performance problems of shared folders.

We use Vagrant to tie everything together, and we get a nice, fast boot2docker that is a bit less magic than the default one, but also less opaque.
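
One prerequisite if you go the Parallels route, assuming Vagrant and Parallels Desktop are already installed: the Parallels provider plugin for Vagrant.

# adds the "parallels" provider used in the Vagrantfile below
vagrant plugin install vagrant-parallels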

Here’s my Vagrantfile, slightly tuned to also support the plain non-Parallels boot2docker image.

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "yungsang/boot2docker"
  config.vm.network "private_network", ip: ENV['BOOT2DOCKER_IP'] || "10.211.55.5"

  config.vm.provider "parallels" do |v, override|
    override.vm.box = "parallels/boot2docker"
    override.vm.network "private_network", type: "dhcp"
  end

  # http
  config.vm.network "forwarded_port", guest: 80, host: 80, auto_correct: true
  config.vm.network "forwarded_port", guest: 443, host: 443, auto_correct: true

  # rack
  config.vm.network "forwarded_port", guest: 9292, host: 9292, auto_correct: true

  config.vm.synced_folder "/Users/intinig/src", "/Users/intinig/src", type: "nfs", bsd__nfs_options: ["-maproot=0:0"], map_uid: 0, map_gid: 0
  config.vm.synced_folder "/Users/intinig/src", "/src", type: "nfs", bsd__nfs_options: ["-maproot=0:0"], map_uid: 0, map_gid: 0
end

The two synced_folder lines at the end of the Vagrantfile are what enables the fast volume sharing. Adapt them to your needs; in my case I share my ~/src folder both as /src and as /Users/intinig/src on the boot2docker VM.

To start the boot2docker VM, just enter the folder where you have the Vagrantfile and launch vagrant up.
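
In practice it looks roughly like this. The IP is the one hardcoded in the Vagrantfile for the non-Parallels case; under Parallels check which address the VM got via DHCP, and note that newer boot2docker images expose the Docker API over TLS on port 2376 rather than plain 2375:

cd ~/src/my-project    # wherever you keep the Vagrantfile
vagrant up
export DOCKER_HOST=tcp://10.211.55.5:2375
docker info            # should now talk to the daemon inside the VM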

Let me know how it works for you, happy dockering!

A containerized approach to Rails apps

In the first article of this series we talked about how to deploy Docker on Opsworks. In the second one we talked about how to configure the deployment. In this third installment we’re going to look at a sample Rails application and how to Dockerize it, breaking it into its component parts.

We’re going to look at several Dockerfiles, and we’re going to discuss the hows and whys of certain choices. So, without further ado, let’s begin. Let’s say we have a normal, real-world Ruby on Rails application. Most people nowadays serve RoR applications using a frontend server (usually nginx, but Apache is still strong), an application server (Puma, Unicorn, Passenger, …), and might use a SQL database (your choice of MySQL or Postgres) and maybe a key-value store, like Redis.

Many applications make good use of an asynchronous queue, like Sidekiq. That’s a lot of components, isn’t it? 🙂

Today we’re not going to look at the containers that run MySQL, Postgres or Redis. We’re going to focus on the app container, which will run our app server of choice, and on how this app container talks to its serving brothers, nginx and haproxy.

The first thing we do is create an empty Dockerfile in the app’s source code repo. As you should know by now, the Dockerfile tells Docker how to build our container.

Let’s start to fill it with instructions:

# my favorite way of starting containers
FROM ubuntu:14.04
# yep that's me folks
MAINTAINER Giovanni Intini <giovanni@mikamai.com>

# this is a trick I read somewhere,
# useful when you want to retrigger a build
ENV REFRESHED_AT 2015-01-28

# we install all prerequisites for ruby and our app. Your app might need
# less or more packages
RUN apt-get -yqq update
RUN apt-get install -yqq autoconf \
                         build-essential \
                         libreadline-dev \
                         libpq-dev \
                         libssl-dev \
                         libxml2-dev \
                         libyaml-dev \
                         libffi-dev \
                         zlib1g-dev \
                         git-core \
                         curl \
                         nodejs \
                         libmagickcore-dev \
                         libmagickwand-dev \
                         libcurl4-openssl-dev \
                         imagemagick \
                         bison \
                         ruby

# here we start installing ruby from sources. If you're curious about why we
# also installed ruby from apt-get, the short story is that we need it, but
# we're gonna remove it later in this file 🙂

ENV RUBY_MAJOR 2.2
ENV RUBY_VERSION 2.2.1

RUN mkdir -p /usr/src/ruby

RUN curl -SL "http://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.bz2" \
        | tar -xjC /usr/src/ruby --strip-components=1

RUN cd /usr/src/ruby && autoconf && ./configure --disable-install-doc && make -j"$(nproc)"

RUN apt-get purge -y --auto-remove bison \
                                   ruby

RUN cd /usr/src/ruby && make install && rm -rf /usr/src/ruby

RUN rm -rf /var/lib/apt/lists/*

# let's stop here for now

Everything up to here is simple. When possible we chain statements, so Docker builds fewer layers, since it stacks each step on top of the previous one.

Now the interesting part. We want to be able to cache gems in between image builds, and we also want to be able to override them with our local gems when developing locally using this image. We do it in two steps: first we configure bundler and add only Gemfile and Gemfile.lock to the image, then we add the rest of the application.

RUN echo 'gem: --no-rdoc --no-ri' >> $HOME/.gemrc
RUN gem install bundler
RUN bundle config path /remote_gems

COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
WORKDIR /app

RUN bundle install --deployment --without development test
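
Overriding the baked-in code and gems during local development is then just a matter of bind-mounting your working copy (and, optionally, a host directory acting as gem cache) over the paths the image already uses. A minimal sketch, where the image name matches the one we build below and the host paths are placeholders:

docker run -it --rm \
    -v $(pwd):/app \
    -v $(pwd)/.docker-gems:/remote_gems \
    whatever/myself/app_name bundle exec rake -T

The first run against an empty gem directory will need a bundle install inside the container to repopulate it.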

The order in which we do the rest is important, because we want to minimize the number of discarded layers when we change something in the application.

# here we just add all the environment variables we want
# more on that soon
ENV RAILS_ENV production
ENV RACK_ENV production
ENV SIDEKIQ_WORKERS 10

# what we do if we run a container from this image without telling it explicitly
# what command to use
CMD ["bundle", "exec", "unicorn", "--help"]

# now we add in the code
COPY . /app

# assets precompilation as late as possible
RUN bundle exec rake assets:precompile
ENV GIT_REVISION ef4312a
ENV SECRET_KEY_BASE af9cbc68d3ad9fe71669e791c59878ffd1fc

The first thing we have to be careful about, when writing an application that is supposed to run in containers and scale indefinitely, is to configure it via environment variables. This is the most flexible way of working and it allows you to deploy the app to Heroku, Opsworks, or Opsworks+Docker.

Now, if you recall, we used Opsworks to pass lots of environment variables to our app, but a lot of those can safely have default values set directly here. Usually I put in the Dockerfile all the variables that are required by the deployment process or that won’t change often.

The GIT_REVISION and SECRET_KEY_BASE variables are generated automatically by a rake task I wrote (warning: very rough, but it does its job). I usually expose GIT_REVISION in the backend of my applications, so I always know at a glance which version is deployed.
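
If you’re curious, the two values themselves are trivial to produce from plain shell; this is only the idea behind it, not the actual rake task mentioned above:

GIT_REVISION=$(git rev-parse --short HEAD)
SECRET_KEY_BASE=$(ruby -rsecurerandom -e 'puts SecureRandom.hex(64)')
# the task then writes these values into the two ENV lines before the image is built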

After writing our nice Dockerfile we launch rake docker:build (or just docker build -t whatever/myself/app_name .) and we have our cool image ready to be launched.

Let’s try to launch it with docker run whatever/myself/app_name:

Usage: unicorn [ruby options] [unicorn options] [rackup config file]
Ruby options:

  -e, --eval LINE          evaluate a LINE of code
  -d, --debug              set debugging flags (set $DEBUG to true)
  -w, --warn               turn warnings on for your script
  -I, --include PATH       specify $LOAD_PATH (may be used more than once)
  -r, --require LIBRARY    require the library, before executing your script
unicorn options:
  -o, --host HOST          listen on HOST (default: 0.0.0.0)
  -p, --port PORT          use PORT (default: 8080)
  -E, --env RACK_ENV       use RACK_ENV for defaults (default: development)
  -N                       do not load middleware implied by RACK_ENV
      --no-default-middleware
  -D, --daemonize          run daemonized in the background

  -s, --server SERVER      this flag only exists for compatibility
  -l {HOST:PORT|PATH},     listen on HOST:PORT or PATH
      --listen             this may be specified multiple times
                           (default: 0.0.0.0:8080)
  -c, --config-file FILE   Unicorn-specific config file
Common options:
  -h, --help               Show this message
  -v, --version            Show version

For now everything seems to be working, but obviously we need to tell unicorn what to do. Something like this should do the trick (note the -p 9292, so unicorn listens on the port we’re mapping; as the help above shows, its default is 8080):

$ docker run -p 80:9292 --env REQUIRED_VAR=foo whatever/myself/app_name bundle exec unicorn -p 9292

If your app is simple it should already be serving requests (probably with no static assets) on port 80 of your host machine.
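
A quick smoke test from the host looks like this (swap localhost for your Docker host’s IP if you’re on boot2docker, and pick any route your app actually serves):

curl -i http://localhost/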

We’re not done yet: it’s now time to work on our haproxy and nginx configurations. I keep mine in app_root/container/Dockerfile, but another repository is probably a better place.

In the first article I explained how I prefer to put haproxy between nginx and the app servers to get zero downtime deployments.

Here’s my haproxy Dockerfile.

FROM dockerfile/haproxy
MAINTAINER Giovanni Intini <giovanni@mikamai.com>
ENV REFRESHED_AT 2014-11-22

ADD haproxy.cfg /etc/haproxy/haproxy.cfg

Very simple, isn’t it? Everything we need (not a lot, actually) is done in haproxy.cfg, of which I present an abridged version with just the directives we need for our Rails app.

global
  log ${REMOTE_SYSLOG} local0
  log-send-hostname
  user root
  group root

frontend main
  bind /var/run/app.sock mode 777
  default_backend app

backend app
  option httpchk GET /ping
  option redispatch
  errorloc 400 /400
  errorloc 403 /403
  errorloc 500 /500
  errorloc 503 /503

  server app0 unix@/var/lib/haproxy/socks/app0.sock check
  server app1 unix@/var/lib/haproxy/socks/app1.sock check

Haproxy is very powerful but not very approachable, I know. What this configuration does is easy: it defines two backend servers that will be reached via unix sockets, and it exposes a unix socket itself, which will be used by nginx.

The most attentive readers will already know we’re going to tell unicorn to serve requests via unix sockets, and the ones with a superhuman memory will remember we already showed how to do that in the first article 🙂
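
For reference, this is roughly what running the two app containers by hand looks like, assuming a data-only container named socks that owns /var/lib/haproxy/socks (the stack JSON from the first article wires exactly this up for you):

docker run -d --volumes-from socks whatever/myself/app_name \
    bundle exec unicorn -c config/unicorn.rb -l /var/lib/haproxy/socks/app0.sock
docker run -d --volumes-from socks whatever/myself/app_name \
    bundle exec unicorn -c config/unicorn.rb -l /var/lib/haproxy/socks/app1.sock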

I promise we’re almost there, let’s look at nginx.

FROM nginx
MAINTAINER Giovanni Intini <giovanni@mikamai.com>
ENV REFRESHED_AT 2015-01-15

RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/conf.d/example_ssl.conf

COPY proxy.conf /etc/nginx/sites-templates/proxy.conf
COPY nginx.conf /etc/nginx/nginx.conf
COPY mime.types /etc/nginx/mime.types
COPY headers.conf /etc/nginx/common-headers.conf
COPY common-proxy.conf /etc/nginx/common-proxy.conf

WORKDIR /etc/nginx

CMD ["nginx"]

As with haproxy, we’re starting from the default image and just adding our own configuration. I can’t share everything I use, because it’s either taken from somewhere else or private, but here are the important details (from proxy.conf):

upstream app {
    server unix:/var/run/app.sock max_fails=0;
}

server {
    listen 80 deferred;
    listen [::]:80 deferred;

    server_name awesome-app.com;

    # IMPORTANT!!! We rsync this with the Rails assets to ensure that you can
    # serve up-to-date assets.
    root /var/static;
    client_max_body_size 4G;
    keepalive_timeout 10;

    open_file_cache          max=1000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;

    spdy_keepalive_timeout 300;
    spdy_headers_comp 6;

    # This is for files in /public, assets included
    location / {
        try_files $uri/index.html $uri @app;

        expires max;
        add_header Cache-Control public;

        etag on;
    }

    # Dynamic pages
    location @app {
        add_header Pragma "no-cache";
        add_header Cache-control "no-store";

        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;

        proxy_pass http://app;
    }
}

Simple, and it does the job. These three images will become the key containers in your app stack.

You can add containers with databases and other services, or use AWS for that, but the main building blocks are there.

Orchestrate them locally with Docker Compose or in the cloud with Opsworks and friends, and you shall be a happy camper.

Thanks for reading up to here, feel free to comment via mail, reddit, snail mail or pigeons, and most of all, have fun with Docker.

Docker and Opsworks: configuration basics

Last time we talked about Docker and Opsworks I left you with a huge JSON stack configuration, without too much info about how and why it worked.

Today we’re gonna review it key by key and explore the capabilities offered by opsworks-docker.

You can find the whole JSON in the original article; let’s break it down.

The top level keys are:


{
    "logrotate": {...},
    "docker": {...},
    "deploy": {...}
}

logrotate is used to declare which container will have to be restarted when we truncate the docker logfiles. I forward all my logs to papertrail using logspout, so I declare it like this:


"logrotate": {
    "forwarder": "logspout0"
},

It’s important to note the 0 at the end of the container name; we’ll talk about it soon.

docker contains information about custom Docker configuration, just private registries for now. It’s tested only with quay.io, the registry we use, but it should work out of the box (or with minimal modifications) with any other private registry.


"docker": {
    "registries": {
      "quay.io": {
        "password": "",
        "username": ""
    }
  }
}

It’s pretty simple: inside the registries key you add a hash for each registry you want to log in to, specifying your username and password.

That’s it for the simple configuration; let’s now look at the big deploy hash. In this hash we describe all the applications we want to support, all their containers, and the relationships among them.


"deploy": {
    "app_name": {
        "data_volumes": {...info about my data only containers...},
        "containers": {...all my running containers...}
    }
}

In data_volumes we declare all the data volume containers.

It’s pretty self-explanatory:


"data_volumes": {
    "volume_container_one": ["/all", "/my/volumes"],
    "volume_container_two": ["/other/volumes"]
}

The example in my previous article was taken from a real world use case, where we use shared volumes on data only containers to share unix socket files between front-end webservers and application servers.
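
Under the hood a data volume container is nothing exotic: as far as I understand the recipe, what gets created is morally equivalent to the classic busybox trick below (image choice and exact flags are my illustration, not lifted from the cookbook):

docker run --name volume_container_one -v /all -v /my/volumes busybox true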

containers holds information about the container graph we’ll have running on our host machine.


"containers": []

The first difference you’ll notice between containers and the rest of the configuration is that while everything else is a hash, containers is an array. It has to be an array because we need to be sure of the order in which the containers will be deployed.


"containers": [
    {
        "an_example": {
            "deploy": "auto",
            "image": "quay.io/mikamai/an_example",
            "database": "true",
            "containers": "3",
            "startup_time": "60",
            "ports": ["80:80", "443:443"],
            "volumes_from": ["volume_container_one", "volume_container_two"],
            "entrypoint": "entrypoint.sh",
            "command": "bundle exec unicorn -c config/unicorn.rb -l /var/lib/haproxy/socks/%{app_name}.sock",
            "env": {
                "RANDOM_VAR": "foo"
            },
            "env_from": "another_container_name",
            "hostname": "%{opsworks}",
            "migration": "bundle exec rake db:migrate",
            "notifications": {
                "rollbar" : {
                    "access_token": "",
                    "env_var": "RACK_ENV",
                    "rev_var": "GIT_REVISION"
                }
            }
        }
    }
]

deploy can be auto, manual or cron. auto is the basic one: it means you want this container to be deployed every time you deploy on opsworks.

manual is a little bit different: when deploying your application, if the container is not running, opsworks will spin it up; otherwise it will be left untouched. It’s useful for frontend servers, proxies, and anything that must not be restarted every time you deploy.

cron is for containers that need to run as cron jobs. The deploy recipe will set up a cron job that runs docker run at the specified intervals. cron containers need an extra key:


    "a_cron_example": {
        "deploy": "cron",
        ...
        "cron": {"minute": 59, "hour": "3", "weekday": "1"}
    }

Omitting any of minute, hour or weekday has the same effect as specifying * for that value in crontab.
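
The generated entry should end up looking roughly like this; the exact docker run invocation is up to the recipe, this only shows how the keys map onto crontab fields:

# minute=59, hour=3, weekday=1 from the example above
59 3 * * 1 docker run --rm $IMAGE $COMMAND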

image is self-explanatory: it’s the image name that docker will pull when deploying this container.

database tells opsworks whether it should supply DB info to the container via environment variables:


DB_ADAPTER
DB_DATABASE
DB_HOST
DB_PASSWORD
DB_PORT
DB_RECONNECT
DB_USERNAME

containers tells opsworks how many copies of the same container it should run. It’s mainly used for application servers, and to achieve zero downtime deployments. This is also why at the beginning we declared our forwarder to be logspout0: an integer is appended to the name of every running container. For containers running in a single copy that’s 0; for multiple copies of the same container we get app_server0, app_server1, …, app_serverX.

startup_time is used when containers is greater than 0. Most application servers take a while to boot and load all of your application code. Timing how long your app takes to boot, and specifying it in startup_time, allows you to achieve zero downtime deployments, because at least one application server will always be up and answering requests.

ports, volumes_from, entrypoint, command and env work exactly like their fig/command-line counterparts, with one twist that applies throughout the configuration JSON: every string is filtered through a lightweight templating system (actually built into the Ruby String class) that supports the following substitutions:


"%{public_ip}" # maps to the host instance public ip
"%{app_name}"  # name of the container + container_id, when containers > 0
"%{opsworks}"  # hostname of the host instance

env_from is a very useful helper. It allows you to copy the environment from another container definition. It’s immensely helpful when you have several containers that run off the same image and require the same environment, but are assigned different responsibilities.


"big_env_container": {
    ...
    "env": {...lots of keys...}
    ...
},
"leeching_container": {
    ...
    "env_from": "big_env_container"
    ...
}

hostname allows you to override the default container hostname and force it to something else. I use it with the %{opsworks} substitution for containers like logspout that must represent the whole stack. Feel free to come up with other uses 🙂

Finally migration and notifications were added to help our Ruby on Rails development, but can be adapted to other uses.

If migration is present opsworks will try to run its value to migrate the database before launching a new container. The smartest among you will have noticed that you can easily use this hook to launch any other docker run command you need before spinning up your containers. Notice that when containers is greater than 0, opsworks will only try to migrate the first one.
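
In practice, for a container like the an_example one above, the hook boils down to something along these lines before the new container is started (a sketch of the idea; the exact flags and environment wiring are handled by the recipe, and the DB_* values come from the variables listed earlier):

docker run --rm -e DB_HOST=... -e DB_USERNAME=... -e DB_PASSWORD=... \
    quay.io/mikamai/an_example bundle exec rake db:migrate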

notifications is used during deployment to notify an external service that a deploy just happened. We only support rollbar for now, via the docker::notify_rollbar recipe, but will gladly accept pull requests for other services (gitter, campfire, irc, you name it).

CONGRATULATIONS if you read all of this. In the next article we’ll talk about dockerizing your application and breaking it in small containers that you can juggle on your favorite docker host.

Docker 1.5: IPv6 support, read-only containers, stats, “named Dockerfiles” and more

docker/swarm

Zero Downtime deployments with Docker on Opsworks

Updated on 2015-02-19, changed the stack JSON with new interpolation syntax

When I was younger I always wanted to be a cool kid. Growing up as a computer enthusiast, that meant using the latest window manager and the most cutting-edge versions of all software. Who hasn’t gone through a recompile-the-kernel-several-times-a-day phase in their life?

Luckily I was able to survive that phase (except for a lost 120GB RAID0 setup, but that’s another story), and maturity gave me the usual old-guy-esque fear of change.

That’s the main reason I waited a bit before approaching Docker. All the cool kids were raving about it, and I am a cool kid no more, just a calm, collected, young-ish man.

You might have guessed it from the title of this article, but Docker blew my mind. It solves in a simple way lots of common development and deployment problems.

After learning to rebuild my application to work from a Docker container (more about that in the coming weeks), my focus went to deploying Docker. We run our application on the AWS cloud, and our Amazon service of choice for that is Opsworks. Easy to set up, easy to maintain, easy to scale.

Unfortunately Opsworks does not support Docker, and ECS, Amazon’s own container-running solution, is not yet production ready. I needed to migrate my complicated application to Docker, and I wanted to do it now. The only solution was convincing Opsworks to support Docker.

Googling around I found a tutorial and some GitHub repos, but none were up to the task for me.

My application needs several containers to work, and I was aiming for something that allowed me to orchestrate the following:

  • 1 load balancer (but it might be ELB!)
  • 1 frontend webserver, like nginx
  • 1 haproxy to juggle between app servers, to allow for zero downtime deployments
  • 2 application containers, running their own app server
  • 1 Redis for caching (but it might be Elasticache!)
  • 1 PostgreSQL as DB (but it might be Amazon RDS)
  • 1 or more cron tasks
  • 1 or more sidekiq instances for async processing

In addition to that I wanted everything to scale horizontally based on load. It seemed a daunting task at first, but as I started writing my chef recipes everything started to make sense. My inspiration was fig. I like the way fig allows you to specify relations between containers, and I wanted to do something like that on Opsworks.

The result of my work can be found here. At the time of this writing the README.md file is still blank, but I promise some documentation should appear there sooner or later… for now use this article as a reference 🙂

The first thing we’ll do to deploy our dockerized app is log in to AWS, access Opsworks and click on create a stack. Fill in the form like this:

  • Name: whatever you like
  • Region: whatever you prefer
  • Default root device type: EBS backed
  • Default SSH key: choose your key if you want to access your instances
  • [set the rest to default or your liking if you know what you’re doing]
  • Advanced

Once you have your stack you have to add a layer. Click on add a layer and fill in the form:

  • Layer type: Custom
  • Name: Docker
  • Short name: docker

After you create the layer, go edit its configuration and click the EBS volumes tab. We’ll need to add a 120GB volume for each instance we add to the stack. Why, you ask? Unfortunately on Amazon Linux/EC2 docker will use devicemapper to manage your containers, and devicemapper creates a file that with normal use will grow up to 100GB. The extra 20GB are used for your images. You can go with less than that, or even with no EBS volume, but know that sooner or later you’ll hit that limit.

  • Mount point: /var/lib/docker
  • Size total: 120GB
  • Volume type: General Purpose (SSD)

After that let’s edit our layer to add our custom recipes:

  • Setup
    • docker::install, docker::registries, logrotate::default, docker::logrotate
  • Deploy
    • docker::data_volumes, docker::deploy

What do our custom recipes do?

  • docker::install is easy, it just installs docker on our opsworks instances
  • docker::registries is used to log in to private docker registries. It should work with several types of registries, but I have personally tested it only with quay.io
  • logrotate::default and docker::logrotate manage the setup of logrotate to avoid ever growing docker logs. This setup assumes you’re actually sending logs to a remote syslog, we use papertrail for that

Now let’s add an application. From the Opsworks menu on the left click Apps and add a new one.

  • Name: amazing application
  • Type: Other
  • Data Source Type: here I choose RDS, but you can feel free to use OpsWorks, or no DB at all and pass the data to your app via docker containers or other sources
  • Repository type: Other

Now add just one Env variable to the app:

  • APP_TYPE: docker

Everything else will be configured via the (enormous) Stack JSON. Go to your stack settings and edit them. You will need to compile a Stack JSON for your containers. Here’s an example:

{
  "logrotate": {
    "forwarder": "logspout0"
  },
  "deploy": {
    "amazing application": {
      "data_volumes": [
      {
        "socks": {
          "volumes": ["/var/run", "/var/lib/haproxy/socks"]
        },
        "static": {
          "volumes": ["/var/static"]
        }
      }
      ],
      "containers": [
        {
          "app": {
            "deploy": "auto",
            "image": "quay.io/mikamai/awesome-app",
            "database": true,
            "containers": 2,
            "volumes_from": ["socks", "static"],
            "entrypoint": "/app/bin/entrypoint.sh",
            "command": "bundle exec unicorn -c config/unicorn.rb -l /var/lib/haproxy/socks/%{app_name}.sock",
            "migration": "bundle exec rake db:migrate",
            "startup_time": 60,
            "env": {
              "RANDOM_VAR": "foo"
            },
            "notifications": {
              "rollbar" : {
                "access_token": "",
                "env_var": "RACK_ENV",
                "rev_var": "GIT_REVISION"
              }
            }
          }
        },
        {
          "cron": {
            "deploy": "cron",
            "image": "quay.io/mikamai/awesome-app",
            "database": true,
            "containers": 1,
            "env_from": "app",
            "command": "bundle exec rake cron:run",
            "cron": {"minute": 59}
          }
        },

        {
          "sidekiq": {
            "deploy": "auto",
            "image": "quay.io/mikamai/awesome-app",
            "database": true,
            "containers": 1,
            "env_from": "app",
            "command": "bundle exec sidekiq"
          }
        },
        {
          "haproxy": {
            "deploy": "manual",
            "hostname": "opsworks",
            "image": "quay.io/mikamai/configured-haproxy",
            "volumes_from": ["socks"],
            "env": {
              "REMOTE_SYSLOG": "logs.papertrailapp.com:1234"
            }
          }
        },
        {
          "nginx": {
            "deploy": "manual",
            "image": "quay.io/mikamai/configured-nginx",
            "ports": [
              "80:80",
              "443:443"
            ],
            "volumes_from": [
              "socks",
              "static"
            ]
          }
        },
        {
          "logspout": {
            "deploy": "manual",
            "hostname": "%{opsworks}",
            "image": "progrium/logspout",
            "volumes": [
              "/var/run/docker.sock:/tmp/docker.sock"
            ],
            "command": "syslog://logs.papertrailapp.com:1234"
          }
        }
      ]
    }
  },
  "docker": {
    "registries": {
      "quay.io": {
        "password": "",
        "username": ""
      }
    }
  }
}

WOW! That’s a lot to digest, isn’t it? In the next article we’ll go through the Stack JSON and see what each of the keys mean and what they enable you to do.

Thanks for reading through this, see you soon!