Last time we talked about Docker and OpsWorks I left you with a huge JSON stack configuration, without much info about how and why it worked.
Today we're gonna review it key by key and explore the capabilities offered by opsworks-docker.
You can find the whole JSON in the original article; let's break it down.
The top level keys are:
{
  "logrotate": {...},
  "docker": {...},
  "deploy": {...}
}
logrotate
is used to declare which container will have to be restarted when we truncate the docker logfiles. I forward all my logs to papertrail using logspout, so I declare it like this:
"logrotate": {
  "forwarder": "logspout0"
},
It’s important to note the 0 at the end of the container name; we’ll talk about it soon.
docker
contains information about custom docker configuration. Just private registries for now. It’s tested only with quay.io, the one we use, but it should work out of the box (or with minimal modifications) for any other private registry.
"docker": {
  "registries": {
    "quay.io": {
      "password": "",
      "username": ""
    }
  }
}
It’s pretty simple: inside the registries key you add a hash for each registry you want to log in to, then specify your username and password.
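Under the hood this amounts to a docker login per registry. Here is a minimal Ruby sketch of what that hash implies; the commands are only built, not executed, and the credentials are placeholders:

```ruby
# Build the `docker login` command implied by each entry in the "registries" hash.
registries = {
  "quay.io" => { "username" => "myuser", "password" => "secret" }
}

login_commands = registries.map do |registry, creds|
  # Shell quoting omitted for brevity; a real recipe would escape these values.
  "docker login -u #{creds['username']} -p #{creds['password']} #{registry}"
end
# login_commands => ["docker login -u myuser -p secret quay.io"]
```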
We got rid of the simple configuration, let’s now look at the big deploy
hash. In this hash we describe all the applications we want to support, all their containers, and the relationships among those.
"deploy": {
  "app_name": {
    "data_volumes": {...info about my data only containers...},
    "containers": {...all my running containers...}
  }
}
In data_volumes we declare all the data volume containers. It’s pretty self-explanatory:
"data_volumes": {
  "volume_container_one": ["/all", "/my/volumes"],
  "volume_container_two": ["/other/volumes"]
}
The example in my previous article was taken from a real-world use case, where we use shared volumes on data-only containers to share Unix socket files between front-end webservers and application servers.
containers holds information about the container graph we’ll have running on our host machine.
"containers": []
The first difference you’ll notice between containers and the rest of the configuration is that while everything else is a hash, containers is an array. An array is needed because we want to control the order in which the containers are deployed.
"containers": [
  {
    "an_example": {
      "deploy": "auto",
      "image": "quay.io/mikamai/an_example",
      "database": "true",
      "containers": "3",
      "startup_time": "60",
      "ports": ["80:80", "443:443"],
      "volumes_from": ["volume_container_one", "volume_container_two"],
      "entrypoint": "entrypoint.sh",
      "command": "bundle exec unicorn -c config/unicorn.rb -l /var/lib/haproxy/socks/%{app_name}.sock",
      "env": {
        "RANDOM_VAR": "foo"
      },
      "env_from": "another_container_name",
      "hostname": "%{opsworks}",
      "migration": "bundle exec rake db:migrate",
      "notifications": {
        "rollbar": {
          "access_token": "",
          "env_var": "RACK_ENV",
          "rev_var": "GIT_REVISION"
        }
      }
    }
  }
]
deploy can be either auto, manual, or cron. auto is the basic one: it means you want this container to be deployed every time you deploy on opsworks.
manual is a little different: when deploying your application, if the container is not running, opsworks will spin it up; otherwise it will be left untouched. It’s useful for frontend servers, proxies, and anything that must not be restarted every time you deploy.
cron is for containers that need to run as cron jobs. The deploy recipe will set up a cron job that runs docker run at the specified intervals. cron containers need an extra key:
"a_cron_example": {
  "deploy": "cron",
  ...
  "cron": {"minute": 59, "hour": "3", "weekday": "1"}
}
Omitting minute, hour, or weekday has the same effect as specifying * for that value in crontab.
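In other words, each missing field falls back to *. A quick sketch of how the crontab entry could be assembled from the hash above (field order per crontab; the docker run payload is elided, and this is a simplification of what the recipe actually writes):

```ruby
# "weekday" omitted on purpose: it should default to "*".
cron = { "minute" => 59, "hour" => "3" }

minute  = cron.fetch("minute",  "*")
hour    = cron.fetch("hour",    "*")
weekday = cron.fetch("weekday", "*")

# crontab field order: minute hour day-of-month month day-of-week command
entry = "#{minute} #{hour} * * #{weekday} docker run ..."
# entry => "59 3 * * * docker run ..."
```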
image is self-explanatory: it’s the image name that docker will pull when deploying this container.
database tells opsworks whether it should supply DB info to the container via environment variables:
DB_ADAPTER
DB_DATABASE
DB_HOST
DB_PASSWORD
DB_PORT
DB_RECONNECT
DB_USERNAME
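Inside the container those variables can be read back like any other environment variable. For example, a Rails-style database URL could be assembled from them; the values below are placeholders standing in for what opsworks would inject, not something opsworks-docker generates itself:

```ruby
# Placeholder values for the variables opsworks supplies to the container.
env = {
  "DB_ADAPTER"  => "postgresql",
  "DB_USERNAME" => "app",
  "DB_PASSWORD" => "secret",
  "DB_HOST"     => "db.example.com",
  "DB_PORT"     => "5432",
  "DB_DATABASE" => "app_production"
}

database_url = "#{env['DB_ADAPTER']}://#{env['DB_USERNAME']}:#{env['DB_PASSWORD']}" \
               "@#{env['DB_HOST']}:#{env['DB_PORT']}/#{env['DB_DATABASE']}"
# database_url => "postgresql://app:secret@db.example.com:5432/app_production"
```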
containers tells opsworks how many copies of the same container it should run. It’s mainly used for application servers, and to achieve zero-downtime deployments. This is why at the beginning we declared our forwarder to be logspout0: an integer is appended to the name of every running container. For a container running as a single copy that’s 0; for multiple copies of the same container we get app_server0, app_server1, …, app_serverX.
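The naming scheme is easy to reproduce: the container name plus a zero-based index. A sketch:

```ruby
# Generate the names opsworks-docker gives to the running copies of a container.
def container_names(name, copies)
  (0...copies).map { |i| "#{name}#{i}" }
end

container_names("logspout", 1)
# => ["logspout0"]
container_names("app_server", 3)
# => ["app_server0", "app_server1", "app_server2"]
```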
startup_time is used when containers is greater than 1. Most application servers take a while to boot and load all of your application code. Timing how long your app takes to boot, and specifying it in startup_time, allows you to achieve zero-downtime deployments, because at least one application server will always be up and answering requests.
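The zero-downtime dance can be pictured as a rolling restart: replace one copy, wait startup_time seconds, then move on to the next. A simplified sketch that only records the steps instead of calling docker or sleeping; the actual recipe logic may differ:

```ruby
# Record the steps of a rolling redeploy of `copies` containers named name0..nameN.
def rolling_deploy(name, copies, startup_time)
  steps = []
  (0...copies).each do |i|
    steps << "stop and remove #{name}#{i}"
    steps << "docker run --name #{name}#{i} ..."
    steps << "wait #{startup_time}s before touching the next copy"
  end
  steps
end

rolling_deploy("app_server", 2, 60)
# While app_server0 is being replaced, app_server1 keeps serving requests.
```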
ports, volumes_from, entrypoint, command, and env work exactly like their fig/command-line counterparts, with one exception that is valid throughout the configuration JSON: every string is filtered through a lightweight templating system (actually built into the Ruby String class) that supports the following substitutions:
"%{public_ip}" # maps to the host instance public ip
"%{app_name}" # name of the container + container_id, when containers > 0
"%{opsworks}" # hostname of the host instance
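That lightweight templating system is just Ruby’s built-in String#% with named references, so you can reproduce it in one line. The substitution values below are placeholders for what opsworks would supply at deploy time:

```ruby
# The command string from the configuration, before substitution.
template = "bundle exec unicorn -l /var/lib/haproxy/socks/%{app_name}.sock"

# Placeholder metadata; opsworks fills these in from the instance and container.
substitutions = { app_name: "an_example0", public_ip: "203.0.113.10", opsworks: "web1" }

command = template % substitutions
# command => "bundle exec unicorn -l /var/lib/haproxy/socks/an_example0.sock"
```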
env_from is a very useful helper: it allows you to copy the environment from another container definition. Immensely helpful when you have several containers that run off the same image, require the same environment, but are assigned different responsibilities.
"big_env_container": {
  ...
  "env": {...lots of keys...}
  ...
},
"leeching_container": {
  ...
  "env_from": "big_env_container"
  ...
}
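Resolving env_from boils down to looking up the referenced definition and copying its env hash. A sketch with the definitions trimmed to the relevant keys; how the recipe handles a container that sets both env and env_from is not covered here:

```ruby
containers = {
  "big_env_container"  => { "env" => { "RACK_ENV" => "production", "SECRET" => "s3cr3t" } },
  "leeching_container" => { "env_from" => "big_env_container" }
}

# Return the env for `name`, following one level of env_from indirection.
def resolve_env(containers, name)
  definition = containers.fetch(name)
  source = definition["env_from"] ? containers.fetch(definition["env_from"]) : definition
  source.fetch("env", {})
end

resolve_env(containers, "leeching_container")
# => { "RACK_ENV" => "production", "SECRET" => "s3cr3t" }
```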
hostname allows you to override the default container hostname and force it to something else. I use it with the %{opsworks} substitution for containers like logspout that must represent the whole stack. Feel free to come up with other uses 🙂
Finally, migration and notifications were added to help our Ruby on Rails development, but they can be adapted to other uses.
If migration is present, opsworks will run its value to migrate the database before launching a new container. The smartest among you will have noticed that you can easily use this hook to launch any other docker run command you need before spinning up your containers. Notice that when containers is greater than 1, opsworks will only migrate the first copy.
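Conceptually the migration hook is a one-off docker run with the container’s image and environment, executed before the first copy comes up. A sketch of the command it could build; the exact flag set is an assumption, not the recipe’s literal invocation:

```ruby
container = {
  "image"     => "quay.io/mikamai/an_example",
  "migration" => "bundle exec rake db:migrate",
  "env"       => { "RACK_ENV" => "production" }
}

# Assemble a throwaway container run that executes the migration command.
env_flags = container["env"].map { |k, v| "-e #{k}=#{v}" }.join(" ")
migrate_command = "docker run --rm #{env_flags} #{container['image']} #{container['migration']}"
# migrate_command =>
#   "docker run --rm -e RACK_ENV=production quay.io/mikamai/an_example bundle exec rake db:migrate"
```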
notifications is used during deployment to notify an external service that a deploy just happened. We only support Rollbar for now, via the docker::notify_rollbar recipe, but we’ll gladly accept pull requests for other services (Gitter, Campfire, IRC, you name it).
CONGRATULATIONS if you read all of this. In the next article we’ll talk about dockerizing your application and breaking it into small containers that you can juggle on your favorite docker host.