ECS and KISS dockerization of WordPress (Part 2)

Two articles ago I wrote about my initial experience with Docker and ECS, the container service built on top of EC2 offered by Amazon. Here it is, if you want to take a look.

Today I want to continue in that direction by describing the configuration of the containers (or rather, the container) I chose to serve the application selected as guinea pig for ECS. Just as a reminder, the app was an almost standard WordPress blog with a custom theme and a few plugins.

I said container because the app is actually served by one single container, in which it is packed together with NGINX. This naive solution differs from those offered by the Dockerfiles available online mainly because it takes advantage of two other AWS services: Amazon S3 and Amazon RDS.

Assuming the two products are known, the former lets us store and serve user-generated content (e.g. media uploaded from the WordPress dashboard), while the latter provides the actual database used by the app. S3 is accessed through a WordPress plugin that I already mentioned in my previous post. Again, if you are curious, check it out.

Without these two services we would have had to handle at least two more containers: a data volume to store the uploaded media and a container to host the database engine.

Other containers we could have deployed are a data volume to host the WordPress core, a data volume to host the theme, and a dedicated container for the web server. This is certainly a good example of a well-organized architecture, but it needs additional Dockerfiles and a more complex configuration on the ECS side.

Considering my goal and the availability of S3 and RDS, I chose a more naive approach and relied on a single, simple container. The Dockerfile I used as a starting point was the one of the official WordPress image, but by now there is not much left of it.
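Since the entrypoint is derived from the official image, the database connection can in principle be injected at run time through environment variables, with no database container or volume to manage. Here is a minimal sketch of a local test run, assuming the entrypoint still honors the official image's WORDPRESS_DB_* variables; the image name, RDS endpoint and credentials are just placeholders:

# Local smoke test: point the single container at RDS instead of a DB container
docker run -d -p 80:80 \
    -e WORDPRESS_DB_HOST=mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com:3306 \
    -e WORDPRESS_DB_USER=wordpress \
    -e WORDPRESS_DB_PASSWORD=secret \
    -e WORDPRESS_DB_NAME=wordpress \
    wp-nginx-hhvm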

Indeed, after getting a first functioning version online with Apache as the web server and the stable PHP 5.6, I started to dig into possible optimizations of the container. This led me to NGINX, PHP 7 and HHVM.

Long story short, right now we have a custom image built from Debian Jessie (as recommended by the Docker docs), with a custom NGINX compiled from source and HHVM as the ‘PHP engine’.

Here is the ‘monster’ Dockerfile:

FROM debian:jessie

ENV NGINX_VERSION 1.9.9
ENV NPS_VERSION 1.10.33.2

# Install wget and ca-certificates
# Install wget and ca-certificates
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates && \
    # Set up HHVM installation
    wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add - && \
    echo deb http://dl.hhvm.com/debian jessie main | tee /etc/apt/sources.list.d/hhvm.list && \
    # Install all the packages we need...
    apt-get update && apt-get install -y --no-install-recommends \
    build-essential zlib1g-dev libpcre3 libpcre3-dev libssl-dev hhvm unzip less && \
    # Install wp-cli
    wget https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
    chmod +x wp-cli.phar && \
    mv wp-cli.phar /usr/local/bin && \
    echo '#!/bin/bash' >> /usr/local/bin/wp && \
    echo '/usr/local/bin/wp-cli.phar --allow-root "$@"' >> /usr/local/bin/wp && \
    chmod +x /usr/local/bin/wp && \
    # Get PageSpeed NGINX module source
    cd && \
    wget https://github.com/pagespeed/ngx_pagespeed/archive/master.zip -O ngx_pagespeed-master.zip && \
    unzip ngx_pagespeed-master.zip && \
    cd ngx_pagespeed-master && \
    wget https://dl.google.com/dl/page-speed/psol/${NPS_VERSION}.tar.gz && \
    tar -xzvf ${NPS_VERSION}.tar.gz && \
    # Get NGINX source
    cd && \
    wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz && \
    tar -xvzf nginx-${NGINX_VERSION}.tar.gz && \
    # Compile NGINX with all bells and whistles
    cd nginx-${NGINX_VERSION} && \
    ./configure --prefix=/etc/nginx \
                --sbin-path=/usr/sbin/nginx \
                --conf-path=/etc/nginx/nginx.conf \
                --error-log-path=/var/log/nginx/error.log \
                --http-log-path=/var/log/nginx/access.log \
                --pid-path=/var/run/nginx.pid \
                --lock-path=/var/run/nginx.lock \
                --http-client-body-temp-path=/var/cache/nginx/client_temp \
                --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
                --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
                --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
                --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
                --user=nginx \
                --group=nginx \
                --with-http_ssl_module \
                # --with-http_realip_module \
                # --with-http_addition_module \
                # --with-http_sub_module \
                # --with-http_dav_module \
                # --with-http_flv_module \
                # --with-http_mp4_module \
                --with-http_gunzip_module \
                --with-http_gzip_static_module \
                # --with-http_random_index_module \
                --with-http_secure_link_module \
                # --with-http_stub_status_module \
                --with-http_auth_request_module \
                --with-threads \
                # --with-stream \
                # --with-stream_ssl_module \
                # --with-http_slice_module \
                # --with-mail \
                # --with-mail_ssl_module \
                --with-file-aio \
                --with-http_v2_module \
                --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' \
                --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' \
                --with-ipv6 \
                --add-module=$HOME/ngx_pagespeed-master && \
    make && make install && \
    # Back to $HOME so the relative cleanup paths below resolve
    cd && \
    # Clean up everything
    apt-get purge -y wget ca-certificates build-essential unzip && \
    apt-get autoremove -y && apt-get clean -y && apt-get autoclean -y && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* $HOME/ngx_pagespeed-master.zip \
       $HOME/ngx_pagespeed-master nginx-${NGINX_VERSION}.tar.gz nginx-${NGINX_VERSION}

# Add the nginx user and add it to the www-data group (this needs to be in a separate RUN!)
RUN useradd -r nginx && usermod -a -G www-data nginx

# Configure NGINX
RUN bash -c 'mkdir -p /var/cache/nginx/{client_temp,proxy_temp,fastcgi_temp,uwsgi_temp,scgi_temp}' && \
    mkdir /var/ngx_pagespeed_cache && \
    mkdir /var/run/nginx-cache && \
    chown -R nginx /var/cache/nginx/client_temp \
                   /var/cache/nginx/proxy_temp \
                   /var/cache/nginx/fastcgi_temp \
                   /var/cache/nginx/uwsgi_temp \
                   /var/cache/nginx/scgi_temp && \
    chown -R www-data:www-data /var/ngx_pagespeed_cache /var/run/nginx-cache

# Configure HHVM
RUN /usr/share/hhvm/install_fastcgi.sh && \
    /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60 && \
    echo 'hhvm.server.expose_hphp = false' >> /etc/hhvm/server.ini
    # Use a UNIX socket (HHVM side)
    # sed -i '/hhvm.server.port/chhvm.server.file_socket=/var/run/hhvm/hhvm.sock' /etc/hhvm/server.ini && \
    # Set permissions on UNIX socket
    # chown -R www-data:www-data /var/run/hhvm

# Set up NGINX conf
COPY nginx.conf /etc/nginx/nginx.conf

# Set up app code
COPY . /usr/share/nginx/html

# Set permissions on app root
RUN chown -R www-data:www-data /usr/share/nginx/html

# Set up entrypoint
COPY docker-entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

EXPOSE 80 443

CMD ["nginx", "-g", "daemon off;"]

I called it a ‘monster’ mainly because of the first big RUN, which deals with installing and compiling everything needed.

I decided to switch from multiple separate RUN instructions to one big RUN to minimize the number of layers in the union filesystem and to properly handle the cleanup of everything that gets installed and created while building NGINX. By doing the cleanup right after the installation and compilation, nothing unneeded ever gets written into the layer. This minimizes the total size of the built image. Just to ‘spread the word’, here is a beautiful explanation of how the Docker filesystem works.
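To see the effect, the layering and the final size can be checked directly after a build (the tag below is just an example):

docker build -t wp-nginx-hhvm .
docker history wp-nginx-hhvm    # one entry per layer, with its size: the big RUN shows up as a single layer
docker images wp-nginx-hhvm     # total size of the resulting image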

Among all the things handled by the ‘big RUN’, the most interesting is probably the configure step before the NGINX compilation. This step specifies all the desired NGINX modules, and among them is the Google PageSpeed module. It enables some pretty nice advanced optimizations and I strongly recommend taking a look at it.
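As a side note, before ECS can run the container the image has to live in a registry it can pull from. I won't cover that here, but as a rough sketch with Amazon ECR (account id, region and repository name are placeholders, and the login step assumes a recent AWS CLI):

aws ecr get-login-password --region eu-west-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker tag wp-nginx-hhvm 123456789012.dkr.ecr.eu-west-1.amazonaws.com/wp-nginx-hhvm:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/wp-nginx-hhvm:latest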

There is actually a lot more to talk about, but I think I will continue describing the Dockerfile and the NGINX/HHVM configuration in my next article(s).

So stay tuned! ;P

Cheers!
