From Arduino to Rails through an API

Some years ago I used / to save a stream of data generated by a sensor attached to an Arduino board. Unfortunately this service is no longer available, but you can build your own data stream repository with Rails and Heroku.

In this example we read temperatures from an Arduino sensor and then save the data via an API exposed by a Rails app.

There are several guides on building REST APIs with Rails. This tutorial integrates some best practices, but you should focus your work on a smaller subset, depending on your needs. Rails and JSON give you the perfect combination for fast and comprehensive development.

Rails API controllers

# /app/api/temperatures_controller.rb

module Api
  class TemperaturesController < ActionController::Base
    skip_before_action :verify_authenticity_token, if: :json_request?

    def index
      temperatures = Temperature.all
      if temperatures.any?
        render json: temperatures
      else
        render json: {}, status: :not_found
      end
    end

    def show
      temperature = Temperature.find_by(id: params[:id])
      if temperature
        render json: temperature
      else
        render json: {}, status: :not_found
      end
    end

    def create
      temperature =
        render json: temperature, location: api_temperature_path(temperature), status: :created
      else
        render json: { errors: temperature.errors }, status: :unprocessable_entity
      end
    end

    private

    def json_request?
      request.format.json?
    end

    def temperature_params
      params.require(:temperature).permit(:value)
    end
  end
end

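The controller above relies on a namespaced route (the `api_temperature_path` helper comes from it). The original routes file is not shown; a plausible config/routes.rb entry could look like this:

```ruby
# Hypothetical config/routes.rb entry matching the controller above:
# it namespaces the resource under /api and defaults responses to JSON.
Rails.application.routes.draw do
  namespace :api, defaults: { format: :json } do
    resources :temperatures, only: [:index, :show, :create]
  end
end
```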
Test locally

During development you can test your API with curl; here are some examples of saving and retrieving JSON data feeds:

# create
curl -v localhost:3000/api/temperatures -X POST \
     -H "Accept: application/json" \
     -H "Content-Type: application/json" \
     -d '{"temperature": {"value": 20}}'

# index
curl -v localhost:3000/api/temperatures

# show
curl -v localhost:3000/api/temperatures/1

Your goal now is to create a POST request from Arduino just like the one generated with curl.

Arduino POST request

To send data to a Rails app you need an Arduino Ethernet Shield + Arduino, or an Arduino Yun. In this example we are using an Arduino Ethernet Shield and running this code on it:

#include <SPI.h>
#include <Ethernet.h>

// assign a MAC address for the ethernet controller.
// Newer Ethernet shields have a MAC address printed on a sticker on the shield;
// fill in your address here:
byte mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED};

// fill in an available IP address on your network here,
// for manual configuration:
IPAddress ip(192,168,1,10);
// initialize the library instance:
EthernetClient client;

// fill in your server address here:
char server[] = "";

unsigned long lastConnectionTime = 0;        // last time we connected to the server, in milliseconds
boolean lastConnected = false;               // state of the connection last time through the loop
const unsigned long postingInterval = 60000; // delay between updates, in milliseconds

float temperature;
int reading;
int lm35Pin = 5;
float referenceVoltage;

void setup() {
  // Open serial communications and wait for port to open:
  Serial.begin(9600);

  // use the internal 1.1V reference for a better LM35 resolution:
  analogReference(INTERNAL);
  referenceVoltage = 1.1;

  // start the Ethernet connection:
  if (Ethernet.begin(mac) == 0) {
    Serial.println("Failed to configure Ethernet using DHCP");
    // DHCP failed, so use a fixed IP address:
    Ethernet.begin(mac, ip);
  }
}

void loop() {
  // if there's incoming data from the net connection,
  // send it out the serial port.  This is for debugging
  // purposes only:
  if (client.available()) {
    char c =;
    Serial.print(c);
  }

  // if there's no net connection, but there was one last time
  // through the loop, then stop the client:
  if (!client.connected() && lastConnected) {
    client.stop();
  }

  // if you're not connected, and the posting interval has passed since
  // your last connection, then connect again and send data:
  if (!client.connected() && (millis() - lastConnectionTime > postingInterval)) {
    reading = analogRead(lm35Pin);
    // the LM35 outputs 10 mV per degree Celsius:
    temperature = (referenceVoltage * reading * 100.0) / 1023;
    sendData((int) temperature);
  }
  // store the state of the connection for next time through
  // the loop:
  lastConnected = client.connected();
}

// this method makes a HTTP connection to the server:
void sendData(int thisData) {
  // build the JSON payload:
  String jsonData = "{\"temperature\": {\"value\": ";
  jsonData = jsonData + thisData;
  jsonData = jsonData + "}}";
  // if there's a successful connection:
  if (client.connect(server, 80)) {
    // send the HTTP POST request:
    client.println("POST /api/temperatures HTTP/1.1");
    client.println("User-Agent: Arduino/1.0");
    client.println("Accept: application/json");
    client.println("Content-Type: application/json");
    client.print("Content-Length: ");
    client.println(jsonData.length());

    // last pieces of the HTTP POST request:
    client.println("Connection: close");
    client.println();
    client.println(jsonData);
  }
  else {
    // if you couldn't make a connection:
    Serial.println("connection failed");
    client.stop();
  }
  // note the time that the connection was made or attempted:
  lastConnectionTime = millis();
}

The example above shows how to manage the Ethernet configuration, transform the reading from the analog input sensor, and send JSON data to our Rails app.

Keep in mind that it is possible to use other boards to collect and send data: for example, Spark Core and Electric Imp are perfect because they already integrate a WiFi module.

Using Spring with Pow!

The other day I was a bit sad when I realized that pow (the web server) wasn’t leveraging spring (the Rails preloader) and its fast load times.


Luckily this deficiency is easily fixed by adding the spring snippet to your

# This file is used by Rack-based servers to start the application.

  load File.expand_path('../bin/spring', __FILE__)
rescue LoadError

require ::File.expand_path('../config/environment', __FILE__)
run Rails.application

Why JSON sucks for user configuration

In the current era of JavaScript, JSON has become super common. In this article I want to make the point that it is not suitable for user configuration.

The good

Let’s start with the good parts: it’s not XML. Born as a data interchange format from the mind of Douglas Crockford, it shines for the simplicity of both its definition and its parsing. That’s pretty much it. As a configuration tool, the only good thing I can say is that many people are accustomed to it and usually already know its rules.

The bad

As popular as it is, the JSON format has started to be overused, especially for letting users configure stuff (that’s the point of this article). The first thing to notice is that JSON is not intended to be used that way; instead we can quote from the official page:

JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.


What I’d like to highlight is that JSON is a compromise between writability and readability for both humans and machines. A data interchange format can easily be almost unreadable to humans (that happens all the time with binary formats), but a configuration format should instead be biased towards humans.

The ugly

But what’s wrong with JSON?

I see three things that make JSON fail as a configuration format:


Demanding syntax

The format is really demanding. Contrary to the ordinary JavaScript object literal (which is where it comes from), JSON mandates double quotes around keys and the strict absence of trailing commas before closing brackets.
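A quick Ruby session shows how demanding the parser is (the key/value pair here is just an example):

```ruby
require 'json'

# Valid as a JavaScript object literal, invalid as JSON:
# unquoted key plus a trailing comma.
begin
  JSON.parse('{key: "value",}')
rescue JSON::ParserError
  puts "unparseable!"
end

# Double-quoted keys and no trailing comma parse fine:
JSON.parse('{"key": "value"}') # => {"key" => "value"}
```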

Lack of comments

The absence of comments is perfectly fine in a data interchange format, where the payload will probably travel on wires and needs to be as tiny as possible. In a configuration format, conversely, it’s plain nonsense: comments can be used to explain each configuration option to the user, or by the user to explain why they chose a particular option.


No enforced formatting

While JSON is quite readable when properly formatted, this good practice is not enforced by the format itself, so it’s up to the coder’s good will to add newlines, tabs and spaces in the right places. Luckily, they usually do.
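In Ruby terms, both of the following are the same valid JSON; nothing in the format itself pushes the writer towards the readable one (the sample hash is made up):

```ruby
require 'json'

config = { "name" => "app", "debug" => true }

# Perfectly legal, hardly readable as a config file:
JSON.generate(config)        # => "{\"name\":\"app\",\"debug\":true}"

# Readable, but only because the writer bothered:
puts JSON.pretty_generate(config)
```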


JSON has become very popular thanks to JavaScript, to the point that it is also a super common format for user configuration. It wasn’t designed for that, and in fact it does an awful job.

Almost anything is a good alternative, from INI to CSON. The one I like the most, though, is YAML (which in turn has been erroneously used as a data interchange format). Here is the YAML tagline:

YAML is a human friendly data serialization standard for all programming languages.
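The tagline holds up in practice: the same kind of configuration can carry comments and stay readable. A quick check from Ruby (the keys are illustrative):

```ruby
require 'yaml'

config = YAML.load(<<~YAML)
  # Seconds to wait before giving up:
  timeout: 30
  retries: 3  # comments are welcome anywhere
YAML

config # => {"timeout" => 30, "retries" => 3}
```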


NTLM authentication for Rails from inside Microsoft™ ActiveDirectory

I ended up with a decent setup in which the whole authentication is handled by IIS on a Windows machine that lives inside the ActiveDirectory tree, adapting from these instructions.

IIS will act as a reverse proxy to your Rails app (typically installed on a *nix server; Apache + Passenger in my case).

The secret resides in configuring IIS to handle NTLM and then adding this nifty plugin that will basically reproduce the mod_proxy api for IIS.

Here’s an iirf.ini example:

# NOTE: 
# This file should be placed in the IIS document root 
# for the application

StatusInquiry ON
RewriteLogLevel 3
RewriteLog ..\..\TEMP\iirf
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^.*$ - [L]
ProxyPass ^/(.*)$$1
ProxyPassReverse /

With this setup you can rely on the fact that authentication is performed by IIS, and you only get authenticated requests, with the authentication information stored inside the HTTP_AUTHORIZATION header.

To parse the user data from the auth header I used net-ntlm:

require 'kconv'
require 'net/ntlm'

if /^(NTLM|Negotiate) (.+)/ =~ env["HTTP_AUTHORIZATION"]
  encoded_message = $2
  message = Net::NTLM::Message.decode64(encoded_message)
  user = Net::NTLM::decode_utf16le(message.user)
end

After that you can even connect to the LDAP ActiveDirectory interface and fetch details about the user.

Use the ENV Luke! aka: simulate the ENV in OpsWorks using Chef and Dotenv

OpsWorks is an impressive piece of software, but sometimes it lacks the comfort zone we developers love so much.
One feature that I really miss is the ability to configure my application using ENV variables.
I’m not aware of any easy (i.e. Heroku-like) way to create environment variables in OpsWorks that the application can consume.

Fortunately OpsWorks is based on Chef and can be customized in any way you want.
Be warned, it’s not always an easy path: basic Chef knowledge is required and the interface is quite convoluted, but in the end it gets the job done.

So, we were talking about the environment!
We know environment variables are not supported in OpsWorks, so what we really need is to simulate them in some way.
A common solution among Rails developers is the Dotenv gem, which loads the .env file in the root of your app and creates the corresponding keys in the ENV object.
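As a rough sketch of what Dotenv does under the hood (the variable name below is made up for illustration), it boils down to parsing KEY=value lines and copying them into ENV:

```ruby
# Write a sample .env like the one Dotenv would pick up in the app root
# (GREETING is an invented variable):
File.write('.env', "GREETING=hello\n")

# The essence of what the gem does at boot:
File.readlines('.env').each do |line|
  key, value = line.strip.split('=', 2)
  ENV[key] ||= value
end

ENV['GREETING'] # => "hello"
```

The real gem also handles quoting, comments and interpolation, but the principle is the same.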

I will assume you already have created a Stack in OpsWorks with a Rails application layer.


Mocking Rails, for SPEED

DISCLAIMER: This article is by no means related to the recent quarrel about TDD’s death (be it presumed or actual), nor to this article or these hangouts.


Now that we’re clear let us get to the real title:

How I test Rails stuff without loading Rails and learned to use mkdir

Chapter 1: mkdir1

This chapter is quite brief and straightforward…

Starting from Rails 3, every folder you create under app is used for autoloading. This means you can group files by concern2 instead of grouping them by MVC role, or you can even add new “roles”.

For example, the app I’m currently working on looks like this:

├── admin
├── assets
├── authentication
├── authorization
├── controllers
├── forms
├── helpers
├── journaling
├── mailers
├── models
├── proposals
├── utils
└── views

The journaling folder contains journal.rb (a plain Ruby class) and journal_entry.rb (an ActiveRecord model).

Chapter 2: Rails not required

Speaking of that Journal, it doesn’t really need Rails to be used; it’s fine with any JournalEntry-ish object that exposes this API:

class FauxJournalEntryClass <, :user, :model_type, :model_id, :action)
  def save!
    @persisted = true
  end

  def persisted?
    @persisted
  end
end
So it can be used and tested without loading Rails, which gives the corresponding spec the chance to provide really fast feedback. Enter a lighter version of our spec_helper:

# spec/light_spec_helper.rb
RSpec.configure do |config|
  # yada, yada, rspec default config from `rspec --init`
end

along with a speedy spec:

require 'light_spec_helper'
require File.expand_path('../../app/journal/journal', __FILE__)

describe Journal do
  # examples here...
end

and have the normal spec_helper to load the light one:

# spec/spec_helper.rb
require 'light_spec_helper'
# stuff coming from `rails generate rspec:install`...

Chapter 3: Mocking Rails

…without making fun of it

Over time I often found myself needing to add a logger, check the current environment, or know the app’s root from these kinds of plain classes.

In a Rails app the obvious choices are Rails.logger, Rails.env and Rails.root. Besides, I really can’t stand that File.expand_path and light_spec_helper; they’re just ugly.

My solution was to:

  • have a faux Rails constant that quacks like Rails in all those handy methods
  • add the most probable load paths into the light spec_helper
  • rename the rails-ready spec_helper to rails_helper and keep the light helper as the default one

Here’s how it looks:

# spec/spec_helper.rb
require 'bundler/setup'
require 'pathname'

unless defined? Rails
  module Rails
    def self.root"#{__dir__}/.."))
    end

    def self.env
      ENV['RAILS_ENV'] || 'test'
    end

    def self.logger
      @logger ||= begin
        require 'logger'"log/#{env}.log"))
      end
    end

    def self.cache
      @cache ||= {} # a trivial in-memory stand-in
    end

    def self.fake?
      true
    end

    def self.implode!
      class << self
        %i[fake? cache logger env root implode!].each do |m|
          remove_method m
        end
      end
    end
  end
end

$:.unshift *Dir[File.expand_path("#{Rails.root}/{app/*,lib}")]

RSpec.configure do |config|
  # yada, yada, rspec default config from `rspec --init`
end

Notice anything of interest?
I bet you already guessed what Rails.implode! does…

Here’s the shiny new rails_helper:

ENV["RAILS_ENV"] ||= 'test'
require 'spec_helper'
Rails.implode! if Rails.respond_to? :implode!

require File.expand_path('../../config/environment', __FILE__)
require 'rspec/rails'
# Requires supporting ruby files with custom matchers and macros, etc,
# in spec/support/ and its subdirectories.
Dir[Rails.root.join('spec/support/**/*.rb')].each do |f|
  f = Pathname(f).relative_path_from(Rails.root.join('spec'))
  require f
end

ActiveRecord::Migration.maintain_test_schema! # rails 4.1 only

RSpec.configure do |config|
  # stuff coming from `rails generate rspec:install`...
end


The introduction of Spring in Rails 4.1 has actually made all this stuff useless: running RSpec through it makes it blazingly fast.

Sorry if I wasted your time and I didn’t even make you smile.

  1. I don’t actually use mkdir, I know how to use my editor 
  2. Not those concerns 

Start a Rails 4 full-featured application with uWSGI

uWSGI is a full-stack application server that can act as a proxy service, process monitor and much more. In this post we will set up Feedbin, a Rails 4 application that uses Sidekiq as its background jobs engine, PostgreSQL as its RDBMS and Elasticsearch as its full-text search engine. Configuring uWSGI is quite easy: we just need to write a file!

Server configuration

Configuring a local virtual machine with VirtualBox and Vagrant is easy and outside the scope of this post, but it could help to take a look at the following links:

After ensuring ruby 2.0.0 is installed, you can install uWSGI via rubygems:

gem install uwsgi

Clone the Feedbin repo into your machine; I cloned it into ~/apps:

git clone

and comment out unicorn, capistrano-unicorn, therubyracer and foreman in the Gemfile:

# gem 'capistrano-unicorn', github: 'sosedoff/capistrano-unicorn', ref: '52376ad', require: false
# gem 'unicorn'
# gem "therubyracer", require: 'v8'
# gem 'foreman'

Feedbin takes advantage of env variables, so we add dotenv-rails to the Gemfile; this way we don’t have to export the env settings each time we run a command in the console:

gem 'dotenv-rails'

dotenv-rails reads its settings from the .env file in the application root.


And now, bundle:

bundle install --without development test

Let’s create a new database:

createdb -T template0 feedbin_production

Go ahead and load the schema, then run the migrations (we need to load the schema because an error in one of the old migrations currently prevents the rake task from completing correctly):

rake db:schema:load
rake db:migrate

Application setup is now complete, so we can configure uWSGI.


Create the uwsgi config file in config/uwsgi.ini:

[uwsgi]
# this is the application user/uid to use in our uWSGI process
uid = vagrant
# Setting master = true I'm telling uWSGI to enable the master process.
# This way uWSGI can act as an application instance monitor
master = true
# Number of application instances to spawn
processes = 2
# Full path of the unix socket that apache/nginx will use to speak to uWSGI
socket = /home/vagrant/apps/feedbin/tmp/uwsgi.sock
# Setting this modifier to 7 will enable the ruby/rack plugin
socket-modifier1 = 7
# Application root directory
chdir = /home/vagrant/apps/feedbin
# Rackup file location
rack = /home/vagrant/apps/feedbin/
# post-buffering and buffer-size are required by the Rack specification
post-buffering = 4096
buffer-size = 25000
# load the bundler subsystem
rbrequire = rubygems
rbrequire = bundler/setup
# disable request logging
disable-logging = true
# uWSGI log file location
daemonize = /home/vagrant/apps/feedbin/log/uwsgi.log
# uWSGI pid file location
pidfile = /home/vagrant/apps/feedbin/tmp/
# Start sidekiq when you start the application.
# Using smart-attach-daemon uWSGI can start external services and monitor
# them (for example it automatically restarts the daemon when you restart uWSGI)
smart-attach-daemon = %(chdir)/tmp/pids/ bundle exec sidekiq -P %(chdir)/tmp/pids/ -e production

As you can see we configured only one file to start both the application itself and Sidekiq.

Now we can start the application with the following command:

uwsgi ~/apps/feedbin/config/uwsgi.ini

Our Feedbin instance is now up and running, but to let it work properly we need to configure nginx or Apache in front of it. If you need to install nginx you can follow the instructions at this Ubuntu community link.

Let’s now create the application configuration; below is my nginx config file (we also need to create self-signed HTTPS certs for Feedbin):

server {
  listen 80;
  listen 443 default ssl;

  ssl_certificate /etc/nginx/certs/myfeedbin.crt;
  ssl_certificate_key /etc/nginx/certs/myfeedbin.key;

  location / {
    root /home/vagrant/apps/feedbin/public;
    include uwsgi_params;
    if (!-f $request_filename) {
      uwsgi_pass unix:///home/vagrant/apps/feedbin/tmp/uwsgi.sock;
    }
    uwsgi_param     UWSGI_SCHEME    $scheme;
    uwsgi_param     SERVER_SOFTWARE nginx/$nginx_version;
    uwsgi_param     HTTPS           on;
    uwsgi_modifier1 7;
  }

  location ~ ^/assets/ {
    expires 1y;
    add_header Cache-Control public;
    add_header ETag "";
  }
}

Restart nginx.


Open your browser and enjoy: the application is up and running!

Appendix: Other useful uWSGI settings


rvm

If your app uses rvm you can add the following to uwsgi.ini in order to load the rvm environment:

# load rvm
rvm-path = /home/vagrant/.rvm

Environment Variables

We used dotenv to set environment vars inside a config file, but you may prefer to set these variables directly inside uWSGI. For our example, you can do that by adding the following to uwsgi.ini:

env = BUNDLE_GEMFILE=/home/vagrant/apps/feedbin/Gemfile
env = RAILS_ENV=production
env = DATABASE_URL=postgres://vagrant:secret@
env = SECRET_KEY_BASE=yoursecretkey

The pattern is env = <name>=<value>.

Final considerations

uWSGI is a powerful piece of software with a lot of interesting features, like fast routing, auto scaling, support for new plugins, alarms, mules, crontab, application caching and a multiple-application management mode… oddly enough, it’s much more famous in the Python world than in the Ruby community.