3 Simple examples from Ruby to Elixir

In this post we’re gonna see how to transform a simple script from Ruby to Elixir.

Installing Elixir

The first thing you need is Elixir installed on your box; the instructions are dead simple and you can
find them on the official Getting Started page.
For example on OS X it’s as simple as brew update; brew install elixir.

The Ruby Script

The script is the one I use to fire up my editor adding support for the file:line:column
format that is often found in error stacktraces. I keep this script in ~/bin/e.

#!/usr/bin/env ruby

command = ['mate']

if ARGV.first
  file, line_and_column = ARGV.first.split(':', 2)

  command << file
  command += ['-l', line_and_column] if line_and_column
end
command << '.' if command.size == 1
exec *command
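The heavy lifting is done by String#split with a limit of 2: only the first colon separates, so any further colons stay inside the second piece. A quick check:

```ruby
# split with a limit of 2: the first ":" separates, the rest stays intact
file, line_and_column = "app/models/user.rb:12:3".split(':', 2)
file             # => "app/models/user.rb"
line_and_column  # => "12:3"
```

When the argument has no colon at all, split returns a single-element array and line_and_column is simply nil, which is why the `if line_and_column` guard works.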

Take 1: Imperative Elixir

As we all know, no matter the language, you can keep your old style. In this first example we’ll see the same script translated almost literally, imperative style and all.

#!/usr/bin/env elixir

args =
  if System.argv != [] do
    [file| line_and_column] = String.split(hd(System.argv), ":", parts: 2)
    args = [file]

    if line_and_column != [] do
      args ++ ["-l"| line_and_column]
    else
      args
    end
  else
    ["."]
  end

System.cmd("mate", args)

The “Guillotine” operator

The first thing we notice is the change in syntax for the splat assignment:

# Ruby
a, b = [1,2,3]
a # => 1
b # => 2
# Elixir
[a| b] = [1,2,3]
a # => 1
b # => [2,3]

The | operator in Elixir in fact takes the head off the list and leaves the rest on its right.
It can be used multiple times:

[a| [b| c]] = [1,2,3]
a # => 1
b # => 2
c # => [3]

What happens here is that the list that was b in the first example is now beheaded again.
If instead we wanted c to equal 3, the assignment would look like this:

[a| [b| [c]]] = [1,2,3]
a # => 1
b # => 2
c # => 3

As we can see, Elixir matches the form of the two sides of the assignment and extracts values and variables accordingly.

Other notes

Let’s see a couple of other things that we can learn from this simple example.

List concatenation: ++

The ++ operator simply concatenates two lists:

a = [1,2] ++ [3,4]
a # => [1,2,3,4]

Double quoted "strings"

All strings need to be double quoted in Elixir; single quotes are reserved for charlists (Erlang-style lists of character codes).
I make the mistake of using single quotes all the time. Probably that’s the price for being a
ROFLScale expert.

Take 2: First steps in Pattern Matching

With this second version we’re gonna see the pattern matched case.

Notice anything?

Yes. All ifs are gone.

#!/usr/bin/env elixir

args = System.argv
args = case args do
  [] -> []
  [""] -> []
  [path] -> String.split(path, ":", parts: 2)
end

args = case args do
  [] -> ["."]
  [file] -> [file]
  [file, ""] -> [file]
  [file, line_and_column] -> [file, "-l", line_and_column]
end

System.cmd("mate", args)

We’ve now reduced the whole program to a couple of switches that route the input and transform it
toward the intended result.

That’s it. No highlights for this implementation. Just a LOLCAT.

cat getting scared for no reason

Take 3: Modules and pipes

#!/usr/bin/env elixir

defmodule Mate do
  def open(argv), do: System.cmd("mate", argv |> parse_argv)

  def parse_argv([]), do: ["."]
  def parse_argv([options]) do
    [file| line_and_column] = String.split(options, ":", parts: 2)
    [file| line_and_column |> line_option]
  end

  def line_option([]),                do: []
  def line_option([""]),              do: []
  def line_option([line_and_column]), do: ["-l", line_and_column]
end

Mate.open System.argv

Module and defs

As you have seen, we have now organized our code into a module and moved the logic into module
functions. The same function can be defined multiple times; Elixir will take care of matching the arguments
you pass against the right definition.

Let’s review the two forms of function definition:

defmodule Greetings do
  # extended
  def hello(name) do
    IO.inspect("hello #{name}")
  end

  # onliner
  def hello(), do: IO.inspect("hello world!")
end

Greetings.hello "ppl" # => "hello ppl"
Greetings.hello       # => "hello world!"

Be sure to remember the comma before do:, otherwise Elixir will complain.

The |> pipe operator

If you’re familiar with command-line piping you’ll feel right at home with the pipe operator.
Basically it takes the result of each expression and passes it as the first argument of the next one.

Let’s see an example:

"hello world" |> String.capitalize |> IO.inspect # => "Hello world"

That is just the same as:

s = "hello world"
s = String.capitalize(s)
s = IO.inspect(s)
s # => "Hello world"

or

IO.inspect(String.capitalize("hello world")) # => "Hello world"

The latter is probably the least comprehensible to human eyes.


Barebon, a minimal prototyping framework based on Bourbon and Neat

Say hello to the first, unpolished version of Barebon!

Barebon is a simple skeleton boilerplate for fast static page prototyping. It provides a friendly environment for front-end developers who love Sass and CoffeeScript. It also comes with a super-easy Grunt configuration that helps you to start hacking on your project right away.

Why?

Legit question! I absolutely love and use HTML5 Boilerplate and Yeoman, but when it comes to super-simple static page prototyping they’re just… overkill. I just wanted a neat, easy “framework” for these kinds of websites.

Since I’ve been enjoying Bourbon and Neat for a while I’ve decided to give it a try and make something focused on that. I liked it, so I thought I could share it with teh internetz.

How easy is it?

VERY easy. If you already have npm and grunt-cli installed, you just need to fork the repo, cd into the folder and run $ npm install. When that’s done, just spawn the grunt watcher (which includes a LiveReload feature), and you can start hacking on your code!

Gimme dat repository link already!

Ok, ok! You can find Barebon on GitHub. Feel free to give it a try – any feedback is obviously appreciated!

Wordpress and OpsWorks, “Pride and Prejudice” (Part 2)

Here we are for Part 2 :)

In my last post about WP and OpsWorks I tried to explain the general setup for the recipes that I’ve developed to automate the deploy of WP applications through OpsWorks. If you missed it here's the link.

Today I’m gonna talk about a particular problem I encountered while writing the recipe that currently handles the database import.

As anyone with a bit of experience in WP knows, applications developed on top of it are tightly coupled with the database, so their deploy differs from that of applications developed on top of frameworks like Rails or Laravel.

To handle the aforementioned import I decided to keep it simple and stupid (and dirty!) and put the dump right in the repo. This way it is automatically available during the deploy, after the git fetch performed on the instance by the Amazon infrastructure.

Since the dump may or may not be there to seed the database, I needed to check for its presence.

And here’s the first problem:

 
db_path = File.join(deploy[:deploy_to], 'current/dump.sql')

if File.file? db_path
  # Do stuff like import the database
  ...
end

If you check for the presence of the dump in this way, you’ll end up with a broken deploy when you remove the dump after a successful import.

This is due to the fact that the recipes executed by the Amazon infrastructure are actually compiled first, so the check will still be true even after the first time you remove the dump from the repo.

After a bit of Google, Stack Overflow and the Chef docs, I found that the check has to be wrapped inside a ruby_block Chef resource.

Everything done inside this kind of resource is in fact evaluated dynamically, right when the recipe is executed: not only what is written inside the block do ... end, but also the proc passed to the not_if guard.
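The compile-versus-converge difference can be sketched in plain Ruby, with no Chef involved: a value captured by bare code is frozen at “compile” time, while a proc (like the one passed to not_if) is only evaluated when it is actually called.

```ruby
# Plain-Ruby sketch of Chef's two-phase model (no Chef APIs involved):
# bare recipe code runs at compile time, procs run later at converge time.
dump_present = true

compiled_check = dump_present          # evaluated right now, result frozen
deferred_check = -> { dump_present }   # a proc: evaluated only when called

dump_present = false                   # the world changes before "converge"

compiled_check       # still true: stale, captured at "compile" time
deferred_check.call  # false: fresh, evaluated at "converge" time
```

This is exactly why the file check has to live inside a proc: by converge time the deploy resource may have changed what is on disk.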

Here’s the proper check:


db_path = File.join(deploy[:deploy_to], 'current/dump.sql')

ruby_block 'magic_dynamic_evaluation' do
  not_if { File.file? db_path }
  block do
    # Do stuff like import the database
    ...
  end
  notifies :run, 'resource[resource_name]', :immediately
end

As for the last element inside the ruby_block (i.e. notifies :run, 'resource[resource_name]', :immediately), it deserves a proper dissertation of its own, since it involves the notification system implemented by Chef. Here you can find a brief doc about it.

Anyway, what the last statement inside the ruby_block does is “simply” notify another resource, asking it to run (i.e. :run) immediately (i.e. :immediately) after the block itself is executed, which, given the not_if guard, only happens when the dump is present.

Next time I’ll try to explain another feature I’ve put inside the recipe briefly introduced in this post, so stay in touch! ;)

つづく

Web Procedural Map Generation - Part 3

As I stated in my last post about map generation, I was satisfied with Voronoi as a map representation system, but it was not the right tool for me because I needed a wrap-around map.

After some googling I found another great post by Amit Patel that describes hexagonal maps in great detail, so I decided to use a hexagonal grid for my map. It seemed a good compromise.

Good and fascinating. A regular hexagon:

  • can be inscribed inside a circle;
  • is composed of six equilateral triangles with 60° angles inside;
  • is composed of six edges and six vertices, each edge having the same length.

There are a lot of other interesting facts about hexagons. Last but not least hexagons can be easily found in nature (e.g. honeycombs) and it’s often considered nature’s perfect shape:

[image: hexagons in a honeycomb]
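The first property above makes the vertices easy to compute: they sit on the circumscribed circle at 60° steps. A quick Ruby sketch (illustrative geometry only, not part of hexagonal.js):

```ruby
# Vertices of a regular hexagon centered at (cx, cy) with circumradius r.
# Flat-top orientation: vertices at 0°, 60°, ..., 300° around the center.
def hexagon_vertices(cx, cy, r)
  (0...6).map do |i|
    angle = i * 60 * Math::PI / 180
    [cx + r * Math.cos(angle), cy + r * Math.sin(angle)]
  end
end
```

Because the hexagon decomposes into six equilateral triangles, each edge has the same length as the circumradius r.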

Back to my goal of map generation, hexagonal maps are not so easy to deal with, due to the nature of hexagons themselves. As Amit says:

Squares share an edge with four neighbors but also touch another four neighbors at just one point. This often complicates movement along grids because diagonal movements are hard to weight properly with integer movement values. You either have four directions or eight directions with squares, but with hexagons, you have a compromise—six directions. Hexagons don’t touch any neighbor at only a point; they have a small perimeter-to-area ratio; and they just look neat. Unfortunately, in our square pixel world of computers, hexagons are harder to use…

In any case, having found no JavaScript libraries for dealing with hexagons, I decided to write one: hexagonal.js. I tried to give the library enough flexibility to be used following any of the approaches described by Amit in his post. At the moment some features are still missing (like cube coordinates) but I’ll add them in the next few days, along with demos and better documentation.

Here you can find a codepen demo that uses a hexagonal map and a heightmap.

CSV import with PostgreSQL

Importing data is one of the tasks I like least. It requires writing messy code, and it will inevitably be really slow, maybe too slow to be viable.

A good approach in these cases is to rely as much as possible on the database engine, using a temporary table to store the CSV content and then running the insert into ... select or update ... from ... select queries needed to import the data from the temp table. They can definitely save your day.

The other day I stumbled upon a wonderful feature of PostgreSQL that can even remove the need to parse the CSV. So, let’s say we have the following database:

[image: schema with products, models, materials and colors tables]

And the following CSV:


Code;Name;Description
SHO32_LEA01_BLA10;Black shoes;Black leather shoes
SHO32_LEA01_RED10;Red shoes;Red leather shoes
SHO32_PLA90_BLA10;Black shoes;Black plastic shoes
SHO32_PLA90_RED10;Red shoes;Red plastic shoes
HAT76_LEA01_BLA10;Black hat;Black leather hat
HAT76_LEA01_RED10;Red hat;Red leather hat
HAT76_PLA90_BLA10;Black hat;Black plastic hat
HAT76_PLA90_RED10;Red hat;Red plastic hat

we can write the entire import algorithm in the DB. Pseudocode:

  • import the csv inside a temporary table;
  • for each row, detect the correct model code, material code and color code splitting the code we have in the first column;
  • create all missing colors, materials and models;
  • update all existing products (setting the new name and description);
  • insert all new products;
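Step 2 is the only string manipulation involved; sketched in Ruby (PostgreSQL’s split_part does the same job, one chunk at a time):

```ruby
# What step 2 boils down to: pick the underscore-separated chunks of the
# product code (split_part(product_code, '_', n) in PostgreSQL terms).
model_code, material_code, color_code = "SHO32_LEA01_BLA10".split('_')
model_code    # => "SHO32"
material_code # => "LEA01"
color_code    # => "BLA10"
```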

To achieve this we can write a PostgreSQL function. Basically, you can think of a function as an aggregate of SQL statements. With a function we can store all the import logic in the DB, and when we actually have to import the data we simply execute the function.

Okay, let’s see a function that loads our CSV and imports the data (see the comments for details):

/* a function named import_products that takes a file path and returns nothing */
CREATE OR REPLACE FUNCTION import_products(file_path text)
RETURNS VOID SECURITY DEFINER AS $BODY$
BEGIN

DROP TABLE IF EXISTS tmp_import_data;

/*
 * create the temporary table
 * we start inserting only not null columns
 * and then we update the other columns to set references
 */
CREATE TEMP TABLE tmp_import_data(
  product_id integer,
  product_code varchar(255) NOT NULL,
  model_id integer,
  model_code varchar(5),
  material_id integer,
  material_code varchar(5),
  color_id integer,
  color_code varchar(5),
  name varchar(255) NOT NULL,
  description text NOT NULL
);

/* copy the entire csv in tmp_import_data */
EXECUTE format(
  $$copy tmp_import_data (product_code, name, description)
    from %L delimiter ';' header csv$$,
  file_path
);

/*
 * set model code, material code and color code individually
 * splitting the product code
 */
UPDATE tmp_import_data SET
  model_code    = split_part(product_code, '_', 1),
  material_code = split_part(product_code, '_', 2),
  color_code    = split_part(product_code, '_', 3);

/* insert new models */
INSERT INTO models (code)
  SELECT DISTINCT model_code FROM tmp_import_data
  WHERE NOT EXISTS (SELECT id FROM models WHERE code = model_code);

/* set references */
UPDATE tmp_import_data SET model_id = s.id
  FROM (SELECT id, code FROM models) AS s
  WHERE s.code = model_code;

/* insert new materials */
INSERT INTO materials (code)
  SELECT DISTINCT material_code FROM tmp_import_data
  WHERE NOT EXISTS (SELECT code FROM materials WHERE code = material_code);

/* set references */
UPDATE tmp_import_data SET material_id = s.id
  FROM (SELECT id, code FROM materials) AS s
  WHERE s.code = material_code;

/* insert new colors */
INSERT INTO colors (code)
  SELECT DISTINCT color_code FROM tmp_import_data
  WHERE NOT EXISTS (SELECT code FROM colors WHERE code = color_code);

/* set references */
UPDATE tmp_import_data SET color_id = s.id
  FROM (SELECT id, code FROM colors) AS s
  WHERE s.code = color_code;

/* update name and description for existing products */
UPDATE products SET name = s.name, description = s.description
  FROM (SELECT model_id, material_id, color_id, name, description
        FROM tmp_import_data) AS s
  WHERE products.model_id = s.model_id
    AND products.material_id = s.material_id
    AND products.color_id = s.color_id;

/* insert new products */
INSERT INTO products (model_id, material_id, color_id, name, description)
  SELECT DISTINCT t.model_id, t.material_id, t.color_id, t.name, t.description
  FROM tmp_import_data AS t
  WHERE NOT EXISTS (
    SELECT id FROM products
    WHERE model_id = t.model_id
      AND material_id = t.material_id
      AND color_id = t.color_id
  );

END;
$BODY$ LANGUAGE plpgsql;

Now we only need to call the function passing a file path, and it will take care of loading the stuff:

SELECT import_products('/home/user/import.csv');