LambdaDays 2017, FP concepts and their application

In my last post I tried to summarize the main concepts expressed by Prof. John Hughes and Prof. Mary Sheeran in their wonderful keynote at the LambdaDays 2017.

If you haven’t read it yet, well, here it is. Go on, I’ll be waiting for you here 😁


Jokes aside, at the end of my summary I left a little bit of suspense about the topics of my next (i.e. this) post, but I also gave a few hints about them.

So, without further ado, here are the two “mysterious concepts”:

Continue reading “LambdaDays 2017, FP concepts and their application”

LambdaDays 2017 – more than one month later…

I know, I should have written this article a while ago but I couldn’t find the time…sorry 😞

Anyway…one month…how time flies!

Last February, thanks to 😘 Mikamai, I had the immense pleasure of attending an astonishing conference.

For those who don’t know, LambdaDays is an international two-day conference that has been held in Krakow for four years now.

Its main focus is the “umbrella topic” of “Functional Programming”.

Continue reading “LambdaDays 2017 – more than one month later…”

asdf the “easy to write and hard to read” version manager

As a Rubyist, one of the first things you end up doing is managing many different Ruby versions on the same machine. As a matter of fact, one of the first steps in setting up a new workstation is installing some kind of version manager like RVM or rbenv.

Unfortunately, it isn’t quite that simple…

Continue reading “asdf the “easy to write and hard to read” version manager”

Ecto 2 is coming

A few days ago, Ecto version 2.0.0-rc.5 was released. So Ecto 2 is coming, and it’s a good time to explore how it works and what its new features are.

First of all, Ecto is a domain-specific language for writing queries and interacting with databases in Elixir.

This version has four main components: Ecto.Repo, Ecto.Schema, Ecto.Query, Ecto.Changeset. Note the absence of Ecto.Model, which has been deprecated in favor of a more data-oriented approach.

Let’s try it by creating a sample Elixir application.

mix new --sup my_shop

This command uses mix to create our application while the --sup option generates an OTP application skeleton that includes a supervision tree.

Now we are going to edit the mix.exs file to include some dependencies at their latest versions: ecto and postgrex.

def application do
  [applications: [:logger, :ecto, :postgrex],
   mod: {MyShop, []}]
end

defp deps do
  [{:ecto, "~> 2.0.0-rc.5"},
   {:postgrex, "~> 0.11.1"}]
end

Run mix deps.get and we’re ready to define our repo.

defmodule MyShop do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    children = [
      supervisor(MyShop.Repo, [])
    ]

    opts = [strategy: :one_for_one, name: MyShop.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

defmodule MyShop.Repo do
  use Ecto.Repo, otp_app: :my_shop
end

Note that our repo is supervised by our application.
Repositories are the way we communicate with the datastore: they are wrappers around our databases, and we can define as many as we need, configuring them in config/config.exs. This is my configuration:

use Mix.Config

config :my_shop,
  ecto_repos: [MyShop.Repo]

config :my_shop, MyShop.Repo,
  adapter: Ecto.Adapters.Postgres,
  url: "postgres://my_shop_user:my_shop_password@localhost:5432/my_shop_dev"

Now we can run the dedicated mix task mix ecto.create, and our database should be created.

We need some tables so let’s define a migration. In priv/repo/migrations/20160516233500_create_tables.exs:

defmodule MyShop.Repo.Migrations.CreateTables do
  use Ecto.Migration

  def change do
    create table(:products) do
      add :name, :string
      add :description, :text
      add :cost, :integer
    end

    create table(:colors) do
      add :code, :string
    end

    create table(:order_items) do
      add :product_id, references(:products)
      add :color_id, references(:colors)
      add :quantity, :integer
      add :cost, :integer
    end

    create table(:orders) do
      add :order_item_id, references(:order_items)
    end

    create table(:addresses) do
      add :country, :string
    end
  end
end

Run mix ecto.migrate and we’re done, we have five tables.

Now we are ready to use Ecto.Schema:

defmodule Product do
  use Ecto.Schema

  schema "products" do
    field :name, :string
    field :description, :string
    field :cost, :integer
  end
end

Schemas are used to map any data source into an Elixir struct. Note that it’s not mandatory to use all the table fields, just those you need.

Now run iex -S mix to load your application into IEx and verify that it works:

iex(1)> %Product{}
%Product{__meta__: #Ecto.Schema.Metadata<:built>, cost: nil, description: nil,
 id: nil, name: nil}
iex(2)> %Unexistent{}
** (CompileError) iex:2: Unexistent.__struct__/0 is undefined, cannot expand struct Unexistent
    (elixir) src/elixir_map.erl:58: :elixir_map.translate_struct/4

Now, let’s use our repo to insert a record in our data store:

iex(3)> MyShop.Repo.insert(%Product{name: "Programming Elixir"})

13:43:52.298 [debug] QUERY OK db=26.0ms
INSERT INTO "products" ("name") VALUES ($1) RETURNING "id" ["Programming Elixir"]
{:ok,
 %Product{__meta__: #Ecto.Schema.Metadata<:loaded>, cost: nil, description: nil,
  id: 1, name: "Programming Elixir"}}

Import Ecto.Query and retrieve all the products in our table:

iex(4)> import Ecto.Query
iex(5)> MyShop.Repo.all(from p in Product)

13:46:18.400 [debug] QUERY OK db=1.4ms
SELECT p0."id", p0."name", p0."description", p0."cost" FROM "products" AS p0 []
[%Product{__meta__: #Ecto.Schema.Metadata<:loaded>, cost: nil, description: nil,
  id: 1, name: "Programming Elixir"}]
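As an aside (my own sketch, not from the post): Ecto queries are plain data structures, so they can be built up and composed before anything is sent to the database. Assuming the Product schema and repo defined above (the variable names here are hypothetical), something like this should work:

```elixir
import Ecto.Query

# Build a query without executing it.
cheap = from p in Product, where: p.cost < 1000

# Refine the previous query with an ordering; still no database access.
cheap_sorted = from p in cheap, order_by: p.name

# Only calling the repo actually runs the query:
MyShop.Repo.all(cheap_sorted)
```

Because queries compose, each step can live in its own function and be reused across the application.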

It works 🙂

In my next post I’ll try to go deeper with more complex queries and introduce changesets.

If you are interested in this subject, Plataformatec will release an ebook about Ecto 2.0 written by José Valim, the creator of Elixir; you can reserve a copy here.

Phoenix Framework: the assets pipeline


Since I wrote part 1
of this short series, Atom has gained a new Elixir plugin based on Samuel Tonini’s Alchemist Server.
From the Emacs plugin, it inherits all the most notable features such as
autocomplete, jump to definition/documentation for the function/module under
the cursor, quote/unquote code and interactive macro expansion.
A feature reference along with some screenshots can be found at the atom-elixir page.
It also looks pretty good.

The assets pipeline

Assets pipelines are one of the most important features in modern web frameworks.
When working on this task, Phoenix developers have proven that they value
pragmatism over purity and have chosen to base their implementation on Brunch, a Node.js build tool that takes care of everything
related to assets management.
This choice has probably saved man-years of work that would have inevitably delayed the release of a fully working pipeline system.
A very common counterargument is that this adds Node as a dependency, but I
think it’s a negligible inconvenience: Node is most probably already present on
the majority of developers’ machines.

Continue reading “Phoenix Framework: the assets pipeline”

Phoenix, to the basics and beyond.


Phoenix is the exciting new kid on the block in the vast world of web frameworks.
Its roots are in Rails, with the bonus of the performance of a compiled language.
This isn’t exactly a getting-started guide, but a (albeit short) list of things you’ll have to know very soon in the process of writing a Phoenix application, things that are just a bit beyond writing a blog engine in 15 minutes using only the default generators.
I assume previous knowledge of the Elixir language, the Phoenix framework and the command line tools.

Continue reading “Phoenix, to the basics and beyond.”

Elixir as a parsing tool: writing a Brainfuck interpreter, part two

This is the second in a series of articles on building a brainfuck interpreter in Elixir.

In the first part we built a minimal brainfuck interpreter that can already run some basic programs.
For example

# prints A
++++++++[>++++++++<-]>+.

# prints the ASCII character preceding the one taken as input
# in "B" -> out "A"
,-.

But honestly we can’t do anything more with it.

The first missing feature is memory management. We have implemented the functions that move the pointer to the memory cells left and right, but we’re still stuck with a non-expanding memory tape of one cell only.

Let’s implement memory auto-expansion; it turns out to be very easy.
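To give a taste of the idea, here is a minimal sketch of auto-expansion, assuming the interpreter keeps the tape as a list of cells and a numeric pointer (the `Tape` module and `ensure_cell/2` function are hypothetical names, not the interpreter’s actual API):

```elixir
defmodule Tape do
  # Grow the tape with a fresh zeroed cell whenever the pointer
  # has moved past its right end; otherwise return it untouched.
  def ensure_cell(tape, pointer) when pointer >= length(tape) do
    tape ++ [0]
  end

  def ensure_cell(tape, _pointer), do: tape
end
```

Calling `Tape.ensure_cell([0], 1)` returns `[0, 0]`, so moving the pointer right can never fall off the end of the tape.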

Continue reading “Elixir as a parsing tool: writing a Brainfuck interpreter, part two”