LambdaDays 2017 – more than one month later…

I know, I should have written this article a while ago but I couldn’t find the time…sorry 😞

Anyway…one month…how time flies!

Last February, thanks to 😘 Mikamai, I had the immense pleasure of attending an astonishing conference.

For the ones who don’t know, LambdaDays is an international two-day conference that has been held in Krakow for four years now.

Its main focus is the “umbrella topic” of “Functional Programming”.

During the two days of the conference this topic was addressed from many different points of view, both theoretical and practical, by many prominent experts.

Maybe it’s because I’m still not familiar with all the topics that fall under “Functional Programming” (FP), but I found the level of the talks consistently high.

I’d love to give a glimpse of every talk I attended but, honestly, I fear this article would end up pretty long and boring.

To fix the first problem I’ll cover just the first-day keynote and leave the other talks to future articles…maybe 😜
Don’t despair, you can already find all the slides and videos of the conference online 😉

As for the boring part I’ll try to keep you entertained 😛

Jokes aside I think that the contents of the first keynote were incredibly dense and well presented by the speakers.
Yes, this year the participants had the opportunity to attend a wonderful double talk held by Prof. John Hughes and his wife, Prof. Mary Sheeran. For those who don’t know them: they are part of the Haskell committee and both contributed to the development of the language.
The keynote, titled after the famous article “Why Functional Programming Matters” published by John Hughes in 1989, gave a broad overview of the evolution of Functional Programming, with deep dives into some of its milestones and key concepts.

The talk started with the early days (circa 1940) by presenting the fascinating Church encoding and its power to represent any basic data type and operator using functions alone. Booleans, integers, pairs and lists: you can express all of them by relying only on the concept of a function. An example? Here are the true and false booleans: true x y = x, false x y = y.
Do you want something more than basic data types? Well, it turns out that the Church encoding also lets you express control-flow constructs like if-then-else:

ifte bool t e = bool t e
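To make this concrete, here is a minimal Haskell sketch of Church booleans and pairs (the names and type signatures are mine, not from the talk, and the pair encoding is restricted to homogeneous components to keep the types simple):

```haskell
-- Church booleans: a boolean is a function that picks one of
-- its two arguments.
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- if-then-else is just the boolean applied to the two branches.
ifte :: (a -> a -> a) -> a -> a -> a
ifte b t e = b t e

-- Church pairs: a pair is a function that feeds both components
-- to a selector; projections reuse the boolean encoding.
pair :: a -> a -> ((a -> a -> a) -> a)
pair x y sel = sel x y

fstC, sndC :: ((a -> a -> a) -> a) -> a
fstC p = p true
sndC p = p false

-- ifte true "then" "else" == "then"
-- fstC (pair 1 2)         == 1
```

Note how the projections `fstC` and `sndC` fall out for free once booleans exist: a selector is just a Church boolean.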

As crazy as it may sound, the Church encoding was actually used inside early versions of the Glasgow Haskell Compiler to implement the basic data structures of the language.

After a few concrete examples of the Church encoding, Prof. Hughes jumped to John McCarthy’s LISP (1960). It was kind of a due step, considering all the influence this language had (and still has!) on the development and evolution of programming languages (not only functional ones). In particular, LISP was presented as the first language that allowed programmers to write and execute higher-order functions.
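As a quick illustration (a toy example of my own, not one from the talk), a higher-order function is simply a function that takes another function as an argument, or returns one:

```haskell
-- 'twice' takes a function f and returns a function that applies f two times.
twice :: (a -> a) -> a -> a
twice f = f . f

-- 'map', the classic LISP-era higher-order function, applies a
-- function to every element of a list.
-- twice (+3) 10      == 16
-- map (*2) [1, 2, 3] == [2, 4, 6]
```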

Right after LISP, it was the turn of an important concept in programming language design, first expressed by Peter Landin in 1965: “expressive power should be by design, rather than by accident!”

Strictly related to this one, another important point of view, expressed by Turing Award winner (1977) John Backus, was presented by Prof. Hughes by citing the paper “Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs” (1978).
In this paper Backus depicts what I think is an accurate description of the state of the art of programming languages in the late seventies: “conventional programming languages are growing ever more enormous, but not stronger”.
According to Backus, programming languages at the time were becoming increasingly bound to the “von Neumann style” of programming. As a consequence, they were becoming fat and weak, incapable of effectively leveraging powerful combining forms that were already there. The “von Neumann style” was both a technical and an intellectual bottleneck.

The Backus reference also marked the first switch between the speakers. In particular, it gave Prof. Mary Sheeran the chance to present a concrete example of the power of Backus’ combining forms.
She drew on Peter Henderson’s work “Functional Geometry” (1982) to show how it was (and still is) possible to develop an algebra capable of describing and implementing pictures through a series of simple rules/laws. After all, as also shown by the Church encoding, pictures are data and data can be represented with functions.
For the record, the picture considered by Henderson was the famous Escher woodcut Square Limit.
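To get a feel for what such a picture algebra looks like, here is a deliberately tiny Haskell sketch in the same spirit (an ASCII-art toy of my own, not Henderson’s actual representation, and widths/heights are assumed to match):

```haskell
import Data.List (transpose)

-- A toy "picture" is a rectangular grid of characters.
type Picture = [String]

-- Stack one picture on top of another (equal widths assumed).
above :: Picture -> Picture -> Picture
above = (++)

-- Put two pictures side by side (equal heights assumed).
beside :: Picture -> Picture -> Picture
beside = zipWith (++)

-- Rotate a picture a quarter turn clockwise.
rot :: Picture -> Picture
rot = map reverse . transpose

-- Larger layouts are just compositions of the basic combinators,
-- e.g. a 2x2 tile:
quartet :: Picture -> Picture -> Picture -> Picture -> Picture
quartet p q r s = above (beside p q) (beside r s)
```

In Henderson’s paper, Escher’s Square Limit is obtained by recursively combining a single fish tile with combinators much like these.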

One of the key takeaways of Henderson’s work was the positive correlation between the simplicity of the rules and the overall quality of the resulting algebra.

Staying in a similar domain, i.e. a 2-D geometric problem, Prof. Hughes reinforced what his wife had presented by discussing the results of a programming language comparison reported in the paper “Haskell vs. Ada vs. C++ vs. Awk vs. … An Experiment in Software Prototyping Productivity” (1994).

What drew my attention was the incredible difference between the number of LOC required to solve the given geometric problem in Haskell and the number of LOC needed in imperative languages like C++ and Ada.
Particularly surprising for me was the solution implemented by a student with only 8 days of experience with the language.
I think it clearly shows that a declarative, functional language like Haskell lets you solve a problem concisely even without deep knowledge of the language itself.

After presenting the results of the comparison, Prof. Hughes introduced another important concept of FP: lazy evaluation. This term was brought into the field of programming languages in 1976 by the works “A lazy evaluator”, by Henderson and Morris, and “CONS should not evaluate its arguments”, by Friedman and Wise.
Generally speaking, it can be considered the key enabler for expressing and implementing numerical algorithms that work with possibly infinite values. One example? The Newton-Raphson square root.
From an implementation point of view, however, lazy evaluation alone isn’t enough to handle infinite values. How do these infinite values get processed with a limited amount of resources? The answer is a general design principle in which values are produced on one side and consumed on the other.
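This is exactly the shape of Hughes’ classic square-root example: a producer builds an infinite, lazily evaluated list of approximations, and a separate consumer decides when to stop (the structure follows the paper; the starting guess and function names here are my choices):

```haskell
-- Producer: an infinite list of ever-better Newton-Raphson
-- approximations to sqrt n. Laziness means nothing is computed
-- until a consumer demands it.
next :: Double -> Double -> Double
next n x = (x + n / x) / 2

approximations :: Double -> [Double]
approximations n = iterate (next n) (n / 2)

-- Consumer: walk down the stream until two neighbouring
-- approximations are within eps of each other.
within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)

sqrtNR :: Double -> Double -> Double
sqrtNR eps n = within eps (approximations n)
```

Because `within` only ever forces as many elements of the stream as it needs, the “infinite” producer runs with bounded resources.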

Concepts like laziness, producers, consumers, streams, etc. are getting a lot of traction in many different programming contexts nowadays, but like many shiny new buzzwords they’re actually nothing new.
In his talk, Prof. Hughes clearly demonstrated this by presenting a few examples of producer-consumer systems like the one depicted in the following diagrams:

or, even better, the one on top of which QuickCheck, the quite famous tool for random testing of Haskell programs, is built:
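The same producer-consumer shape is visible if you hand-roll a toy version of the idea (this is NOT QuickCheck’s real API or generator, just a self-contained sketch; the constants and names are my own): a generator produces an infinite stream of pseudo-random test cases, and a checker consumes only as many as it needs.

```haskell
-- Producer: an infinite stream of pseudo-random Ints from a
-- simple linear congruential generator (constants are arbitrary).
pseudoRandoms :: Int -> [Int]
pseudoRandoms seed = tail (iterate step seed)
  where step s = (1103515245 * s + 12345) `mod` 2147483648

-- Shape the raw stream into small lists of digits to use as test cases.
testCases :: Int -> [[Int]]
testCases seed = chunk (map (`mod` 10) (pseudoRandoms seed))
  where chunk xs = let (h, t) = splitAt 5 xs in h : chunk t

-- Consumer: check a property against the first n generated cases.
holdsFor :: Int -> ([Int] -> Bool) -> Bool
holdsFor n prop = all prop (take n (testCases 42))

-- holdsFor 100 (\xs -> reverse (reverse xs) == xs) == True
```

With the real library, the same property reads roughly `quickCheck (\xs -> reverse (reverse xs) == (xs :: [Int]))`; QuickCheck adds the crucial extras of generating values for arbitrary types and shrinking counterexamples.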

Mary Sheeran contributed to these examples by presenting many more systems and hardware description languages (e.g. VLSI design languages) like muFP, Lava and Bluespec that leveraged the FP concepts expressed during the talk to improve hardware design and production.

The last slide of the presentation was finally a perfect recap of all the base concepts presented during the keynote:

Besides these, I was particularly interested in and captivated by two concepts derived from them and, at this point, you may already guess what they are 😉

If not, well, I’ll give you just a hint and leave the rest for my next article: a famous Elixir library is built right on top of these two mysterious concepts.

Stay tuned! 😁

P.S.: I don’t claim any credit for the images used in this post. Credit goes directly to Prof. John Hughes and Prof. Mary Sheeran. Their slides are publicly available here, while the video of their presentation can be found here.
