Recently, I’ve been doing some research on functional programming. I think it’s a good time to gather the resources I’ve found so far and highlight the ideas that motivated me to learn more about it.
Why Functional Programming?
After reading articles and books, and watching videos by Eric Elliott, Brian Lonsdorf, Michael Fogus, Brian Beckman, and others, I’ve become convinced of the following points:
- Object-Oriented Design and Relational Databases have a potentially big problem: they bind your software and your data together in a rigid structure that is very difficult to modify later.
- You usually have to make decisions about that rigid structure at a development stage where you simply don’t have all the relevant information. Requirements change with time, and rigid structures don’t like change.
- It’s almost impossible to design a really good object-oriented model or relational schema up front, because humans cannot predict the future.
That’s why I think not just functional programming but also NoSQL databases are gaining more and more traction. Now that memory and computational power are getting bigger and cheaper, both are becoming a great alternative for many applications. In addition, functional techniques offer clear advantages in distributed computing.
As Brian Beckman said, today in the software development community, we’re out of control in the complexity space. The way to control complexity is compositionality.
Functional programming is probably the natural alternative to the classical Object-Oriented paradigm. It’s been around since the very beginning of computing, but for some reason it never went mainstream. Until now.
But functional programming can be quite tricky to understand at first. It has its roots in lambda calculus (or category theory, if you prefer), and it’s full of rather esoteric terms such as monoids, monads, functors, applicative functors, and so on. The main idea behind it, however, is very simple: build software by writing small pieces of code (functions) that each perform a very specific task, and then compose them to provide more complex behavior. One of the main advantages of this approach is that unit testing becomes much easier, sometimes even trivial.
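To make that concrete, here is a minimal sketch of the idea in Python (the function names are my own illustration, not from any of the resources mentioned): tiny, single-purpose functions that are trivial to test on their own, glued together with a generic `compose` helper.

```python
# Minimal sketch: build complex behavior by composing small functions.
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Tiny building blocks, each trivially unit-testable in isolation.
def trim(s):
    return s.strip()

def lower(s):
    return s.lower()

def words(s):
    return s.split()

# A more complex behavior obtained purely by composition.
normalize = compose(words, lower, trim)

print(normalize("  Functional Programming  "))  # ['functional', 'programming']
```

Because each piece is a pure function, testing `normalize` mostly reduces to testing `trim`, `lower`, and `words` separately.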
I Want More of This
A Bit More Advanced
Brian Beckman: Don’t fear the Monad. Brian Beckman (Princeton University, Microsoft, Amazon…) gives you a glimpse of the category theory behind monoids and monads, expressed in practical terms familiar to an imperative programmer (he also gives C# examples). I find the following ideas particularly interesting:
- Functions can be seen as data. You can model (almost) any function as a dictionary table where a lookup is performed.
- A short story about the two separate camps that formed when computing started: the imperative one (a tradition that led to languages like Fortran, C, Pascal, and Java), where math theory was put aside and performance was king; and the functional one (which led to languages like Algol, Lisp, Smalltalk, or Haskell), where they started out from math theory (lambda calculus) and worked their way down to the machine, sometimes at the expense of performance.
- Build complexity out of simplicity by composing functions, not by adding ad-hoc features.
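The first of those ideas can be sketched in a few lines (this example is my own illustration, not from Beckman’s talk): a pure function over a finite domain is interchangeable with a dictionary built from its input/output pairs.

```python
# Minimal sketch of "functions as data": a pure function over a finite
# domain behaves exactly like a lookup table of its input/output pairs.

def square(x):
    return x * x

# Model the function as a dictionary over a small, finite domain.
square_table = {x: square(x) for x in range(10)}

# Calling the function and looking up the table are interchangeable
# for every input in that domain.
print(square(7))        # 49
print(square_table[7])  # 49
```

This is only exact for pure functions: if `square` had side effects or depended on hidden state, the table could not stand in for it.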