Making Functional Programming Click

Written by FrancisStokes | Published 2018/01/13
Tech Story Tags: functional-programming | javascript | introduction | composition | type

[Image: if only composing functions was this easy]

I have tried to grok functional programming a bunch of times now. I would be lying if I said that I understand it fully, but at the very least I think I now understand enough to think about it in a meaningful way.

The trouble with trying to "get" FP is that there isn't just one idea; there are two big ideas you need to get your head around, preferably without mixing them up unnecessarily. The first is about functions and the second is about types. In this article I want to show you both of these ideas, illustrate why they are important in isolation, and then show how to bring them together.

I'm going to use javascript to illustrate the examples because it happens to lend itself well to using functions in the way we need to. Don't be put off, however: these are concepts, not features tied to a single language. If you hate javascript, let me assure you I won't be using any of its insane parts, and you can just as well apply these concepts elsewhere.

Functions: The universal building block

There are a few ways in which the idea of a function is a teensy bit different in FP, but those differences have some really big implications, as we will see. Take a function like nth, which gives us the nth item in an array:
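
An uncurried version might look something like this:

```javascript
// A plain, two-argument nth: takes an index and an array,
// and returns the element at that index.
const nth = (n, arr) => arr[n];

nth(2, [1, 2, 4, 8, 16]); // => 4
```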

You've got two arguments, n and arr, and when called, the function returns the element at the specified index. In functional programming, though, we need to change this a little to get to the first big insight: a function should only ever take one argument. It's a one input, one output kind of deal.

Wait though, how can that even work? A function like nth needs at least two arguments, after all. The answer is that if you need more than one argument, you simply return another function that waits for the next argument. When you've got everything you need, you return the result.
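
Sketched by hand for nth, that looks something like this:

```javascript
// Curried by hand: nth takes n and returns a new function waiting for arr.
const nth = n => arr => arr[n];

nth(3)([1, 2, 4, 8, 16]); // => 8
```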

It's almost the same, but we're supplying the arguments piece by piece. nth(3) is now itself a function, waiting for the array part to be specified. This concept has the completely non-semantic name of "currying".

For most people, myself included, the way you have to use the new nth function feels weird, because you end up with a series of bracketed expressions that look a little disconnected from the actual function call. That's why most functional libraries, like Ramda or lodash, provide a function called curry that converts normal functions into the best of both worlds, where you can call them normally or piece by piece. For example:
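
Something along these lines, using Ramda's curry (the powers array is example data of my own):

```javascript
const { curry } = require('ramda');

// curry turns a normal multi-argument function into one that can be called
// either all at once, or one argument at a time.
const nth = curry((n, arr) => arr[n]);

const powers = [1, 2, 4, 8, 16, 32];

// A more specific function, built out of nth by partial application.
const third = nth(2);

nth(3)(powers); // => 8
nth(3, powers); // => 8
third(powers);  // => 4
```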

Now we can call nth(3)(powers) or nth(3, powers) and the effect is the same. The little third function I added should give you a nice idea of why this concept actually makes sense: the fact that nth is curried means we can build a more specific version out of it, simply by spacing the arguments out over time. This idea of building up the arguments is called partial application, and a function made this way is said to be partially applied. So whichever array we give to third, we're always going to get its third element.

That little nugget alone can be used in your everyday programming to improve code reuse. Often you just need to think about the best way to arrange the arguments so the function becomes as general as possible. The rule of thumb is: put the data last. That's the array you're working with, the string you want to manipulate, the object, the number, whatever.

With one more little tool, we can really turn the humble function into the universal building block of programming. That tool is called compose.
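
Here's a minimal two-function sketch of compose, along with the thirdPlus1 function the next paragraph walks through (add is assumed to be a curried addition helper; curry, nth and powers come from the sketch above):

```javascript
// compose: take functions f and g, and return a new function that runs g
// first and feeds its result straight into f.
const compose = (f, g) => x => f(g(x));

const add = curry((a, b) => a + b);

// "Take the third element, then add one", as a single composed function.
const thirdPlus1 = compose(add(1), nth(2));

thirdPlus1(powers); // => nth(2) gives 4, add(1) gives 5
```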

This might start to seem complicated, but stay with me and we will break it down. compose takes two functions, f and g, and returns a new function waiting for a value x. When it gets that value, it runs g(x), and the result is consumed directly by f. To get concrete with thirdPlus1: x goes into nth(2), which produces 4, and that's consumed by add(1), producing 5.

compose is like an assembly line. You put something in, it's picked up by one of our functions, transformed, and then passed on to the next to be transformed again. Something I tripped up on when I was first wrapping my head around this is its right-to-left nature; the first argument to compose is actually the last operation that gets run. If you look at the implementation it makes sense, but it's a tricky one for sure.

Of course that's fairly arbitrary, and you could just as well flip it all around. In functional programming that flipped function is called pipe, and it's really as simple as specifying the transformations left-to-right. In fact a few languages have a first-class operator for this kind of thing (Unix pipes, R, Swift, F#, an operator overload in C++, and a proposal for javascript).
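
A two-function sketch of pipe is just compose with the arguments swapped:

```javascript
// pipe: like compose, but the transformations read left-to-right.
const pipe = (f, g) => x => g(f(x));

pipe(nth(2), add(1))(powers); // => 5, same result as thirdPlus1
```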

Either way, like curry, many functional libraries implement compose and pipe, usually in a way that lets you specify any number of functions, so we’ll just use those from now on. Let’s just look at one more example that’s a little closer to real life.
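
Something along these lines, where the colour table, the helper implementations and the values are all assumptions of this sketch (curry and compose are taken from Ramda, as just mentioned):

```javascript
const { compose, curry } = require('ramda');

// A tiny colour table; the RGB values here are made up for the example.
const namedColors = {
  'Blue Lagoon': [0, 123, 167],
};

// Turn a number into a two-character hex string.
const toHex = n => n.toString(16).padStart(2, '0');

// Method-to-function helpers: these just defer to .map and .join.
const map = curry((fn, functor) => functor.map(fn));
const join = curry((separator, arr) => arr.join(separator));

// Turn "reading an object property" into a function.
const property = curry((obj, key) => obj[key]);

const prefix = curry((pre, str) => pre + str);

const namedColorToHex = compose(
  prefix('#'),
  join(''),
  map(toHex),
  property(namedColors)
);

namedColorToHex('Blue Lagoon'); // => '#007ba7'
```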

We’ve defined a couple more helpful generalised functions that play well with compose. Most are actually just turning methods into normal functions (map, join), or taking something syntaxy like reading an object property and turning that into a function (property).

So our newly composed namedColorToHex function is implicitly receiving an "x" value, which is the name of our colour. That is fed into property (already partially applied with our colour table), giving us the RGB array for "Blue Lagoon". That array is given to map, a function that takes a transform function and an array and applies the transform to every element. map here is partially applied with the toHex function, so what comes out is an array of hexadecimal representations of each number. join turns our new array into a string, and finally prefix pops a "#" on the front to turn it into a proper hex colour.

Like the greats Bach and Mozart, who wove masterful compositions from the same 88 piano keys, we've used compose and a bunch of general functions to write a program that's nothing more than sticking different lego blocks together. No loops, no special syntax, no intermediate variables. Just descriptions of how to transform data. If you've kept up with everything so far, congratulations! You now understand the first of the two big ideas in functional programming.

Well, almost. I mentioned earlier that there are a couple of ways functions in FP are different. The other major way is that your functions should be “pure”. That means:

  • You don't use anything you weren't given as arguments to compute the result (no global state)
  • You don't modify anything you're given; you always give back a new copy (no mutation)

A good way to think about this is: if I give this input to a function, will I always get the same output, no matter what? If the answer is no, then it's probably not a pure function.

Of course, what does that mean for errors? API calls? Random number generation? Well that’s the next big idea.

(Algebraic) Types

If you're coming from OOP then you probably know what an interface is: basically a blueprint of what should be implemented on an object. This concept is also important in functional programming, but of course in a slightly different way.

There is a method on arrays called map which, as described above, runs a function on every element and returns a new array. The array is a kind of container for values, and the map function lets us get at those values without pulling them out of the array. But what if you could map over other types, like objects? What would that even mean?

Well you could run a function on every keyed element in the object, changing the values but preserving the structure.
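
One way to sketch that (mapObject is a name I've picked; it's not built in):

```javascript
// Run fn over every value in the object, keeping the keys (the structure)
// exactly as they were.
const mapObject = (fn, obj) =>
  Object.keys(obj).reduce((acc, key) => {
    acc[key] = fn(obj[key]);
    return acc;
  }, {});

mapObject(x => x * 2, { a: 1, b: 2 }); // => { a: 2, b: 4 }
```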

That's pretty reasonable: now we can map an object. It's completely against the rules, but if you just go ahead and add map to the Object prototype, then we have a common interface between arrays and objects!
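
Purely for illustration, that rule-breaking version might look like this:

```javascript
// Don't do this in real code: extending Object.prototype leaks into every
// object in the program. It's only here to illustrate the shared interface.
Object.prototype.map = function(fn) {
  return Object.keys(this).reduce((acc, key) => {
    acc[key] = fn(this[key]);
    return acc;
  }, {});
};

[1, 2, 3].map(x => x + 1);        // => [2, 3, 4]
({ a: 1, b: 2 }).map(x => x + 1); // => { a: 2, b: 3 }
```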

Now we have a new, more general concept of a mappable — something which has a reasonable definition for map. Functional programming (and math) calls this idea of a mappable a Functor. Functors are just containers of values, where you work on the container instead of the value directly.

I hope your mind is running wild thinking about what else could be a Functor. The answer is: quite a lot of things, given the right treatment, and every functor can have its own interesting little domain.

Let's take a moment here and imagine a container where the thing that goes inside might be a normal value, but might also be null/undefined/junk. What we might want from this special container is that it maps the value if it's there, but doesn't cause errors if it's not. Of course, we should also be able to crack open the container, get at the meaningful stuff (the normal value, or the "nothing") and respond accordingly.

We’re going to build this Functor — often called a Maybe — but to avoid exposing some gritty javascript details I’m going to use a library called daggy. From the github page:

Library for creating tagged constructors a.k.a. “union types” or “sum types”

Don’t worry about exactly what that means, let’s learn by doing.
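
Here's a sketch of what that can look like with daggy (the concrete values in validThing and invalidThing are my own):

```javascript
const daggy = require('daggy');

// A Maybe is either a Just holding a value x, or a Nothing holding no value.
const Maybe = daggy.taggedSum('Maybe', {
  Just: ['x'],
  Nothing: [],
});

// map: transform the value if it's there, quietly do nothing if it isn't.
Maybe.prototype.map = function(f) {
  return this.cata({
    Just: x => Maybe.Just(f(x)),
    Nothing: () => Maybe.Nothing,
  });
};

const validThing = Maybe.Just(42);
const invalidThing = Maybe.Nothing;

validThing.map(x => x + 1);   // => Maybe.Just(43)
invalidThing.map(x => x + 1); // => Maybe.Nothing
```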

We're going to break this bad boy down. First of all we use daggy to make a "sum" constructor for a Maybe: the thing which will encapsulate possible nothingness. We name it Maybe, and declare that it can either be a "Just", which holds an "x" value, or a "Nothing", which holds no value at all.

Next we implement a map function for our Maybe, in which we use a daggy-provided method called cata (as in catamorphism) to do a sort of switch/case on the type. If it's a Just, we run the function we supplied to map on the "x" value and pop the result back inside a new Maybe.Just. If it's a Nothing, then we just give back another Maybe.Nothing.

Finally we make a validThing and an invalidThing and run the same map on both of them. We see that map runs without error on both, and as expected we see a transformed value on the validThing and Maybe.Nothing on the invalid one.

So there are two problems with what we have now. The first is that it's not really a realistic example: who is actually just creating Nothing values? And the second: even if it were realistic, our value is still trapped inside the Maybe! We need a way to bring it out, so let's revisit the compositional colour example from earlier and see if we can't make it a bit more error-proof.
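
Here's a sketch of the reworked example, reusing the Maybe, toHex, map, join, prefix and namedColors from the earlier sketches (the defaultColor value is an arbitrary choice of mine):

```javascript
// Pull the value out of a Maybe: run nothingFn if it's a Nothing, or run
// successFn over the wrapped value if it's a Just.
const extractMaybe = curry((nothingFn, successFn, M) =>
  M.cata({
    Just: x => successFn(x),
    Nothing: () => nothingFn(),
  })
);

// Like property, but a missing key gives us a Nothing instead of undefined.
const safeProperty = curry((obj, key) =>
  key in obj ? Maybe.Just(obj[key]) : Maybe.Nothing
);

// A function that simply hands back whatever it was given.
const identity = x => x;

const defaultColor = '#FFFFFF';

const namedColorToHex = compose(
  extractMaybe(() => defaultColor, identity),
  map(compose(prefix('#'), join(''), map(toHex))),
  safeProperty(namedColors)
);

namedColorToHex('Blue Lagoon');  // => '#007ba7'
namedColorToHex('Not A Colour'); // => '#FFFFFF'
```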

Breaking it down, the first change is that we've got ourselves an extractMaybe function. It takes a function for the Nothing case, a function for the Just case, and the Maybe object. When this is run, it checks whether the Maybe is a Just, and if so it returns the extracted value with the successFn run over it. If it's not, then it returns the result of the nothingFn.

Then we’ve got another new function called safeProperty. This checks if the object actually has the property we’re trying to get at, and returns a Just of the value if it’s there, and a Nothing if it’s not.

We also define a function that always returns its argument (we'll get to why shortly), and a new variable called defaultColor.

Finally, we change the namedColorToHex function to use safeProperty instead of property (remember, compose goes right-to-left, or in this case, bottom-to-top), which means we're getting a Maybe out the other end. Because of that, we need to use the map function defined on Maybe in order to change the value, and remember: nothing happens if the value is Nothing! So we wrap all the other transformations in a map, and at the end we call extractMaybe to pull out the value. For the Nothing case we pass a function that gives us the defaultColor, and for the Just case we pass identity, so we get the value run through the identity function, which is… well, just the value!

My reaction when I first came across this kind of code was a kind of amazement; I was so used to handling errors with try/catch blocks, null checks, and non-linear code that separates everything into if/else branches. Now there's a function which wraps up error handling, and even better, in a way that still lets us reason about the code linearly. The handling of a conditional is just one more step in the chain of transformations!

Containers in containers

Imagine if you had to go a bit deeper into an object. If you use safeProperty every time, you’re going to end up with Maybes inside Maybes. Example:
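
For instance, with a colour table nested one level deeper (the shape here is made up for the example):

```javascript
// The hex lookups now live one level down, inside a palette.
const palettes = {
  ocean: { 'Blue Lagoon': [0, 123, 167] },
};

map(
  palette => safeProperty(palette, 'Blue Lagoon'),
  safeProperty(palettes, 'ocean')
);
// => Maybe.Just(Maybe.Just([0, 123, 167])) -- a Maybe stuck inside a Maybe
```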

This could get out of hand; before you know it, you’re running map(map(map(xyz))), which is not sustainable. We need a way to flatten those Maybes into a single Maybe. Luckily, FP is way ahead of us.

Sidenote: The idea of flipping the arguments of a function is quite common. If you’re playing along at home, why not try to write a function called flip that takes a function that takes two arguments, and returns a flipped version of the function.

In functional programming this idea goes by a few names, sometimes flatMap, sometimes chain, but the one I think is clearest is flatMap. Basically it says: run a function on the value inside the container that itself returns another container, and then flatten those two containers into one. That's really two ideas in one: first map, and then flatten. In the case of an array, for example, flatten would take an Array of Arrays of Numbers and turn it into an Array of Numbers. Let's implement flatMap for Maybe.
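
A minimal sketch for our daggy-based Maybe:

```javascript
// flatMap: run a Maybe-returning function on the value, without ending up
// with a Maybe inside a Maybe.
Maybe.prototype.flatMap = function(f) {
  return this.cata({
    Just: x => f(x),              // f already returns a Maybe, so no re-wrapping
    Nothing: () => Maybe.Nothing, // nothing to run on, stay Nothing
  });
};

// Revisiting the nested lookup: one flat Maybe comes out the other end.
safeProperty(palettes, 'ocean')
  .flatMap(palette => safeProperty(palette, 'Blue Lagoon'));
// => Maybe.Just([0, 123, 167])
```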

Quite simple, though I should say this is not what you might call a rock solid implementation! So now if we know we’re running a function that returns a Maybe, then we just flatMap instead of map, and voila! No more containers in containers.

So in a way we've made the Maybe type a more powerful tool by equipping it with interfaces like map and flatMap. This kind of type is called an Algebraic Data Type, and it has some really interesting properties when you follow certain laws.

There are many interfaces you can equip a type with. And it's not called algebraic without reason: you can write equations describing how these different interfaces relate to each other. For instance, we described flatMap as map followed by flatten (also sometimes called join). But you could just as well write the flatten function first, and construct flatMap from flatten and map. Similarly, if you write lte (less than or equal to), you can derive every other comparison function from it automatically.
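
As a quick sketch of one of those equations, using the Maybe we've built and the identity helper from earlier:

```javascript
// flatten is just flatMap with the identity function...
const flatten = m => m.flatMap(identity);

// ...and flatMap can be rebuilt from map followed by flatten.
const flatMapViaFlatten = (f, m) => flatten(m.map(f));

flatMapViaFlatten(x => Maybe.Just(x + 1), Maybe.Just(1)); // => Maybe.Just(2)
```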

Conclusion

So we've seen what I consider to be the two big ideas of functional programming, function composition and types, and we've seen how to bring them together. If you've followed along until now, you have enough understanding to go out and learn all the arcane terms, dig deeper into the laws and relations of algebraic data types, and figure out how to write a whole program in terms of function composition.

If you're interested in learning more, I think Brian Lonsdorf's "Professor Frisby's Mostly Adequate Guide to Functional Programming" is an amazing book. It covers everything from this article at a much deeper but still accessible level. He also has a bunch of great talks online that can help you get to grips with some more specific aspects of FP.

To build full javascript applications using this methodology, my colleague at wearereasonablepeople, Aldwin Vlasblom, has written a phenomenal library called Fluture, which lets you handle async in the functional style and provides an alternative to promises and observables that feels great to program with.

