Programming journey

Purely Functional lg(n)-Algorithm Design

Christian Clausen
12 min read · Jan 12, 2022


During our education to become developers, we learn many skills, some of which are difficult to apply in our daily work. Unfortunately, like muscles, if we don’t use a skill, we lose it. Some of the skills that can be difficult to use in our daily work are complexity analysis, algorithm design, and pure functional programming. So to practice these, I set myself a series of challenges, which turned out to be a fun programming journey. One I am sharing with you here.

To guide you on this journey, each section heading is a question. You should attempt to implement or answer the question before reading the answer; it will make the explanation much more satisfying and give you better practice. Be careful not to scroll down too fast when you see a header; otherwise, you might spoil the answer.

This article is an exploration of purely functional algorithms and design techniques. Therefore I am assuming you are familiar with big O-notation. At every step, we analyze both runtime and memory complexity.

Everything in this article is purely functional. If you are unfamiliar with this concept, you can think of it as having no assignments. My code is in Java, but it should be trivial to move it to other languages. You can get the code (including solutions) from my GitHub — feel free to drop a follow while you are over there.

How do we find the smallest natural number missing from an array?

The task is to implement a function that takes an array of distinct integers and, without any assignments, returns the smallest natural number (zero or positive) missing from the list.

To implement this without assignments, I started by implementing some basic data types from functional languages: linked lists and pairs. I use a functional style with static functions instead of methods, but that is only for aesthetics. Crucially, though, all fields are final, which ensures we have no assignments; that is also why they don’t need to be private.
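The concrete listings live in the repository; as a minimal sketch, immutable lists and pairs could look like this in Java (the class and field names here are assumptions, not necessarily the original ones):

```java
// A minimal sketch of immutable linked lists and pairs.
// All fields are final, so there are no assignments after construction.
class List {
  final int head;
  final List tail; // null represents the empty list

  List(int head, List tail) {
    this.head = head;
    this.tail = tail;
  }
}

class Pair {
  final List first;
  final List second;

  Pair(List first, List second) {
    this.first = first;
    this.second = second;
  }
}
```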

We can implement the naive solution to find the smallest missing number with these data types. We sort the list and then search through it until we hit the missing number. To make the coming analysis more manageable, I include my full quickSort implementation.
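A sketch of how such a naive solution could be put together, assuming the List and Pair types above, distinct natural numbers as input, and helper names (partition, append, findMissing) that may differ from the original code:

```java
class MinMissingNaive {
  // Splits l into (elements < pivot, elements >= pivot).
  static Pair partition(int pivot, List l) {
    if (l == null) return new Pair(null, null);
    Pair rest = partition(pivot, l.tail);
    return l.head < pivot
        ? new Pair(new List(l.head, rest.first), rest.second)
        : new Pair(rest.first, new List(l.head, rest.second));
  }

  // Copies left and shares right.
  static List append(List left, List right) {
    return left == null ? right : new List(left.head, append(left.tail, right));
  }

  static List quickSort(List l) {
    if (l == null) return null;
    Pair p = partition(l.head, l.tail);
    return append(quickSort(p.first), new List(l.head, quickSort(p.second)));
  }

  // Walks the sorted list until the expected number is not there.
  static int findMissing(int expected, List sorted) {
    return sorted == null || sorted.head != expected
        ? expected
        : findMissing(expected + 1, sorted.tail);
  }

  static int minMissing(List l) {
    return findMissing(0, quickSort(l));
  }
}
```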

What is the complexity of this solution?

For the complexity of minMissing, we need the complexity of both findMissing and quickSort. The former is easy since it is a simple linear search, so O(n) runtime. Since there are no news, we might be fooled into thinking the memory is O(1). However, we need to keep in mind that even a counter that can hold the input size takes O(lg n) bits when stored in binary.

quickSort is a bit more involved. partition roughly splits the input into two halves, both of which we then handle recursively. partition runs in linear time, O(n). quickSort also uses append, which also runs in O(n). Therefore we do O(2n)=O(n) work on each recursion level. Assuming the list is split in half each time, the call depth is O(lg n). Therefore the runtime of quickSort is O(n lg n). This figure illustrates this analysis:

It is accurate that the expected runtime is O(n lg n). However, my implementation is naive in assuming that the pivot, i.e., the element we use to partition, is relatively close to the median. If this is not the case, for example if the list is already in order or in reverse order, the recursion will be much deeper: we only eliminate one element per level, resulting in O(n) depth, meaning the total worst-case runtime is O(n²).

Now for the memory. partition has one new Pair and one new List per recursive call; as stated above, there are O(n) calls, thus partition uses O(2n)=O(n). However, there are also news in append: there are O(|left|) recursive calls, each with a new. Notice here the sharing of right, which halves the memory overhead. This sharing is possible only because the lists are immutable.

In the context of quickSort, we expect the left argument to halve in each recursive call, meaning that the news form the series n/2+n/4+…=n; thus the expected memory is O(n). Similarly to above, if the list is in reverse order, we end up newing every element on every call level, meaning O(n²). Interestingly, and contrary to above, if the list is in order, we get list-sharing all the way up, meaning no news. Summing up, quickSort’s memory is expected O(n) but worst-case O(n²).

Remembering that we are analyzing minMissing, the runtime is O(n+n lg n)=O(n lg n) expected, worst case O(n+n²)=O(n²). The expected memory is O(lg n+n)=O(n), worst case O(lg n+n²)=O(n²).

How can we reduce the runtime to O(n)?

To reduce the runtime, we can use a partial evaluation technique, where we take advantage of knowing how the result is used. So, instead of having a findMissing and a quickSort, we combine the two by pushing findMissing into quickSort.

This change allows us to switch out the pivot for partition because we know exactly what the middle, i.e., the perfect pivot, should be if the list contains all the numbers. The perfect pivot allows us to determine whether the number is missing from the left side of the partition simply by looking at its length. This pivot, in turn, means we only need one recursive call.
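One way to sketch the combined function, in the spirit of Bird's minfrom and reusing partition from the naive version; the name quickFind and the exact parameters are assumptions:

```java
class QuickFind {
  static int length(List l) {
    return l == null ? 0 : 1 + length(l.tail);
  }

  // Invariant: the missing number is at least start, and l holds
  // size distinct elements, all at least start.
  static int quickFind(int start, int size, List l) {
    if (size == 0) return start;
    int pivot = start + 1 + size / 2; // the "perfect" pivot
    Pair p = MinMissingNaive.partition(pivot, l);
    int leftSize = length(p.first);
    return leftSize == pivot - start
        ? quickFind(pivot, size - leftSize, p.second) // nothing missing below pivot
        : quickFind(start, leftSize, p.first);        // something is missing below pivot
  }

  static int minMissing(List l) {
    return quickFind(0, length(l), l);
  }
}
```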

Prof. Richard Bird presents this solution in his book Pearls of Functional Algorithm Design.

What is the complexity of this solution?

Time to prove that we solved the exercise of making the runtime O(n). Contrary to quickSort, we have a perfect pivot, which means the recursion depth is always at most O(lg n). Even more significant: where we had two recursive calls before, we now have only one. This optimization means that, as with the memory above, we only do half the work at each call level; n(1+1/2+1/4+…)=2n. Thus the runtime of quickFind is O(n+2n)=O(n). This figure illustrates this analysis:

For memory, we have eliminated the append, but we still have the partition, which we haven’t touched, meaning it is O(n) on each call level. This complexity is the same as the runtime, so the same analysis applies, and the total memory is O(n).

How can we reduce the memory to O(lg n)?

As stated earlier, to get down to O(lg n) we can only count the input, not duplicate it, and being purely functional, we cannot modify it either. Our previous O(n) memory came from partition. Thus we turn our attention to partition.

Luckily, we can use the same technique we used earlier: partial evaluation. We do partition primarily to examine the length of the left list, but we don’t have to construct that list to find out what length it would have. We only have to count how many elements are smaller than the pivot.

Because we are no longer doing any news and need O(1) direct access, I have switched to using arrays. So long as we only read them, arrays are purely functional. This change also means that the base case for lightFind is no longer the empty list; it is a range.
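A sketch of the array-based version; countSmaller and the signature of lightFind are guesses, and the input is still assumed to be distinct natural numbers:

```java
class LightFind {
  // Counts, without allocating, how many elements are smaller than pivot.
  static int countSmaller(int pivot, int[] arr, int index) {
    return index == arr.length
        ? 0
        : (arr[index] < pivot ? 1 : 0) + countSmaller(pivot, arr, index + 1);
  }

  // Invariant: the missing number is in the range [start, end).
  static int lightFind(int start, int end, int[] arr) {
    if (end - start <= 1) return start;
    int mid = (start + end) / 2;
    return countSmaller(mid, arr, 0) == mid
        ? lightFind(mid, end, arr)    // 0..mid-1 are all present; go right
        : lightFind(start, mid, arr); // something below mid is missing; go left
  }

  static int minMissing(int[] arr) {
    // With n distinct naturals the answer is at most n, so search [0, n + 1).
    return lightFind(0, arr.length + 1, arr);
  }
}
```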

What is the complexity of this solution?

First, let’s verify that the memory is, in fact, O(lg n) as required by the exercise. Luckily, this is easy since there are no news anywhere in the solution. However, we still count the input; thus, the memory is O(lg n).

The runtime, however, is fascinating. Because we are only counting and not partitioning the list, we are not halving the list at each recursive level. Therefore, our runtime is back to that of quickSort, although still with the perfect pivot. The runtime is O(n lg n).

How can we generalize this solution?

Some readers may have noticed that our algorithm is binary search, although with countSmaller as the ‘equality check.’ Pulling that part out as a parameter, we have this:
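A sketch of what such a generalized function could look like, with the ‘go left’ decision passed in as a predicate (the name and signature are assumptions):

```java
import java.util.function.IntPredicate;

class GeneralizedBinarySearch {
  // Returns the largest x in [start, end) for which goLeft(x) is false,
  // assuming goLeft is monotone over the range and false at start.
  static int search(int start, int end, IntPredicate goLeft) {
    if (end - start <= 1) return start;
    int mid = (start + end) / 2;
    return goLeft.test(mid)
        ? search(start, mid, goLeft) // the answer is below mid
        : search(mid, end, goLeft);  // the answer is mid or above
  }
}
```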

We can now easily define both minMissing and regular binarySearch in terms of this new function.
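Under the same assumptions, the two specializations could be phrased roughly like this:

```java
class Specialized {
  // Smallest missing natural number: go left once some number below mid is missing.
  static int minMissing(int[] arr) {
    return GeneralizedBinarySearch.search(0, arr.length + 1,
        mid -> LightFind.countSmaller(mid, arr, 0) < mid);
  }

  // Classic binary search in a sorted array of distinct numbers;
  // returns the index of target, assuming target is present.
  static int binarySearch(int target, int[] sorted) {
    return GeneralizedBinarySearch.search(0, sorted.length,
        mid -> sorted[mid] > target);
  }
}
```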

I think it is gorgeous how we started from quickSort, and by massaging it, we have arrived at binary search. These are two of the most famous algorithms, and the fact that they are in any way connected is almost magical.

What is the complexity of the generalized function?

The change from the previous design is that we extracted countSmaller. What is left are only constant operations and recursive calls. The recursion depth, as we have analyzed before, is O(lg n); thus, the total runtime is O(eq(n) lg n), where eq(n) is the runtime of the eq parameter.

Even though we have eliminated countSmaller, we are still counting something based on the input size, namely mid. Therefore, the memory is still O(lg n).

How do we find the kth smallest number in an array?

Since we already have a function to check how many elements are smaller than a pivot, it is surprisingly easy to find the kth smallest element. We choose the left side whenever we count fewer than k elements smaller than mid.
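A sketch, reusing countSmaller and the generalized search from above; it assumes distinct natural numbers, a 1-indexed k, and adds a small max helper to bound the search range:

```java
class KthSmallest {
  // The largest element, to bound the search range.
  static int max(int[] arr, int index) {
    return index == arr.length - 1
        ? arr[index]
        : Math.max(arr[index], max(arr, index + 1));
  }

  // The kth smallest (1-indexed) is the largest value with fewer than k
  // elements below it, so go left once at least k elements lie below mid.
  static int kthSmallest(int k, int[] arr) {
    return GeneralizedBinarySearch.search(0, max(arr, 0) + 1,
        mid -> LightFind.countSmaller(mid, arr, 0) >= k);
  }
}
```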

What is the complexity of this solution?

This solution is almost exactly equal to minMissing. Therefore, the same analysis applies; the runtime is O(n lg n), and the memory is O(lg n).

Can we reduce the runtime of this solution?

Since we have already seen a reduced runtime version, we can put back partition. With the gift of foresight, we can also immediately generalize this version of binary search.
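A sketch of the partition-based version over the List type, before generalizing: it uses the head as pivot and a 1-indexed k, and the details may differ from the original code:

```java
class KthSmallestFast {
  static int length(List l) {
    return l == null ? 0 : 1 + length(l.tail);
  }

  // Assumes 1 <= k <= length(l) and distinct elements.
  static int kthSmallest(int k, List l) {
    Pair p = MinMissingNaive.partition(l.head, l.tail);
    int leftSize = length(p.first);
    if (k <= leftSize) return kthSmallest(k, p.first);   // kth is left of the pivot
    if (k == leftSize + 1) return l.head;                // the pivot is the kth smallest
    return kthSmallest(k - leftSize - 1, p.second);      // kth is right of the pivot
  }
}
```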

This algorithm famously goes under another name: quickSelect. Because we have done the analysis so much by now, it should be no surprise that it is equal in both runtime and memory to Bird’s solution from earlier: O(n).

Which generalized solution can implement a divide-by-5 function?

We now have two generalized functions, one specialized for runtime and one for memory. However, there is another crucial difference: the runtime version works by partitioning a list. In this case, we have no list, only numbers; thus we need to use the partition-less version.

The exercise now is about finding the ‘go-left’ function. The property we want of a div5(a) function is that the result x satisfies x⋅5=a. We can plug this equation directly into the generalizedBinarySearch.
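Reading x⋅5=a as integer (floor) division, a sketch of the plumbing could be (assuming a ≥ 0):

```java
class Div5 {
  // The largest x with x * 5 <= a, i.e., floor division.
  static int div5(int a) {
    return GeneralizedBinarySearch.search(0, a + 1, mid -> mid * 5 > a);
  }
}
```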

My mind was blown the first time I saw this. Multiplication of 32-bit numbers is O(1). However, if we assume unlimited size numbers, multiplication is probably itself O(lg n), which means the runtime is O((lg n)²). The memory is again O(lg n).

How can we implement O(lg n) integer division?

This exercise is probably the easiest since it simply extracts the constant into a parameter. Therefore we won’t spend much time on it either.
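A sketch under the same assumptions as div5:

```java
class Divide {
  // The largest x with x * b <= a; assumes a >= 0 and b > 0.
  static int divide(int a, int b) {
    return GeneralizedBinarySearch.search(0, a + 1, mid -> mid * b > a);
  }
}
```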

Same complexity as div5.

How can we implement O(lg n) integer square root?

Again we are not using a list, so the exercise is to find the equation our result should satisfy. For sqrt(a) we would expect the result x to satisfy x⋅x=a.
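Plugged into the generalized search, a sketch could look like this (assuming a ≥ 0 and rounding down):

```java
class IntegerSqrt {
  // The largest x with x * x <= a; a must be small enough that mid * mid
  // does not overflow an int.
  static int sqrt(int a) {
    return GeneralizedBinarySearch.search(0, a + 1, mid -> mid * mid > a);
  }
}
```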

How can we generalize to real numbers?

Replacing int with float (or double if you prefer) seems trivial, but we have to remember that this means we cannot use equality anymore. Instead, we introduce a precision parameter, and our base-case becomes when our range is narrower than the precision.
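A sketch of the real-valued search; the name newton follows the article, while the rest of the signature is an assumption:

```java
import java.util.function.DoublePredicate;

class Newton {
  // Like the generalized binary search, but over doubles; stops when the
  // range is narrower than precision.
  static double newton(double start, double end, double precision, DoublePredicate goLeft) {
    if (end - start < precision) return start;
    double mid = (start + end) / 2;
    return goLeft.test(mid)
        ? newton(start, mid, precision, goLeft)
        : newton(mid, end, precision, goLeft);
  }
}
```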

Although not the same, I want to note the similarity to another famous algorithm: Newton’s Method. The difference from Newton’s Method is that there the finish doesn’t move, and the mid is (start + finish/start)/2. However, I believe the runtime of both algorithms is O(lg n). Therefore, I don’t think it is dishonorable to name the function after Newton.

How can we implement O(lg n) real square root?

Since we already know the equation for square root, we call the new newton function.
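A sketch, with the search range [0, max(a, 1)] chosen as an assumption so it covers both a < 1 and a ≥ 1:

```java
class RealSqrt {
  static double sqrt(double a, double precision) {
    return Newton.newton(0, Math.max(a, 1), precision, mid -> mid * mid > a);
  }
}
```

For example, sqrt(2, 0.0001) should land within 0.0001 of 1.4142.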

Conclusion

This article has covered several powerful techniques relating to algorithm design. My two favorites are:

  • Partial evaluation — combining two algorithms to reduce runtime or memory.
  • Creative ways to use binary search — such as using it to define a fast square root function.

Indeed, binary search is such a powerful tool that it pops up in many surprising places, like Git Bisect. There are a few more things we can do with it, but I leave these as exercises for you:

Exercise: Implement an indexOf in an unsorted array.

Exercise: What is its complexity?

Exercise: Implement a real number log₁₀, where the result satisfies the equation log₁₀(n) = x if and only if 10ˣ = n.

Finally, if you made it this far, I urge you to check out my book about technical excellence. With plenty of concrete examples and valuable advice, it is a great companion for developers of any seniority:

Or directly from Manning: https://www.manning.com/books/five-lines-of-code

