# Purely Functional lg(n)-Algorithm Design

During our education as developers, we learn many skills, some of which are difficult to apply in our daily work. Unfortunately, like muscles, if we don’t use a skill, we lose it. Some skills that can be difficult to use in our daily work are complexity analysis, algorithm design, and pure functional programming. So to practice these, I set myself a series of challenges, which turned out to be a fun programming journey, one I am sharing with you here.

To guide you on this journey, each section heading is a question. You should attempt to implement or answer it before reading on; it will make the explanation much more satisfying and be better practice. Be careful not to scroll down too fast when you see a header; otherwise, you might spoil the answer.

This article is an exploration of purely functional algorithms and design techniques. Therefore, I assume you are familiar with big-O notation. At every step, we analyze both runtime and memory complexity.

Everything in this article is purely functional. If you are unfamiliar with this concept, you can think of it as having no assignments. My code is in Java, but it should be trivial to move it to other languages. You can get the code (including solutions) from my GitHub — feel free to drop a follow while you are over there.

## How do we find the smallest natural number missing from an array?

The task is to implement a function that can take an array of distinct integers and, without any assignments, return the smallest positive number or zero missing from the list.

```
> minMissing([ 0, 1, 2, 3 ])
4
> minMissing([ 8, 2, 3, 0, 12, 4, 1, 6 ])
5
```

To implement this without assignments, I started by implementing some basic data types from functional languages: linked lists and pairs. I use a functional style with static functions instead of methods, but that is only for aesthetics. Crucially, though, all fields are `final`, which ensures we have no assignments; that is also why they don’t need to be private.

```java
public class List {
  final int elem;
  final List next;

  public List(int elem, List next) {
    this.elem = elem;
    this.next = next;
  }

  public static List append(List left, List right) {
    if (left == null) return right;
    else return new List(left.elem, append(left.next, right));
  }

  public static int length(List xs) {
    if (xs == null) return 0;
    else return 1 + length(xs.next);
  }
}

public class Pair {
  final List left;
  final List right;

  public Pair(List left, List right) {
    this.left = left;
    this.right = right;
  }
}
```
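To see these types in action, here is a minimal sketch (the wrapper class `ListDemo` and the sample values are mine) that builds two lists without a single assignment and appends them:

```java
public class ListDemo {
  static final class List {
    final int elem;
    final List next;
    List(int elem, List next) { this.elem = elem; this.next = next; }
    static List append(List left, List right) {
      if (left == null) return right;
      else return new List(left.elem, append(left.next, right));
    }
    static int length(List xs) {
      if (xs == null) return 0;
      else return 1 + length(xs.next);
    }
  }

  public static void main(String[] args) {
    // Build [1, 2] and [3] purely by construction, then append them.
    List xs = new List(1, new List(2, null));
    List ys = new List(3, null);
    List zs = List.append(xs, ys);
    System.out.println(List.length(zs)); // prints 3
    // xs is untouched: append copied its cells instead of mutating them.
    System.out.println(List.length(xs)); // prints 2
  }
}
```

Note that `append` copies only the left list's cells and shares the right list unchanged, which is safe precisely because the cells are immutable.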

We can implement the naive solution to find the smallest missing number with these data types. We sort the list and then search through it until we hit the missing number. To make the coming analysis more manageable, I include my full quickSort implementation.

```java
private static Pair partition(List xs, int pivot) {
  if (xs == null)
    return new Pair(null, null);
  var p = partition(xs.next, pivot);
  if (xs.elem < pivot)
    return new Pair(new List(xs.elem, p.left), p.right);
  else
    return new Pair(p.left, new List(xs.elem, p.right));
}

private static List quickSort(List xs) {
  if (xs == null) return null;
  var p = partition(xs.next, xs.elem);
  var left = quickSort(p.left);
  var right = quickSort(p.right);
  return List.append(left, new List(xs.elem, right));
}

private static int findMissing(int guess, List xs) {
  if (xs == null || xs.elem != guess) return guess;
  else return findMissing(guess + 1, xs.next);
}

private static int minMissing(List xs) {
  return findMissing(0, quickSort(xs));
}
```
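Putting the pieces together, here is a runnable sketch of the naive solution (the wrapper class `NaiveDemo` and the `fromArray` helper are mine, added only to build test input):

```java
public class NaiveDemo {
  static final class List {
    final int elem; final List next;
    List(int elem, List next) { this.elem = elem; this.next = next; }
    static List append(List left, List right) {
      if (left == null) return right;
      else return new List(left.elem, append(left.next, right));
    }
  }
  static final class Pair {
    final List left; final List right;
    Pair(List left, List right) { this.left = left; this.right = right; }
  }

  static Pair partition(List xs, int pivot) {
    if (xs == null) return new Pair(null, null);
    var p = partition(xs.next, pivot);
    if (xs.elem < pivot) return new Pair(new List(xs.elem, p.left), p.right);
    else return new Pair(p.left, new List(xs.elem, p.right));
  }

  static List quickSort(List xs) {
    if (xs == null) return null;
    var p = partition(xs.next, xs.elem);
    return List.append(quickSort(p.left), new List(xs.elem, quickSort(p.right)));
  }

  static int findMissing(int guess, List xs) {
    if (xs == null || xs.elem != guess) return guess;
    else return findMissing(guess + 1, xs.next);
  }

  static int minMissing(List xs) {
    return findMissing(0, quickSort(xs));
  }

  // Helper (mine): build a List from an array, purely by construction.
  static List fromArray(int i, int[] arr) {
    if (i >= arr.length) return null;
    else return new List(arr[i], fromArray(i + 1, arr));
  }

  public static void main(String[] args) {
    System.out.println(minMissing(fromArray(0, new int[]{ 0, 1, 2, 3 })));             // 4
    System.out.println(minMissing(fromArray(0, new int[]{ 8, 2, 3, 0, 12, 4, 1, 6 }))); // 5
  }
}
```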

## What is the complexity of this solution?

To get the complexity of `minMissing`, we need the complexity of both `findMissing` and `quickSort`. The former is easy: it is a simple linear search, so the runtime is O(n). Since there are no `new`s, we might be fooled into thinking the memory is O(1). However, we need to keep in mind that counting up to the input size in binary takes O(lg n).

`quickSort` is a bit more involved. `partition` roughly splits the input in half, both of which we then handle recursively. `partition` runs in linear time O(n). `quickSort` also uses `append` which also runs in O(n). Therefore we do O(2n)=O(n) work on each recursion level. Assuming the list is split in half each time, the call depth is O(lg n). Therefore the runtime of `quickSort` is O(n lg n). This figure illustrates this analysis:

It is accurate that the expected runtime is O(n lg n). However, my implementation is naive in assuming that the pivot, i.e., the element we partition around, is relatively close to the median. If it is not, for example when the list is already in order or in reverse order, the recursion is much deeper. We then eliminate only one element per level, resulting in O(n) depth, meaning the total worst-case runtime is O(n²).

Now for the memory. `partition` has one `new Pair` and one `new List` per recursive call; as stated above, there are O(n) calls, so `partition` uses O(2n)=O(n) memory. However, there are also `new`s in `append`: there are O(|left|) recursive calls, each with a `new`. Notice here the sharing of `right`, which halves the memory overhead. This sharing is possible only because the lists are immutable.

In the context of `quickSort`, we expect the `left` argument to halve in each recursive call, meaning the fraction of the list we copy is 1/2+1/4+…=1; thus the expected memory is O(n). Similarly to above, if the list is in reverse order, we end up `new`ing every element on every call level, meaning O(n²). Interestingly, and contrary to above, if the list is in order, we get list-sharing all the way up, meaning no `new`s. Summing up, `quickSort`’s memory is expected O(n) but worst-case O(n²).

Remembering that we are analyzing `minMissing`: the runtime is O(n+n lg n)=O(n lg n) expected, worst case O(n+n²)=O(n²). The expected memory is O(lg n+n)=O(n), worst case O(lg n+n²)=O(n²).

## How can we reduce the runtime to O(n)?

To reduce the runtime, we can use a partial evaluation technique where we take advantage of knowing how the result is used. So, instead of having a `findMissing` and a `quickSort` we combine the two by pushing `findMissing` into `quickSort`.

This change allows us to switch out the pivot for `partition`, because if the list contains all the numbers, we know exactly what the middle element, i.e., the perfect pivot, should be. The perfect pivot allows us to determine whether the number is missing from the `left` side of the partition simply by looking at its length. This, in turn, means we only need one recursive call.

```java
private static int quickFind(int guess, int len, List xs) {
  if (xs == null) return guess;
  var mid = (guess + len) / 2;
  var p = partition(xs, mid);
  var leftLength = List.length(p.left);
  if (leftLength == mid - guess)
    return quickFind(mid, len, p.right);
  else
    return quickFind(guess, mid, p.left);
}

private static int minMissing(List xs) {
  return quickFind(0, List.length(xs) + 1, xs);
}
```
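As a sanity check, here is the fused algorithm as a runnable sketch (the wrapper class `QuickFindDemo` and the `fromArray` helper are mine):

```java
public class QuickFindDemo {
  static final class List {
    final int elem; final List next;
    List(int elem, List next) { this.elem = elem; this.next = next; }
    static int length(List xs) { return xs == null ? 0 : 1 + length(xs.next); }
  }
  static final class Pair {
    final List left; final List right;
    Pair(List left, List right) { this.left = left; this.right = right; }
  }

  static Pair partition(List xs, int pivot) {
    if (xs == null) return new Pair(null, null);
    var p = partition(xs.next, pivot);
    if (xs.elem < pivot) return new Pair(new List(xs.elem, p.left), p.right);
    else return new Pair(p.left, new List(xs.elem, p.right));
  }

  static int quickFind(int guess, int len, List xs) {
    if (xs == null) return guess;
    var mid = (guess + len) / 2;
    var p = partition(xs, mid);
    // If the left side is full, i.e., contains all of [guess, mid),
    // the missing number must be mid or above.
    if (List.length(p.left) == mid - guess) return quickFind(mid, len, p.right);
    else return quickFind(guess, mid, p.left);
  }

  static int minMissing(List xs) { return quickFind(0, List.length(xs) + 1, xs); }

  // Helper (mine): build a List from an array.
  static List fromArray(int i, int[] arr) {
    return i >= arr.length ? null : new List(arr[i], fromArray(i + 1, arr));
  }

  public static void main(String[] args) {
    System.out.println(minMissing(fromArray(0, new int[]{ 0, 1, 2, 3 })));             // 4
    System.out.println(minMissing(fromArray(0, new int[]{ 8, 2, 3, 0, 12, 4, 1, 6 }))); // 5
  }
}
```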

Prof. Richard Bird presents this solution in his book Pearls of Functional Algorithm Design.

## What is the complexity of this solution?

Time to prove that we solved the exercise of making the runtime O(n). Contrary to `quickSort`, we have a perfect pivot, which means the recursion depth is always at most O(lg n). Even more significant: where before we had two recursive calls, we now have only one. This means that, as with the memory analysis above, we only do half the work at each call level; 1+1/2+1/4+…=2. Thus the runtime of `quickFind` is O(n+2n)=O(n). This figure illustrates this analysis:

For memory, we have eliminated the `append`, but we still have the `partition`, which we haven’t touched, meaning it is O(n) on each call level. This complexity is the same as the runtime, so the same analysis applies, and the total memory is O(n).

## How can we reduce the memory to O(lg n)?

As stated earlier, O(lg n) memory is only enough to count the input, not to duplicate it. Our previous O(n) memory came from `partition`; thus we turn our attention to `partition`.

Luckily, we can use the same technique as earlier: partial evaluation. We partition primarily to examine the length of the `left` list, but we don’t have to construct the list to find out what length it would have. We only have to count how many elements are smaller than the pivot.

Because we are no longer doing any `new`s and need O(1) direct access, I have switched to using arrays. So long as we only read them, arrays are purely functional. This change also means that the base case for `lightFind` is no longer the empty list; it is a one-element range.

```java
private static int smallerHelper(int i, int mid, int[] arr) {
  if (i >= arr.length)
    return 0;
  else if (arr[i] < mid && arr[i] >= 0)
    return 1 + smallerHelper(i + 1, mid, arr);
  else
    return smallerHelper(i + 1, mid, arr);
}

private static int countSmaller(int mid, int[] arr) {
  return smallerHelper(0, mid, arr);
}

private static int lightFind(int start, int finish, int[] arr) {
  if (start == finish - 1) return start;
  var mid = (finish + start) / 2;
  var sm = countSmaller(mid, arr);
  if (sm == mid) return lightFind(mid, finish, arr);
  else return lightFind(start, mid, arr);
}

private static int minMissing(int[] arr) {
  return lightFind(0, arr.length + 1, arr);
}
```
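Here is the array version as a runnable sketch (the wrapper class `LightFindDemo` is mine):

```java
public class LightFindDemo {
  static int smallerHelper(int i, int mid, int[] arr) {
    if (i >= arr.length) return 0;
    else if (arr[i] < mid && arr[i] >= 0) return 1 + smallerHelper(i + 1, mid, arr);
    else return smallerHelper(i + 1, mid, arr);
  }

  static int countSmaller(int mid, int[] arr) { return smallerHelper(0, mid, arr); }

  static int lightFind(int start, int finish, int[] arr) {
    if (start == finish - 1) return start;
    var mid = (finish + start) / 2;
    // If all of 0..mid-1 are present, the missing number is mid or above.
    if (countSmaller(mid, arr) == mid) return lightFind(mid, finish, arr);
    else return lightFind(start, mid, arr);
  }

  static int minMissing(int[] arr) { return lightFind(0, arr.length + 1, arr); }

  public static void main(String[] args) {
    System.out.println(minMissing(new int[]{ 0, 1, 2, 3 }));              // 4
    System.out.println(minMissing(new int[]{ 8, 2, 3, 0, 12, 4, 1, 6 })); // 5
  }
}
```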

## What is the complexity of this solution?

First, let’s verify that the memory is indeed O(lg n), as the exercise requires. Luckily, this is easy since there are no `new`s anywhere in the solution. We do, however, still count up to the input size; thus, the memory is O(lg n).

The runtime, however, is fascinating. Because we are only counting and not partitioning the list, we are not halving the list at each recursive level. Therefore, our runtime is back to that of `quickSort`, although still with the perfect pivot. The runtime is O(n lg n).

## How can we generalize this solution?

Some readers may have noticed that our algorithm is binary search, although with `countSmaller` as the ‘equality check.’ Pulling that part out as a parameter, we have this:

```java
private static int generalizedBinarySearch(
    int min, int max,
    Function<Integer, Boolean> eq) {
  if (min == max - 1) return min;
  var mid = (max + min) / 2;
  if (eq.apply(mid)) return generalizedBinarySearch(mid, max, eq);
  else return generalizedBinarySearch(min, mid, eq);
}
```

We can now easily define both `minMissing` and regular `binarySearch` in terms of this new function.

```java
private static int binarySearch(int[] arr, int e) {
  return generalizedBinarySearch(0, arr.length,
    (mid) -> arr[mid] <= e);
}

private static int minMissing(int[] arr) {
  return generalizedBinarySearch(0, arr.length + 1,
    (mid) -> countSmaller(mid, arr) == mid);
}
```
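A runnable sketch tying the pieces together (the wrapper class `SearchDemo` is mine; note that on a sorted array this `binarySearch` returns the index of the last element that is at most `e`):

```java
import java.util.function.Function;

public class SearchDemo {
  static int generalizedBinarySearch(int min, int max, Function<Integer, Boolean> eq) {
    if (min == max - 1) return min;
    var mid = (max + min) / 2;
    if (eq.apply(mid)) return generalizedBinarySearch(mid, max, eq);
    else return generalizedBinarySearch(min, mid, eq);
  }

  static int smallerHelper(int i, int mid, int[] arr) {
    if (i >= arr.length) return 0;
    return (arr[i] < mid && arr[i] >= 0 ? 1 : 0) + smallerHelper(i + 1, mid, arr);
  }

  static int countSmaller(int mid, int[] arr) { return smallerHelper(0, mid, arr); }

  static int binarySearch(int[] arr, int e) {
    return generalizedBinarySearch(0, arr.length, (mid) -> arr[mid] <= e);
  }

  static int minMissing(int[] arr) {
    return generalizedBinarySearch(0, arr.length + 1, (mid) -> countSmaller(mid, arr) == mid);
  }

  public static void main(String[] args) {
    int[] sorted = { 1, 3, 5, 7 };
    System.out.println(binarySearch(sorted, 5));                          // 2, the index of 5
    System.out.println(minMissing(new int[]{ 8, 2, 3, 0, 12, 4, 1, 6 })); // 5
  }
}
```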

I think it is gorgeous how we started from quickSort and, by massaging it, arrived at binary search. These are two of the most famous algorithms, and the fact that they are connected at all is almost magical.

## What is the complexity of the generalized function?

The change from the previous design is that we extracted `countSmaller`. Left are only constant operations and recursive calls. The recursion depth, as we have analyzed before, is O(lg n); thus, the total runtime is O(eq(n) lg n), where eq(n) is the runtime of `eq`.

Even though we have extracted `countSmaller`, we are still counting something based on the input size `mid`. Therefore the memory is still O(lg n).

## How do we find the kth smallest number in an array?

Since we already have a function to check how many elements are smaller than a pivot, it is surprisingly easy to find the kth smallest element. We search the right side whenever at most k elements are smaller than `mid`.

```java
private static int kth(int[] arr, int k) {
  return generalizedBinarySearch(0, arr.length + 1,
    (mid) -> countSmaller(mid, arr) <= k);
}
```
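A runnable sketch (the wrapper class `KthDemo` is mine). Note the assumptions this version inherits from `countSmaller` and the search range: the elements are distinct non-negative integers, and the answer must be at most `arr.length`:

```java
import java.util.function.Function;

public class KthDemo {
  static int generalizedBinarySearch(int min, int max, Function<Integer, Boolean> eq) {
    if (min == max - 1) return min;
    var mid = (max + min) / 2;
    if (eq.apply(mid)) return generalizedBinarySearch(mid, max, eq);
    else return generalizedBinarySearch(min, mid, eq);
  }

  static int smallerHelper(int i, int mid, int[] arr) {
    if (i >= arr.length) return 0;
    return (arr[i] < mid && arr[i] >= 0 ? 1 : 0) + smallerHelper(i + 1, mid, arr);
  }

  static int countSmaller(int mid, int[] arr) { return smallerHelper(0, mid, arr); }

  static int kth(int[] arr, int k) {
    // Go right while at most k elements are smaller than mid (k is 0-indexed).
    return generalizedBinarySearch(0, arr.length + 1, (mid) -> countSmaller(mid, arr) <= k);
  }

  public static void main(String[] args) {
    int[] arr = { 8, 2, 3, 0, 12, 4, 1, 6 }; // sorted: 0 1 2 3 4 6 8 12
    System.out.println(kth(arr, 0)); // 0
    System.out.println(kth(arr, 2)); // 2
    System.out.println(kth(arr, 5)); // 6
  }
}
```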

## What is the complexity of this solution?

This solution is almost exactly equal to `minMissing`. Therefore, the same analysis applies; the runtime is O(n lg n), and the memory is O(lg n).

## Can we reduce the runtime of this solution?

Since we have already seen a reduced runtime version, we can put back `partition`. With the gift of foresight, we can also immediately generalize this version of binary search.

```java
private static int partitionBinarySearch(
    int min, int max, int smallerSoFar, List xs,
    BiFunction<Integer, Integer, Boolean> eq) {
  if (min == max - 1) return min;
  var mid = (max + min) / 2;
  var p = partition(xs, mid);
  // length(p.left) only counts the current sublist; adding the elements
  // already discarded to its left gives the total count below mid.
  var smaller = smallerSoFar + List.length(p.left);
  if (eq.apply(mid, smaller))
    return partitionBinarySearch(mid, max, smaller, p.right, eq);
  else
    return partitionBinarySearch(min, mid, smallerSoFar, p.left, eq);
}

private static int quickSelect(int k, List xs) {
  return partitionBinarySearch(0, List.length(xs) + 1, 0, xs,
    (mid, smaller) -> smaller <= k);
}
```
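Here is a runnable sketch (the wrapper class `QuickSelectDemo` and `fromArray` are mine). One subtlety I account for: the length of the current left part alone only counts the current sublist, so I thread the number of already-discarded smaller elements through the recursion, giving the predicate the global count it needs:

```java
import java.util.function.BiFunction;

public class QuickSelectDemo {
  static final class List {
    final int elem; final List next;
    List(int elem, List next) { this.elem = elem; this.next = next; }
    static int length(List xs) { return xs == null ? 0 : 1 + length(xs.next); }
  }
  static final class Pair {
    final List left; final List right;
    Pair(List left, List right) { this.left = left; this.right = right; }
  }

  static Pair partition(List xs, int pivot) {
    if (xs == null) return new Pair(null, null);
    var p = partition(xs.next, pivot);
    if (xs.elem < pivot) return new Pair(new List(xs.elem, p.left), p.right);
    else return new Pair(p.left, new List(xs.elem, p.right));
  }

  static int partitionBinarySearch(int min, int max, int smallerSoFar, List xs,
      BiFunction<Integer, Integer, Boolean> eq) {
    if (min == max - 1) return min;
    var mid = (max + min) / 2;
    var p = partition(xs, mid);
    // Total elements below mid = those discarded earlier + the new left part.
    var smaller = smallerSoFar + List.length(p.left);
    if (eq.apply(mid, smaller)) return partitionBinarySearch(mid, max, smaller, p.right, eq);
    else return partitionBinarySearch(min, mid, smallerSoFar, p.left, eq);
  }

  static int quickSelect(int k, List xs) {
    return partitionBinarySearch(0, List.length(xs) + 1, 0, xs,
      (mid, smaller) -> smaller <= k);
  }

  static List fromArray(int i, int[] arr) {
    return i >= arr.length ? null : new List(arr[i], fromArray(i + 1, arr));
  }

  public static void main(String[] args) {
    var xs = fromArray(0, new int[]{ 8, 2, 3, 0, 12, 4, 1, 6 }); // sorted: 0 1 2 3 4 6 8 12
    System.out.println(quickSelect(0, xs)); // 0
    System.out.println(quickSelect(2, xs)); // 2
    System.out.println(quickSelect(5, xs)); // 6
  }
}
```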

This algorithm famously goes under another name: quickSelect. Because we have done the analysis so much by now, it should be no surprise that it is equal in both runtime and memory to Bird’s solution from earlier: O(n).

## Which generalized solution can implement a divide-by-5 function?

We now have two generalized functions, one specialized for runtime and one for memory. There is, however, another crucial difference: the runtime version gets its speed from partitioning a list. In this case, we have no list, only numbers. Thus we need to use the partitionless version.

The exercise now is about finding the ‘go-right’ predicate. The property we want of `div5(a)` is that the result x is the largest number satisfying x⋅5 ≤ a. We can plug this inequality directly into `generalizedBinarySearch`.

```java
private static int div5(int a) {
  // Search up to a + 1 so the base case is reached even for small inputs.
  return generalizedBinarySearch(0, a + 1, (mid) -> mid * 5 <= a);
}
```

My mind was blown the first time I saw this. Multiplication of 32-bit numbers is O(1). However, if we assume unlimited size numbers, multiplication is probably itself O(lg n), which means the runtime is O((lg n)²). The memory is again O(lg n).

## How can we implement O(lg n) integer division?

This exercise is probably the easiest since it simply extracts the constant into a parameter. Therefore we won’t spend much time on it either.

```java
private static int div(int a, int b) {
  // Search up to a + 1 so inputs like div(1, 1) reach the base case correctly.
  return generalizedBinarySearch(0, a + 1, (mid) -> mid * b <= a);
}
```
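A runnable sketch (the wrapper class `DivDemo` is mine; I search up to `a + 1` so that edge inputs like `div(1, 1)` reach the base case with the right answer, an adjustment worth noting):

```java
import java.util.function.Function;

public class DivDemo {
  static int generalizedBinarySearch(int min, int max, Function<Integer, Boolean> eq) {
    if (min == max - 1) return min;
    var mid = (max + min) / 2;
    if (eq.apply(mid)) return generalizedBinarySearch(mid, max, eq);
    else return generalizedBinarySearch(min, mid, eq);
  }

  static int div(int a, int b) {
    // Largest mid with mid * b <= a, i.e., integer division a / b.
    return generalizedBinarySearch(0, a + 1, (mid) -> mid * b <= a);
  }

  public static void main(String[] args) {
    System.out.println(div(17, 5)); // 3
    System.out.println(div(10, 2)); // 5
    System.out.println(div(1, 1));  // 1
  }
}
```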

Same complexity as `div5`.

## How can we implement O(lg n) integer square root?

Again we are not using a list, so the exercise is to find the inequality our result should satisfy. For `sqrt(a)` we want the largest x satisfying x⋅x ≤ a.

```java
private static int sqrti(int a) {
  // Search up to a + 1 so sqrti(1) correctly returns 1.
  return generalizedBinarySearch(0, a + 1, (mid) -> mid * mid <= a);
}
```
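A runnable sketch (the wrapper class `SqrtiDemo` is mine; again I search up to `a + 1`, which makes small inputs like `sqrti(1)` come out right):

```java
import java.util.function.Function;

public class SqrtiDemo {
  static int generalizedBinarySearch(int min, int max, Function<Integer, Boolean> eq) {
    if (min == max - 1) return min;
    var mid = (max + min) / 2;
    if (eq.apply(mid)) return generalizedBinarySearch(mid, max, eq);
    else return generalizedBinarySearch(min, mid, eq);
  }

  static int sqrti(int a) {
    // Largest mid with mid * mid <= a: the integer square root.
    return generalizedBinarySearch(0, a + 1, (mid) -> mid * mid <= a);
  }

  public static void main(String[] args) {
    System.out.println(sqrti(16)); // 4
    System.out.println(sqrti(10)); // 3
    System.out.println(sqrti(1));  // 1
  }
}
```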

## How can we generalize to real numbers?

Replacing `int` with `float` (or `double` if you prefer) seems trivial, but we have to remember that this means we can no longer use equality. Instead, we introduce a precision parameter, and our base case becomes when our range is narrower than the precision.

```java
private static float newton(
    float precision, float start, float finish,
    Function<Float, Boolean> eq) {
  if (Math.abs(start - finish) <= precision) return start;
  else {
    var mid = (finish + start) / 2;
    if (eq.apply(mid)) return newton(precision, mid, finish, eq);
    else return newton(precision, start, mid, eq);
  }
}
```

Although not the same, I want to note the similarity to another famous algorithm: Newton’s Method. To get Newton’s Method for square roots, our `finish` shouldn’t move, and `mid` should be `(start + finish / start) / 2`. However, I believe both of these algorithms run in O(lg n); therefore I don’t think it is dishonorable to name the function after Newton.

## How can we implement O(lg n) real square root?

Since we already know the equation for square root, we call the new `newton` function.

```java
private static float sqrtf(float a) {
  return newton(0.001f, 0, a, (mid) -> mid * mid <= a);
}
```
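A runnable sketch (the wrapper class `SqrtfDemo` is mine). Since `newton` returns the lower end of the final interval, the result sits within `precision` below the true root; this works for inputs of at least 1, while values below 1 would need a wider search range:

```java
import java.util.function.Function;

public class SqrtfDemo {
  static float newton(float precision, float start, float finish,
      Function<Float, Boolean> eq) {
    if (Math.abs(start - finish) <= precision) return start;
    else {
      var mid = (finish + start) / 2;
      if (eq.apply(mid)) return newton(precision, mid, finish, eq);
      else return newton(precision, start, mid, eq);
    }
  }

  static float sqrtf(float a) {
    // Invariant: start * start <= a < finish * finish.
    return newton(0.001f, 0, a, (mid) -> mid * mid <= a);
  }

  public static void main(String[] args) {
    System.out.println(sqrtf(4f)); // roughly 2, within 0.001 below
    System.out.println(sqrtf(2f)); // roughly 1.414
  }
}
```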

# Conclusion

This article has covered several powerful techniques relating to algorithm design. My two favorites are:

• Partial evaluation — combining two algorithms to reduce runtime or memory.
• Creative ways to use binary search — such as using it to define a fast square root function.

Indeed, binary search is such a powerful tool that it pops up in many surprising places, like Git Bisect. There are a few more things we can do with it, which I leave as exercises for you:

Exercise: Implement an `indexOf` in an unsorted array.

Exercise: What is its complexity?

Exercise: Implement a real number log₁₀, where the result satisfies the equation log₁₀(n) = x if and only if 10ˣ = n.

Finally, if you made it this far, I urge you to check out my book about technical excellence. With plenty of concrete examples and valuable advice, it is a great companion for developers of any seniority:

Or directly from Manning: https://www.manning.com/books/five-lines-of-code