Recursion and Dynamic Programming

An introduction to two very important concepts

Albert Ming
6 min read · Jul 25, 2023

One of the most interesting computer science topics I learned after completing my first year of college was recursion. In my introductory Java class, we had covered basic syntax and operations, but recursion always caught my eye: it was the first topic that truly challenged me to think deeper about computer science. In this article, I will explain recursion at a surface level, then introduce another topic that can serve as an even better alternative to plain recursion.

What Is Recursion?

At the most basic level, recursion is the process of solving a problem whose solution depends on the solutions to smaller subproblems. In code, this usually means a function calling itself. Let's go over a really specific but effective example.

public int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

As the name of the function suggests, we are simply calculating the n-th number of the Fibonacci sequence. If you are not familiar with recursion, this blob of code can look intimidating. I mean, what exactly is going on here? Am I returning the function itself? It can be easy to get lost in the endless recursive stack, but let's try to break the problem down step by step.

Before we get into ANYTHING, let’s talk about some definitions.

Base Case: the ending condition of a recursive function that prevents a function from running indefinitely
Recursive Step: the part of the function that calls itself with a smaller subproblem of the original problem


Since we are calling a function within itself, the base case is necessary to terminate the code; without a sufficient base case, the program would never end. Therefore, when brainstorming recursive solutions to a problem, I recommend immediately thinking about the possible base cases (hint: there can be more than one!). If the base case is not executed, we enter the recursive part of the problem. Each time a recursive call is executed, it gets placed on the call stack. This stack grows and grows until we reach a base case; then the base case's return statement passes its value back UP the call stack to the previous recursive execution. This continues until we “retrieve” our answer for the original input.
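To make these two definitions concrete before returning to Fibonacci, here is a minimal sketch using factorial, with the base case and recursive step labeled. The class and method names are my own, not from the article.

```java
// A minimal recursive function with its two parts labeled.
public class Factorial {
    public static int factorial(int n) {
        // Base case: 0! and 1! are both 1, so we stop recursing here.
        if (n <= 1) {
            return 1;
        }
        // Recursive step: reduce to the smaller subproblem factorial(n - 1).
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 5 * 4 * 3 * 2 * 1 = 120
    }
}
```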

Going back to our Fibonacci code, let's identify the base case. In this scenario, the if-statement with condition n <= 1 is the base case, for two reasons:

  1. The Fibonacci sequence is by nature a recurrence relation, which just means that a term in the sequence is built from a combination of previous terms. If we list out some terms of the Fibonacci sequence, we have 0, 1, 1, 2, 3, 5, 8, 13, 21… The 0th and 1st terms are 0 and 1 respectively, so we can just return the value of the term itself! That is what is happening with if (n <= 1) return n: if we are at either the 0th or 1st term, return 0 or 1 respectively.
  2. Intuitively speaking, the Fibonacci sequence has no negative indices. The sequence starts at term 0, then 1, 2, 3… There is no -1st term, so n should never drop below 0.

Fibonacci Recursive Stack Example

As an example, let’s see what is happening with fibonacci(4).

  1. When we call the function, n is not less than or equal to 1, so we do not execute the base case.
  2. The function recursively calls itself with n - 1 and n - 2.
  3. Once we get to a base case, we return the values back up the call stack.

An example of returning values up the call stack: for fib(2) = fib(1) + fib(0), we get 1 + 0 = 1. Therefore fib(3) = fib(2) + fib(1) = 1 + 1 = 2. As we can see, we need to know the answer of smaller recursive subproblems to solve the bigger problems: we need to solve fib(2) and fib(1) to solve fib(3). If you are a more mathematical person, you may benefit from viewing recursion in the form of substitution.

fib(4) = fib(3) + fib(2)
fib(3) = fib(2) + fib(1)
fib(2) = fib(1) + fib(0)
fib(1) = 1 and fib(0) = 0, so fib(2) = 1 + 0 = 1
Now, we substitute values back up

fib(3) = fib(2) + fib(1) = 1 + 1 = 2
fib(4) = fib(3) + fib(2) = 2 + 1 = 3
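The substitution chain above can also be watched live by printing each call as it happens. Here is a small sketch where the indentation shows call depth; the class name `FibTrace` is my own, and `String.repeat` requires Java 11 or newer.

```java
// Prints each recursive call indented by its depth, so the chain
// fib(4) -> fib(3) -> fib(2) -> ... is visible in the output.
public class FibTrace {
    static int fib(int n, int depth) {
        System.out.println("  ".repeat(depth) + "fib(" + n + ")");
        if (n <= 1) {
            return n; // base case: return the term itself
        }
        return fib(n - 1, depth + 1) + fib(n - 2, depth + 1);
    }

    public static void main(String[] args) {
        System.out.println("fib(4) = " + fib(4, 0));
    }
}
```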

Dynamic Programming


Dynamic programming is a whole new ball game, but it is essentially a form of optimized recursion. Now, I am still relatively new to this topic, but I would like to try my best at explaining the dynamic programming approach to fibonacci.

The key idea is to notice why the recursive approach may be inefficient. If we trace the calls for fib(4), you may notice that fib(2) is computed on both the left and right sides of the “recursion tree.” What if we didn't have to recalculate fib(2)? In this case, recalculating fib(2) might not be the end of the world, but imagine how much redundant work would be done if we wanted to calculate fib(100) or fib(1000).
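One way to see this redundancy, rather than take it on faith, is to count how many times the naive function gets called. A rough sketch, with names of my own choosing:

```java
// Counts the total number of calls made by the naive recursive Fibonacci.
// The count grows exponentially with n, which is why fib(100) is hopeless
// without caching.
public class FibCalls {
    static long calls = 0;

    static int fib(int n) {
        calls++; // every invocation, including repeats, is counted
        if (n <= 1) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 20, 30}) {
            calls = 0;
            fib(n);
            System.out.println("fib(" + n + ") made " + calls + " calls");
        }
    }
}
```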

The underlying idea of dynamic programming is to store the results of repeated subproblems in some sort of memory or cache, most often an array or a hashmap. Going back to the previous example, what if, when we figured out fib(2), we stored its value in an array? This way, if we need fib(2) again, we can instantly look up the answer and avoid redoing the recursion (note that array and hashmap lookups are both O(1) time). This process of storing or caching results to be used later is called memoization.

// Requires import java.util.Arrays; at the top of the file.
public int fibonacci(int n) {
    int[] memo = new int[n + 1];
    Arrays.fill(memo, -1); // -1 marks "not yet computed"
    return helper(n, memo);
}

public int helper(int n, int[] memo) {
    if (n <= 1) {
        return n;
    }
    if (memo[n] != -1) {
        return memo[n]; // already computed: instant lookup
    }
    memo[n] = helper(n - 1, memo) + helper(n - 2, memo);
    return memo[n];
}

Personally, I always like to have some sort of helper function do the actual dynamic programming part of the program. Common practice is to initialize a memo array or hashmap to cache values. In this case, we are using an array pre-filled with -1's.

You'll notice that much of the recursion code is still the same, except we are passing our memo array along with n. We still have our base case where we return 0 or 1, but then we have an if-statement that says: if we have already stored the value of a previous recursive call, return that value immediately. Since we initially set every value in the memo array to -1, any entry that is no longer -1 means we have already calculated that Fibonacci number! The rest of the recursive code is unchanged, since we still need to recursively find the sum of the two previous Fibonacci numbers. At the end, we return memo[n], which holds the value of the n-th Fibonacci number.
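Memoization is not the only style of dynamic programming, for what it's worth: the same table can be filled bottom-up with a simple loop, no recursion at all. Here is a sketch of that alternative; the class name is my own invention.

```java
// Bottom-up (tabulation) version: fill the table from the smallest
// subproblems upward instead of recursing down from n.
public class FibonacciBottomUp {
    public static int fibonacci(int n) {
        if (n <= 1) {
            return n;
        }
        int[] table = new int[n + 1]; // table[i] holds the i-th Fibonacci number
        table[0] = 0;
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10)); // the 10th Fibonacci number, 55
    }
}
```

Each entry is computed exactly once, so both versions run in O(n) time, but the loop avoids any risk of stack overflow on large n.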

Final Thoughts

I hope you enjoyed this brief introduction to recursion and dynamic programming! In my opinion, these two topics are some of the most difficult to grasp, especially when first being introduced to them. Even though I have just started to learn about dynamic programming, I am enjoying the process, and it has been extremely rewarding so far. If you enjoyed reading this article, consider following for more computer science and math related topics!
