
What is the difference between dynamic programming and recursion? I have gone through many articles on GeeksforGeeks, Tutorialspoint, and Wikipedia, but it seems to me that both are the same.

Can you please explain the difference between dynamic programming and recursion using the Fibonacci series as an example?

  • Recursion is a "language feature" that can be used to implement the "technique" of dynamic programming, among other uses. Commented Jul 30, 2020 at 10:04
  • Dynamic programming ("programming" here means "planning") is an optimization method that can be implemented using recursion with memoization. It can also be implemented using other approaches. Take a look at Cormen et al.'s book. Geeksforgeeks is a resource of very poor quality and is better avoided. Commented Jul 30, 2020 at 10:05
  • Recursion is when a function calls itself. Dynamic programming is a "technique" where you solve some problem by filling a table in which each entry is the solution to some sub-problem. Filling that table might use recursion. Commented Jul 30, 2020 at 10:05
  • So dynamic programming is an application of recursion? Commented Jul 30, 2020 at 10:09
  • Dynamic Programming (DP) is a technique used to solve certain types of problems. These problems have two properties: optimal substructure and overlapping subproblems. The DP technique is divided into two approaches, top-down and bottom-up. Top-down uses recursion, starting from n and going down to the base case; bottom-up starts from the base case and works up to n. Commented Jul 30, 2020 at 10:21

1 Answer


Calculating terms of the Fibonacci sequence is very easy, since you only need to remember fib(n-2) and fib(n-1) in order to calculate fib(n). Because it is so easy, any algorithm is going to be extremely simple, so this example blurs the nuances between different dynamic programming paradigms. That being said, the Wikipedia page you mentioned has a nice explanation of Fibonacci: https://en.wikipedia.org/wiki/Dynamic_programming#Fibonacci_sequence

A function is called recursive if it calls itself during its execution.
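
For example, here is a minimal sketch of a directly recursive Fibonacci function in C (the function name is just illustrative):

int fib_naive(int n) {
    if (n < 2)
        return n;                               /* base cases: fib(0) = 0, fib(1) = 1 */
    return fib_naive(n - 2) + fib_naive(n - 1); /* the function calls itself */
}

This is recursion, but by itself it is not dynamic programming; as the trace further down shows, it recomputes the same terms over and over.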

A dynamic programming algorithm might be implemented with or without using recursion.

The core of dynamic programming is exploiting the two following facts to write an algorithm:

  1. A solution to a problem can be broken into solutions to subproblems;
  2. When an optimal solution S to a problem P is broken into solutions s1, s2, ... to subproblems p1, p2, ..., then s1, s2, ... are all optimal solutions to their respective subproblems.

Note that these two facts are not true of all problems. A problem only lends itself to dynamic programming if those two facts apply to it.

A simple example is finding the shortest path from point A to point B: if a shortest path from A to B goes through point C, then the two halves it is made of, from A to C and from C to B, are also shortest paths.
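
As a sketch of how that property gets exploited, the Floyd-Warshall algorithm computes all shortest path lengths by filling a table of distances; the number of vertices N and the INF sentinel below are illustrative assumptions:

#include <limits.h>

#define N   4                 /* number of vertices, chosen for illustration */
#define INF (INT_MAX / 2)     /* "no edge" marker, halved so sums cannot overflow */

/* dist[i][j] starts out as the direct edge weight (or INF, with dist[i][i] = 0)
 * and ends up as the length of a shortest path from i to j.  The update uses
 * exactly the fact above: a shortest path that passes through k is made of a
 * shortest path from i to k and a shortest path from k to j. */
void all_shortest_paths(int dist[N][N]) {
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}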

In most situations, you could make recursive calls to solve the subproblems. But a "naive" recursive approach can easily result in an exponential algorithm, because the cascading "in order to solve this problem, I need to solve these two (or more) subproblems" can quickly blow up the number of calls you have to make. Here is an example with Fibonacci:

fib(5) = fib(4) + fib(3)
  fib(4) = fib(3) + fib(2)
    fib(3) = fib(2) + fib(1)
      fib(2) = fib(1) + fib(0)
        fib(1) = 1
        fib(0) = 0
      fib(1) = 1
    fib(2) = fib(1) + fib(0)
      fib(1) = 1
      fib(0) = 0
  fib(3) = fib(2) + fib(1)
    fib(2) = fib(1) + fib(0)
      fib(1) = 1
      fib(0) = 0
    fib(1) = 1

Here we had to calculate 15 terms to find fib(5), but notice that there are only 6 different terms in total. Surely we could be more efficient by avoiding repeating the same calculations again and again.
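
One way to do that, in the spirit of the top-down approach mentioned in the comments, is memoization: keep the recursive calls, but store each result the first time it is computed so that every term is calculated only once. Here is a minimal sketch in C (the memo array, its size limit, and the function name are illustrative):

#define MAXN 1000

int memo[MAXN];                    /* memo[k] == 0 means "not computed yet" */

int fib_memo(int n) {
    if (n < 2)
        return n;                  /* base cases: fib(0) = 0, fib(1) = 1 */
    if (memo[n] != 0)
        return memo[n];            /* already known: reuse it instead of recursing */
    memo[n] = fib_memo(n - 2) + fib_memo(n - 1);
    return memo[n];
}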

In practice, dynamic programming algorithms most often amount to filling an array with the solutions to the subproblems. Once you have identified the list of subproblems and the array, there might not be much incentive to go "top-down" with recursive calls that start with the largest problem and successively break it down into smaller subproblems. Instead, you can fill the array "bottom-up", starting from the most trivial subproblems and using them to solve the more complex ones, until you have made it up to the problem you originally wanted to solve. In the case of the Fibonacci sequence, you might end up with the following code:

int fib(int n) {
    if (n < 2)
        return n;              /* fib(0) = 0, fib(1) = 1 */
    int f[n + 1];              /* f[k] will hold fib(k) */
    f[0] = 0;
    f[1] = 1;
    for (int k = 2; k <= n; k++)
        f[k] = f[k - 2] + f[k - 1];
    return f[n];
}

However, in the case of the fibonacci sequence, you only need to remember the last two terms at any time, so instead of filling a full array with all terms from fib(0) to fib(n), you can simply keep two variables (or a size-2 array) with the previous two results. Arguably this is still dynamic programming, although it ends up simply being a loop that calculates the terms of the sequence in order and it's hard to see anything "dynamic" about it.
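
A minimal sketch of that two-variable version (the function name is illustrative):

int fib_two_vars(int n) {
    int prev = 0, curr = 1;            /* fib(0) and fib(1) */
    for (int k = 2; k <= n; k++) {
        int next = prev + curr;        /* fib(k) = fib(k-2) + fib(k-1) */
        prev = curr;
        curr = next;
    }
    return n == 0 ? 0 : curr;          /* fib(0) needs to be handled separately */
}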
