Dynamic Programming
Telerik Academy Alpha
DSA
Table of contents
Dynamic programming
What and why?
- Dynamic programming is when you use past knowledge to make solving a future problem easier
- Solving a complex problem means:
- Breaking it down into a collection of simpler sub-problems
- Solving each of those sub-problems just once
- Storing their solutions
How it works
- How does dynamic programming (DP) work?
- Approach to solve problems
- Store partial solutions of the smaller problems
- Usually they are solved bottom-up
Steps
- Steps to designing a DP algorithm:
- Characterize optimal substructure
- Recursively define the value of an optimal solution
- Compute the value bottom up
- (if needed) Construct an optimal solution
Elements of DP
- DP has the following characteristics
- Simple sub-problems
- We break the original problem into smaller sub-problems that have the same structure
- Optimal substructure of the problem
- The optimal solution to the problem contains within it optimal solutions to its sub-problems
- Overlapping sub-problems
- There are places where we solve the same sub-problem more than once
DP vs Divide and Conquer
- Using Divide-and-Conquer to solve problems (that can be solved using DP) is inefficient
- Because the same common sub-problems have to be solved many times
- DP solves each of them only once and stores its answer in a table for future use
- This technique is known as memoization
Example
- In many DP problems there is a moving object with some restrictions
- For example: in how many ways can you reach the bottom-right corner of a grid from the top-left?
- You can move only right and down
- Some cells are unreachable
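The grid example above can be sketched as a small DP: the number of ways to reach a cell is the sum of the ways to reach the cell above it and the cell to its left. This is a minimal sketch; the `blocked` array (true = unreachable cell) is an illustrative name, not from the slides.

```csharp
using System;

class GridPaths
{
    // ways[r, c] = number of paths from the top-left cell to (r, c),
    // moving only right and down; blocked cells contribute 0 ways.
    static long CountPaths(bool[,] blocked)
    {
        int rows = blocked.GetLength(0), cols = blocked.GetLength(1);
        long[,] ways = new long[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
            {
                if (blocked[r, c]) continue;                  // unreachable: 0 ways
                if (r == 0 && c == 0) { ways[r, c] = 1; continue; }
                if (r > 0) ways[r, c] += ways[r - 1, c];      // arrive from above
                if (c > 0) ways[r, c] += ways[r, c - 1];      // arrive from the left
            }
        return ways[rows - 1, cols - 1];
    }

    static void Main()
    {
        Console.WriteLine(CountPaths(new bool[3, 3]));  // 6 paths in an open 3x3 grid
    }
}
```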
Fibonacci
Divide and Conquer
vs Dynamic Programming
Divide and Conquer Approach
- How can we find the n-th Fibonacci number using recursion ("divide and conquer")?
Directly applying the recurrence formula:
decimal Fibonacci(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    return Fibonacci(n - 1) + Fibonacci(n - 2);
}
Fibonacci and Memoization
- We can save the result of each function call
- Every time we call the function, we check whether the value has already been calculated
- This saves a lot of useless calculations!
decimal[] memo = new decimal[1000]; // cache; 0 means "not calculated yet"

decimal Fibonacci(int n)
{
    if (memo[n] != 0) return memo[n];
    if (n == 0) return 0;
    if (n == 1) return 1;
    memo[n] = Fibonacci(n - 1) + Fibonacci(n - 2);
    return memo[n];
}
Fibonacci and DP
- How to find the n-th Fibonacci number using the dynamic programming approach?
- We can start solving the Fibonacci problem from bottom-up calculating partial solutions
- We know the answer for the 0-th and the 1-st number of the Fibonacci sequence
- We know the formula to calculate each of the next numbers: \( F_i = F_{i-1} + F_{i-2} \)
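The bottom-up approach can be sketched as follows: start from the known base cases \( F_0 = 0 \) and \( F_1 = 1 \) and fill a table upward using the recurrence.

```csharp
using System;

class FibonacciDP
{
    // Bottom-up DP: fill fib[0..n] from the known base cases upward,
    // so each value is computed exactly once.
    static decimal Fibonacci(int n)
    {
        if (n < 2) return n;
        decimal[] fib = new decimal[n + 1];
        fib[0] = 0;
        fib[1] = 1;
        for (int i = 2; i <= n; i++)
            fib[i] = fib[i - 1] + fib[i - 2];  // F(i) = F(i-1) + F(i-2)
        return fib[n];
    }

    static void Main()
    {
        Console.WriteLine(Fibonacci(36));  // 14930352
    }
}
```

Only the previous two values are ever read, so the table could even be replaced by two variables; the full array is kept here to mirror the "store partial solutions" idea.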
Compare Fibonacci Solutions
- Recursive solution
- Complexity: ~\( O(\varphi^n) = O(1.618^n) \)
- DP or memoization solution
- Complexity: ~\( O(n) \)
- The dynamic programming solution is much faster than the recursive solution
- If we want to find the 36th Fibonacci number:
- The dynamic programming solution takes ~36 steps
- The recursive solution takes ~48,315,633 steps
Subset Sum
Subset sum
- Given a set of integers, is there a non-empty subset whose sum is zero?
- Given a set of integers and an integer \( S \), does any non-empty subset sum to \( S \)?
- Given a set of integers, find all possible sums
- Can a set of coins be split into two parts of equal value?
Subset sum
- Solving the subset sum problem:
- numbers = \( \{3,5,-1,4,2\} \), \( sum = 6 \)
- start with possible = \( \{0\} \)
- Step 1: obtain all possible sums of \( \{3\} \)
- possible = \( \{0\} \cup \{0+3\} = \{0,3\} \)
- Step 2: obtain all possible sums of \( \{3,5\} \)
- possible = \( \{0,3\} \cup \{0+5,3+5\} = \{0,3,5,8\} \)
- Step 3: obtain all possible sums of \( \{3,5,-1\} \)
- possible = \( \{0,3,5,8\} \cup \{0-1,3-1,5-1,8-1\} = \{-1,0,2,3,4,5,7,8\} \)
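The step-by-step construction above can be sketched directly: after processing each number, the set holds every sum reachable by some subset of the numbers seen so far (0 corresponds to the empty subset).

```csharp
using System;
using System.Collections.Generic;

class SubsetSum
{
    // For each number, extend every already-reachable sum by that number
    // and merge the results back in - exactly the steps shown above.
    static HashSet<int> AllSums(int[] numbers)
    {
        var possible = new HashSet<int> { 0 };  // the empty subset sums to 0
        foreach (int number in numbers)
        {
            var withNumber = new HashSet<int>();
            foreach (int sum in possible)
                withNumber.Add(sum + number);
            possible.UnionWith(withNumber);
        }
        return possible;
    }

    static void Main()
    {
        var sums = AllSums(new[] { 3, 5, -1, 4, 2 });
        Console.WriteLine(sums.Contains(6));  // True - e.g. 4 + 2 = 6
    }
}
```

Note the candidate sums are collected in a separate set first, so a number is not reused within the same step.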
Subset sum - live demo
Longest Increasing Subsequence
Longest Increasing Subsequence
- Find a subsequence of a given sequence in which the subsequence elements are in increasing order, and in which the subsequence is as long as possible
- This subsequence is not necessarily contiguous nor unique
- The longest increasing subsequence problem is solvable in \( O(n \log n) \) time [more info]
- We will review one simple DP algorithm with complexity \( O(n^2) \)
- Example: 1, 8, 2, 7, 3, 4, 1, 6
Longest Increasing Subsequence
LIS - Dynamic table
LIS - Restore sequence
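The simple \( O(n^2) \) algorithm mentioned above can be sketched like this: for each position, store the length of the longest increasing subsequence ending there, building on all smaller elements to its left.

```csharp
using System;

class Lis
{
    // length[i] = length of the longest increasing subsequence that
    // ends at index i; the answer is the maximum over all positions.
    static int LongestIncreasingSubsequence(int[] sequence)
    {
        if (sequence.Length == 0) return 0;
        int[] length = new int[sequence.Length];
        int best = 1;
        for (int i = 0; i < sequence.Length; i++)
        {
            length[i] = 1;  // the element on its own is an increasing subsequence
            for (int j = 0; j < i; j++)
                if (sequence[j] < sequence[i] && length[j] + 1 > length[i])
                    length[i] = length[j] + 1;  // extend the best chain ending at j
            best = Math.Max(best, length[i]);
        }
        return best;
    }

    static void Main()
    {
        // The example from the slides: the LIS is 1, 2, 3, 4, 6 (length 5)
        Console.WriteLine(LongestIncreasingSubsequence(new[] { 1, 8, 2, 7, 3, 4, 1, 6 }));
    }
}
```

To restore the sequence itself, one would additionally record a predecessor index for each position and walk it back from the best end, as the "Restore sequence" slide suggests.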
Longest Common Subsequence
Longest Common Subsequence
- Given two sequences \( x[1..m] \) and \( y[1..n] \), find their longest common subsequence (LCS)
- For example if we have
- x="ABCBDAB"
- y="BDCABA"
- longest common subsequence will be "BCBA"
Initial LCS table
- To compute the LCS efficiently using dynamic programming we start by constructing a table in which we build up partial results
- S1 = GCCCTAGCG
- S2 = GCGCAATG
Initial LCS table
- We'll fill up the table from top to bottom, and from left to right
- Each cell = the length of an LCS of the two string prefixes up to that row and column
- Each cell will contain a solution to a sub-problem of the original problem
LCS table – base cases filled in
- Each empty string has nothing in common with any other string, therefore the 0-length strings will have values 0 in the LCS table
- Integer arrays in C# are filled with 0 by default, so we're good to go
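Filling that table top-to-bottom, left-to-right can be sketched as follows: on a character match, extend the diagonal value; otherwise take the better of dropping a character from either string.

```csharp
using System;

class Lcs
{
    // table[i, j] = LCS length of the prefixes s1[0..i) and s2[0..j).
    // Row 0 and column 0 stay 0 - the base cases for empty prefixes,
    // which C# int arrays give us for free.
    static int LcsLength(string s1, string s2)
    {
        int[,] table = new int[s1.Length + 1, s2.Length + 1];
        for (int i = 1; i <= s1.Length; i++)
            for (int j = 1; j <= s2.Length; j++)
                table[i, j] = s1[i - 1] == s2[j - 1]
                    ? table[i - 1, j - 1] + 1                      // extend the match
                    : Math.Max(table[i - 1, j], table[i, j - 1]);  // drop one character
        return table[s1.Length, s2.Length];
    }

    static void Main()
    {
        Console.WriteLine(LcsLength("ABCBDAB", "BDCABA"));  // 4 ("BCBA")
    }
}
```

Reconstructing the subsequence itself (the "reconstruct" demo) walks the table back from the bottom-right cell, following matches diagonally.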
LCS - live demo
LCS - live demo - reconstruct
Summary
Summary
- Divide-and-conquer method for algorithm design
- Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms
- Dynamic programming is applicable when the sub-problems are dependent, that is, when sub-problems share sub-sub-problems
- Recurrent functions can be solved efficiently
- The Longest increasing subsequence and Longest common subsequence problems can be solved efficiently using the dynamic programming approach
DP Applications
- Mathematical, financial and economic optimizations
- Optimal consumption and saving
- The core idea of DP is to avoid repeated work by remembering partial results. This is a very common technique whenever performance problems arise
- Bioinformatics
- sequence alignment, protein folding, RNA structure prediction and protein-DNA binding
DP Applications
- Control theory, information theory
- Operations research, decision making
- Computer science:
- Theory, Graphics, AI
- Markov chains
- Spelling correction
Questions?
[C# DSA] Dynamic Programming
By telerikacademy