Daniel Sutantyo
Department of Computing
Macquarie University
what have we done so far
We have an unlimited supply of metallic bars, each of a specific length.
We need to produce a bar of a given length from the bars we have. We cannot cut any bar, but we can solder any two bars together to create a longer bar.
f(1) + f(264),  f(2) + f(263),  f(3) + f(262),  ...,  f(264) + f(1)
Given [ 100, 50, 25, 1 ], can we make 265?
[Recursion tree for f(265) with bars [ 100, 50, 25, 1 ]: the root expands to f(165), f(215), f(240), f(264); further levels reach f(65), f(115), f(140), f(164), f(190), f(214), f(239), f(263), and so on; subproblems such as f(115) and f(15) appear many times.]
[Diagram: splitting f(265) into two halves, f(132) and f(133).]
Given [ 100, 50, 25, 1 ], what is the minimum number of bars to make 265?
[Diagram: splitting f(265) into f(100) and f(165).]
[Recursion tree for the minimum-bars problem: the same branching structure as the tree above, with the same overlapping subproblems (e.g. f(15) and f(115)) appearing repeatedly.]
optimal substructure
problem definition
Given bar lengths \(b_1, b_2, \dots, b_k\) and a target length \(N\), find non-negative integers \(x_1, x_2, \dots, x_k\) such that
\[\sum_{i=1}^k x_i\]
is minimised and
\[\sum_{i=1}^k b_ix_i = N\]
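For example (this instance is not spelled out on the slides), with bars \([100, 50, 25, 1]\) and \(N = 265\), one choice is
\[(x_1, x_2, x_3, x_4) = (2, 1, 0, 15), \qquad 2\cdot 100 + 1\cdot 50 + 0\cdot 25 + 15\cdot 1 = 265, \qquad \sum_{i=1}^4 x_i = 18,\]
and 18 is in fact the minimum: the bars 100, 50 and 25 only contribute multiples of 25, so at least \(265 \bmod 25 = 15\) unit bars are needed, and the remaining 250 needs at least \(\lceil 250/100 \rceil = 3\) bars.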
optimal substructure
\(\text{minB}(n)\)
\(1+\text{minB}(n-100)\)
\(1+\text{minB}(n-50)\)
\(1+\text{minB}(n-25)\)
\(1+\text{minB}(n-1)\)
showing optimal substructure
\(\text{minB}(n)\)
\(1+\text{minB}(n-b_1)\)
\(1+\text{minB}(n-b_2)\)
\(\dots\)
\(1+\text{minB}(n-b_k)\)
showing optimal substructure
showing the recursive relation that gives the optimal solution
\[\text{minB}(n) = \begin{cases}\displaystyle\min_{i\ :\ b_i\le n}\left(1 +\text{minB}(n-b_i)\right) & \text{if $n > 0$}\\ \quad \quad \quad\quad 0 & \text{otherwise}\end{cases}\]
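The relation above can be evaluated directly with memoisation. The following is a minimal Java sketch, not taken from the lecture code; the bar lengths, the class and method names, and the use of -1 as an "unsolved" marker are assumptions for illustration.

import java.util.Arrays;

class MinBars {
    static int[] bars = {100, 50, 25, 1};   // assumed bar lengths b_1, ..., b_k
    static int[] memo;                      // memo[n] = minB(n), or -1 if not yet computed

    // minB(n): minimum number of bars whose lengths sum to exactly n
    static int minB(int n) {
        if (n == 0) return 0;
        if (memo[n] != -1) return memo[n];
        int best = Integer.MAX_VALUE;
        for (int b : bars) {                       // try every case 1 + minB(n - b_i)
            if (b <= n) {
                int sub = minB(n - b);
                if (sub != Integer.MAX_VALUE)
                    best = Math.min(best, 1 + sub);
            }
        }
        return memo[n] = best;                     // MAX_VALUE means n cannot be made at all
    }

    public static void main(String[] args) {
        int target = 265;
        memo = new int[target + 1];
        Arrays.fill(memo, -1);
        System.out.println(minB(target));          // 18 (100 + 100 + 50 + fifteen 1s)
    }
}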
do you know which case gives you the optimal answer?
min vertex cover
shortest path
[Graph diagrams for the two problems above; the edge weights shown are 1, 1, 15, 5, 1, 1, 1.]
Greedy Algorithms vs Dynamic Programming
[Example weighted graph with edge weights 4, 7, 4, 5, 1, 3, 5, 2, 3, 2.]
The optimal choice is to pick the edge with the smallest weight
greedy-choice property
\[\text{minC}(n) = \begin{cases}\displaystyle\min_{i\ :\ c_i\le n}\left(1 +\text{minC}(n-c_i)\right) & \text{if $n > 0$}\\ \quad \quad \quad\quad 0 & \text{otherwise}\end{cases}\]
do you know which case gives you the optimal answer?
\(C = \{c_1,c_2,c_3,c_4\} =\{1,25,50,100\}\)
\(1+\text{minC}(n-100)\)
\(1+\text{minC}(n-50)\)
\(1+\text{minC}(n-25)\)
\(1+\text{minC}(n-1)\)
\(\text{minC}(n)\)
greedy-choice property
\[\text{minC}(n) = \begin{cases}\displaystyle\min_{i\ :\ c_i\le n}\left(1 +\text{minC}(n-c_i)\right) & \text{if $n > 0$}\\ \quad \quad \quad\quad 0 & \text{otherwise}\end{cases}\]
\(C = \{c_1, c_2, \dots, c_k\}\)
\(1+\text{minC}(n-c_1)\)
\(1+\text{minC}(n-c_2)\)
\(\dots\)
\(1+\text{minC}(n-c_k)\)
\(\text{minC}(n)\)
\(\dots\)
greedy-choice property
\(C = \{c_1,c_2,c_3,c_4\} =\{1, 7, 10, 20\}\)
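To see why the greedy-choice property fails for this coin set, here is a small Java sketch (not from the slides; the method names and the choice of \(n = 14\) are illustrative). The greedy strategy of always taking the largest coin that fits returns 10 + 1 + 1 + 1 + 1 (5 coins) for \(n = 14\), while the recurrence \(\text{minC}\) finds 7 + 7 (2 coins).

import java.util.Arrays;

class GreedyVsOptimal {
    // greedy: repeatedly take the largest coin that still fits
    static int greedy(int[] coins, int n) {        // coins sorted in decreasing order
        int count = 0;
        for (int c : coins) {
            count += n / c;
            n %= c;
        }
        return count;                              // assumes coins contains 1, so n ends at 0
    }

    // minC(n) computed bottom-up from the recurrence above
    static int optimal(int[] coins, int n) {
        int[] dp = new int[n + 1];
        Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0;
        for (int i = 1; i <= n; i++)
            for (int c : coins)
                if (c <= i && dp[i - c] != Integer.MAX_VALUE)
                    dp[i] = Math.min(dp[i], 1 + dp[i - c]);
        return dp[n];
    }

    public static void main(String[] args) {
        int[] coins = {20, 10, 7, 1};
        System.out.println(greedy(coins, 14));     // 5  (10 + 1 + 1 + 1 + 1)
        System.out.println(optimal(coins, 14));    // 2  (7 + 7)
    }
}

For \(C = \{1, 25, 50, 100\}\), on the other hand, the greedy strategy and the recurrence agree, since each coin divides the next larger one; this is what the greedy-choice property asserts.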
problem definition
[Timeline of the activities from time 0 to 16.]
activity:  1  2  3  4  5  6  7  8  9  10  11
start:     1  3  0  5  3  5  6  8  8   2  12
finish:    4  5  6  7  9  9 10 11 12  14  16
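For later reference, the same data written as Java arrays. The 1-indexing, with a dummy entry at index 0 so that s[i] and f[i] refer to activity i, is an assumption chosen to match the indexing used by the code at the end of this section; it also provides the sentinel f[0] = 0 needed by the recursive version.

class ActivityData {
    // activities 1..11, sorted by finish time; index 0 is a dummy (f[0] = 0)
    static int[] s = {0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12};      // start times
    static int[] f = {0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16};  // finish times
}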
formal problem statement
Given a set \(S = \{a_1, a_2, \dots, a_n\}\) of activities, where activity \(a_i\) has start time \(s_i\) and finish time \(f_i\), find a maximum-size subset of mutually compatible activities (two activities are compatible if their time intervals do not overlap).
developing a greedy solution
problem and subproblem
\(\{a_1, a_2, \dots, a_n\}\)
\(\{a_1, a_2, \dots, a_n\} \setminus \{a_1\}\)
\(\{a_1, a_2, \dots, a_n\} \setminus \{a_2\}\)
\(\{a_1, a_2, \dots, a_n\} \setminus \{a_{n-1}\}\)
\(\{a_1, a_2, \dots, a_n\} \setminus \{a_n\}\)
\(\dots\dots\)
problem and subproblem
[Timeline from 0 to 16 highlighting the subproblem \(S_{1,11}\).]
problem and subproblem
\(\text{select}(S_{0,12})\)
\(\text{select}(S_{0,1}) + \text{select}(S_{1,12})\)
\(\text{select}(S_{0,11}) + \text{select}(S_{11,12})\)
\(\text{select}(S_{0,2}) + \text{select}(S_{2,12})\)
\(\text{select}(S_{0,10}) + \text{select}(S_{10,12})\)
pick \(a_1\)
pick \(a_2\)
pick \(a_{10}\)
pick \(a_{11}\)
\(\dots\)
problem and subproblem
\(\text{select}(S_{i,j})\)
\(\text{select}(S_{i,i+1}) + \text{select}(S_{i+1,j})\)
\(\text{select}(S_{i,j-1}) + \text{select}(S_{j-1,j})\)
\(\text{select}(S_{i,i+2}) + \text{select}(S_{i+2,j})\)
\(\text{select}(S_{i,k}) + \text{select}(S_{k,j})\)
pick \(a_{i+1}\)
pick \(a_{j-1}\)
pick \(a_{i+2}\)
\(\dots\)
\(\dots\)
pick \(a_{k}\)
optimal substructure
[Timeline from 0 to 16 highlighting \(S_{1,11}\) again.]
recursive relation
\[|A_{i,j}| = \begin{cases}\displaystyle{\max_{a_k\in S_{i,j}}} \left(|A_{i,k}| + 1 + |A_{k,j}|\right) & \text{if $S_{i,j} \ne \emptyset$}\\ 0 & \text{if $S_{i,j} = \emptyset$} \end{cases}\]
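Before applying the greedy insight, this recurrence can be evaluated directly with memoisation. The sketch below is not from the lecture code; it assumes the arrays from the table earlier, extended with sentinel activities \(a_0\) (finish time 0) and \(a_{12}\) (start time \(\infty\)), that the activities are sorted by finish time, and that \(S_{i,j}\) contains the activities \(a_k\) with \(s_k \ge f_i\) and \(f_k \le s_j\).

class ActivitySelectionDP {
    // activities 1..11 plus sentinels a_0 and a_12, sorted by finish time
    static int[] s = {0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12, Integer.MAX_VALUE};
    static int[] f = {0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16, Integer.MAX_VALUE};
    static Integer[][] memo = new Integer[s.length][s.length];

    // |A_{i,j}|: size of a largest set of mutually compatible activities within S_{i,j}
    static int A(int i, int j) {
        if (memo[i][j] != null) return memo[i][j];
        int best = 0;                               // 0 if S_{i,j} is empty
        for (int k = i + 1; k < j; k++)             // candidates a_k lie between i and j
            if (s[k] >= f[i] && f[k] <= s[j])       // a_k belongs to S_{i,j}
                best = Math.max(best, A(i, k) + 1 + A(k, j));
        return memo[i][j] = best;
    }

    public static void main(String[] args) {
        System.out.println(A(0, 12));               // 4, e.g. activities {1, 4, 8, 11}
    }
}

Note that this tries every possible \(a_k\), which is exactly the work the greedy-choice property lets us avoid.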
greedy choice property
[Timeline from 0 to 16, repeated over several slides, illustrating the greedy choice: among the remaining activities, pick the one with the earliest finish time.]
show that only one subproblem remains
show that it is safe to make the greedy choice
recursive solution
// int s[] is the array containing start times
// int f[] is the array containing finish times
HashSet<Integer> select(int k, int n){
    // find the first activity that starts after activity k has finished
    int m = k+1;
    while (m <= n && s[m] < f[k])
        m++;
    if (m <= n){
        HashSet<Integer> ans = new HashSet<>(select(m,n));
        ans.add(m);
        return ans;
    }
    return new HashSet<>();   // no compatible activity remains
}
recursive to iterative
// int s[] is the array containing start times
// int f[] is the array containing finish times
HashSet<Integer> select(){
    int n = s.length;
    HashSet<Integer> answer = new HashSet<Integer>();
    answer.add(1);                 // activity 1 has the earliest finish time
    int k = 1;                     // k is the last activity selected
    for(int m = 2; m < n; m++){
        if (s[m] >= f[k]){         // a_m is compatible with a_k
            answer.add(m);
            k = m;
        }
    }
    return answer;
}
What is the complexity of this algorithm?