COMP333

Algorithm Theory and Design

 

 

Daniel Sutantyo

Department of Computing

Macquarie University

Lecture slides adapted from lectures given by Frank Cassez, Mark Dras, and Bernard Mans

Summary

  • Algorithm complexity (running time, recursion tree)
  • Algorithm correctness (induction, loop invariants)
  • Problem solving methods:
    • exhaustive search
    • dynamic programming
    • greedy method
    • divide-and-conquer
    • probabilistic method
    • algorithms involving strings
    • algorithms involving graphs
  • Problems and subproblems
  • Brute force
  • Divide and Conquer
  • Dynamic Programming

Problem Solving

what have we done so far

We have an infinite number of metallic bars with specific lengths.

We need to produce a bar of a certain length from the bars that we have. We cannot cut up any metallic bar, but we can solder any two bars together to create a longer one.

  • Example: Given [ 100, 50, 25, 1 ]
    • can we make 265?
    • what is the minimum number of bars that we need?


Brute Force

  • Example: Given [ 100, 50, 25, 1 ]
    • can we make 265?
  • Brute force solution:
    • Try every single combination (see the sketch after this list)!
      • including [1,1,1,1,1,1,1,...,1] (265 times)
    • Pruning
      • e.g. we won't try [1,1,1,1,1,1,1,...,1] with 266 or more 1s, as that already exceeds 265
  • Change the question slightly: list all the possible combinations!
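As a sketch, "try every single combination" can be written as a short recursion; this is our own illustration, not code from the unit (the name canMake and the bar array are assumptions):

// Brute force (a sketch): try every bar that still fits and recurse on the
// remaining length; a combination is never extended past the target.
static final int[] BARS = {100, 50, 25, 1};

static boolean canMake(int n) {
  if (n == 0) return true;           // we have soldered exactly the right length
  for (int b : BARS)
    if (b <= n && canMake(n - b))    // use a bar of length b, then make the rest
      return true;
  return false;
}

Since 1 is one of the lengths, canMake(265) is trivially true; the interesting part is the size of the recursion tree, shown on the next slides.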

Brute Force

  • find f(265) by trying every way of splitting it into two parts:

[recursion tree: f(265) branches into f(1) + f(264), f(2) + f(263), f(3) + f(262), ..., f(264) + f(1)]

Given [ 100, 50, 25, 1 ], can we make 265?

Brute Force

[recursion tree: f(265) branches into f(165), f(215), f(240), f(264), one branch per bar length; each of these branches again, e.g. f(165) into f(65), f(115), f(140), f(164); subproblems such as f(115) and f(15) appear in several different branches]

Given [ 100, 50, 25, 1 ], can we make 265?

Brute Force

We have an infinite number of metallic bars with specific lengths.

We need to produce a bar of a certain length from the bars that we have. We cannot cut up any metallic bar, but we can solder any two bars together to create a longer one.

  • Example: Given [ 100, 50, 25, 1 ]
    • can we make 265?
    • what is the minimum number of bars that we need?
    • can we use a divide-and-conquer algorithm?

Divide and Conquer

Divide and Conquer

[recursion tree: f(265) splits into f(132) and f(133)]

Given [ 100, 50, 25, 1 ], can we make 265?

Divide and Conquer

[recursion tree: f(265) splits into f(132) and f(133)]

Given [ 100, 50, 25, 1 ], what is the minimum number of bars to make 265?

Divide and Conquer

[recursion tree: f(265) splits into f(100) and f(165)]

Given [ 100, 50, 25, 1 ], what is the minimum number of bars to make 265?

Divide and Conquer

[recursion tree: the same tree as before: f(265) branches into f(165), f(215), f(240), f(264), and so on down to repeated subproblems such as f(115) and f(15)]

Given [ 100, 50, 25, 1 ], what is the minimum number of bars to make 265?

We have an infinite number of metallic bars with specific lengths.

We need to produce a bar of a certain length from the bars that we have. We cannot cut up any metallic bar, but we can solder any two bars together to create a longer one.

  • Example: Given [ 100, 50, 25, 1 ]
    • can we make 265?
    • what is the minimum number of bars that we need?

Dynamic Programming

Dynamic Programming

[recursion tree: the same tree again: f(265) branches into f(165), f(215), f(240), f(264), and so on; the repeated subproblems such as f(115) and f(15) are the overlapping subproblems]

Given [ 100, 50, 25, 1 ], what is the minimum number of bars to make 265?

Dynamic Programming

  • Requires:
    • Optimal substructure
    • Overlapping subproblems (see the sketch after this list)
  • Developing a dynamic programming solution:
    1. Show that there is an optimal substructure
    2. Show the recursive relation that gives the optimal solution
    3. Compute the value of an optimal solution
    4. (optional) Construct an optimal solution from computed information
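To see what the overlapping subproblems buy us, here is a hedged sketch: the same recursion as the brute force, but with every result cached (the names minB and CACHE are ours):

import java.util.HashMap;
import java.util.Map;

// Top-down dynamic programming (memoisation): each subproblem is solved once
// and cached, so a repeated subproblem such as f(15) costs O(1) after its
// first visit.
static final int[] BARS = {100, 50, 25, 1};
static final Map<Integer, Integer> CACHE = new HashMap<>();

static int minB(int n) {
  if (n == 0) return 0;
  Integer hit = CACHE.get(n);
  if (hit != null) return hit;
  int best = Integer.MAX_VALUE;
  for (int b : BARS)
    if (b <= n) {
      int sub = minB(n - b);
      if (sub != Integer.MAX_VALUE)
        best = Math.min(best, 1 + sub);
    }
  CACHE.put(n, best);
  return best;
}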

Dynamic Programming

optimal substructure

  • A problem exhibits optimal substructure if the optimal solution to the problem can be constructed using optimal solutions to the subproblems
  • Let us define the problem more formally

Dynamic Programming

problem definition

  • Input:
    • A set of distinct positive integers \(B = \{b_1,b_2,\dots,b_k\}\) arranged in increasing order, representing the lengths of the bars
    • a positive integer \(N\)
  • Output:
    • A set of non-negative integers \(\{x_1,x_2,\dots,x_k\}\) where \(x_i\) represents the number of bars of length \(b_i\) we use such that

\[\sum_{i=1}^k x_i\]

is minimised and

\[\sum_{i=1}^k b_ix_i = N\]

Dynamic Programming

optimal substructure

  • Let \(B = \{ 1, 25, 50, 100\}\) be a set containing the lengths of the bars
  • Let \(\text{minB}(n)\) be the function that returns the minimum number of bars to represent a positive integer \(n\)
  • We have to pick at least one bar to form a solution, so to find \(\text{minB}(n)\), we have four subproblems:
    • if we pick a bar of length 100: \(1 + \text{minB}(n-100)\)
    • if we pick a bar of length 50: \(1 + \text{minB}(n-50)\)
    • if we pick a bar of length 25: \(1 + \text{minB}(n-25)\)
    • if we pick a bar of length 1: \(1 + \text{minB}(n-1)\)

Dynamic Programming

optimal substructure

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-100)\), \(1+\text{minB}(n-50)\), \(1+\text{minB}(n-25)\), \(1+\text{minB}(n-1)\)]

Dynamic Programming

showing optimal substructure

  • Suppose n = 265, and we know that the optimal solution is 18 bars:
    • 2 bars of length 100,
    • 1 bar of length 50,
    • 15 bars of length 1
  • which branch gives you this solution?

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-100)\), \(1+\text{minB}(n-50)\), \(1+\text{minB}(n-25)\), \(1+\text{minB}(n-1)\)]

Dynamic Programming

showing optimal substructure

  • Suppose our algorithm returns the 2nd branch as the optimal path, so our subproblem is minB(215)
    • minB(215) must be optimal as well, i.e. 17 is the optimal answer for 215
    • if the solution we use for minB(215) is not optimal, e.g. there is one using only 13 bars, then we could construct a solution for 265 with 14 bars, contradicting the optimality of 18

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-100)\), \(1+\text{minB}(n-50)\), \(1+\text{minB}(n-25)\), \(1+\text{minB}(n-1)\)]

Dynamic Programming

showing optimal substructure

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-b_1)\), \(1+\text{minB}(n-b_2)\), ..., \(1+\text{minB}(n-b_k)\)]

  • To generalise: since \(\text{minB}(n-b_i)\) is a subproblem of \(\text{minB}(n)\), the optimal solution to \(\text{minB}(n)\) may use the solution to \(\text{minB}(n-b_i)\)
  • If the solution we use for \(\text{minB}(n-b_i)\) is not optimal, then there is a better solution to \(\text{minB}(n-b_i)\)
    • this means our solution to \(\text{minB}(n)\) is not optimal, which is a contradiction

Dynamic Programming

showing optimal substructure

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-b_1)\), \(1+\text{minB}(n-b_2)\), ..., \(1+\text{minB}(n-b_k)\)]

  • We have optimal substructure, so we know we can use the optimal solutions to the subproblems to construct the optimal solution to the problem
    • but how? what is the relationship between the problem and the subproblems? how do we choose?

Dynamic Programming

showing the recursive relation that gives the optimal solution

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-b_1)\), \(1+\text{minB}(n-b_2)\), ..., \(1+\text{minB}(n-b_k)\)]

Dynamic Programming

showing the recursive relation that gives the optimal solution

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-b_1)\), \(1+\text{minB}(n-b_2)\), ..., \(1+\text{minB}(n-b_k)\)]

\[\text{minB}(n) = \begin{cases}\displaystyle\min_{i\ :\ b_i\le n}\left(1 +\text{minB}(n-b_i)\right) & \text{if $n > 0$}\\ \quad \quad \quad\quad 0 & \text{otherwise}\end{cases}\]

Dynamic Programming

showing the recursive relation that gives the optimal solution

[recursion tree: \(\text{minB}(n)\) branches into \(1+\text{minB}(n-b_1)\), \(1+\text{minB}(n-b_2)\), ..., \(1+\text{minB}(n-b_k)\)]

\[\text{minB}(n) = \begin{cases}\displaystyle\min_{i\ :\ b_i\le n}\left(1 +\text{minB}(n-b_i)\right) & \text{if $n > 0$}\\ \quad \quad \quad\quad 0 & \text{otherwise}\end{cases}\]

  • Note, with dynamic programming, we don't know which path is going to give us the optimal solution, so we have to evaluate all of them

do you know which case gives you the optimal answer?
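A bottom-up reading of this recurrence, as a sketch (the method name minBars is ours; it fills a table from 0 up to N):

import java.util.Arrays;

// Bottom-up evaluation of the recurrence:
// minB[0] = 0, and minB[n] = min over all b_i <= n of (1 + minB[n - b_i]).
static int minBars(int[] b, int N) {
  int[] minB = new int[N + 1];
  Arrays.fill(minB, Integer.MAX_VALUE);
  minB[0] = 0;
  for (int n = 1; n <= N; n++)
    for (int bi : b)
      if (bi <= n && minB[n - bi] != Integer.MAX_VALUE)
        minB[n] = Math.min(minB[n], 1 + minB[n - bi]);
  return minB[N];   // Integer.MAX_VALUE means length N cannot be made
}

For example, minBars(new int[]{100, 50, 25, 1}, 265) returns 18; every branch is evaluated at every length, because we do not know in advance which one is optimal.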


Greedy Algorithm

  • Using dynamic programming is sometimes overkill: given the available choices, we may already know which one leads to the optimal solution
  • A greedy algorithm chooses the locally optimal solution in the hope that it contributes to the globally optimal solution
    • locally optimal: the best solution for the current step (ignoring the implications for future subproblems)
    • globally optimal: the best solution for the whole problem
  • It may not always work, but it can approximate the optimal solution

Greedy Algorithm

  • Examples of problems (that you already know) that can be solved using the greedy method:
    • Finding the minimum spanning tree
      • both Prim's and Kruskal's algorithms
    • Dijkstra's algorithm
    • The coin change problem
    • Fractional knapsack
  • Examples of problems (that you already know) that cannot be solved using the greedy method:
    • there are lots ...

Greedy Algorithm

min vertex cover

Greedy Algorithm

shortest path

[figure: a weighted graph with edge weights 1, 1, 15, 5, 1, 1, 1]

Dynamic Programming

  • Show that there is an optimal substructure
  • Show the recursive relation that gives the optimal solution
  • Compute the value of an optimal solution
  • (optional) Construct an optimal solution from computed information

Greedy Algorithm

  • Show that there is an optimal substructure
  • Show the recursive relation that gives the optimal solution
  • Show that if we make the greedy choice only one subproblem remains
  • Show that it is safe to make the greedy choice
  • Compute the value of an optimal solution
    • CLRS: recursive solution -> iterative solution
  • (optional) Construct an optimal solution from computed information

Greedy Algorithm

  • Key Ingredients:
    • Optimal substructure
    • Greedy-choice property
      • we can construct globally optimal solution by making locally optimal (greedy) choices

Greedy Algorithm

vs Dynamic Programming

  • Both require the problem to exhibit the optimal substructure property
  • Dynamic Programming evaluates every subproblem, whereas the Greedy approach chooses only one
    • Dynamic Programming can be top-down or bottom-up, Greedy is usually top-down
    • Greedy is easier to code
    • It is harder to prove the correctness of a greedy algorithm!

Greedy Algorithm

vs Dynamic Programming

  • Optimal substructure (how to show it):
    • Dynamic programming (at step i):
      • The optimal solution requires the solution to the subproblem, and so the solution to the subproblem must be optimal (otherwise we have a contradiction)
    • Greedy algorithm (at step i):
      • The choice that we make now must be part of an optimal solution: if an optimal solution made a different choice, we could exchange that choice for the greedy one without making the solution any worse

Greedy Algorithm

vs Dynamic Programming

  • Optimal substructure (informally):
    • Dynamic programming (at step i):
      • We find the optimal solutions to the subproblem, and this influences the choice that we make 
    • Greedy algorithm (at step i):
      • We make a choice now, then combine that with the optimal solution to the subproblem to give the optimal answer

Prim's Algorithm

[figure: a weighted graph with edge weights 4, 7, 4, 5, 1, 3, 5, 2, 3, 2]

Prim's Algorithm

[figure: the same weighted graph]

The optimal choice is to pick the edge with the smallest weight

Prim's Algorithm

greedy-choice-property

  • Proving the greedy-choice property:
    • Suppose the MST does not include the cheapest edge connected to the green node (which connects to the orange node), and instead we choose the edge connecting to the red node
    • In this MST, the red node must be connected to the orange node

Prim's Algorithm

greedy-choice-property

  • Proving the greedy-choice property (continued):
    • From this MST, discard the edge connecting the green node and the red node
    • Then add the edge connecting the orange node and the green node
    • We now have another spanning tree with a lower weight, which is a contradiction!

The Coin Changing Problem

\(C = \{c_1,c_2,c_3,c_4\} =\{1,25,50,100\}\)

\[\text{minC}(n) = \begin{cases}\displaystyle\min_{i\ :\ c_i\le n}\left(1 +\text{minC}(n-c_i)\right) & \text{if $n > 0$}\\ \quad \quad \quad\quad 0 & \text{otherwise}\end{cases}\]

[recursion tree: \(\text{minC}(n)\) branches into \(1+\text{minC}(n-100)\), \(1+\text{minC}(n-50)\), \(1+\text{minC}(n-25)\), \(1+\text{minC}(n-1)\)]

do you know which case gives you the optimal answer?

The Coin Changing Problem

greedy-choice property

  • The greedy choice:
    • if \(c_\ell \le n < c_{\ell+1}\), then choose \(c_\ell\) (or choose \(c_k\) if \(c_k \le n\))
    • if we do not take \(c_\ell\) (or \(c_k\)), then we need to replace it with smaller values:
      • if \(n \ge 100\) and we do not take 100, then we need to replace it with two 50s
      • if \(50 \le n < 100\) and we do not take 50, then we need to replace it with two 25s
      • if \(25 \le n < 50\) and we do not take 25, then we need to replace it with twenty-five 1s
    • in all cases, we end up with a worse solution, therefore the greedy choice is optimal
  • Why is the proof so long and specific?
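The greedy algorithm itself is short; a sketch, assuming the coin values are sorted in increasing order as in the slides (the method name greedyMinC is ours):

// Greedy coin changing: always take the largest coin that still fits.
static int greedyMinC(int[] coins, int n) {
  int count = 0;
  for (int i = coins.length - 1; i >= 0; i--) {
    count += n / coins[i];     // take as many of coin c_i as possible
    n %= coins[i];
  }
  return n == 0 ? count : -1;  // -1: no exact change (cannot happen when 1 is a coin)
}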

The Coin Changing Problem

greedy-choice property

\[\text{minC}(n) = \begin{cases}\displaystyle\min_{i\ :\ c_i\le n}\left(1 +\text{minC}(n-c_i)\right) & \text{if $n > 0$}\\ \quad \quad \quad\quad 0 & \text{otherwise}\end{cases}\]

\(C = \{c_1,c_2, \dots c_k\} \)

[recursion tree: \(\text{minC}(n)\) branches into \(1+\text{minC}(n-c_1)\), \(1+\text{minC}(n-c_2)\), ..., \(1+\text{minC}(n-c_k)\)]

  • Can you prove this for any \(C\)?

The Coin Changing Problem

greedy-choice property

\(C = \{c_1,c_2,c_3,c_4\} =\{1, 7, 10, 20\}\)

  • Use the greedy algorithm to find \(\text{minC}(22)\)
  • Use the greedy algorithm to find \(\text{minC}(15)\)
  • Use the greedy algorithm to find \(\text{minC}(14)\)
  • What conclusion can you draw?

The Coin Changing Problem

greedy-choice property

\(C = \{c_1,c_2,c_3,c_4\} =\{1, 7, 10, 20\}\)

  • Use the greedy algorithm to find \(\text{minC}(15)\)
    • greedy: 15 = 10 + 1 + 1 + 1 + 1 + 1 (6 coins)
    • optimal: 15 = 7 + 7 + 1 (3 coins)
  • Use the greedy algorithm to find \(\text{minC}(14)\)
    • greedy: 14 = 10 + 1 + 1 + 1 + 1 (5 coins)
    • optimal: 14 = 7 + 7 (2 coins)
  • Moral of the story: you don't always have the greedy-choice property!
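To see the failure concretely, we can compare the greedy sketch above with the bottom-up dynamic programming sketch from earlier (both method names are our own):

public static void main(String[] args) {
  int[] c = {1, 7, 10, 20};
  System.out.println(greedyMinC(c, 14));  // 5: greedy takes 10, then four 1s
  System.out.println(minBars(c, 14));     // 2: DP finds 7 + 7
}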


Activity-Selection Problem

problem definition

  • You have a list of \(n\) activities you can perform, but all of these activities use the same resource, so you cannot perform two of these activities at the same time
    • Examples:
      • one classroom for multiple classes
      • one doctor, multiple appointments
  • You want to do as many activities as you can; which ones should you choose?

Activity-Selection Problem

problem definition

[timeline figure, 0 to 16, showing the 11 activities; the same data as the table below]

activity:  1   2   3   4   5   6   7   8   9  10  11
start:     1   3   0   5   3   5   6   8   8   2  12
finish:    4   5   6   7   9   9  10  11  12  14  16

  • Let \(S = \{a_1, a_2, \dots, a_n\}\) be the set of \(n\) activities ordered by their finish times (in increasing order)
  • Each activity \(a_i\) has start time \(s_i\) and finish time \(f_i\), \(0 \le s_i < f_i\)
    • \(a_i\) takes place during half-open time interval \([s_i,f_i)\)
      • \(x \in [s_i,f_i)\) means \(s_i \le x < f_i\)
    • activities \(a_i\) and \(a_j\) are compatible if \([s_i,f_i)\) and \([s_j,f_j)\) do not overlap
      • i.e. \(f_i \le s_j\) or \(f_j \le s_i\)

Activity-Selection Problem

problem definition

  • Input:
    • The set \(S = \{a_1, a_2, \dots, a_n\}\) of \(n\) activities, sorted in increasing order of finish time
    • The start time \(s_i\) and finish time \(f_i\) for each activity \(a_i\)
  • ​Output:
    • A maximum-size subset \(A = \{a_{\ell_1}, a_{\ell_2}, \dots, a_{\ell_m}\} \subseteq S\) of compatible activities, where two activities \(a_i\) and \(a_j\) are compatible if \([s_i,f_i)\) and \([s_j,f_j)\) do not overlap, i.e. \(f_i \le s_j\) or \(f_j \le s_i\)


Activity-Selection Problem

developing a greedy solution

  • Show that there is an optimal substructure
  • Show the recursive relation that gives the optimal solution
  • Show that if we make the greedy choice only one subproblem remains
  • Show that it is safe to make the greedy choice
  • Compute the value of an optimal solution
    • CLRS: recursive solution -> iterative solution
  • (optional) Construct an optimal solution from computed information

Activity-Selection Problem

problem and subproblem

[diagram: the problem \(\{a_1, a_2, \dots, a_n\}\) branches into the subproblems \(\{a_1, a_2, \dots, a_n\} \setminus a_1\), \(\{a_1, a_2, \dots, a_n\} \setminus a_2\), ..., \(\{a_1, a_2, \dots, a_n\} \setminus a_n\)]

Activity-Selection Problem

problem and subproblem

  • Let \(S_{i,j}\) be the set of activities that start after activity \(a_i\) finishes and finish before activity \(a_j\) starts

[timeline figure highlighting \(S_{1,11}\): the activities that start after \(a_1\) finishes and finish before \(a_{11}\) starts]

Activity-Selection Problem

problem and subproblem

  • In our example, we want to find \(S_{0,12}\), where \(a_0\) and \(a_{12}\) are two made-up activities, respectively finishing before time 0 and starting at time 16

[recursion tree: \(\text{select}(S_{0,12})\) branches into \(\text{select}(S_{0,1}) + \text{select}(S_{1,12})\) if we pick \(a_1\), \(\text{select}(S_{0,2}) + \text{select}(S_{2,12})\) if we pick \(a_2\), ..., \(\text{select}(S_{0,11}) + \text{select}(S_{11,12})\) if we pick \(a_{11}\)]

Activity-Selection Problem

problem and subproblem

  • Of course we should generalise this:
    • if we pick \(a_k\), then we have the subproblems \(S_{i,k}\) and \(S_{k,j}\)

[recursion tree: \(\text{select}(S_{i,j})\) branches into \(\text{select}(S_{i,k}) + \text{select}(S_{k,j})\) for each possible pick \(a_k\), from \(a_{i+1}\) up to \(a_{j-1}\)]

Activity-Selection Problem

optimal substructure

  • Let \(A_{i,j}\) be the optimal solution to \(\text{select}(S_{i,j})\), that is, \(A_{i,j}\) is a maximum-size set of compatible activities in \(S_{i,j}\)
  • Let \(a_k \in A_{i,j}\)

[timeline figure highlighting \(S_{1,11}\)]

Activity-Selection Problem

optimal substructure

  • Let \(A_{i,j}\) be the optimal solution for \(S_{i,j}\) and \(a_k \in A_{i,j}\)
    • since we picked \(a_k\), we also have solutions for the subproblems \(S_{i,k}\) and \(S_{k,j}\)
  • Let \(A_{i,k} = A_{i,j} \cap S_{i,k}\) and \(A_{k,j} = A_{i,j}\cap S_{k,j}\) be the solutions to these subproblems
    • i.e. the chosen activities that finish before \(a_k\) starts, and those that start after \(a_k\) finishes
    • in other words: \(A_{i,j} = A_{i,k} \cup \{a_k\} \cup A_{k,j}\)
    • the size of the optimal solution is \(|A_{i,j}| = |A_{i,k}| + 1 + |A_{k,j}|\)
  • Suppose there is a better solution for the subproblem \(S_{i,k}\), say \(A^\prime_{i,k}\)

Activity-Selection Problem

optimal substructure

  • If \(A^\prime_{i,k}\) is a better solution for the subproblem \(S_{i,k}\)
    • then \(|A^\prime_{i,k}| > |A_{i,k}|\)
    • thus \(|A^\prime_{i,k}| + 1 + |A_{k,j}| > |A_{i,j}|\)
    • contradicting the assumption that \(A_{i,j}\) is the optimal solution to the problem
  • We can use a symmetric argument for the solution to the subproblem \(S_{k,j}\)

Activity-Selection Problem

recursive relation

  • There is optimal substructure \(\rightarrow\) dynamic programming

\[|A_{i,j}| = \begin{cases}\displaystyle{\max_{a_k\in S_{i,j}}} \left(|A_{i,k}| + 1 + |A_{k,j}|\right) & \text{if $S_{i,j} \ne \emptyset$}\\ \quad\quad\quad\quad 0 & \text{if $S_{i,j} = \emptyset$} \end{cases}\]

[recursion tree: \(\text{select}(S_{i,j})\) branches into \(\text{select}(S_{i,k}) + \text{select}(S_{k,j})\) for each \(a_k \in S_{i,j}\)]
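A hedged sketch of evaluating this recurrence top-down with memoisation (assuming the dummy activities \(a_0\) and \(a_{n+1}\) from the example; the names are ours):

// Memoised evaluation of |A(i,j)|. Assumes s[] and f[] include the dummy
// activities (f[0] = 0 and s[n+1] = "infinity"), and that every entry of
// memo[][] starts at -1.
static int[] s, f;
static int[][] memo;

static int sizeA(int i, int j) {
  if (memo[i][j] >= 0) return memo[i][j];
  int best = 0;
  for (int k = i + 1; k < j; k++)         // candidate activity a_k
    if (s[k] >= f[i] && f[k] <= s[j])     // a_k lies inside S(i,j)
      best = Math.max(best, sizeA(i, k) + 1 + sizeA(k, j));
  return memo[i][j] = best;
}

This evaluates every branch; the greedy choice on the following slides is what lets us avoid that.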

Activity-Selection Problem

greedy choice property

[timeline figure of the 11 activities]

  • What is the greedy choice?

Activity-Selection Problem

greedy choice property

  • What is the greedy choice?
    • we want to maximise the number of activities
    • which activity, if chosen, leaves as much of the resource as possible so that we can fit in more activities?

Activity-Selection Problem

greedy choice property

[timeline figure: the activity that finishes earliest is highlighted]

  • Pick the one that finishes earliest

Activity-Selection Problem

greedy choice property

[timeline figure: the activity that starts latest is highlighted]

  • Pick the one that starts latest

Activity-Selection Problem

greedy choice property

  • In dynamic programming, the hard part is working out what the subproblems are, and then defining the recursive relation
    • overlapping subproblems are easy to spot
    • proofs of optimal substructure are all very similar
  • In greedy algorithms, the hard part is working out what the greedy choice is
    • can you think of a bad greedy choice for the activity-selection problem?

Activity-Selection Problem

greedy choice property

  • Show that there is an optimal substructure
  • Show the recursive relation that gives the optimal solution
  • Show that if we make the greedy choice only one subproblem remains
  • Show that it is safe to make the greedy choice

Activity-Selection Problem

show that only one subproblem remains

  • Show that if we make the greedy choice only one subproblem remains:
    • if we pick activity \(a_k\), then we need to find the solutions to the subproblems \(S_{i,k}\) and \(S_{k,j}\)
    • the greedy choice is \(a_{i+1}\), the activity in \(S_{i,j}\) with the earliest finish time
    • then \(S_{i,i+1} = \emptyset\), since no activity can both start after \(a_i\) finishes and finish before \(a_{i+1}\) starts
    • therefore there is only one subproblem left to solve, namely \(S_{i+1,j}\)

Activity-Selection Problem

show that it is safe to make the greedy choice

  • Intuition:
    • pick the activity that finishes first, i.e. \(a_1\)
      • recall that the activities are sorted by finish time, in increasing order
    • suppose \(a_1\) is not in the optimal solution
      • there must be another activity, say \(a_k\), that is the first activity in the optimal solution
      • \(a_k\) finishes no earlier than \(a_1\)
      • so we can substitute \(a_1\) for \(a_k\) and still get the optimal number of activities

Activity-Selection Problem

show that it is safe to make the greedy choice

  • Theorem:
    • If \(a_m\) is an activity in \(S_{k,j}\) with the earliest finish time, then \(a_m\) is in some maximum-size subset of compatible activities of \(S_{k,j}\)
  • Proof:
    • Let \(a_\ell\) be the activity in \(A_{k,j}\) with the earliest finish time
    • If \(a_\ell = a_m\), then we are done
    • If \(a_\ell \ne a_m\), then replace \(a_\ell\) with \(a_m\)
      • i.e. let \(A^\prime_{k,j} = (A_{k,j} \setminus \{a_\ell\}) \cup \{a_m\}\)
    • Since the activities in \(A_{k,j}\) are compatible, \(a_\ell\) is the first activity in \(A_{k,j}\) to finish, and \(f_m \le f_\ell\),
      • it follows that the activities in \(A^\prime_{k,j}\) are also compatible
    • Since \(|A^\prime_{k,j}| = |A_{k,j}|\),
      • it follows that \(A^\prime_{k,j}\) is also a maximum-size subset of compatible activities of \(S_{k,j}\)

Activity-Selection Problem

recursive solution

// int s[] is the array containing start times
// int f[] is the array containing finish times

HashSet<Integer> select(int k, int n){
  int m = k + 1;
  // skip activities that start before a_k finishes
  while (m <= n && s[m] < f[k])
    m++;
  if (m <= n){
    // a_m is the greedy choice: take it, then solve the single remaining subproblem
    HashSet<Integer> ans = new HashSet<>(select(m, n));
    ans.add(m);
    return ans;
  }
  return new HashSet<>(); // no compatible activity left
}

Activity-Selection Problem

recursive to iterative

// int s[] is the array containing start times
// int f[] is the array containing finish times

HashSet<Integer> select(){
  int n = s.length;                  // activities 1..n-1, sorted by finish time
  HashSet<Integer> answer = new HashSet<Integer>();
  answer.add(1);                     // greedily take the first activity to finish
  int k = 1;                         // index of the last activity taken
  for (int m = 2; m < n; m++){
    if (s[m] >= f[k]){               // a_m is compatible with a_k
      answer.add(m);
      k = m;
    }
  }
  return answer;
}

What is the complexity of this algorithm?

COMP333 Algorithm Theory and Design - W5 2019 - Greedy Algorithm

By Daniel Sutantyo

Lecture notes for Week 5 of COMP333, 2019, Macquarie University