Big O

TIME AND SPACE COMPLEXITY

What is Big O?

  • Big O is the notation we use to express the runtime or space complexity of an algorithm relative to its input, as the input gets arbitrarily large.


Big O is about growth, not actual time or space


  • Because Big O measures the shape of a growth curve, we express it in terms of its largest term and drop all the others.
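
A quick sketch of what "drop all the others" means in practice (the operation counts here are invented for illustration):

// Suppose an algorithm performs 3n^2 + 5n + 100 operations on an input of size n.
// As n grows, the n^2 term dominates the shape of the curve, so:
// O(3n^2 + 5n + 100) => O(n^2)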

Why You Should Care

  • Being able to evaluate the runtime/space complexity of an algorithm is critical to writing performant code.

 

  • When asked to optimize an algorithm, one of the first things you might do is determine its Big O complexity.

Basic Strategies

/* Let's start out easy */

function foo (arr) {
  let sum = 0			
  let product = 1		

  for (let i = 0; i < arr.length; i++) {
    sum += arr[i]
  }	

  for (let j = 0; j < arr.length; j++) {
    product *= arr[j]
  }

  console.log(sum * product)
}
/* Piece of cake */

function foo (arr) {
  let sum = 0       			 // O(1)
  let product = 1			 // O(1)

  for (let i = 0; i < arr.length; i++) { // O(arr)
    sum += arr[i]			    // O(1)
  }

  for (let j = 0; j < arr.length; j++) { // O(arr)
    product *= arr[j]                       // O(1)
  }

  console.log(sum * product);		 // O(1)
}                  

// O(1) + O(1) + (O(arr) * O(1)) + (O(arr) * O(1)) + O(1)
// O(3 + 2arr) => O(arr) => O(n)

/* 

- Measure the complexity of the algorithm at each and every step.
  Ask yourself: would this change if the input got larger?

- Be concrete: you don't need to use 'n', 'm', etc. if you don't want to.
  Use the name of the inputs of the algorithm, or whatever makes the most sense to you!

- Add the complexity for each line at the same level of indentation.
  Multiply inner levels by their outer levels.

- When you've reduced as far as you can, drop everything but the largest term!

*/
/* Another softball */

function bar (arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i] + arr[j])
    }
  }
}


/* All good! */

function bar (arr) {
  for (let i = 0; i < arr.length; i++) {      // O(arr)
    for (let j = 0; j < arr.length; j++) {        // O(arr)
      console.log(arr[i] + arr[j])                    // O(1)
    }
  }
}

// O(arr) * O(arr) * O(1) => O(arr^2) => O(n^2)
/* Don't worry, you got this */

function baz (arrA, arrB) {
  for (let i = 0; i < arrA.length; i++) {
    for (let j = 0; j < arrB.length; j++) {
      console.log(arrA[i] + arrB[j])
    }
  }	        
}
/* See that wasn't so bad */

function baz (arrA, arrB) {
  for (let i = 0; i < arrA.length; i++) {       // O(arrA)
    for (let j = 0; j < arrB.length; j++) {         // O(arrB)
      console.log(arrA[i] + arrB[j])                      // O(1)
    }
  }	        
}

// O(arrA) * O(arrB) * O(1) => O(nm)

Beyond the basics

  • Recursion
  • Space Complexity
  • Algorithms on algorithms

How to Look at Recursion

  • It's helpful to think of recursion as a tree.
  • When there is only one recursive branch, this is usually like a standard "for" loop, where the number of times we recurse is a function of the input size (sketched after this list).
  • When there are multiple recursive branches, the runtime will often be similar to O(branches^depth)
    • Each level of 'depth' has 'branches' times as many calls as the level before - an exponential relationship!
  • When in doubt, write it out!
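
A sketch of the single-branch case (sumTo is a hypothetical helper, not from the talk):

function sumTo (n) {
    if (n === 0) return 0;      // base case: stop recursing
    return n + sumTo(n - 1);    // one recursive branch, n levels deep
}

// one branch, depth of n => n total calls => O(n), just like a "for" loop from 1 to n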

Branches^depth

  • Depth is relative to the input size
  • For example, a balanced binary search tree with seven nodes is three levels deep. If n = 7, the depth is roughly log(n) (log base 2)
/*
        7
       / \
    4       9
   / \     / \
  1   6   8   12


Seven elements altogether
Three levels deep
For an algorithm that visits each node:
O(2^log(n))  ==>  2^(log2 n) = n  ==>  O(n)

*/
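
As a sketch, here is an algorithm that visits each node once (assuming nodes shaped like { value, left, right }; not from the original slides):

function countNodes (node) {
    if (!node) return 0;                                          // O(1)
    return 1 + countNodes(node.left) + countNodes(node.right);    // two branches
}

// branches = 2, depth = log(n) for a balanced tree
// O(2^log(n)) => one call per node => O(n)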
/* Let's take it to the limit! */

function fib (n) {
    if (n === 1 || n === 0) return n;
    else return fib(n - 1) + fib(n - 2);
}
/* This is why fib is so slow! */

/*
                     fib(4)
                    /      \
            fib(3)            fib(2)
           /      \          /      \
      fib(2)     fib(1)   fib(1)    fib(0)
      /    \
 fib(1)     fib(0)


our input is equal to 4: n = 4
we go four levels deep, so depth = n
we branch twice with each recursive call

therefore, runtime is O(2^n)!

*/
/* Let's get the memo! */

function fib (n, memo = {}) {
    if (n === 1 || n === 0) return n;

    else if (memo[n]) return memo[n];

    else memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

/* Such quicker, much dynamic, wow! */

/*
                     fib(4)
                    /      \
            fib(3)            fib(2)
           /      \          /      \
      fib(2)     fib(1)   fib(1)    fib(0)
      /    \
 fib(1)     fib(0)



1. fib(4) = fib(3) + fib(2)
           /
2. fib(3) = fib(2) + fib(1)
             /
3. fib(2) = fib(1) + fib(0) = memo[2]   at this point, we've had to do O(n) work

4. fib(3) = memo[2] + fib(1) = memo[3]  but now, every calculation is constant time!

5. fib(4) = memo[3] + fib(2) => memo[3] + memo[2]


*/

/* 

That entire second branch got taken out of the picture!
Every step after we reach the bottom of the tree is O(1), thanks to the memo!
Using a memo cuts runtime down to O(n)!

*/
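
One way to see the speedup is to count calls (the counter below is a sketch, not part of the original code):

let calls = 0;

function fibCounted (n, memo = {}) {
    calls++;
    if (n === 1 || n === 0) return n;
    else if (memo[n]) return memo[n];
    else memo[n] = fibCounted(n - 1, memo) + fibCounted(n - 2, memo);
    return memo[n];
}

fibCounted(20);
console.log(calls);    // 39 calls with the memo (roughly 2n), versus about 22,000 without it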

Space Complexity

  • Big O can also express space complexity
  • Measures how much storage space we use relative to the input (ex. by storing values in arrays and hashes, and simultaneous calls on the call stack).
    • Remember what matters is the growth curve - not the sheer number of bytes we store!

 

  • Space can be taken and then freed up again (the same can't be said of time!)
  • Usually we have plenty of space, but not enough time!
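
A small sketch of the difference (both functions below are hypothetical, not from the slides):

function total (arr) {                   // O(1) space: one running sum, no matter how long arr is
    let sum = 0;
    for (let i = 0; i < arr.length; i++) sum += arr[i];
    return sum;
}

function doubled (arr) {                 // O(n) space: the output array grows with the input
    const out = [];
    for (let i = 0; i < arr.length; i++) out.push(arr[i] * 2);
    return out;
}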
/* Memoized fibonacci revisited */

function fib (n, memo = {}) {
    if (n === 1 || n === 0) return n;
    else if (memo[n]) return memo[n];
    else memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

/* 

Call Stack:
fib(4)      fib(4)      fib(4)      fib(4)      fib(4)     fib(4)
fib(3)      fib(3)      fib(3)      fib(3)      fib(3)     etc...
fib(2)      fib(2)      fib(2)                  fib(1)           
fib(1)  >>          >>  fib(0)  >>          >>          >>

This is a lot quicker than the non-memoized version, but remember that it's still recursive, 
so we will eventually have n calls on the call stack. We also have the memo, 
but it ends up not mattering much. It will always contain a little less than n items.

Therefore, space complexity is O(n + n-ish) => O(n)
...which is the same as the non-memoized version!

*/
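
For contrast, here is a sketch of an iterative fib (not from the talk): still O(n) time, but nothing piles up on the call stack, so space drops to O(1).

function fibIterative (n) {
    if (n === 1 || n === 0) return n;
    let prev = 0;
    let curr = 1;
    for (let i = 2; i <= n; i++) {    // O(n) time, same as the memoized version
        const next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;                      // only a fixed handful of variables => O(1) space
}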

Multi-Level Algorithms

  • What if you have an algorithm that uses another algorithm? For example, what if you loop over an array of strings and sort each string?
  • Be careful not to confuse the input & runtime of the outer algorithm with the input & runtime of inner algorithms
/* Sorting an array of strings */

function sortedStrings (arr) {
    for (let i = 0; i < arr.length; i++) {
        arr[i] = arr[i].split('').sort().join('');   // Let's say that .sort is O(n log n)
    }
}

// Fun fact: different browsers have different implementations for Array.prototype.sort!
/* Sorting an array of strings */


/* The key to understanding this is that we have two algorithms with two different inputs
    sortedStrings takes an array input, with a length of say 'n'
    the inner sort works on one string's characters, with a length of say 's'
*/

function sortedStrings (arr) {
    for (let i = 0; i < arr.length; i++) {             // O(n)
        arr[i] = arr[i].split('').sort().join('');         // O(s log s)
    }
}

// O(n) * O(s log s) => O(n * s log s)
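
A rough sanity check on why keeping 'n' and 's' separate matters (the numbers below are invented for illustration):

// with n = 1,000,000 strings of length s = 10:
// n * s log s  =>  on the order of 30 million steps
// lumping both sizes into one 'n' would give n^2 log n  =>  ~10^13 steps
// wildly different, because the inner sort only ever sees 10 characters at a time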

Resources and Questions

Big O Final

By Tom Kelly
