animesh ghosh
An algorithm is a way of solving a given problem.
It tells us, as a step-by-step procedure, which operations to perform and when to perform them in order to solve that problem.
Algorithm to check primality of a number
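A minimal sketch of such an algorithm in Python (the trial-division approach and the name is_prime are illustrative, not taken from the slides):

def is_prime(n):
    # Check primality by trial division up to the square root of n.
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False  # found a factor, so n is composite
        divisor += 1
    return True  # no factor up to sqrt(n), so n is prime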
In computer programming, an algorithm is a set of statements that execute in a certain order so as to produce/compute an output (which may depend on an input).
A computer executes the statements of a program. It doesn’t know what it is doing.
It is the programmer's job to make sure the algorithm being executed is a good (fast) and elegant (compact) one, so that it makes optimal use of the computer's processing power and memory.
A bad (slow) algorithm may use the best hardware available and still run slower than a good (fast) algorithm on an old computer system.
Examples of slow algorithms:
Matrix multiplication (CPU based)
Cryptographic key decryption (brute-force)
Determinant calculation using Laplace expansion (see the sketch after this list)
Factoring large integers into their prime factors
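As a hedged illustration of why Laplace expansion is slow (a sketch, not from the slides): each call expands along the first row and recurses on n smaller minors, so the running time grows roughly like n!.

def determinant(matrix):
    # Determinant by Laplace (cofactor) expansion along the first row.
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    total = 0
    for col in range(n):
        # Build the minor by removing row 0 and the current column.
        minor = [row[:col] + row[col + 1:] for row in matrix[1:]]
        sign = -1 if col % 2 else 1
        total += sign * matrix[0][col] * determinant(minor)
    return total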
Examples of fast algorithms:
Hash tables
Sorting algorithms such as quick sort, merge sort, heap sort
Binary search (see the sketch after this list)
Linear search (for small inputs)
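A minimal Python sketch of binary search on a sorted list (illustrative): it halves the search range at every step, so it needs only about log2(N) comparisons.

def binary_search(sorted_items, target):
    # Return the index of target in sorted_items, or -1 if it is absent.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target can only lie in the right half
        else:
            high = mid - 1  # target can only lie in the left half
    return -1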
Complexity of an algorithm gives us an idea about how the algorithm will run on a computer system.
The execution time of the algorithm depends on
Processor performance
Memory (RAM)
Inputs
Disk speeds (I/O)
Network speed (if required)
Power consumption (for mobile devices)
These factors affect how the algorithm might perform in the real world.
For theoretical purposes, however, knowing how the execution time of an algorithm grows as the size of the input (N) increases is a good enough measure of its expected performance.
Two main metrics of theoretical performance are:
Time Complexity
Space Complexity
Big-O notation is a mathematical notation that describes the limiting behaviour of a function.
In the context of algorithmic analysis, Big-O notation allows us to get an idea of the asymptotic upper bound (<=) of the execution time or the space requirement as the input (N) of the algorithm grows larger.
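Formally, f(n) is O(g(n)) if there are constants c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0. For example, 3n^2 + 10n is O(n^2), since 3n^2 + 10n <= 4n^2 for all n >= 10.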
Some other notations are Big-Omega (Ω), which gives an asymptotic lower bound, and Big-Theta (Θ), which gives an asymptotically tight bound.
These notations are used to classify algorithms according to their performance. Generally, Big-O notation is used.
Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. "Time" can mean
the number of memory accesses performed
the number of comparisons between integers
the number of times some inner loop is executed (see the counting sketch after this list)
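For instance, "time" can be measured by counting comparisons; a small Python sketch (the comparison counter is purely for illustration):

def linear_search_with_count(items, target):
    # Linear search that also reports how many comparisons it performed.
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons  # worst case: one comparison per item, i.e. n comparisons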
Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. We often speak of "extra" memory needed, not counting the memory needed to store the input itself.
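For example, a list can be reversed with O(1) extra memory (in place) or with O(n) extra memory (by building a copy); a small Python sketch, for illustration:

def reverse_in_place(items):
    # O(1) extra space: swap elements from both ends towards the middle.
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1
    return items

def reverse_copy(items):
    # O(n) extra space: allocate a whole new reversed list.
    return items[::-1]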
For both time and space, we are interested in the asymptotic complexity of the algorithm: When n (the number of items of input) goes to infinity, what happens to the performance of the algorithm?
For an efficient algorithm, there almost always seems to be a space-time trade-off.
A task can be handled either by a fast algorithm that uses quite a lot of working memory, or by a slower algorithm that uses very little working memory.
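As a hedged illustration (not from the slides): to answer many primality queries quickly, we could precompute a Sieve of Eratosthenes, trading O(N) extra memory for near-instant lookups, instead of re-running the slower trial-division check sketched earlier for every query.

def build_prime_table(limit):
    # Sieve of Eratosthenes: O(limit) extra memory, O(1)-time queries afterwards.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            # Mark every multiple of p (starting at p*p) as composite.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return is_prime

table = build_prime_table(1000)
print(table[997])  # True: 997 is prime, answered by a single table lookup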
We will express running time as a function of the input size, T(n), to determine the exact running time, or an upper bound on it, that is, the time an algorithm takes to produce an output for the given input.
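For example (a sketch, with assumed constant per-statement costs c1..c4): counting how often each statement runs in a simple loop shows how T(n) is built up.

def sum_of_list(items):       # cost      times executed
    total = 0                 # c1        1
    for value in items:       # c2        n + 1 (includes the final loop test)
        total += value        # c3        n
    return total              # c4        1

# T(n) = c1 + c2*(n + 1) + c3*n + c4 = a*n + b, which is O(n).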
A few algorithms that we shall analyse:
Iterative algorithms:
Recursive algorithms:
Each recursive call contributes to the execution time. We can use repeated substitution, a recursion tree, or the Master theorem to solve the recurrence for T(n).
Other statements can be assumed to have constant time cost.
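As a hedged worked example (merge sort is assumed here; the slides do not name it): merge sort makes two recursive calls on halves of the input and then does a linear amount of merging work, giving the recurrence T(n) = 2T(n/2) + c*n. With a = 2, b = 2 and f(n) = c*n, we have f(n) = Θ(n^(log_b a)) = Θ(n), so case 2 of the Master theorem gives T(n) = Θ(n log n).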
To find the Big-O notation:
From T(n), take the fastest-growing term
Drop its coefficient (see the worked example below)
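For example, if counting statement costs gives T(n) = 3n^2 + 10n + 4, the fastest-growing term is 3n^2; dropping its coefficient (and the lower-order terms) leaves O(n^2).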
By animesh ghosh
An informative seminar on Analysis of Algorithms.