Numerous computations and tasks become simple by properly sorting the information in advance.

Unsorted:
phone: 500$
bike: 100$
knife: 10$
purse: 60$
shoe: 70$
calculator: 15$
keyboard: 40$
shovel: 20$
lawnmower: 200$
scissors: 5$

VS

Sorted:
scissors: 5$
knife: 10$
calculator: 15$
shovel: 20$
keyboard: 40$
purse: 60$
shoe: 70$
bike: 100$
lawnmower: 200$
phone: 500$
The ideal sorting algorithm would have the following properties:
- Stable: Equal keys aren't reordered.
- Operates in place, requiring O(1) extra space.
- Worst-case O(n·lg(n)) key comparisons.
- Worst-case O(n) swaps.
- Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
There is no algorithm that has all of these properties, and so the choice of sorting algorithm depends on the application.
Time complexity:
- Best: O(n^2)
- Worst: O(n^2)
Find the smallest element using a linear scan and move it to the front. Then, find the second smallest and move it, again doing a linear scan. Continue doing this until all the elements are in place.
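This is the selection sort approach; a minimal Python sketch (the function name is mine, and the sample data just reuses the prices from the list above):

```python
def selection_sort(arr):
    """In-place selection sort: repeatedly move the smallest remaining element to the front."""
    n = len(arr)
    for i in range(n):
        smallest = i
        # Linear scan for the smallest element in the unsorted tail arr[i:]
        for j in range(i + 1, n):
            if arr[j] < arr[smallest]:
                smallest = j
        arr[i], arr[smallest] = arr[smallest], arr[i]
    return arr

print(selection_sort([500, 100, 10, 60, 70, 15, 40, 20, 200, 5]))
# [5, 10, 15, 20, 40, 60, 70, 100, 200, 500]
```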
Time complexity:
- Best: O(n)
- Worst: O(n^2)
Start at the beginning of the array and swap the first two elements if the first is bigger than the second. Then go to the next pair, and so on, continuously making sweeps of the array until it is sorted.
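This describes bubble sort; a minimal Python sketch, including the early-exit check that gives the O(n) best case on already-sorted input (the names are mine):

```python
def bubble_sort(arr):
    """In-place bubble sort: sweep the array, swapping adjacent out-of-order pairs."""
    n = len(arr)
    swapped = True
    while swapped:
        swapped = False                 # if a full sweep makes no swaps, the array is sorted
        for i in range(n - 1):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
    return arr
```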
Time complexity:
- Best: O(n)
- Worst: O(n^2)
Good when there is only a small number of elements to sort, or when the elements in the initial collection are already "nearly sorted". What counts as "small enough" varies from one machine to another and from one programming language to another.
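The O(n) best case and the "nearly sorted" remark match insertion sort; a minimal Python sketch (the function name is mine):

```python
def insertion_sort(arr):
    """In-place insertion sort: grow a sorted prefix, inserting each new element into place."""
    for i in range(1, len(arr)):
        current = arr[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right
        while j >= 0 and arr[j] > current:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = current            # nearly sorted input means almost no shifting: O(n) overall
    return arr
```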
Time complexity:
- Best: O(n + m)
- Worst: O(n^2)
* m: number of buckets
Partition the array into a finite number of buckets, and then sort each bucket individually. This gives a time of O(n + m), where n is the number of items and m is the number of buckets. (You need to know the boundaries of the key range in advance in order to assign items to buckets.)
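A minimal Python sketch of the bucket idea, assuming numeric keys whose boundaries `lo` and `hi` are known in advance (the signature and bucket count are illustrative choices, not from the notes):

```python
def bucket_sort(arr, lo, hi, num_buckets=10):
    """Distribute values into buckets by range, sort each bucket, then concatenate."""
    width = (hi - lo) / num_buckets or 1        # value range covered by each bucket
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:
        # Map the value to a bucket index; clamp the maximum value into the last bucket
        idx = min(int((x - lo) / width), num_buckets - 1)
        buckets[idx].append(x)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))           # sort each bucket individually
    return result

print(bucket_sort([70, 15, 500, 5, 200, 40], lo=0, hi=500))
# [5, 15, 40, 70, 200, 500]
```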
1) Divide and conquer
2) Recursive
2.1 Divide the array into 2 parts
2.1 A) If there is more than 1 element
- Continue dividing recursively
2.1 B) If there is only 1 element
- Return the value and merge it with its siblings
3) Stable
A stable algorithm preserves the relative order of elements with equal keys from the original collection
4) May need an extra temporary array
* It looks like it is possible to do it without an extra data structure, but it is not really worth it (1) (2)
(See the Python sketch below.)
Best: O(n log(n))
Average: O(n log(n))
Worst: O(n log(n))
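A minimal Python sketch of the top-down, stable merge sort described above; it builds new lists while merging, i.e. it uses the extra temporary storage mentioned in point 4:

```python
def merge_sort(arr):
    """Recursive (divide and conquer) merge sort; returns a new sorted list."""
    if len(arr) <= 1:                  # only 1 element: nothing left to divide
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # keep dividing recursively
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves; taking from `left` on ties keeps the sort stable
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```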
1) Divide and conquer
2) Recursive
2.1 Pick a "pivot point". Picking a good pivot point can greatly affect the running time.
2.2 Break the list into two lists: elements smaller than the pivot and elements larger than the pivot.
2.3 Recursively sort each of the smaller lists.
2.4 Concatenate everything back into one big list:
qsort(left) PIVOT qsort(right)
3) Not stable
4) May need extra temporary space, but it can also be done in place by partitioning the array directly.
(See the Python sketch below.)
Best: O(n log(n))
Average: O(n log(n))
Worst: O(n^2)
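A minimal Python sketch of the steps above. This list-based version uses extra temporary space (an in-place version would partition the array instead), and it picks the pivot at random, which is one common way to make the quadratic worst case unlikely:

```python
import random

def qsort(arr):
    """Recursive quicksort: pick a pivot, split around it, sort each part, concatenate."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                    # a random pivot keeps the worst case rare
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return qsort(left) + middle + qsort(right)    # qsort(left) PIVOT qsort(right)

print(qsort([70, 15, 500, 5, 200, 40]))
# [5, 15, 40, 70, 200, 500]
```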
Quicksort is O(n log n) only on average, while mergesort is O(n log n) always. So why is quicksort often preferred?
WHY?
Big-O notation hides a constant factor: a fixed cost per operation (e.g., the assumed constant time to access any piece of memory on demand). Quicksort has a smaller constant than mergesort, and when both are O(n log n) that constant makes a difference.
WHY is the quadratic worst case not a problem?
Even though the complexity is quadratic in the worst case, it is very rare to get that result if you choose the pivot wisely.
Quicksort hits the average case way more often than the worst case.
Best time complexity: O(n)
Average time complexity: O(n log(n))
Worst time complexity: O(n log(n))
Stable!!
Merge sort + Insertion sort (this stable, adaptive combination is what Timsort does)
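Python's built-in `sorted()` and `list.sort()` are stable merge/insertion hybrids of exactly this kind (Timsort), so the stability guarantee is easy to see; the sample data below is adapted from the price list above, with a tie added:

```python
items = [("phone", 500), ("knife", 10), ("purse", 10), ("bike", 100)]
# Sorting by price keeps "knife" before "purse": equal keys stay in their original order.
print(sorted(items, key=lambda item: item[1]))
# [('knife', 10), ('purse', 10), ('bike', 100), ('phone', 500)]
```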
http://www.ee.ryerson.ca/~courses/coe428/sorting/mergesort.html
References
Grokking Algorithms book
Algorithms in a Nutshell book