The beloved sorting algorithm.

Mansi Vikram
6 min read · Mar 22, 2021

Quicksort is known by many as the "fastest" and "most efficient" sorting algorithm. In fact, in the handful of conversations I've had with my professors about sorting algorithms, quicksort nearly always comes up as the exceptional one. We'll be focusing on how it works, how well it works and where, and how it can be optimized.

Visual of different sorting algorithms.

But what exactly is quicksort? A number of us (myself included) don't fully understand it even after watching tutorials on repeat! It's time to change that. If we're going to be true algorithmic maestros, we've got to know the ins and outs of this notorious algorithm. Quicksort is a particularly interesting algorithm, and it took me a while to grasp it.

Okay, so what does this algorithm actually do? Quicksort is a sorting algorithm that sorts a collection by selecting a pivot point and partitioning the collection around that pivot, so that elements smaller than the pivot come before it and elements larger than the pivot come after it. It keeps picking pivots and breaking the collection down into single-element lists, before amalgamating them to form one sorted list.
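To make that description concrete, here is a minimal sketch of the idea in Go (the language choice and function names are mine, not the article's). It favors readability over efficiency by building new slices instead of partitioning in place:

```go
package main

import "fmt"

// quicksort returns a sorted copy of nums: pick a pivot, split the remaining
// elements into those smaller than the pivot and those larger than (or equal
// to) it, sort each side recursively, then glue the pieces back together.
func quicksort(nums []int) []int {
	if len(nums) <= 1 {
		return nums // a single-element (or empty) list is already sorted
	}
	pivot := nums[len(nums)-1] // use the last element as the pivot
	var smaller, larger []int
	for _, n := range nums[:len(nums)-1] {
		if n < pivot {
			smaller = append(smaller, n)
		} else {
			larger = append(larger, n)
		}
	}
	return append(append(quicksort(smaller), pivot), quicksort(larger)...)
}

func main() {
	fmt.Println(quicksort([]int{9, 3, 7, 1, 6})) // [1 3 6 7 9]
}
```

The version the rest of this article has in mind does the same splitting, but rearranges elements within the original array instead of copying them.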

There is one fact about quicksort that we're familiar with already: the algorithm does something we've seen before, breaking a bigger problem down into smaller subproblems. This makes it a divide-and-conquer algorithm, one that breaks the question down into simpler versions of itself.

To take an analogy, imagine having to sort a stack of papers bearing students' names alphabetically, by name from A-Z. One might pick any splitting point, say L (the pivot), and divide the stack of papers into two piles: A-L and M-Z. The piles don't need to be equal. Repeating the same steps, the A-L pile is split into reasonably sized halves, and the M-Z pile is split into its own halves. The process is repeated until the piles are small enough to be sorted effortlessly. Finally, the smaller piles are joined together to produce an ordered set of papers. The key idea is reduction at every split until single-element piles are reached; at each split, the pile was divided and the same approach was applied to the smaller piles through recursion.

The Nerd Talk:

I imagine everyone reading this has watched a video about quicksort before. For your sanity (and mine!) I will keep the coding and numbers to a minimum today. However, here is a small example to refresh the concept:

Diagram reference.

In step 1, we choose the last number as the pivot, which is `6` in this case, and perform partitioning, rearranging the array so that `6` ends up in its final place, with all the elements less than it to its left and all the elements greater than it to its right.

Then we take the subarray to its left and the subarray to its right and select a pivot for each (as seen in the diagram); we choose `3` as the pivot for the left subarray and `11` as the pivot for the right subarray.

We then apply `partitioning` again.
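The diagram itself isn't reproduced here, but the partition step it walks through (choose the last element as the pivot and move everything smaller to its left) can be sketched roughly like this; the input array below is made up for illustration, with `6` as the last element to mirror the example:

```go
package main

import "fmt"

// partition picks nums[high] as the pivot and rearranges nums[low..high] so
// that everything smaller than the pivot sits to its left and everything
// greater sits to its right. It returns the pivot's final position.
func partition(nums []int, low, high int) int {
	pivot := nums[high]
	i := low - 1 // boundary of the "smaller than pivot" region
	for j := low; j < high; j++ {
		if nums[j] < pivot {
			i++
			nums[i], nums[j] = nums[j], nums[i]
		}
	}
	// Drop the pivot just after the last "smaller" element.
	nums[i+1], nums[high] = nums[high], nums[i+1]
	return i + 1
}

func main() {
	nums := []int{9, 3, 11, 2, 8, 6} // made-up input; 6 is the last element
	p := partition(nums, 0, len(nums)-1)
	fmt.Println(nums, "pivot now at index", p)
	// Output: [3 2 6 9 8 11] pivot now at index 2
}
```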

Cost-Benefit of quicksort:

Now that we’re a bit more properly-versed inside the implementation(s) of quicksort, I’ll be jumping returned to my main question: just how precise is quicksort — and when can one appropriately use it?

Let's start by looking at how quicksort compares to its peers. We're already familiar with the time complexity of quicksort: its average-case and best-case runtime performance are the same as merge sort's, running in linearithmic time, or O(n log n). So what other defining traits does this algorithm have?

Well, based on my bit of research, quicksort's space complexity is a point of debate. In case we need a reminder, space complexity is determined by how much additional space/memory an algorithm needs to run. Now, the dilemma around quicksort is that it does require a certain amount of extra memory: the space for every recursive call to keep track of its `left` and `right` references as it continues to partition and sort the array down to single-element lists.

However, here's the rub: even though quicksort needs more than the usual O(1) amount of memory, it doesn't need much more than that. It requires an additional O(log n) amount of memory, which is still quite small, but technically it's more than O(1) constant space. Some programmers seem to think this makes it an out-of-place algorithm; however, most of the material I've studied suggests that algorithms which require only a minimal amount of extra space, such as O(log n), still count as in-place. So we'll say that quicksort is an in-place sorting algorithm.
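If it helps to see where that logarithmic bound comes from, here is one common way to keep the recursion shallow (a sketch under that assumption, not the only way quicksort is written): always recurse into the smaller partition and loop over the larger one, so the call stack never holds more than roughly log₂(n) frames. The `low`/`high` bounds play the role of the `left` and `right` references mentioned above.

```go
package main

import "fmt"

// quicksort sorts nums in place. By recursing only into the smaller
// partition and looping over the larger one, the call stack never grows
// deeper than about log2(n) frames, which is where the O(log n) extra
// space comes from.
func quicksort(nums []int, low, high int) {
	for low < high {
		p := partition(nums, low, high)
		if p-low < high-p {
			quicksort(nums, low, p-1) // smaller side: recurse
			low = p + 1               // larger side: keep looping
		} else {
			quicksort(nums, p+1, high)
			high = p - 1
		}
	}
}

func partition(nums []int, low, high int) int {
	pivot := nums[high]
	i := low - 1
	for j := low; j < high; j++ {
		if nums[j] < pivot {
			i++
			nums[i], nums[j] = nums[j], nums[i]
		}
	}
	nums[i+1], nums[high] = nums[high], nums[i+1]
	return i + 1
}

func main() {
	nums := []int{5, 1, 9, 3, 7, 2}
	quicksort(nums, 0, len(nums)-1)
	fmt.Println(nums) // [1 2 3 5 7 9]
}
```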

Quicksort has one major difference from merge sort, and it ends up being a defining point between these otherwise very similar algorithms: quicksort is an unstable algorithm, which means it isn't guaranteed to maintain the relative order of equal elements as it reorders them; two elements that are exactly the same in the unsorted array could show up in reversed order in the sorted one! Stability ends up being the reason some people choose merge sort over quicksort, and it's the one place where merge sort looks like the clear winner.

Similarly, because quicksort can sort items without creating a new or duplicated list, it doesn't require external memory; it can hold its complete dataset without using external memory/space, making it an internal sorting algorithm. We already know that quicksort tends to use recursion to call itself from within itself, which means we can classify it as a recursive algorithm as well. And we also know that it uses comparison operators like `<` and `>`, so we can be sure that it is a comparison sort.

These characteristics are the key to understanding why so many programmers swear by quicksort and never look back. Based on them, we can say that quicksort is usually a splendid choice if we find ourselves in the following situations:

1. We don’t care about maintaining the order of the items (stability isn’t essential to us).

2. We want an algorithm that can keep all of our data in memory, without using any external space.

3. We are certain that we'll never have to deal with data that is already sorted, or even mostly sorted.

As it turns out, programmers love quicksort so much that they've even figured out ways to optimize this algorithm even further!

1. We can optimize quicksort by running the recursive calls in parallel. When we run two quicksort calls in parallel, we are effectively spawning two separate tasks that run at the same time. We sort each partition of the array independently, using the same amount of space but, ideally, half the time we needed before. This is often referred to as parallel quicksort (a rough sketch follows right after this list).

2. Another excellent optimization is being a little smarter about which pivot we pick. Rather than simply choosing the first or the last element, which is what we've been doing so far, we can be a bit cleverer with our algorithms. We could look at a few of the elements in the list and pick the median from among them. This gives us an idea of what kind of items are in the list, and then a little bit of extra work lets us pick a better element as the pivot (the second sketch below shows one common variant, known as median-of-three).
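Here is the sketch promised in point 1: a rough illustration of parallel quicksort in Go, where one half of each split is handed to a separate goroutine. The cutoff value is arbitrary and purely illustrative; a real implementation would tune it (and probably cap the number of tasks):

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// parallelQuicksort partitions the slice as usual, then sorts the two halves
// at the same time: one half on a new goroutine, the other on the current
// one. A WaitGroup makes sure we don't return before both halves are done.
// Below an (arbitrary) cutoff we fall back to the sequential version, since
// spawning goroutines for tiny slices costs more than it saves.
func parallelQuicksort(nums []int, low, high int) {
	if high-low < 1024 {
		quicksort(nums, low, high)
		return
	}
	p := partition(nums, low, high)
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		parallelQuicksort(nums, low, p-1) // left half in parallel
	}()
	parallelQuicksort(nums, p+1, high) // right half on this goroutine
	wg.Wait()
}

func quicksort(nums []int, low, high int) {
	if low >= high {
		return
	}
	p := partition(nums, low, high)
	quicksort(nums, low, p-1)
	quicksort(nums, p+1, high)
}

func partition(nums []int, low, high int) int {
	pivot := nums[high]
	i := low - 1
	for j := low; j < high; j++ {
		if nums[j] < pivot {
			i++
			nums[i], nums[j] = nums[j], nums[i]
		}
	}
	nums[i+1], nums[high] = nums[high], nums[i+1]
	return i + 1
}

func main() {
	nums := rand.Perm(100000) // a shuffled 0..99999
	parallelQuicksort(nums, 0, len(nums)-1)
	fmt.Println(nums[0], nums[len(nums)-1]) // 0 99999
}
```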
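And the sketch for point 2: median-of-three samples the first, middle, and last elements of the range and uses their median as the pivot. The helper below (my own naming, not anything standardized) parks that median at the end of the range so the usual last-element partition picks it up:

```go
package main

import "fmt"

// medianOfThree orders the first, middle, and last elements of the range and
// then parks their median at nums[high], where the last-element partition
// will use it as the pivot.
func medianOfThree(nums []int, low, high int) {
	mid := low + (high-low)/2
	if nums[mid] < nums[low] {
		nums[mid], nums[low] = nums[low], nums[mid]
	}
	if nums[high] < nums[low] {
		nums[high], nums[low] = nums[low], nums[high]
	}
	if nums[high] < nums[mid] {
		nums[high], nums[mid] = nums[mid], nums[high]
	}
	nums[mid], nums[high] = nums[high], nums[mid] // median becomes the pivot
}

func quicksort(nums []int, low, high int) {
	if low >= high {
		return
	}
	medianOfThree(nums, low, high)
	p := partition(nums, low, high)
	quicksort(nums, low, p-1)
	quicksort(nums, p+1, high)
}

func partition(nums []int, low, high int) int {
	pivot := nums[high]
	i := low - 1
	for j := low; j < high; j++ {
		if nums[j] < pivot {
			i++
			nums[i], nums[j] = nums[j], nums[i]
		}
	}
	nums[i+1], nums[high] = nums[high], nums[i+1]
	return i + 1
}

func main() {
	// Already-sorted input: the worst case for a naive last-element pivot,
	// but handled gracefully with a median-of-three pivot.
	nums := []int{1, 2, 3, 4, 5, 6, 7}
	quicksort(nums, 0, len(nums)-1)
	fmt.Println(nums) // [1 2 3 4 5 6 7]
}
```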

To summarize, quicksort is so widely loved that, if we start keeping an eye out for it, we'll see it all around us! Many modern browsers use an implementation of quicksort in their various versions of `sort` methods. In fact, the browser you're using right now likely has a variant of quicksort hidden deep within its sorting code. Needless to say, it's up to you to find it. So go forth, trooper: you know exactly what to look for now!
