Data Structures and Algorithms in C (DSA)

Category: IT Fundamentals & Programming

This course is designed to help learners master the core concepts of Data Structures and Algorithms (DSA) using the C programming language. Starting from the basics and advancing to complex topics like graphs, dynamic programming, and memory optimization, this course …

📘 Quick Sort: The Efficient Partition

After mastering Merge Sort, we now turn our attention to another cornerstone "Divide and Conquer" sorting algorithm: Quick Sort. Developed by Tony Hoare, Quick Sort is widely regarded as one of the fastest sorting algorithms in practice for average-case scenarios. While it shares the divide and conquer philosophy with Merge Sort, its approach to partitioning and combining differs significantly, leading to distinct performance characteristics and interesting trade-offs. Let's unravel the elegance and power of Quick Sort.

⚡ The Quick Sort Strategy: Partition and Recurse

Quick Sort operates on a clever principle that involves partitioning:

  • Divide (Partition): It picks an element from the array, called a pivot. The goal is to rearrange the array such that all elements smaller than the pivot come before it, and all elements greater than the pivot come after it. Elements equal to the pivot can go on either side. This operation is called partitioning, and it places the pivot in its final sorted position.
  • Conquer (Recurse): The Quick Sort algorithm is then recursively applied to the sub-array of elements with smaller values and separately to the sub-array of elements with greater values.
  • Combine: Unlike Merge Sort, there's no explicit "combine" step. Once the sub-arrays are sorted, the entire array is sorted because the pivot is already in its correct place.

The choice of pivot is critical for Quick Sort's performance. Common strategies include picking the first element, last element, middle element, or a random element.

// C implementation of Quick Sort (using Lomuto Partition Scheme)

// Function to swap two elements
void swap(int* a, int* b) {
    int t = *a;
    *a = *b;
    *b = t;
}

/* This function takes last element as pivot, places
   the pivot element at its correct position in sorted
   array, and places all smaller (than pivot) to left
   of pivot and all greater elements to right of pivot */
int partition(int arr[], int low, int high) {
    int pivot = arr[high]; // Choosing the last element as pivot
    int i = (low - 1); // Index of smaller element

    for (int j = low; j <= high - 1; j++) {
        // If current element is smaller than or equal to pivot
        if (arr[j] <= pivot) {
            i++; // Increment index of smaller element
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}

/* The main function that implements QuickSort
   arr[] --> Array to be sorted,
   low  --> Starting index,
   high --> Ending index */
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        /* pi is partitioning index, arr[pi] is now
           at right place */
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
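
To see these functions in action, here is a minimal driver you could place below the listing above; the array values simply reuse the example from the visualization below, and the `#include` is needed only for `printf`:

#include <stdio.h>

int main(void) {
    int arr[] = {10, 80, 30, 90, 40, 50, 70};
    int n = sizeof(arr) / sizeof(arr[0]);

    quickSort(arr, 0, n - 1);

    // Expected output: 10 30 40 50 70 80 90
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");

    return 0;
}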
  

Time Complexity:

  • Average Case: O(n log n). This is its highly efficient and most common performance.
  • Worst Case: O(n²). This occurs when the pivot selection consistently results in highly unbalanced partitions (e.g., always picking the smallest or largest element in an already sorted or reverse-sorted array). In that case each partition step places only the pivot, so the recurrence becomes T(n) = T(n-1) + O(n), which sums to O(n²).
  • Best Case: O(n log n), when the pivot consistently divides the array into two roughly equal halves.

Space Complexity: O(log n) on average due to the recursive call stack (auxiliary space). In the worst case, it can be O(n) if the partitions are extremely unbalanced.

Quick Sort Example Visualization: `[10, 80, 30, 90, 40, 50, 70]`

Initial: `[10, 80, 30, 90, 40, 50, 70]`

Partition (1st call): array is `[10, 80, 30, 90, 40, 50, 70]`. Pivot chosen: 70 (the last element). Elements less than or equal to 70 move to the left.

After Partition: `[10, 30, 40, 50, 70, 90, 80]`. 70 is now in its final sorted position. Left sub-array: `[10, 30, 40, 50]`. Right sub-array: `[90, 80]`.

Recurse (Left Sub-array): array state `[10, 30, 40, 50, 70, 90, 80]`. Pivot for the left sub-array: 50.

After Partition: `[10, 30, 40, 50, 70, 90, 80]`. 50 is in place. Left of 50 is `[10, 30, 40]`. No elements to the right of 50 in this sub-array.

Recurse (Left of 50): array state `[10, 30, 40, 50, 70, 90, 80]`. Pivot for `[10, 30, 40]`: 40.

After Partition: `[10, 30, 40, 50, 70, 90, 80]`. 40 is in place. Left of 40 is `[10, 30]`.

Sorted Left Part: `[10, 30, 40, 50, 70, 90, 80]`. The left part `[10, 30, 40, 50]` is fully sorted. Now focus on the right part.

Recurse (Right Sub-array): array state `[10, 30, 40, 50, 70, 90, 80]`. Pivot for `[90, 80]`: 80.

After Partition: `[10, 30, 40, 50, 70, 80, 90]`. 80 is in place. Left of 80 is empty. Right of 80 is `[90]`.

Final Sorted: `[10, 30, 40, 50, 70, 80, 90]`. All elements sorted.

🚀 Advantages and Considerations of Quick Sort

Quick Sort is a highly popular algorithm due to its practical efficiency:

  • Fast in Practice: Despite a theoretical O(n²) worst case, Quick Sort is often faster than Merge Sort for typical inputs due to its smaller constant factors and better cache performance. It sorts in place, which means it doesn't need additional temporary arrays like Merge Sort, leading to fewer memory accesses.
  • In-Place Sorting (mostly): The auxiliary space complexity is typically O(log n) for the recursion stack, making it more memory-efficient than Merge Sort.
  • Adaptability: With proper pivot selection strategies (e.g., median-of-three, random pivot), the worst-case scenario can be made very rare in practice (see the sketch just below this list).
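
As a rough sketch of the random-pivot idea, the `randomizedPartition` wrapper below is hypothetical (it is not part of the listing above); it swaps a randomly chosen element into the last position and then reuses the Lomuto `partition` function:

#include <stdlib.h> // for rand()

// Hypothetical wrapper: pick a random index in [low, high], move that
// element to the end, then reuse the Lomuto partition shown earlier.
int randomizedPartition(int arr[], int low, int high) {
    int randomIndex = low + rand() % (high - low + 1);
    swap(&arr[randomIndex], &arr[high]);
    return partition(arr, low, high);
}

// quickSort would then call randomizedPartition() instead of partition().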

However, it also has considerations:

  • Worst-Case Performance: Poor pivot choice can lead to O(n²) complexity, making it unsuitable for applications requiring guaranteed upper bounds on performance.
  • Not Stable: Unlike Merge Sort, Quick Sort is generally not a stable sorting algorithm. The relative order of equal elements might not be preserved. This might be a concern in specific use cases.
  • Recursive Overhead: For very small arrays, the overhead of recursive calls can make Quick Sort less efficient than simple sorts like Insertion Sort. Hybrid sorting algorithms (e.g., IntroSort, used in many standard libraries) often switch to Insertion Sort for small sub-arrays to mitigate this, as sketched just below this list.
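
As a rough illustration of that hybrid idea (the cutoff of 16 is an arbitrary illustrative value, and `insertionSortRange` is a hypothetical helper, not part of the listing above):

// Hypothetical helper: Insertion Sort restricted to arr[low..high]
void insertionSortRange(int arr[], int low, int high) {
    for (int i = low + 1; i <= high; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= low && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

// Illustrative hybrid: small ranges go to Insertion Sort, larger ones
// are partitioned with the same partition() function shown earlier.
void hybridQuickSort(int arr[], int low, int high) {
    if (high - low + 1 <= 16) {          // illustrative cutoff, not tuned
        insertionSortRange(arr, low, high);
        return;
    }
    int pi = partition(arr, low, high);
    hybridQuickSort(arr, low, pi - 1);
    hybridQuickSort(arr, pi + 1, high);
}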

📌 Real-Life Example (UdaanPath)

Quick Sort's efficiency makes it a strong contender for many real-world applications, including those on a platform like UdaanPath:

  • Ranking Students on a Leaderboard: If UdaanPath needs to display a live leaderboard of student performance, Quick Sort could be used to efficiently sort a large number of student records by their scores. Even with frequent updates, its average-case speed would provide a responsive user experience.
  • Internal Data Processing: Many backend operations on UdaanPath, such as processing batches of assignment submissions, generating reports based on performance metrics, or organizing large sets of internal resource files, could leverage Quick Sort for its speed when memory usage is a concern.
  • Database Query Optimization: While databases have highly optimized internal sorting routines, the concepts of Quick Sort's partitioning are often implemented for sorting query results or building temporary indexes on the fly.
  • File System Sorting: When you sort files by name or size in a folder on your operating system, underlying libraries often use a highly optimized version of Quick Sort or a hybrid approach. This speed would be critical for any file management features within UdaanPath's content creation tools or learning modules.

The crucial factor for Quick Sort's success in these scenarios is a good pivot selection strategy that keeps the worst case rare.

✅ Summary: The "Quick" Choice

  • Quick Sort is a "Divide and Conquer" algorithm that partitions an array around a chosen pivot, then recursively sorts the sub-arrays.
  • Its average-case time complexity is O(n log n), making it very fast in practice due to its in-place sorting nature and good cache performance.
  • Its main vulnerability is a worst-case O(n²) complexity, which can be mitigated with robust pivot selection strategies.
  • Unlike Merge Sort, Quick Sort is generally not stable.
  • Quick Sort's efficiency and lower auxiliary space requirement (on average) make it a popular choice for library implementations and general-purpose sorting tasks.

📤 Coming Next: Beyond Comparison Sorts

We've now covered two of the most efficient comparison-based sorting algorithms. But what if we could sort even faster, by exploiting properties of the numbers themselves? In the next chapter, we'll explore Counting Sort, Radix Sort, and Bucket Sort—non-comparison based sorting algorithms that can achieve linear time complexity under specific conditions.
