Sorting Algorithm Visualizer

Watch sorting algorithms in action. Compare performance, understand the mechanics, and learn when to use each algorithm.

[Interactive visualizer: pick an algorithm to watch it sort the sample array 64, 34, 25, 12, 22, 11, 90, 45, 78, 33]

Merge Sort (selected): divide, sort halves, merge. Time O(n log n), space O(n), stable: yes.

Algorithm Comparison

Algorithm       Best        Average     Worst       Space     Stable
Bubble Sort     O(n)        O(n²)       O(n²)       O(1)      Yes
Selection Sort  O(n²)       O(n²)       O(n²)       O(1)      No
Insertion Sort  O(n)        O(n²)       O(n²)       O(1)      Yes
Merge Sort      O(n log n)  O(n log n)  O(n log n)  O(n)      Yes
Quick Sort      O(n log n)  O(n log n)  O(n²)       O(log n)  No
Heap Sort       O(n log n)  O(n log n)  O(n log n)  O(1)      No

Comprehensive Guide to Sorting Algorithms

Sorting is one of the most fundamental operations in computer science. Almost every application needs to organize data at some point—whether displaying search results by relevance, ordering transactions by date, or arranging products by price. The choice of sorting algorithm affects not just speed but also memory usage, stability, and behavior on different data patterns.

This guide covers the major sorting algorithms that every developer should understand. Each has trade-offs that make it ideal for certain situations and poor for others. Understanding these trade-offs enables informed choices for your specific use case.

Stability in Sorting

A stable sorting algorithm preserves the relative order of elements with equal keys. If two items have the same sort key and item A appears before item B in the input, a stable sort guarantees A will appear before B in the output.

Stability matters when sorting by multiple criteria. Sort by secondary key first, then by primary key with a stable sort—the secondary ordering is preserved within groups of equal primary keys. Unstable sorts require combining multiple criteria into a single comparison.
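The two-pass trick can be shown concretely with Python's built-in sorted(), which is stable. The records and keys here are made-up illustration data:

```python
# Records: (name, department). Goal: sort by department (primary key)
# while keeping names alphabetical within each department.
people = [("Bea", "Sales"), ("Ann", "Eng"), ("Cal", "Sales"), ("Dee", "Eng")]

# Pass 1: sort by the secondary key (name).
by_name = sorted(people, key=lambda p: p[0])

# Pass 2: sort by the primary key (department). Because sorted() is
# stable, the name ordering from pass 1 survives within each department.
by_dept = sorted(by_name, key=lambda p: p[1])

print(by_dept)
# Eng keeps Ann before Dee; Sales keeps Bea before Cal.
```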

Simple Sorts: O(n²) Algorithms

Bubble Sort repeatedly passes through the array, swapping adjacent elements if they are out of order. After each pass, the largest unsorted element "bubbles" to its correct position. Simple to understand but inefficient: O(n²) comparisons and swaps in the average case.

Bubble Sort's one advantage: it can detect an already-sorted array in O(n) by tracking whether any swaps occurred. For nearly-sorted data, it performs reasonably well. In practice, Insertion Sort is almost always a better choice.
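A minimal sketch of Bubble Sort with the swap-tracking early exit described above:

```python
def bubble_sort(a):
    """Sort a list in place. The `swapped` flag gives O(n) on sorted input."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # after pass i, the last i elements are already in place
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # a full pass with no swaps: the array is sorted
            break
    return a
```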

Selection Sort finds the minimum element in the unsorted portion and moves it to the sorted portion's end. It performs exactly n-1 swaps regardless of input, making it useful when write operations are expensive. However, comparisons are always O(n²), even for sorted input.
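A sketch of Selection Sort, showing the one-swap-per-pass property:

```python
def selection_sort(a):
    """Sort in place: O(n²) comparisons always, but only one swap per pass."""
    n = len(a)
    for i in range(n - 1):
        # find the index of the minimum in the unsorted suffix a[i:]
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]  # single swap extends the sorted prefix
    return a
```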

Insertion Sort builds the sorted array one element at a time by inserting each new element into its correct position among previously sorted elements. Best case is O(n) for sorted input; worst case is O(n²) for reverse-sorted input.

Insertion Sort is the best simple sort for several reasons: it is stable, runs in place, handles nearly-sorted data in O(n), and has low overhead on small arrays. Many sophisticated algorithms switch to Insertion Sort for subarrays below 10-15 elements.
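A sketch of Insertion Sort; the optional lo/hi bounds reflect how hybrid sorts apply it to subarrays:

```python
def insertion_sort(a, lo=0, hi=None):
    """Stable, in-place; O(n) on nearly-sorted input. Sorts a[lo:hi]."""
    if hi is None:
        hi = len(a)
    for i in range(lo + 1, hi):
        key = a[i]
        j = i - 1
        # shift larger elements one slot right until key's position is found
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```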

Efficient Sorts: O(n log n) Algorithms

Merge Sort divides the array in half, recursively sorts each half, then merges the sorted halves. It guarantees O(n log n) performance in all cases—best, average, and worst. The trade-off is O(n) auxiliary space for the merge operation.

Merge Sort is stable and naturally parallelizable (the two halves can be sorted independently). It is the algorithm of choice for external sorting when data does not fit in memory and for linked lists where random access is expensive.
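A sketch of the divide-and-merge structure (this simple version returns a new list rather than sorting in place):

```python
def merge_sort(a):
    """Stable, O(n log n) in every case, O(n) auxiliary space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # sort each half recursively
    right = merge_sort(a[mid:])

    # merge the two sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= takes from the left on ties: stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])          # one of these is empty; append the rest
    out.extend(right[j:])
    return out
```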

Quick Sort partitions the array around a pivot element, then recursively sorts the partitions. Average-case O(n log n), but worst-case O(n²) if pivots are consistently poor (already sorted data with first-element pivot, for example).

Despite worse theoretical worst case, Quick Sort is often the fastest in practice due to excellent cache behavior and low constant factors. Choosing good pivots (median-of-three, random) largely eliminates worst-case risk. Quick Sort is not stable.
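A sketch of in-place Quick Sort with a random pivot (one of the pivot strategies mentioned above), using the Lomuto partition scheme for clarity:

```python
import random

def quick_sort(a, lo=0, hi=None):
    """In-place, average O(n log n); random pivot avoids the
    sorted-input worst case of a fixed first-element pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    # pick a random pivot and move it to the end, then partition (Lomuto)
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]   # pivot lands in its final position i
    quick_sort(a, lo, i - 1)
    quick_sort(a, i + 1, hi)
    return a
```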

Heap Sort builds a max-heap from the input, then repeatedly extracts the maximum to build the sorted array from the end. Guaranteed O(n log n) with O(1) auxiliary space, a combination none of the other classic comparison sorts achieves.

Heap Sort's weakness is poor cache performance: the heap structure requires jumping around memory. Despite optimal complexity, it is often slower in practice than Quick Sort or Merge Sort on real hardware.
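A sketch of Heap Sort showing both phases: the O(n) bottom-up heap construction and the extract-max loop:

```python
def heap_sort(a):
    """O(n log n) worst case, O(1) auxiliary space, not stable."""
    n = len(a)

    def sift_down(root, end):
        # restore the max-heap property for the subtree rooted at `root`,
        # considering only indices < end
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                      # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    # phase 1: build a max-heap bottom-up
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # phase 2: move the max to the end of the shrinking heap, repeat
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```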

Hybrid Algorithms

Modern standard library sorts are hybrids that combine the best aspects of multiple algorithms. Timsort (Python, Java) uses Merge Sort's stability and Insertion Sort's efficiency on small/sorted runs. Introsort (C++) starts with Quick Sort, falls back to Heap Sort if recursion depth indicates degenerate partitioning.

These algorithms adapt to input characteristics: they handle sorted data efficiently, avoid Quick Sort's worst case, and use Insertion Sort where its simplicity wins. For general-purpose sorting, use your language's built-in sort—it has been highly optimized.
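The introsort idea can be sketched compactly. This is a simplified illustration, not the C++ standard library's implementation; the SMALL threshold of 16 and the 2·log₂(n) depth limit are typical choices, assumed here for concreteness:

```python
import heapq
import math

SMALL = 16  # below this size, insertion sort wins (assumed threshold)

def introsort(a):
    """Introsort-style hybrid sketch: Quick Sort with a depth limit,
    Heap Sort on degenerate ranges, Insertion Sort on small ranges."""
    max_depth = 2 * max(1, int(math.log2(len(a) or 1)))
    _sort(a, 0, len(a) - 1, max_depth)
    return a

def _sort(a, lo, hi, depth):
    while hi - lo + 1 > SMALL:
        if depth == 0:
            # partitioning has degenerated: heapsort this range instead
            a[lo:hi + 1] = _heap_sorted(a[lo:hi + 1])
            return
        depth -= 1
        p = _partition(a, lo, hi)
        # recurse on the smaller side, iterate on the larger
        # (keeps the recursion depth at O(log n))
        if p - lo < hi - p:
            _sort(a, lo, p - 1, depth)
            lo = p + 1
        else:
            _sort(a, p + 1, hi, depth)
            hi = p - 1
    _insertion(a, lo, hi)

def _partition(a, lo, hi):
    # median-of-three pivot, moved to a[hi], then Lomuto partition
    mid = lo + (hi - lo) // 2
    p = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])[1][1]
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def _heap_sorted(xs):
    heapq.heapify(xs)
    n = len(xs)
    return [heapq.heappop(xs) for _ in range(n)]

def _insertion(a, lo, hi):
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
```

The smaller-side-first recursion is the detail that bounds stack usage: the recursive call always handles at most half the range, so even adversarial input cannot blow the stack.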

When to Use Each

Quick Sort or library sort for general-purpose, in-memory sorting when stability is not required and worst case is acceptable (with good pivot selection).

Merge Sort when stability is required, for linked lists, or for external sorting of huge datasets.

Insertion Sort for small arrays (fewer than 10-20 elements) or nearly-sorted data.

Heap Sort when O(n log n) worst case is required with O(1) space, though this is rare in practice.

Counting Sort or Radix Sort (not comparison sorts) when keys are integers in a known range. These run in linear time (O(n + k) for Counting Sort with key range k) but require extra space proportional to the key range.
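A minimal Counting Sort sketch for bare non-negative integer keys (sorting records stably by key would additionally need the prefix-sum variant):

```python
def counting_sort(a, max_key):
    """O(n + k) for integer keys in [0, max_key]; no comparisons at all."""
    counts = [0] * (max_key + 1)
    for x in a:
        counts[x] += 1          # tally occurrences of each key
    out = []
    for key, c in enumerate(counts):
        out.extend([key] * c)   # emit each key `count` times, in key order
    return out
```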

Implementing Sorting

Implementing sorting algorithms from scratch develops understanding but is rarely necessary in production. Standard library sorts are extensively tested, optimized for modern hardware, and handle edge cases correctly.

When implementing, watch for: off-by-one errors in loop bounds, integer overflow in midpoint calculation, proper handling of empty/single-element arrays, and stack overflow in naive recursive implementations on large inputs.
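The midpoint pitfall is worth showing concretely. Python integers are arbitrary-precision, so the overflow cannot actually occur here, but the safe pattern is the same in any language:

```python
lo, hi = 0, 2_000_000_000

# In a language with 32-bit ints, (lo + hi) // 2 overflows because the
# sum exceeds the type's maximum before the division happens.
# Computing the offset first keeps every intermediate value in range:
mid = lo + (hi - lo) // 2
```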