Time & Space Complexity Cheat Sheet
The ultimate reference for algorithm and data structure complexities. Print it, bookmark it, ace your technical interviews.
| Algorithm / DS | Worst Time | Best Time | Space | Stable | Notes |
|---|---|---|---|---|---|
| Bubble Sort | O(n²) | O(n) | O(1) | ✅ | Simple; O(n) best needs early-exit flag |
| Selection Sort | O(n²) | O(n²) | O(1) | ❌ | Minimal swaps |
| Insertion Sort | O(n²) | O(n) | O(1) | ✅ | Great for nearly sorted |
| Merge Sort | O(n log n) | O(n log n) | O(n) | ✅ | Guaranteed O(n log n) |
| Quick Sort | O(n²) | O(n log n) | O(log n) | ❌ | Fast in practice, avg O(n log n) |
| Heap Sort | O(n log n) | O(n log n) | O(1) | ❌ | In-place O(n log n) |
| Counting Sort | O(n + k) | O(n + k) | O(k) | ✅ | Linear for small ranges |
| Radix Sort | O(nk) | O(nk) | O(n + k) | ✅ | Linear for fixed digits |
| Array Access | O(1) | O(1) | O(1) | - | Direct index access |
| Array Search | O(n) | O(1) | O(1) | - | Linear scan |
| Array Insert | O(n) | O(1) | O(1) | - | End is O(1) amortized |
| Hash Table | O(n) | O(1) | O(n) | - | Avg O(1), worst O(n) |
| BST Search | O(n) | O(1) | O(1) | - | Balanced: O(log n) |
| Stack/Queue | O(1) | O(1) | O(1) | - | Push/pop O(1) |
| Linked List Insert | O(1) | O(1) | O(1) | - | At known position |
| Heap Insert | O(log n) | O(1) | O(1) | - | O(1) best when no sift-up needed |
| Linear Search | O(n) | O(1) | O(1) | - | Unsorted data |
| Binary Search | O(log n) | O(1) | O(1) | - | Sorted data required |
| Jump Search | O(√n) | O(1) | O(1) | - | Sorted data |
| Interpolation Search | O(n) | O(1) | O(1) | - | Avg O(log log n) on uniform data |
| BFS | O(V + E) | O(V + E) | O(V) | - | Shortest path unweighted |
| DFS | O(V + E) | O(V + E) | O(V) | - | Cycle detection, topo sort |
| Dijkstra | O(V²) or O(E log V) | - | O(V) | - | Shortest path, non-negative weights |
| Bellman-Ford | O(VE) | - | O(V) | - | Handles negative edges |
The Essential Complexity Cheat Sheet
This cheat sheet compiles the time and space complexities of the most important algorithms and data structures in computer science. Whether you are studying for technical interviews, optimizing production code, or simply refreshing your knowledge, this reference has you covered.
Memorizing these complexities is valuable, but understanding why each algorithm has its particular complexity is even more important. This guide provides that context along with practical notes on when to use each approach.
Sorting Algorithm Complexities
Sorting algorithms divide into two categories: simple O(n²) algorithms and efficient O(n log n) algorithms. Simple sorts like Bubble, Selection, and Insertion Sort are easy to implement and have low overhead for small inputs. Efficient sorts like Merge Sort, Quick Sort, and Heap Sort handle large datasets but have higher constant factors.
Merge Sort guarantees O(n log n) in all cases but requires O(n) extra space. Quick Sort is O(n²) worst case but O(n log n) average, and is often fastest in practice due to cache efficiency. Heap Sort is O(n log n) worst case with O(1) space—the only comparison sort achieving both bounds.
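To make the space trade-off concrete, here is a minimal Merge Sort sketch in Python. The O(n) extra space shows up in the sliced halves and the `merged` output list, and the `<=` comparison is what keeps it stable:

```python
def merge_sort(a):
    """Sort a list; O(n log n) time in all cases, O(n) extra space."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= (not <) keeps equal keys in order: stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])       # at most one of these has leftovers
    merged.extend(right[j:])
    return merged
```

The recursion depth is O(log n) and each level does O(n) merging work, which is exactly where the O(n log n) bound comes from.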
Non-comparison sorts like Counting Sort and Radix Sort can achieve O(n) when applicable—when values fall in a known range or have fixed-length keys. They trade space for speed and have limited applicability.
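A bare-bones Counting Sort makes the O(n + k) bound visible: one O(n) pass to count, one O(k) pass to emit. This value-only version is a sketch; the prefix-sum variant is what makes Counting Sort stable when sorting records by key:

```python
def counting_sort(a, k):
    """Sort non-negative ints < k; O(n + k) time, O(k) extra space."""
    counts = [0] * k
    for x in a:                   # O(n): tally each value
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):  # O(k): replay tallies in order
        out.extend([value] * count)
    return out
```

When k is much larger than n (e.g. sorting a handful of 64-bit IDs), the O(k) term dominates and a comparison sort wins.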
Data Structure Complexities
Arrays provide O(1) access but O(n) insertion/deletion (except at the end). Linked Lists offer O(1) insertion/deletion at known positions but O(n) access. Choose based on your operation mix.
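The front-insert gap is easy to see in Python: `list` is a dynamic array, while `collections.deque` is a block-linked structure with O(1) operations at both ends (though O(n) indexing in the middle):

```python
from collections import deque

a = list(range(5))
a.insert(0, -1)     # array front insert: shifts every element, O(n)

d = deque(range(5))
d.appendleft(-1)    # linked blocks: O(1) at either end
```

For a queue of any size, `deque` is the right default; repeated `list.insert(0, ...)` turns a linear pass into a quadratic one.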
Hash Tables provide O(1) average-case for lookup, insert, and delete—the fastest general-purpose search structure. Worst case is O(n) with many collisions, but good hash functions make this rare. Trade-off: O(n) space and no ordering.
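A toy separate-chaining map shows where both cases come from: a good hash spreads keys so each bucket holds O(1) entries, while a degenerate hash piles everything into one bucket and every operation becomes an O(n) list scan. This is an illustrative sketch, not how production tables (e.g. CPython's dict) are built:

```python
class ChainedHash:
    """Minimal separate-chaining hash map: average O(1), worst O(n) per op."""

    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):   # worst case: scan whole chain
            if k == key:
                bucket[i] = (key, value)      # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

Real implementations also resize when the load factor grows, which keeps the average chain length constant.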
Binary Search Trees (balanced) provide O(log n) for all operations while maintaining sorted order. Unbalanced BSTs degrade to O(n). Use balanced variants (Red-Black, AVL) for guaranteed performance.
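The search itself is one descent from the root, so its cost is the tree height h, O(log n) balanced, O(n) degenerate. A sketch, with nodes represented as plain dicts (an assumption for brevity):

```python
def bst_search(node, key):
    """Search a BST of {'key', 'left', 'right'} dicts; O(h), h = tree height."""
    while node is not None:
        if key == node["key"]:
            return True
        # BST invariant: smaller keys live left, larger keys live right
        node = node["left"] if key < node["key"] else node["right"]
    return False
```

Inserting sorted input into a naive BST produces a linked-list-shaped tree, which is exactly the O(n) degradation the balanced variants prevent.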
Heaps provide O(log n) insert and O(1) access to the max/min. Extract-max/min is O(log n). Perfect for priority queues and implementing Heap Sort.
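Python's `heapq` module (a binary min-heap over a plain list) maps directly onto these bounds:

```python
import heapq

pq = []
for x in [5, 1, 4]:
    heapq.heappush(pq, x)   # O(log n) per insert (sift-up)
smallest = pq[0]            # peek at the min: O(1), it's always index 0
first = heapq.heappop(pq)   # extract-min: O(log n) (sift-down)
```

For a max-heap, the usual trick is to push negated values, since `heapq` only provides the min-heap interface.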
Search Algorithm Complexities
Linear Search is O(n)—simple and works on any collection. Binary Search achieves O(log n) but requires sorted data and random access.
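A standard iterative Binary Search, halving the live range each iteration, which is where the O(log n) bound comes from:

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1; O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1        # discard left half
        else:
            hi = mid - 1        # discard right half
    return -1
```

In practice, prefer the stdlib `bisect` module for sorted-list lookups; hand-rolled binary search is a classic source of off-by-one bugs.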
Hash-based search is O(1) average—fastest for exact match queries but requires extra space and does not support range queries.
For specialized searches, consider Jump Search (O(√n), good when backward iteration is expensive) or Interpolation Search (O(log log n) average for uniformly distributed data).
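A Jump Search sketch: jump ahead in √n-sized blocks until the block that could contain the target, then scan it linearly, giving roughly √n + √n comparisons:

```python
import math

def jump_search(a, target):
    """Search sorted list a in O(√n) comparisons; return index or -1."""
    n = len(a)
    if n == 0:
        return -1
    step = max(1, math.isqrt(n))
    prev = 0
    # Jump by blocks until the block's last element reaches the target
    while prev < n and a[min(prev + step, n) - 1] < target:
        prev += step
    # Linear scan within the candidate block
    for i in range(prev, min(prev + step, n)):
        if a[i] == target:
            return i
    return -1
```

Each block jump needs only one comparison (against the block's last element), which is why this beats plain linear search when comparisons or backward moves are costly.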
Graph Algorithm Complexities
BFS and DFS both traverse a graph in O(V + E), where V is vertices and E is edges. BFS finds shortest paths in unweighted graphs; DFS is useful for connectivity, cycle detection, and topological sorting.
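A BFS sketch over an adjacency-list dict: each vertex is enqueued at most once and each edge examined once, giving O(V + E). Because vertices are visited in order of increasing distance, the recorded distances are shortest paths in an unweighted graph:

```python
from collections import deque

def bfs_distances(graph, start):
    """Unweighted shortest-path distances from start; O(V + E)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```

Swapping the deque for a stack (LIFO) turns this into an iterative DFS with the same O(V + E) bound but no shortest-path guarantee.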
Dijkstra's algorithm finds shortest paths in weighted graphs with non-negative edges. With a binary heap, it runs in O((V + E) log V); with a Fibonacci heap, O(E + V log V).
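The binary-heap version is short in Python with `heapq`. This lazy-deletion variant leaves stale entries in the heap and skips them on pop, a common substitute for a decrease-key operation:

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from start; non-negative weights only.

    graph maps vertex -> list of (neighbor, weight). O((V + E) log V).
    """
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                   # stale heap entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

The non-negative-weight requirement is essential: once a vertex is popped with its final distance, no later edge can improve it, and a negative edge would break that invariant.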
Bellman-Ford handles negative edges in O(VE)—slower than Dijkstra but more general. It can also detect negative cycles.
Choosing the Right Structure
For frequent lookups with rare modifications: hash tables or arrays (if indexed access suffices).
For frequent modifications at arbitrary positions: linked lists or balanced trees.
For sorted access and range queries: balanced BST or skip list.
For priority operations (get-min, insert): heap.
For general-purpose sorting: your language's built-in sort (typically an optimized hybrid algorithm).
Interview Tips
Know these complexities cold. Interviewers expect immediate recall of basic algorithm complexities. Be able to explain why—trace through the algorithm to show where the complexity comes from.
Distinguish average case from worst case. Hash tables are O(1) average but O(n) worst. Quick Sort is O(n log n) average but O(n²) worst. Know when worst case matters (adversarial inputs, real-time systems).
Consider space complexity. An interviewer asking about constraints might be hinting at space considerations. Mentioning that Merge Sort needs O(n) space while Quick Sort is in-place shows depth of understanding.
Practice analyzing novel code. Given a new algorithm, trace through to count operations. This skill is more valuable than memorization alone.