Algorithm Complexity Reference

A quick reference guide for algorithm efficiency, covering time and space complexity bounds for common algorithms and data structures.

📊 Complexity Growth Comparison

From fastest to slowest growth: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!)

🎯 Time Complexities

O(1) (Constant)

Time remains constant regardless of input size.

Examples: array access, hash table lookup, stack push/pop
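The constant-time operations listed above can be sketched with Python's built-in list and dict (which back arrays and hash tables, respectively); the names here are illustrative, not part of the reference:

```python
# All of these run in constant time, independent of the container's size.
items = [10, 20, 30, 40]
third = items[2]          # array (list) indexing: O(1)

ages = {"ada": 36, "alan": 41}
ada = ages["ada"]         # hash table (dict) lookup: O(1) average

stack = []
stack.append(99)          # push onto a stack: O(1) amortized
top = stack.pop()         # pop from a stack: O(1)
```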
O(log n) (Logarithmic)

Time grows logarithmically. Very efficient for large inputs.

Examples: binary search, balanced BST operations, binary heap insert
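As a minimal sketch of the logarithmic class, binary search halves the remaining range on every step, so a sorted array of n elements needs at most about log₂ n comparisons:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent. O(log n) time."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # each iteration halves the search range
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```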
O(n) (Linear)

Time grows linearly with input size.

Examples: linear search, array traversal, finding max/min
O(n log n) (Linearithmic)

Common for efficient sorting algorithms.

Examples: merge sort, quick sort (average case), heap sort
O(n²) (Quadratic)

Time grows quadratically. Avoid for large inputs.

Examples: bubble sort, selection sort, nested loops over the same input
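Bubble sort is the canonical quadratic example: two nested passes over the input mean roughly n² comparisons. A minimal sketch:

```python
def bubble_sort(arr):
    """Sort arr in place and return it. O(n²): nested loops compare pairs."""
    n = len(arr)
    for i in range(n):
        # After pass i, the last i elements are already in their final place.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
```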
O(n³) (Cubic)

Very slow. Only practical for small inputs.

Examples: naive matrix multiplication, three nested loops, Floyd-Warshall
O(2ⁿ) (Exponential)

Extremely slow. Runtime roughly doubles with each additional input element.

Examples: naive recursive Fibonacci, power set generation, brute-force traveling salesman
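The naive recursive Fibonacci above illustrates exponential growth: each call spawns two more, so the call tree has on the order of 2ⁿ nodes. A minimal sketch:

```python
def fib(n):
    """Naive recursive Fibonacci: O(2ⁿ) time, since each call makes two more."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Memoizing the results (e.g. with `functools.lru_cache`) collapses this to O(n) time, which is why the naive form is a teaching example rather than a practical one.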
O(n!) (Factorial)

The worst complexity class in common use. Only viable for tiny inputs.

Examples: generating permutations, brute-force TSP, brute-force N-Queens

📚 Data Structure Complexities

Data Structure       | Access   | Search   | Insert   | Delete
Array                | O(1)     | O(n)     | O(n)     | O(n)
Stack                | O(n)     | O(n)     | O(1)     | O(1)
Queue                | O(n)     | O(n)     | O(1)     | O(1)
Linked List          | O(n)     | O(n)     | O(1)     | O(1)
Hash Table           | N/A      | O(1)     | O(1)     | O(1)
Binary Search Tree   | O(log n) | O(log n) | O(log n) | O(log n)
Heap                 | O(1)     | O(n)     | O(log n) | O(log n)

Hash table times are average case (O(n) worst case); binary search tree times assume a balanced tree. Heap access refers to peeking at the min/max root element.

Detailed Algorithm Complexity Reference

Welcome to the Algorithm Complexity Reference. In software engineering, understanding how algorithms perform as input data scales is non-negotiable. This guide serves as a primary reference for developers, students, and engineers who need to quickly verify the theoretical limits of their code.

Whether you are optimizing a high-frequency trading bot, building a responsive UI, or simply preparing for a coding interview, having a solid grasp of time and space complexity is your best defense against performance bottlenecks. This reference breaks down the standard "Big O" notations into digestible, actionable insights.

Navigating Time Complexities

Time complexity is often the first concern. We generally categorize algorithms into classes. O(1) is ideal—it means the operation takes the same amount of time regardless of whether you have 10 items or 10 million. O(n) is common for simple loops. However, as we move to O(n²) or O(2ⁿ), performance degrades rapidly.

For example, sorting algorithms often fall into O(n log n). This is significantly better than a naive O(n²) sort. Knowing which bucket your algorithm falls into allows you to predict system behavior under load.
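To make the O(n log n) sorting claim concrete, here is a minimal merge sort sketch: the input is split log n times, and each level of merging does O(n) work, giving O(n log n) overall:

```python
def merge_sort(arr):
    """Return a sorted copy of arr. O(n log n): log n split levels, O(n) merge each."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Merge the two sorted halves in a single linear pass.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```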

Space Complexity: The Other Half

Often overlooked, space complexity defines how much memory your algorithm consumes. A recursive algorithm might be elegant (O(n) time) but could consume O(n) stack space, potentially leading to stack overflow errors.

Iterative solutions can sometimes reduce space complexity to O(1) by reusing variables, even if the time complexity remains the same. This reference highlights these trade-offs, helping you make informed architectural decisions.
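The recursive-versus-iterative trade-off can be sketched with a simple sum (an illustrative example, not from the reference itself): both versions take O(n) time, but the recursive one also uses O(n) stack space, one frame per call:

```python
def sum_recursive(n):
    """Sum 1..n recursively: O(n) time AND O(n) stack space (one frame per call).
    Deep enough inputs will hit Python's recursion limit."""
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    """Same O(n) time, but O(1) extra space: just two reused variables."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```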

Using This Reference in Code Reviews

When reviewing code, use this reference to challenge assumptions. If a colleague proposes a nested loop over a large dataset, point to the O(n²) quadratic growth curve. Suggest alternatives like hash maps (O(1) lookups) to reduce inner loop costs.
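As a sketch of that review suggestion (function and variable names here are hypothetical), replacing the inner loop's linear scan with a set turns an O(n·m) intersection into O(n + m):

```python
def common_items(a, b):
    """Return elements of a that also appear in b, preserving a's order.
    A nested-loop version (for x in a: for y in b: ...) would be O(n·m);
    building a set first makes each membership test O(1) on average,
    so the whole pass is O(n + m)."""
    seen = set(b)
    return [x for x in a if x in seen]
```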

Key Takeaways

  • Always aim for O(1) or O(log n) for core, frequently executed paths.
  • Be wary of O(n) inside another O(n) loop; that yields O(n²).
  • Space is cheap, but not infinite. Don't ignore memory usage.