Comparing Algorithmic Complexities & Real Trade-Offs
In the last post, we covered how to analyze realistic code, simplify time complexity expressions, and introduced space complexity. Now let’s go a level deeper.
1. Comparing Algorithm Complexities
Let’s say you're solving the same problem with two different approaches—how do you decide which one is better?
Example: Searching for a number in an array
// Linear search – O(n)
function linearSearch(arr, target) {
for (let i = 0; i < arr.length; i++) {
if (arr[i] === target) return i;
}
return -1;
}
// Binary search – O(log n), only works on sorted arrays
function binarySearch(arr, target) {
let left = 0, right = arr.length - 1;
while (left <= right) {
let mid = Math.floor((left + right) / 2);
if (arr[mid] === target) return mid;
else if (arr[mid] < target) left = mid + 1;
else right = mid - 1;
}
return -1;
}
📌 Linear Search works everywhere but is slower (O(n)).
📌 Binary Search is faster (O(log n)) but only works on sorted data.
So if you have to search once, linear search might be fine.
But if you search frequently, sorting the array first (O(n log n)) and then using binary search repeatedly makes more sense.
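To make that workflow concrete, here's a small sketch: sort once, then answer many lookups cheaply. The binarySearch function is the same one shown above, repeated here so the snippet runs on its own.

```javascript
// Same binary search as above – O(log n) per lookup on sorted data.
function binarySearch(arr, target) {
  let left = 0, right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    else if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

const data = [42, 7, 19, 3, 88, 56];

// Sort once – O(n log n). The numeric comparator matters:
// JavaScript's default sort compares elements as strings.
const sorted = [...data].sort((a, b) => a - b); // [3, 7, 19, 42, 56, 88]

// Every lookup after that is O(log n).
console.log(binarySearch(sorted, 19)); // 2
console.log(binarySearch(sorted, 99)); // -1
```

Copying with `[...data]` keeps the original array untouched, since `sort` works in place.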
2. Time vs. Space: The Classic Trade-Off
Sometimes you can save time by using more memory, and other times, you use less memory at the cost of speed.
Here’s a common example:
a. Fibonacci without optimization (slow)
function fib(n) {
if (n <= 1) return n;
return fib(n - 1) + fib(n - 2);
}
// Time: O(2^n), Space: O(n)
b. Fibonacci with memoization (fast)
const memo = {};
function fib(n) {
if (n in memo) return memo[n];
if (n <= 1) return n;
return (memo[n] = fib(n - 1) + fib(n - 2));
}
// Time: O(n), Space: O(n)
c. Fibonacci with iteration (fastest, lowest memory)
function fib(n) {
  let a = 0, b = 1;
  for (let i = 2; i <= n; i++) {
    [a, b] = [b, a + b];
  }
  // n ? b : 0 covers the base cases: fib(0) = 0, and for n = 1 the
  // loop never runs, so b is still 1.
  return n ? b : 0;
}
// Time: O(n), Space: O(1)
💡 The iterative version uses only constant extra memory and is very efficient.
Choose based on what matters more in your use case: speed or memory.
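To see the trade-off concretely, here's a rough timing sketch using `console.time`. Both versions repeat the definitions from above so the snippet is self-contained; exact timings will vary by machine and runtime, but the naive version should be noticeably slower.

```javascript
// Naive recursion – O(2^n) time.
function fibNaive(n) {
  if (n <= 1) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}

// Iterative – O(n) time, O(1) space.
function fibIter(n) {
  let a = 0, b = 1;
  for (let i = 2; i <= n; i++) [a, b] = [b, a + b];
  return n ? b : 0;
}

console.time("naive fib(30)");
fibNaive(30);
console.timeEnd("naive fib(30)");

console.time("iterative fib(30)");
fibIter(30);
console.timeEnd("iterative fib(30)");
```

Both functions return the same values; only the cost of computing them differs.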
3. Popular Sorting Algorithms: How They Scale
Let’s quickly compare three popular sorting methods:
Bubble Sort
Time Complexity: O(n²)
Space: O(1)
Notes: Easy to implement but inefficient for large datasets.

Merge Sort
Time Complexity: O(n log n)
Space: O(n)
Notes: Stable, with predictable O(n log n) performance on every input.

Quick Sort
Time Complexity: O(n log n) on average, O(n²) in the worst case
Space: O(log n)
Notes: Fast in practice, but performance depends on pivot selection.
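As a reference point, here is a minimal merge sort in the same style as the snippets above. It's a sketch for illustration, not a replacement for the built-in `sort`:

```javascript
// Merge sort – O(n log n) time, O(n) extra space.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

// Merge two sorted arrays into one sorted array – O(n).
function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // <= keeps the sort stable: equal elements retain their order.
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}

console.log(mergeSort([5, 2, 9, 1, 5, 6])); // [1, 2, 5, 5, 6, 9]
```

Notice how the structure mirrors the analysis: the recursion splits the array in half (the log n levels), and each level does O(n) work in `merge`.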
Sorting comes up often in coding interviews—knowing which algorithm to use based on the context is a big advantage.
Final Thoughts
Understanding time and space complexity isn't just about solving interview puzzles—it's about making smarter decisions when building real apps.
In the next post, we’ll dive into:
Graph algorithms like BFS and DFS
Recursive vs iterative trade-offs
Real-life complexity analysis from open source code
Until then—trace your loops, compare strategies, and don’t forget to check for hidden costs.