Welcome to the new weekly email series on data structures & algorithms. With each email, I will teach you everything you need to know about DSA. We will conquer this beast piece by piece. So, let’s get down to business.
When building software, it's not enough to simply make something work — it also needs to be efficient. That’s where algorithms come in. They help us find solutions that are not just correct, but also optimal in terms of:
Running time (how long it takes to execute)
Memory usage (how much space it needs)
Developer effort (how easy it is to implement and maintain)
Understanding algorithms allows developers to compare different solutions and choose the best one for a given problem.
What Is Running Time Analysis?
Running time analysis answers a simple question:
How does the processing time increase as the size of the input grows?
Instead of measuring actual seconds (which depends on the machine, language, etc.), we use mathematical functions to describe the time complexity. This makes it machine-independent and helps us compare algorithms fairly.
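To make this concrete, here is a minimal sketch (the function name and counting scheme are my own, purely for illustration) that counts basic operations instead of measuring seconds:

```python
def sum_list(values):
    """Sum a list while counting basic operations.

    The operation count depends only on the input size n,
    not on the machine, language, or compiler.
    """
    ops = 0
    total = 0            # one assignment
    ops += 1
    for v in values:     # n iterations
        total += v       # one addition per element
        ops += 1
    return total, ops

for n in (10, 100, 1000):
    _, ops = sum_list(list(range(n)))
    print(n, ops)        # ops = n + 1, a linear function of n
```

However fast or slow your machine is, the count n + 1 stays the same, and that function of n is what we analyze.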
How to Compare Algorithms?
Since each computer may have different speed and memory, comparing algorithms based on raw runtime alone isn’t helpful. Instead, we compare their asymptotic complexity, which means looking at how they behave as the input size n becomes very large.
Below is a list of common time complexities, from fastest to slowest:
1 < √(log n) < n < log(n!) < n log n < n² < 2ⁿ < 4ⁿ < n! < 2^(2ⁿ)
The further right a function appears in this chain, the faster it grows, and the slower the corresponding algorithm becomes as n increases.
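To get a feel for how quickly these functions pull apart, here is a small illustrative script (the sample sizes are arbitrary, chosen just for the demo):

```python
import math

# Evaluate a few of the growth functions above at increasing n
# to see how dramatically they diverge.
for n in (10, 20, 30):
    print(f"n={n:>2}: "
          f"log n = {math.log2(n):5.1f}, "
          f"n log n = {n * math.log2(n):6.0f}, "
          f"n^2 = {n**2:4d}, "
          f"2^n = {2**n}")
```

At n = 30, 2ⁿ is already past a billion while n² is still only 900.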
Examples of Time Complexities
Add to the front of a linked list: O(1)
Find in a sorted array (binary search): O(log n)
Find in an unsorted array (linear search): O(n)
Merge sort (divide and conquer): O(n log n)
Shortest path in a graph (Dijkstra’s algorithm with a simple array): O(n²)
Matrix multiplication (standard algorithm): O(n³)
Towers of Hanoi: O(2ⁿ)
Two of these, binary search and linear search, are sketched in code below.
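Here are minimal Python sketches of those two searches (my own illustrative code, not from any particular library):

```python
def linear_search(arr, target):
    """O(n): scan every element until we find the target."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(sorted_arr, target):
    """O(log n): halve the search range on every step.
    Requires the input to be sorted."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 100, 2))    # sorted: 0, 2, 4, ..., 98
print(linear_search(data, 42))   # 21
print(binary_search(data, 42))   # 21
```

Both return the same answer, but binary search needs at most about log₂(n) steps where linear search may need n.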
Best Case, Worst Case, and Average Case
When analyzing algorithms, we usually consider three scenarios:
Worst Case (Big-O)
The input that causes the maximum running time.
Defines the upper bound of the algorithm.
Best Case (Omega)
The input that causes the minimum running time.
Defines the lower bound.
Average Case (Theta)
The expected running time over all possible inputs.
Helps predict performance when inputs are random.
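Linear search illustrates all three cases. The sketch below is my own illustration; the average-case figure assumes the target is equally likely to sit at any position:

```python
def linear_search_steps(arr, target):
    """Return how many comparisons linear search performs."""
    for steps, value in enumerate(arr, start=1):
        if value == target:
            return steps
    return len(arr)  # target absent: we compared everything

data = list(range(100))
print(linear_search_steps(data, 0))    # best case: 1 step -> Omega(1)
print(linear_search_steps(data, -1))   # worst case: 100 steps -> O(n)

# Average case: target equally likely anywhere -> about n/2 steps.
avg = sum(linear_search_steps(data, t) for t in data) / len(data)
print(avg)                             # 50.5, roughly n/2 -> Theta(n)
```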
Example: Big-O Notation
Let’s find the upper bound of this function:
f(n) = 5n + 8
Drop the constant term: 8 doesn’t matter for large n
Drop the coefficient: the factor 5 is machine-dependent noise
Keep the dominant term: n
f(n) = O(n)
This tells us that as input size increases, the function grows linearly.
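A quick numeric check (pure illustration) shows why the dropped constants don’t affect the growth class:

```python
# f(n) = 5n + 8 versus plain n: the ratio approaches the
# constant factor 5, so both belong to the same class, O(n).
for n in (10, 1_000, 1_000_000):
    f = 5 * n + 8
    print(n, f, f / n)   # ratios: 5.8, 5.008, 5.000008
```

A bounded, constant ratio is exactly what “same growth rate” means.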
Omega (Ω) Notation — Lower Bound
Just like Big-O defines the worst-case ceiling, Omega defines the best-case floor.
If an algorithm is Ω(n), it will take at least linear time to complete, even on its most favorable input.
Theta (Θ) Notation — Tight Bound
When the upper and lower bounds coincide, we say the function is Θ(f(n)).
It means the algorithm’s running time grows in proportion to f(n) on every input.
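Finding the maximum of an unsorted array is a classic Θ(n) example: every element must be inspected no matter what the input looks like. A minimal sketch (my own, for illustration):

```python
def find_max(arr):
    """Theta(n): every element is inspected on every input,
    so the best case and worst case are both linear."""
    best = arr[0]
    for value in arr[1:]:
        if value > best:
            best = value
    return best

print(find_max([3, 7, 1, 9, 4]))  # 9
```

Contrast this with linear search: its best case is Ω(1) and its worst case is O(n), so no single Θ bound describes it across all inputs.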
Why Is It Called Asymptotic Analysis?
Because we study how algorithms behave as the input size n becomes very large, approaching infinity.
In math, a line that a curve approaches as it heads off to infinity is called an asymptote. Hence the term asymptotic analysis.
Guidelines for Asymptotic Analysis
Focus only on the input size n
Ignore machine-specific constants
Drop lower-order terms and constants
Identify the dominant term
Use it to classify the algorithm's time/memory growth
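Putting these guidelines to work on a concrete function (a hypothetical example of my own):

```python
def count_zero_sum_pairs(arr):
    """Count ordered pairs (i, j) with arr[i] + arr[j] == 0.

    Work done: n * n comparisons from the nested loops,
    plus a constant amount of setup, so roughly n^2 + c.
    Drop the constants and lower-order terms: O(n^2).
    """
    n = len(arr)                     # constant-time setup
    count = 0
    for i in range(n):               # n iterations
        for j in range(n):           # n iterations each: n^2 total
            if arr[i] + arr[j] == 0:
                count += 1
    return count

print(count_zero_sum_pairs([-2, -1, 0, 1, 2]))  # 5
```

The dominant term here is the n² comparisons from the nested loops, so the function is classified as O(n²).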
Don’t forget to check out codstak to practice DSA questions.