Why Complexity Analysis Matters
Complexity analysis helps you understand how your code will perform as data grows. It's not just about writing code that works; it's about writing code that still works efficiently at real-world data scales.
What is Algorithm Complexity?
Imagine you're cooking dinner. Making a sandwich takes 5 minutes whether it's for 1 person or 2 people (you just make another one quickly). But cooking a big pot of soup? The more people you're feeding, the longer it takes to chop all those vegetables!
Algorithm Complexity
Algorithm Complexity measures how the time (how long your code runs) or space (how much memory it uses) changes as your input gets bigger. It tells you the "growth rate" of your algorithm as the problem size increases.
If you have 10 items, your code might run instantly. But what happens with 10,000 items? Or 10 million? Without complexity analysis, you're just guessing. With it, you can predict exactly how your code will behave at scale.
Why it matters: A slow algorithm on small test data might become unbearably slow on real data. Complexity analysis helps you catch this before deployment, potentially saving hours of runtime and keeping users happy.
Real-World Scenario: The Library Problem
Let's understand why this matters with a concrete example. Imagine you're building a search system for a library with 1 million books. Your task: find a book by its title. The question isn't "can we find it?" but "how fast can we find it?"
Approach 1: Linear Search (Check Every Book)
Start at book #1 and check each book one by one until you find what you're looking for.
// Linear Search - Check every book
for (Book book : allBooks) {
    if (book.title.equals(searchTitle)) {
        return book;
    }
}
Approach 2: Binary Search (Divide and Conquer)
Books are sorted alphabetically. Check the middle book. If your target comes before it alphabetically, search the left half. Otherwise, search the right half. Repeat this process.
// Binary Search - Divide and conquer
while (left <= right) {
    int mid = left + (right - left) / 2; // Avoids integer overflow
    if (books[mid].equals(target))
        return mid;
    else if (books[mid].compareTo(target) < 0)
        left = mid + 1;
    else
        right = mid - 1;
}
Why "Correct" Isn't Good Enough
Both algorithms give the correct answer. But one is production-ready, the other is a disaster waiting to happen.
Performance
Your app has 1 million users. Will it handle the load or crash? Complexity analysis tells you BEFORE you deploy, saving you from 3am emergency fixes!
Example: Facebook processes billions of posts.
If their search was O(n²) instead of O(n log n), every search would take hours instead of milliseconds. They'd need 1000x more servers!
Comparison
"My solution works but is it good?" Compare your approach against others objectively. Stop guessing - know which algorithm wins!
Example: Three sorting algorithms:
• Bubble Sort: O(n²)
• Merge Sort: O(n log n)
• Tim Sort (Python's default): O(n log n) with optimizations
Interviews
"What's the time complexity?" Every single coding interview asks this. Not knowing = instant rejection at top companies like Google, Amazon, Meta.
Red Flag Response: "I'm not sure, but it works!"
Strong Response: "This is O(n) time and O(1) space. We could optimize to O(log n) using binary search if we sort first, trading O(n log n) preprocessing."
Time vs. Space: The Two Dimensions
Every algorithm has TWO complexities you must analyze: how fast it runs (time) and how much memory it uses (space). Sometimes they're in harmony, sometimes they conflict.
Approach 1: Iterative Solution (Best Space)
Let's find the sum of an array using a simple loop with just one variable.
// Iterative approach - space efficient
int sumIterative(int[] arr) {
    int total = 0; // Just 1 variable
    for (int num : arr) {
        total += num;
    }
    return total;
}
// Time: O(n) - visit each element once
// Space: O(1) - only 'total' variable, constant space
The only extra memory is the single total variable (4 bytes for an int). Whether the array has 10 or 10 million elements,
we still only use those same 4 bytes. This is constant space O(1) - the memory doesn't grow with input size.
Approach 2: Recursive Solution (Worse Space)
Now let's solve the same problem with recursion - each call invokes itself with a smaller problem.
// Recursive approach - uses more space
int sumRecursive(int[] arr, int index) {
    if (index >= arr.length) return 0; // Base case
    return arr[index] + sumRecursive(arr, index + 1); // Recursive call
}
// Time: O(n) - still visit each element once
// Space: O(n) - n function calls on call stack!
Each recursive call adds a frame to the call stack, so a large enough array can trigger a StackOverflowError - literally running out of stack memory!
Time-Space Trade-off: Choose Your Weapon
Often you face a choice: use more memory to run faster, or use less memory and run slower. There's no universal "best" - it depends on your constraints.
- Use More Space → Faster Time: Hash tables use O(n) space but give O(1) lookups instead of O(n) array searches.
- Use Less Space → Slower Time: In-place sorting (O(1) space) might be slightly slower than using temporary arrays (O(n) space).
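This trade-off shows up directly in the duplicate-check problem. A minimal sketch (class and method names are ours, not from the module) solving it both ways:

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {
    // More space, faster time: O(n) time, O(n) extra space
    static boolean hasDuplicateFast(int[] arr) {
        Set<Integer> seen = new HashSet<>();
        for (int num : arr) {
            if (!seen.add(num)) return true; // add() returns false if already present
        }
        return false;
    }

    // Less space, slower time: O(n²) time, O(1) extra space
    static boolean hasDuplicateSlow(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[i] == arr[j]) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5};
        System.out.println(hasDuplicateFast(data)); // true
        System.out.println(hasDuplicateSlow(data)); // true
    }
}
```

Both answers agree; only the cost profile differs. Which one "wins" depends on whether memory or speed is your tighter constraint.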
What We'll Learn in This Module
Big O Notation
Learn to express complexity using mathematical notation. Master O(1), O(n), O(log n), O(n²), and more!
Time Complexity
Analyze how execution time grows with input. Identify loops, recursion, and hidden complexity.
Space Complexity
Track memory usage. Understand stack space, auxiliary space, and in-place algorithms.
Code Analysis
Practice analyzing real algorithms. Build the intuition to spot complexity patterns instantly.
- Look at any code and immediately identify its Big O complexity
- Compare multiple algorithms and choose the most efficient one
- Optimize your own code by identifying bottlenecks
- Ace complexity analysis questions in technical interviews
- Write scalable code that performs well with massive datasets
Big O Notation
Big O notation is the universal language for describing algorithm performance. It answers: "As my input grows, how does my algorithm's runtime grow?"
What is Big O?
Imagine comparing two cars. You can't just say "my car is fast" - that's meaningless without context. Is it fast on highways? In cities? Big O works the same way for algorithms.
Big O Notation
Big O notation is a standardized way to describe how an algorithm's runtime (or space usage) grows as the input size increases. The "O" stands for "Order of" - meaning the order of magnitude of growth.
When we write O(n), we're saying "this algorithm's time grows proportionally with n" (where n is the input size). When we write O(1), we mean the time stays constant regardless of input size.
Key principle: Big O describes the worst-case scenario - what happens when things are as bad as they can get. It's platform-independent, language-independent, and tells you how your algorithm scales at large input sizes.
Why We Need Big O: The Standardization Problem
Without Big O, comparing algorithms is nearly impossible. Your laptop is faster than mine. Your code runs on a server with 64 cores. How do we fairly compare? Big O solves this by measuring growth rate, not absolute time.
The Problem: Actual Runtime Comparison
Let's say you ask two programmers "How fast is your bubble sort?" Here's what you'd hear:
Bad Way: Actual Runtime
"My bubble sort took 5 seconds on my laptop!"
Why this doesn't work:
- Different on every computer
- Changes with CPU load
- Varies with programming language
- Depends on input data
- Can't predict scaling
Good Way: Big O Notation
"My algorithm is O(n²)"
Why this works:
- Platform-independent
- Language-independent
- Describes growth rate
- Predicts performance at scale
- Universally understood
Understanding Growth Rates: A Visual Analogy
Imagine you're running different types of businesses. As you get more customers (n), how does your workload grow? These three businesses show the three main complexity patterns:
Vending Machine
Pressing a button takes the same time whether you're the 1st or 1 millionth customer today. Constant work, no matter how many users.
return arr[0]; // Always 1 step
Custom T-Shirts
Each customer needs 1 custom shirt. 10 customers = 10 shirts to make. 1000 customers = 1000 shirts. Work grows directly with customers.
for (item in list) // n iterations
Handshake Event
Everyone shakes hands with everyone else. 10 people = 45 handshakes. 100 people = 4,950 handshakes! Work explodes with growth.
for i: for j // n × n times
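The three growth patterns above can be tallied directly. A minimal sketch (class name is ours) counting the operations for n = 10 customers:

```java
public class BusinessOps {
    public static void main(String[] args) {
        int n = 10; // number of customers

        // Vending machine: O(1) - one button press, regardless of n
        int vending = 1;

        // Custom T-shirts: O(n) - one shirt per customer
        int shirts = 0;
        for (int i = 0; i < n; i++) shirts++;

        // Handshake event: every distinct pair shakes hands - n(n-1)/2 handshakes
        int handshakes = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) handshakes++;

        System.out.println(vending);    // 1
        System.out.println(shirts);     // 10
        System.out.println(handshakes); // 45 - matches "10 people = 45 handshakes"
    }
}
```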
Why "Worst-Case"? Understanding Upper Bounds
// Example: Linear Search
boolean contains(int[] arr, int target) {
    for (int num : arr) {
        if (num == target) {
            return true; // Found it!
        }
    }
    return false; // Not found
}
Let's analyze this linear search algorithm in all three scenarios:
- Best Case: target is the first element → 1 comparison → O(1)
- Average Case: target is somewhere in the middle → n/2 comparisons → O(n)
- Worst Case: target is last or doesn't exist → n comparisons → O(n)
Why Worst-Case Matters
- Guarantees: "This will NEVER be worse than O(n²)"
- Planning: Design for worst case, hope for average/best
- Safety: Production systems need reliability, not optimism
- Comparability: Standardized way to compare algorithms
What Big O Ignores
- Constants: O(2n) and O(n) are both "O(n)"
- Lower terms: O(n² + n) simplifies to O(n²)
- Hardware: Doesn't matter if you have fast/slow CPU
- Language: Java, Python, C++ all use same notation
The Three Asymptotic Notations
Big O has two siblings: Omega (Ω) and Theta (Θ). Together they give us the complete picture of algorithm performance.
| Notation | Name | Meaning | Use Case | Interview Frequency |
|---|---|---|---|---|
| O (Big O) | Upper Bound | Worst-case "will never be slower than this" | Most commonly used | 95% of questions |
| Ω (Omega) | Lower Bound | Best-case "will never be faster than this" | Less common, theoretical | 3% of questions |
| Θ (Theta) | Tight Bound | Exact bound "always this complexity" | Precise analysis | 2% of questions |
Detailed Breakdown of Each Notation
O - Big O
(Upper Bound)
Definition: "At worst, it will be this slow or faster."
Analogy: Like a speed limit sign. You won't go faster than 60 mph. (But you might go 30 mph!)
Example: Linear Search
O(n) - worst case: check all n elements
Even if you find it first try (O(1)), we still say it's O(n) because that's the upper bound guarantee.
Ω - Omega
(Lower Bound)
Definition: "At best, it will be this fast or slower."
Analogy: Minimum wage. You'll make at least $15/hour. (But could make $50!)
Example: Linear Search
Ω(1) - best case: find it immediately
Even if you have to check all n elements (worst case), we can say Ω(1) because that's the best possible outcome.
Θ - Theta
(Tight Bound)
Definition: "It will always be exactly this complexity."
Analogy: Fixed-price menu. $20 for dinner, no matter what you order.
Example: Print All Elements
Θ(n) - always check all n elements
Can't skip elements. Must print all. Best, worst, average all = n operations. This is Θ(n).
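A minimal sketch of that Θ(n) case - printing can't skip any element, so best, worst, and average are all exactly n operations:

```java
public class PrintAll {
    // Best, worst, and average case all touch every element → Θ(n)
    static void printAll(int[] arr) {
        for (int num : arr) {
            System.out.println(num);
        }
    }

    public static void main(String[] args) {
        printAll(new int[]{1, 2, 3});
    }
}
```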
Complete Example: Understanding All Three
// Linear Search in an array
boolean linearSearch(int[] arr, int target) {
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == target) {
            return true;
        }
    }
    return false;
}
// Analysis:
// Best case (Ω): Target is arr[0] → 1 comparison → Ω(1)
// Worst case (O): Target is arr[n-1] or missing → n comparisons → O(n)
// Average case (Θ): On average, find at position n/2 → Θ(n)
Why We Usually Just Say O(n)
In practice, we care about the guarantee - the worst that can happen. Saying "this is O(n)" tells your team and interviewer: "I guarantee this won't be slower than linear time."
The Key Insight: Big O is a promise, not a measurement. You're telling others what to expect in the worst case.
When to Use All Three
Academic papers, algorithm research, and advanced interviews might ask for all three. For example:
QuickSort Example:
O(n²) worst case | Θ(n log n) average | Ω(n log n) best
In everyday analysis, saying "O(1) best case, O(n) worst case" is technically correct but usually unnecessary - the worst-case guarantee is the number people care about.
Big O Rules - The 4 Golden Rules
Rule 1 Drop Constants
"The 2× doesn't matter at scale" - We care about growth PATTERN, not exact multipliers.
// ============================================
// RULE 1: Drop Constants (The "Ignore Multipliers" Rule)
// ============================================
// WHY? Because we care about growth PATTERN, not exact speed.
// O(2n) and O(n) both grow linearly - the "2" doesn't change the pattern.
O(2n) → O(n) // "Twice as fast" still grows linearly
O(100n) → O(n) // Even 100x faster is still linear
O(500) → O(1) // 500 operations is still "constant" (doesn't grow)
O(n/2) → O(n) // Half the work is still linear
O(3n + 5) → O(n) // The 3 and 5 don't matter at infinity
Why Constants Don't Matter
As n approaches infinity, constant multipliers become negligible.
- When n = 1,000,000: Whether it's 2n or 100n, both are in the millions - same order of magnitude
- The difference between 2 million and 100 million is huge in absolute terms
- But both are still "linear" - they grow at the same rate
- When n doubles to 2,000,000: Both also double. That's the pattern we care about!
Real-World Example
Two cars traveling at 60 mph vs 120 mph. Different speeds, but both grow linearly with time. In Big O terms, both are O(n) where n = hours traveled.
Rule 1 in Action: Detailed Examples
// Example 1: Multiple operations in sequence
void processArray(int[] arr) {
    // Step 1: Print all elements
    for (int num : arr) { // O(n)
        System.out.println(num);
    }
    // Step 2: Print all elements AGAIN
    for (int num : arr) { // O(n)
        System.out.println(num);
    }
}
// Total: O(n) + O(n) = O(2n) → Simplifies to O(n)
Step by Step Analysis
We loop through the array twice. That's 2n operations total.
Why We Drop the 2
- At scale, whether you loop twice or once doesn't change the fundamental behavior
- If input doubles from 1000 to 2000, both versions take proportionally longer
- The growth rate is the same
- We're not optimizing exact speed here - we're classifying growth patterns
// Example 2: Constant operations
void printFirst5(int[] arr) {
    System.out.println(arr[0]); // Operation 1
    System.out.println(arr[1]); // Operation 2
    System.out.println(arr[2]); // Operation 3
    System.out.println(arr[3]); // Operation 4
    System.out.println(arr[4]); // Operation 5
}
// Total: 5 operations = O(5) → Simplifies to O(1)
Why O(5) Becomes O(1)
These 5 operations happen regardless of array size:
- Array of 10? Still 5 prints
- Array of 1 million? Still 5 prints
The Key Insight
- The number of operations doesn't grow with input - that's constant time
- Whether it's 1, 5, or 500 fixed operations, they all simplify to O(1) because they don't scale with n
Rule 2 Drop Non-Dominant Terms
"The biggest term wins" - When n gets huge, smaller terms become insignificant.
// ============================================
// RULE 2: Drop Non-Dominant Terms (The "Biggest Wins" Rule)
// ============================================
// WHY? When n gets huge, smaller terms become insignificant.
// If n = 1,000,000: n² = 1,000,000,000,000 but n = just 1,000,000
O(n² + n) → O(n²) // n² crushes n when n is large
O(n + log n) → O(n) // n grows faster than log n
O(n³ + n² + n + 1) → O(n³) // Only the biggest term matters
O(n + 1000000) → O(n) // Even a million is nothing compared to n→∞
O(n log n + n) → O(n log n) // n log n dominates plain n
Visual Proof: n = 1,000
n² = 1,000,000 while n = 1,000. The total, 1,001,000 ≈ 1,000,000 - the n term barely adds anything!
At Scale: n = 1,000,000
n² = 1,000,000,000,000 (1 trillion) while n = 1,000,000 (1 million). The n term is 0.0001% of the total - completely negligible!
// Example 3: Combo of operations
void processData(int[] arr) {
    // Part 1: Single loop (O(n))
    for (int num : arr) {
        System.out.println(num);
    }
    // Part 2: Nested loops (O(n²))
    for (int i = 0; i < arr.length; i++) {
        for (int j = 0; j < arr.length; j++) {
            System.out.println(arr[i] + arr[j]);
        }
    }
    // Part 3: Constant operations (O(1))
    System.out.println("Done!");
}
// Total: O(n) + O(n²) + O(1) = O(n² + n + 1) → O(n²)
Breaking It Down
- Single loop: visits each element once → n operations
- Nested loops: visit all pairs → n × n = n² operations
- Constant: one print statement → 1 operation
Total: n² + n + 1
Simplification Logic
n² dominates everything. Let's prove it with real numbers: at n = 1000, n² = 1,000,000 while n + 1 is just 1,001. We keep only the dominant term: O(n²).
Rule 3 Different Inputs = Different Variables
"Be specific about WHICH input" - Can't call everything 'n' if there are multiple inputs!
// ============================================
// RULE 3: Different Inputs = Different Variables
// ============================================
// WHY? If arrays have different sizes, you can't call both "n"!
// Be specific about WHICH input you're measuring.
void processTwo(int[] users, int[] products) {
    for (int u : users) { }    // O(u) where u = users.length
    for (int p : products) { } // O(p) where p = products.length
    // Total: O(u + p), NOT O(n) - they're independent!
}
// WRONG: "This is O(2n)" - which n? users or products?
// RIGHT: "This is O(u + p)" - clear and precise!
Why This Matters
Imagine users has 100 items and products has 1,000,000 items:
Wrong Way
Saying "O(n)". Which n - users or products? Ambiguous and imprecise.
Right Way
Saying "O(u + p)". Here that's O(100 + 1,000,000) = O(1,000,100). Clear and precise.
Don't be tempted to simplify u + p down to just p.
They're independent variables - we must keep both!
Pro Tip for Interviews
Always clarify your variables explicitly: "Let u = users.length and p = products.length. Then this is O(u + p)."
This shows precise analytical thinking!
// Example 4: Nested loops with different arrays
void findPairs(int[] arrA, int[] arrB) {
    for (int a : arrA) {     // Runs a times (a = arrA.length)
        for (int b : arrB) { // For EACH a, runs b times (b = arrB.length)
            System.out.println(a + ", " + b);
        }
    }
}
// Total: O(a × b), NOT O(n²)!
Analysis
- Loop over arrA: runs a times (a = arrA.length)
- Loop over arrB: runs b times for EACH a (b = arrB.length)
- Total operations: a × b → O(a × b)
Why NOT O(n²)?
O(n²) implies both dimensions are the same size (n × n), but a and b could be completely different! Example: if a = 10 and b = 1000, a × b = 10,000 while a² would be only 100 - a 100x difference!
Write it as O(a × b) or O(ab) - this clearly shows the relationship between both inputs.
Rule 4 Nested Loops Multiply
"Loops inside loops multiply" - The inner loop runs COMPLETELY for EACH iteration of the outer loop.
// ============================================
// RULE 4: Nested Loops Multiply (The "Multiplication" Rule)
// ============================================
// WHY? The inner loop runs COMPLETELY for EACH iteration of outer loop.
// 10 outer × 10 inner = 100 total operations
for (int i = 0; i < n; i++) {     // Runs n times
    for (int j = 0; j < n; j++) { // For EACH i, runs n times
        System.out.println(i + j); // Total: n × n = n² operations!
    }
}
// Triple nested? n × n × n = O(n³) - gets slow VERY fast!
Key Insight
The inner loop doesn't just run n times total. It runs n times FOR EACH ITERATION of the outer loop: for i = 0, j runs 0 to n-1 (n times); for i = 1, j runs 0 to n-1 again (n times); and so on for every value of i. Total: n (outer) × n (inner) = n² operations. It's NOT n + n (addition) - it's n × n (multiplication)!
Visual Proof: Single vs Nested Loops
Let's see why nested loops multiply. If n = 4:
Single Loop - O(n): just 4 steps for 4 elements. Simple!
Nested Loops - O(n²): i=0 → j=0,1,2,3 (4 ops); i=1 → j=0,1,2,3 (4 ops); i=2 → j=0,1,2,3 (4 ops); i=3 → j=0,1,2,3 (4 ops). Total: 4 × 4 = 16 steps!
The gap widens dramatically as n grows!
// Example 5: Tricky nested loop (not starting at 0)
void printPairs(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        for (int j = i + 1; j < arr.length; j++) { // j starts at i+1
            System.out.println(arr[i] + ", " + arr[j]);
        }
    }
}
// Still O(n²)! Here's why:
Common Interview Trap
"The inner loop doesn't run n times, it starts at i+1! Is it still O(n²)?"
Let's Count the Iterations
- i = 0: j runs 1 to n-1 → n-1 times
- i = 1: j runs 2 to n-1 → n-2 times
- i = 2: j runs 3 to n-1 → n-3 times
- ... continues down to 1 time
Total Operations
(n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2 = (n² - n)/2
Simplification (Apply the Rules!)
(n² - n)/2 → drop the 1/2 constant → n² - n → drop the non-dominant n → O(n²)
Roughly n²/2 operations in absolute terms, but after dropping constants it's still O(n²).
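You can verify the n(n-1)/2 count empirically. A small sketch (names are ours) that counts the iterations of the j = i + 1 loop and checks it against the formula:

```java
public class PairCount {
    // Counts iterations of the "j starts at i+1" nested loop
    static int countPairIterations(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(countPairIterations(n)); // 45
        System.out.println(n * (n - 1) / 2);        // 45 - the formula agrees
    }
}
```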
Rules Summary: Quick Reference Cards
Rule 1: Drop Constants
O(5n) → O(n) • O(100) → O(1)
Constants don't affect the growth pattern. O(2n) and O(n) both grow linearly - the "2" is irrelevant when n→∞.
Rule 2: Drop Non-Dominant Terms
O(n² + n) → O(n²) • O(n + log n) → O(n)
The biggest term wins! When n = 1,000,000: n² = 1 trillion, but n = just 1 million. Keep only the dominant term.
Rule 3: Different Inputs = Different Variables
processTwo(arrA, arrB) → O(a + b)
Can't call everything 'n'! If processing two arrays of different sizes, use O(a + b) or O(a × b), not ambiguous O(n).
Rule 4: Nested Loops Multiply
O(n) × O(n) = O(n²) • O(a) × O(b) = O(ab)
Loops inside loops multiply, not add! The inner loop runs completely for EACH iteration of the outer loop.
Master These Rules!
90% of complexity analysis boils down to applying these 4 rules correctly:
- Rule 1: O(5n) = O(n), O(200) = O(1)
- Rule 2: Keep only the fastest-growing term
- Rule 3: Use a, b, m, n - be specific!
- Rule 4: Inner × Outer, not Inner + Outer
Practice Questions: Big O Notation
Task: Apply the simplification rules to express O(3n² + 2n + 100) in its simplest Big O form.
Hint: Remember rules about constants and dominant terms.
Show Solution
Answer: O(n²)
Step 1: Drop constants → n² + n + 1
Step 2: Drop non-dominant terms (n² dominates n and 1) → O(n²)
void mystery(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        System.out.println(arr[i]); // O(1)
    }
    for (int i = 0; i < arr.length; i++) {
        for (int j = 0; j < arr.length; j++) {
            System.out.println(arr[i] + arr[j]); // O(1)
        }
    }
}
Task: Determine the overall time complexity.
Show Solution
Answer: O(n²)
First loop: O(n)
Second nested loop: O(n²)
Total: O(n) + O(n²) = O(n²)
Rule 2: Drop non-dominant terms. n² dominates n.
void process(int[] arr1, int[] arr2) {
    for (int i = 0; i < arr1.length; i++) {
        System.out.println(arr1[i]);
    }
    for (int j = 0; j < arr2.length; j++) {
        System.out.println(arr2[j]);
    }
}
Task: Express the time complexity using appropriate variables.
Show Solution
Answer: O(a + b) where a = arr1.length, b = arr2.length
Rule 3: Different inputs = different variables. We can't say O(n) because the arrays may have different sizes!
void matchPairs(String[] names, int[] ids) {
    for (int i = 0; i < names.length; i++) {
        for (int j = 0; j < ids.length; j++) {
            System.out.println(names[i] + ": " + ids[j]);
        }
    }
}
Task: Determine the time complexity. Is it O(n²)?
Show Solution
Answer: O(n × m) where n = names.length, m = ids.length
Rule 3 + Rule 4! It's NOT O(n²) because the arrays are different. Outer loop runs n times, inner runs m times. If n=1000 and m=10, that's only 10,000 operations, not 1,000,000!
Task: Order these complexities from fastest (best) to slowest (worst):
O(n log n), O(1), O(n²), O(log n), O(2ⁿ), O(n)
Show Solution
From fastest to slowest:
- O(1) - Constant (instant)
- O(log n) - Logarithmic (very fast)
- O(n) - Linear (scales well)
- O(n log n) - Linearithmic (efficient sorting)
- O(n²) - Quadratic (slows down quickly)
- O(2ⁿ) - Exponential (unusable for large n)
Common Time Complexities
From fastest to slowest (best to worst). Memorize this hierarchy! Green = Great, Blue = Good, Yellow = Be Careful, Red = Avoid if possible.
The "Million Item Test"
How many operations for 1,000,000 items?
- O(1): 1
- O(log n): ~20
- O(n): 1,000,000
- O(n log n): ~20,000,000
- O(n²): 1,000,000,000,000 (1 trillion)
- O(2ⁿ): astronomically large - effectively impossible
The Complexity Hierarchy
| Complexity | Name | Example Algorithm | n=1000 | n=1,000,000 |
|---|---|---|---|---|
| O(1) | Constant | Array access, HashMap lookup | 1 | 1 |
| O(log n) | Logarithmic | Binary search | ~10 | ~20 |
| O(n) | Linear | Linear search, single loop | 1,000 | 1,000,000 |
| O(n log n) | Linearithmic | Merge sort, Quick sort | ~10,000 | ~20,000,000 |
| O(n²) | Quadratic | Nested loops, Bubble sort | 1,000,000 | 1,000,000,000,000 |
| O(n³) | Cubic | Triple nested loops | 1,000,000,000 | 10¹⁸ |
| O(2ⁿ) | Exponential | Recursive fibonacci, subsets | 10³⁰⁰ (astronomical) | Impossible |
| O(n!) | Factorial | Permutations | Impossible | Impossible |
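The n=1,000,000 column can be reproduced with a few lines of arithmetic - a quick sketch (class name is ours):

```java
public class MillionItemTest {
    public static void main(String[] args) {
        long n = 1_000_000L;
        long logN = Math.round(Math.log(n) / Math.log(2)); // log₂(1,000,000) ≈ 20

        System.out.println(1L);       // O(1): 1 operation
        System.out.println(logN);     // O(log n): ~20
        System.out.println(n);        // O(n): 1,000,000
        System.out.println(n * logN); // O(n log n): ~20,000,000
        System.out.println(n * n);    // O(n²): 1,000,000,000,000
    }
}
```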
Detailed Breakdown of Each Complexity
O(1) Constant Time - The Holy Grail
The best possible complexity. Operations that take the same time regardless of input size.
// Example 1: Array Access
int getFirstElement(int[] arr) {
    return arr[0]; // Direct memory access
}

// Example 2: HashMap Get
Integer getValue(HashMap<String, Integer> map, String key) {
    return map.get(key); // Hash lookup (average case)
}

// Example 3: Mathematical Operation
int addTwo(int a, int b) {
    return a + b; // Single arithmetic operation
}
Why O(1)?
Same work whether 10 or 10 million elements.
- Array access (arr[0]): calculates the memory address instantly: base + 0 × size
- HashMap lookup (map.get(key)): hash code → bucket lookup - a fixed number of steps
- Math operations (a + b): CPU instructions execute in fixed cycles
O(log n) Logarithmic Time - Divide and Conquer
Each step eliminates a large portion of the remaining data. Extremely efficient!
// Example 1: Binary Search
int binarySearch(int[] arr, int target) {
    int left = 0, right = arr.length - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) return mid;    // Found!
        if (arr[mid] < target) left = mid + 1; // Search right half
        else right = mid - 1;                  // Search left half
    }
    return -1; // Not found
}

// Example 2: Height of a balanced tree, found by
// walking down a single path from root to leaf
int treeHeight(TreeNode root) {
    int height = 0;
    while (root != null) {
        root = root.left; // Descend one level
        height++;
    }
    return height;
}
// In a balanced tree with n nodes, a root-to-leaf path has ~log(n) nodes → O(log n)
// (Recursively visiting every node would compute the height too, but in O(n) time.)
Why O(log n)?
Each iteration cuts the problem size in half!
Halving 1,000,000 Items
You can halve a million items only ~20 times before reaching 1!
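You can confirm the "~20 times" claim directly. A small sketch (names are ours) that halves n until one item remains:

```java
public class Halving {
    // Number of times n can be halved (integer division) before reaching 1
    static int halvings(long n) {
        int count = 0;
        while (n > 1) {
            n /= 2;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // ⌊log₂(1,000,000)⌋ = 19 integer halvings - about 20, as claimed
        System.out.println(halvings(1_000_000L)); // 19
    }
}
```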
O(n) Linear Time - The Workhorse
Visit every element exactly once. The most common complexity you'll encounter.
// Example 1: Find Maximum
int findMax(int[] arr) {
    int max = arr[0];
    for (int i = 1; i < arr.length; i++) {
        if (arr[i] > max) max = arr[i];
    }
    return max;
}

// Example 2: Count Occurrences
int countOccurrences(int[] arr, int target) {
    int count = 0;
    for (int num : arr) {
        if (num == target) count++;
    }
    return count;
}

// Example 3: Build Frequency Map
Map<Integer, Integer> countFrequency(int[] arr) {
    Map<Integer, Integer> freq = new HashMap<>();
    for (int num : arr) {                             // O(n) loop
        freq.put(num, freq.getOrDefault(num, 0) + 1); // O(1) per put
    }
    return freq; // Total: n × O(1) = O(n)
}
Why O(n)?
Must examine every element - can't skip any!
- Finding max: need to check all elements - the last one could be the largest!
- Counting: can't know the total without checking each element
- Frequency map: every element needs to be counted once
O(n log n) Linearithmic Time - The Optimal Sort
The sweet spot for efficient sorting. Faster than O(n²) but slower than O(n).
// Example 1: Merge Sort
void mergeSort(int[] arr, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        // Divide (creates log n levels)
        mergeSort(arr, left, mid);       // Sort left half
        mergeSort(arr, mid + 1, right);  // Sort right half
        // Conquer (n work per level)
        merge(arr, left, mid, right);    // Merge sorted halves
    }
}

// Example 2: Quick Sort (average case)
void quickSort(int[] arr, int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high); // O(n) partitioning
        quickSort(arr, low, pi - 1);        // Recursively sort left
        quickSort(arr, pi + 1, high);       // Recursively sort right
    }
}
Why O(n log n)?
Divide-and-conquer creates log n levels, each doing n work
Merge Sort Breakdown (n = 8)
Splitting 8 elements in half repeatedly gives log₂8 = 3 levels, and each level does ~8 operations of merge work. Total: n × log n = 8 × log₂8 = 8 × 3 ≈ 24-32 operations.
Why it's the best we can do for comparison sorts: Mathematically proven that any comparison-based sorting algorithm requires at least O(n log n) comparisons in the worst case! This is a theoretical lower bound.
Real-world analogy: Tournament bracket. 16 teams? Need 4 rounds (log₂16 = 4), with 8+4+2+1 = 15 total games (≈ n).
O(n²) Quadratic Time - The Danger Zone
Nested loops. Fine for small inputs, catastrophic for large ones.
// Example 1: Bubble Sort
void bubbleSort(int[] arr) {
    int n = arr.length;
    for (int i = 0; i < n; i++) {         // n iterations
        for (int j = 0; j < n - 1; j++) { // n-1 iterations each
            if (arr[j] > arr[j + 1]) {
                // Swap
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}

// Example 2: Check for Duplicates (Brute Force)
boolean hasDuplicateBrute(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        for (int j = i + 1; j < arr.length; j++) {
            if (arr[i] == arr[j]) return true;
        }
    }
    return false;
}

// Example 3: Print All Pairs
void printAllPairs(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        for (int j = 0; j < arr.length; j++) {
            System.out.println("(" + arr[i] + ", " + arr[j] + ")");
        }
    }
}
Why O(n²)?
Inner loop runs n times FOR EACH iteration of outer loop. That's n × n = n² operations!
The Problem with O(n²)
Doubling the input quadruples the work: 1,000 elements means 1,000,000 operations, and 10,000 elements already means 100,000,000.
When O(n²) is Acceptable
For small, bounded inputs (a few hundred elements), the simplicity of a nested loop is often worth the cost.
Real-world analogy: Everyone shakes hands with everyone at a party!
Time Complexity Examples
O(1) Constant Time
int getFirst(int[] arr) {
    return arr[0]; // Single operation
}
We access the first element directly using index 0. The array stores elements in contiguous memory locations, so the computer can calculate the exact memory address instantly using: base_address + (index × element_size).
Whether the array has 5 elements or 5 million elements, this takes the exact same time - just one memory lookup. No loops, no searching, no comparisons needed.
This is why arrays are so powerful for random access! Any element can be retrieved in constant time if you know its index.
Think of it like: Opening a book directly to page 1. It doesn't matter if the book has 100 pages or 10,000 pages - you just flip to that page instantly!
O(log n) Logarithmic Time
int binarySearch(int[] arr, int target) {
    int left = 0, right = arr.length - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;   // Find middle
        if (arr[mid] == target) return mid;    // Found it!
        if (arr[mid] < target) left = mid + 1; // Search right half
        else right = mid - 1;                  // Search left half
    }
    return -1; // Not found
}
Each iteration eliminates HALF the remaining elements. This is the power of binary search - we don't check every element, we strategically divide the search space.
Watch the elimination: 1000 → 500 → 250 → 125 → 62 → 31 → 15 → 7 → 3 → 1. Only ~10 steps for 1000 elements! For 1 million elements? Just ~20 steps!
Key line: mid = left + (right - left) / 2 finds the middle index safely (avoids integer overflow). We compare target with the middle element, then discard the irrelevant half.
⚠️ Prerequisite: The array MUST be sorted! Binary search only works on sorted data because we rely on the ordering to know which half to eliminate.
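In Java, the standard library covers this sort-then-search pattern directly. A minimal sketch using java.util.Arrays (the sample data is ours):

```java
import java.util.Arrays;

public class SortThenSearch {
    public static void main(String[] args) {
        int[] data = {42, 7, 19, 3, 88};
        Arrays.sort(data);                       // O(n log n) preprocessing → {3, 7, 19, 42, 88}
        int idx = Arrays.binarySearch(data, 19); // O(log n) lookup on the sorted array
        System.out.println(idx);                 // 2 - index of 19 in the sorted array
    }
}
```

This is exactly the trade-off mentioned earlier: pay O(n log n) once to sort, then every subsequent lookup costs only O(log n).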
O(n) Linear Time
int sum(int[] arr) {
    int total = 0;
    for (int num : arr) {
        total += num; // Visit each element once
    }
    return total;
}
The for-each loop visits every element exactly once. No element is skipped, no element is visited twice - it's a single pass through the entire array.
The relationship is perfectly proportional: 1,000 elements = 1,000 additions. 1 million elements = 1 million additions. Double the input? Double the time.
This is the most common complexity you'll encounter. Anytime you need to examine, process, or transform every element in a collection, you get O(n).
Examples: finding maximum value, calculating sum, printing all elements, searching for a specific value (linear search), copying an array.
O(n log n) Linearithmic Time
void mergeSort(int[] arr, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(arr, left, mid);      // Sort left half
        mergeSort(arr, mid + 1, right); // Sort right half
        merge(arr, left, mid, right);   // Merge sorted halves
    }
}
Divide and Conquer in action! The algorithm recursively splits the array in half until we have single elements, then merges them back in sorted order.
The split creates log n levels (halving each time), and at each level we do n work to merge everything. That's why it's n × log n.
Example with n = 8: Level 1: merge all 8 elements. Level 2: merge 4+4 elements. Level 3: merge 2+2+2+2 elements. That's 3 levels (log₂8 = 3), each doing ~8 operations = 24 total operations.
This is the theoretical optimal complexity for comparison-based sorting. You cannot sort faster than O(n log n) using comparisons - it's mathematically proven!
O(n²) Quadratic Time
void bubbleSort(int[] arr) {
int n = arr.length;
for (int i = 0; i < n; i++) { // Outer: n times
for (int j = 0; j < n - 1; j++) { // Inner: n times each
if (arr[j] > arr[j + 1]) {
// Swap adjacent elements
int temp = arr[j];
arr[j] = arr[j + 1];
arr[j + 1] = temp;
}
}
}
}
Nested loops are the signature of O(n²). The outer loop runs n times, and for EACH outer iteration, the inner loop ALSO runs n times.
Total operations: n × n = n² comparisons. The time doesn't just double when input doubles - it quadruples!
The scaling problem: 100 elements = 10,000 ops (fast). 1,000 elements = 1,000,000 ops (noticeable). 10,000 elements = 100,000,000 ops (painfully slow!).
That's why bubble sort is only used for tiny arrays or educational purposes. For real sorting, use O(n log n) algorithms like merge sort or quicksort.
O(2ⁿ) Exponential Time
int fib(int n) {
if (n <= 1) return n; // Base case
return fib(n - 1) + fib(n - 2); // Two recursive calls!
}
Each function call spawns TWO more calls. This creates a binary tree of calls that doubles at every level - the most dangerous growth pattern!
The problem: we're recalculating the same values over and over. fib(5) calls fib(3) twice, fib(2) three times, and so on. Massive redundant work!
The explosion: fib(5) = ~15 calls. fib(10) = ~177 calls. fib(30) = over 2 million calls. fib(50) would take minutes even on fast hardware, and fib(80) literally years!
Solution: Use memoization (cache results) or dynamic programming (build bottom-up) to reduce complexity to O(n). This is a classic interview optimization!
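As a quick preview of that fix, here is a minimal memoized sketch (class and variable names are illustrative): caching each result means every fib(n) is computed only once, collapsing the exponential call tree into a linear one.

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemoDemo {
    // Cache of already-computed Fibonacci values
    static Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                        // Base cases
        if (memo.containsKey(n)) return memo.get(n); // Reuse cached result: O(1)
        long result = fib(n - 1) + fib(n - 2);       // Each n is computed only once
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025, instantly instead of minutes
    }
}
```

With the cache in place, fib(50) makes on the order of 50 calls instead of tens of billions.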
Constant
No matter if the array has 10 or 10 million elements, getting the first element always takes the same time.
Logarithmic
Binary search cuts the problem in half each time. For 1000 elements, only ~10 comparisons needed!
Linear
Visit each element exactly once. Double the input? Double the time. Proportional and predictable.
Linearithmic
The sweet spot for sorting! Merge sort divides (log n levels) and conquers (n work per level).
Quadratic
Bubble sort compares every pair - that's n × n comparisons. 1000 elements = 1 million operations!
Exponential
Each call spawns TWO more calls. For fib(30), that's over 2 million calls! Use memoization to fix.
Advanced Time Complexity Patterns
Let's explore more complex scenarios where the complexity isn't immediately obvious.
Pattern 1: Amortized O(1) - ArrayList Add
// ArrayList.add() - usually O(1), sometimes O(n)
ArrayList<Integer> list = new ArrayList<>(); // Initial capacity = 10
for (int i = 0; i < 100; i++) {
list.add(i); // Amortized O(1)
}
// What's happening internally:
// Adds 1-10: O(1) each - array has space
// Add 11: O(n) - must resize! Copy 10 elements to new array (size 20)
// Adds 12-20: O(1) each
// Add 21: O(n) - resize again! Copy 20 elements to new array (size 40)
// And so on...
Amortized Analysis: why is this O(1)? Resizes happen exponentially less frequently. Over n adds, the total work is 1+1+...+2+1+...+4+1+...+8+... ≈ 2n operations, so 2n / n = 2 operations per add = O(1) amortized!
Key Lesson: "Amortized" analysis spreads cost over many operations. Even though some ops are expensive, on average they're cheap! This is why ArrayList is so efficient despite occasional resizing.
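The 2n bound is easy to verify by counting copies directly. This sketch simulates the doubling strategy (it mimics, rather than inspects, ArrayList's internal resizing) and tallies every element copied during resizes across n appends:

```java
public class AmortizedDemo {
    // Simulate appending n elements to a doubling array and return
    // the total number of element copies caused by resizes.
    static long copiesFor(int n) {
        int capacity = 10;        // mimic an initial capacity of 10
        int size = 0;
        long copies = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;   // resize: copy every existing element
                capacity *= 2;    // double the capacity
            }
            size++;               // the append itself is O(1)
        }
        return copies;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long copies = copiesFor(n);
        // copies stays below 2n, so the average cost per append is a small constant
        System.out.println("appends: " + n + ", resize copies: " + copies
                + ", copies per append: " + (double) copies / n);
    }
}
```

For one million appends the simulation counts about 1.3 million copies, comfortably under the 2n bound.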
Pattern 2: Multiple Loops - Add vs Multiply
// Scenario A: Sequential loops (ADD)
void processData(int[] arr1, int[] arr2) {
for (int x : arr1) { } // O(n)
for (int y : arr2) { } // O(m)
}
// Total: O(n + m) - different inputs, add them
// Scenario B: Nested loops (MULTIPLY)
void processDataNested(int[] arr1, int[] arr2) {
for (int x : arr1) { // O(n)
for (int y : arr2) { // O(m) for EACH x
}
}
}
// Total: O(n × m) - for each of n, do m work
Key Distinction: ADD vs MULTIPLY
Sequential loops (loop 1, then loop 2): do n operations, then m operations → n + m total.
Nested loops (loop 2 inside loop 1): for each of the n outer operations, do m inner operations → n × m total.
Interview Tip: Always clarify: "Are the inputs the same size?" If n = m, then O(n + m) collapses to O(n) and O(n × m) becomes O(n²).
Pattern 3: Recursion with Multiple Branches
// How many leaves in a complete binary tree of height h?
int countLeaves(TreeNode root, int height) {
if (height == 0) return 1; // Leaf node
int left = countLeaves(root.left, height - 1);
int right = countLeaves(root.right, height - 1);
return left + right;
}
// Time complexity analysis:
// Level 0: 1 call
// Level 1: 2 calls (each parent calls 2 children)
// Level 2: 4 calls (2² calls)
// Level 3: 8 calls (2³ calls)
// ...
// Level h: 2^h calls
// Total calls = 1 + 2 + 4 + 8 + ... + 2^h
// = 2^(h+1) - 1
// = O(2^h) where h is height
// If tree has n nodes: h = log(n), so O(2^log n) = O(n)
Recursive Complexity Formula
Total Complexity = (Number of Calls) × (Work per Call)
Number of Calls: how many times does the function call itself?
Work Per Call: how much work (excluding the recursive calls) happens in each call?
For binary recursion (2 calls per function): O(2^h) calls at depth h, which is O(n) calls when the recursion tree has n total nodes.
Master this pattern! It appears everywhere: tree traversal, divide and conquer, backtracking.
Pattern 4: Logarithmic Recursion Depth
// Binary search recursively
int binarySearchRecursive(int[] arr, int target, int left, int right) {
if (left > right) return -1; // Base case: not found
int mid = left + (right - left) / 2;
if (arr[mid] == target) return mid;
if (arr[mid] < target)
return binarySearchRecursive(arr, target, mid + 1, right);
else
return binarySearchRecursive(arr, target, left, mid - 1);
}
// Recursion depth analysis:
// Each call cuts search space in half
// n → n/2 → n/4 → n/8 → ... → 1
// How many times can you halve n before reaching 1?
// Answer: log₂(n) times
//
// Time: O(log n) - max log n recursive calls, O(1) work per call
// Space: O(log n) - call stack depth = log n frames
Recognizing O(log n) Recursion
Space Complexity Trap! Iterative binary search = O(1) space. Recursive binary search = O(log n) space.
Recursive uses O(log n) space due to call stack frames. This is why iterative is often preferred for binary search!
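Here is what that preferred iterative form looks like, as a sketch (class name is illustrative): the same halving logic, but a while loop replaces the recursion, so no stack frames accumulate.

```java
public class BinarySearchIterative {
    // Same halving logic as the recursive version, but iterative:
    // auxiliary space is O(1) because no call stack builds up.
    static int binarySearch(int[] arr, int target) {
        int left = 0, right = arr.length - 1;
        while (left <= right) {
            int mid = left + (right - left) / 2;    // overflow-safe middle
            if (arr[mid] == target) return mid;
            if (arr[mid] < target) left = mid + 1;  // discard left half
            else right = mid - 1;                   // discard right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        System.out.println(binarySearch(sorted, 23)); // prints 5
        System.out.println(binarySearch(sorted, 4));  // prints -1
    }
}
```

Time stays O(log n); only the O(log n) call-stack cost disappears.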
Pattern 5: String Concatenation Trap
// SLOW: O(n²) time!
String result = "";
for (int i = 0; i < n; i++) {
result += "a"; // Creates new string each time!
}
// Why O(n²)?
// Iteration 1: Copy 0 chars, add 1 = 1 operation
// Iteration 2: Copy 1 char, add 1 = 2 operations
// Iteration 3: Copy 2 chars, add 1 = 3 operations
// ...
// Iteration n: Copy n-1 chars, add 1 = n operations
// Total: 1+2+3+...+n = n(n+1)/2 = O(n²)
// FAST: O(n) time!
StringBuilder sb = new StringBuilder();
for (int i = 0; i < n; i++) {
sb.append("a"); // Appends to existing buffer, amortized O(1)
}
String result = sb.toString(); // O(n) final conversion
// Total: n × O(1) + O(n) = O(n)
String vs StringBuilder: The Hidden Cost
Why Strings are expensive: O(n²). Each += creates a NEW string, copying all the old characters.
Why StringBuilder is fast: O(n). It appends to an existing, resizable buffer.
Rule of Thumb: Building a string in a loop? Use StringBuilder! Never use += in a loop for strings!
Practice Questions: Time Complexity
int findMax(int[] arr) {
int max = arr[0];
for (int i = 1; i < arr.length; i++) {
if (arr[i] > max) {
max = arr[i];
}
}
return max;
}
Task: Determine the time and space complexity of this function.
Show Solution
Time: O(n) - Single loop through all n elements
Space: O(1) - Only one variable 'max' regardless of input size
Why? We visit each element exactly once (linear), and use constant extra space (just the 'max' variable).
void logLoop(int n) {
for (int i = 1; i < n; i = i * 2) {
System.out.println(i);
}
}
Task: What is the time complexity? How many times does the loop execute for n = 16?
Show Solution
Time: O(log n)
For n = 16: i goes 1, 2, 4, 8 → 4 iterations (log₂(16) = 4; once i reaches 16, the condition i < n fails)
Key insight: Loop variable doubles each time. How many times can you double 1 before reaching n? That's log₂(n) times! This is the pattern behind binary search.
ArrayList<Integer> list = new ArrayList<>();
for (int i = 0; i < n; i++) {
list.add(i); // What's the complexity of this single operation?
}
Task: What is the amortized time complexity of a single add() operation? What about the worst case?
Show Solution
Amortized: O(1) per add
Worst case: O(n) when resizing is needed
Amortized analysis: Most adds are O(1), but occasionally we need to resize (copy all elements). Over n operations, resizing happens log(n) times, copying 1+2+4+8+...+n ≈ 2n elements total. So 2n work / n operations = O(1) amortized per operation!
int power(int base, int exp) {
if (exp == 0) return 1;
if (exp % 2 == 0) {
int half = power(base, exp / 2);
return half * half;
} else {
return base * power(base, exp - 1);
}
}
Task: Determine time and space complexity. Why is this faster than naive multiplication?
Show Solution
Time: O(log n) where n = exp
Space: O(log n) - stack depth = number of recursive calls
Fast exponentiation! Instead of multiplying n times (O(n)), we halve the problem at each even step. 2^16 takes only 5 calls, not 16. This pattern appears frequently in interview problems!
// Method A
String result = "";
for (int i = 0; i < n; i++) {
result += "a";
}
// Method B
StringBuilder sb = new StringBuilder();
for (int i = 0; i < n; i++) {
sb.append("a");
}
String result = sb.toString();
Task: Compare the time complexity of both methods. Which should you use?
Show Solution
Method A: O(n²) - Each += creates a new string, copying all previous characters!
Method B: O(n) - StringBuilder appends to existing buffer
Always use StringBuilder in loops! Method A does 1+2+3+...+n = n(n+1)/2 character copies = O(n²). For n=10,000, that's 50 million operations vs just 10,000!
Space Complexity
What is Space Complexity?
Imagine you're packing for a trip. Time complexity is how long packing takes. Space complexity is how many suitcases you need!
In programming, space = memory. Every variable, array, object, and recursive call uses memory. Running out of memory = your program crashes. So we need to track how much memory grows with input size.
Two types of space:
1. Input Space - memory for the input data (usually not counted)
2. Auxiliary Space - EXTRA memory your algorithm needs (this is what we measure!)
O(1) Constant Space - In-Place
void swap(int[] arr, int i, int j) {
int temp = arr[i]; // Only ONE extra variable
arr[i] = arr[j];
arr[j] = temp;
}
We only use ONE extra variable (temp) which takes just 4 bytes. Whether the array has 10 or 10 million elements, we only need this single temporary variable.
The array itself is input space (not counted in auxiliary space analysis). We're modifying the existing data structure, not creating new ones.
This is called an "in-place" algorithm - we work directly on the input without allocating additional memory proportional to input size.
Examples: swap operations, in-place sorting (like insertion sort), two-pointer techniques, reversing arrays.
O(n) Linear Space - New Array
int[] copyArray(int[] arr) {
int[] copy = new int[arr.length]; // Allocate n integers
for (int i = 0; i < arr.length; i++) {
copy[i] = arr[i];
}
return copy;
}
The statement new int[arr.length] allocates n × 4 bytes of memory. This is auxiliary space that grows with input size.
If input is 1,000 integers, we allocate 1,000 more integers = 4 KB. If input is 1 million integers, we allocate 1 million more = 4 MB.
Memory grows proportionally with input size. Double the input? Double the memory needed.
Common scenarios: creating result arrays, storing intermediate computations, building new data structures from input.
O(n) Linear Space - Recursion Stack
int factorial(int n) {
if (n <= 1) return 1; // Base case
return n * factorial(n - 1); // Each call waits for the next
}
// factorial(5) creates this call stack:
// factorial(5) waiting...
// factorial(4) waiting...
// factorial(3) waiting...
// factorial(2) waiting...
// factorial(1) returns 1
Each recursive call creates a "stack frame" in memory containing local variables, parameters, and return address (~50-100 bytes each).
These frames stack up! factorial(5) creates 5 simultaneous frames in memory before any can be released.
The danger: factorial(10000) creates 10,000 frames and will likely cause StackOverflowError! The default stack size is limited (usually 512KB-1MB).
Solution: Convert deep recursion to iterative loops for O(1) space. Or use tail recursion optimization (when available).
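The iterative rewrite is a minimal sketch (class name illustrative): one accumulator and one loop counter, so space stays O(1) no matter how large n gets. Note that a long still overflows past n = 20, so BigInteger would be needed beyond that.

```java
public class IterativeFactorial {
    // O(1) space: no call stack growth, just two local variables.
    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i; // accumulate the product iteratively
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5));  // prints 120
        System.out.println(factorial(20)); // largest factorial that fits in a long
    }
}
```

Where the recursive version crashes with StackOverflowError for large n, this loop never grows the stack at all.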
O(n²) Quadratic Space - 2D Array
int[][] createMatrix(int n) {
return new int[n][n]; // n rows × n columns
}
// For n = 1000:
// 1000 × 1000 = 1,000,000 integers
// = 4,000,000 bytes = 4 MB
A 2D array of size n×n requires n rows × n columns = n² integers. Memory grows quadratically!
For n=100: 10,000 ints = 40 KB. For n=1,000: 1 million ints = 4 MB. For n=10,000: 100 million ints = 400 MB!
This grows very fast - be extremely careful with large matrices. A 100,000 × 100,000 matrix would need 40 GB of RAM!
Common in: dynamic programming tables, adjacency matrices for graphs, image processing, game boards.
Common Space Complexity Patterns
| Pattern | Space | Example | Memory for n=1000 |
|---|---|---|---|
| Few variables | O(1) | int temp, counter, sum | ~12 bytes (constant) |
| Iterative with pointers | O(1) | Two-pointer technique | ~8 bytes (constant) |
| New array of size n | O(n) | int[] copy = new int[n] | ~4 KB |
| Recursive calls (n deep) | O(n) | factorial(n), tree traversal | ~50-100 KB stack |
| HashMap with n entries | O(n) | Counting frequency | ~40 KB |
| 2D array n×n | O(n²) | int[][] matrix = new int[n][n] | ~4 MB |
Watch Out for Recursion! Each recursive call adds a "stack frame" to memory. A function that calls itself 10,000 times uses 10,000 stack frames. Go too deep? You get the dreaded StackOverflowError!
Advanced Space Complexity Examples
Example 1: Stack Frame Visualization - Understanding Recursive Memory
// Recursive function to calculate sum
int recursiveSum(int n) {
if (n == 0) return 0;
return n + recursiveSum(n - 1);
}
// Call recursiveSum(5):
//
// Memory Stack Visualization:
// ┌─────────────────────────────────┐
// │ recursiveSum(1) → n=1, return │ ← 5th call (deepest)
// ├─────────────────────────────────┤
// │ recursiveSum(2) → n=2, waiting │ ← 4th call
// ├─────────────────────────────────┤
// │ recursiveSum(3) → n=3, waiting │ ← 3rd call
// ├─────────────────────────────────┤
// │ recursiveSum(4) → n=4, waiting │ ← 2nd call
// ├─────────────────────────────────┤
// │ recursiveSum(5) → n=5, waiting │ ← 1st call (original)
// └─────────────────────────────────┘
//
// Each box is a "stack frame" containing:
// - Parameter values (n)
// - Local variables
// - Return address (where to go back)
// - Typically 50-100 bytes each
Space Complexity: O(n). At the deepest point, we have n stack frames simultaneously in memory!
Stack Frame Chain: recursiveSum(5) calls recursiveSum(4), which calls recursiveSum(3), and so on down to recursiveSum(0), the base case. All 6 frames (n = 5, 4, 3, 2, 1, 0) exist in memory at once! None can be released until the base case returns.
Contrast with Iterative Version
int iterativeSum(int n) {
int sum = 0; // Only 2 variables in memory!
for (int i = 1; i <= n; i++) {
sum += i;
}
return sum;
}
// Space complexity: O(1) - only 2 variables, no matter how big n is!
Key Lesson: Recursion is elegant but uses O(depth) space for the call stack. If possible, convert to iteration for O(1) space!
Example 2: Hidden Space in String Building
// Approach 1: String concatenation (HIDDEN O(n²) space!)
String buildString(int n) {
String result = "";
for (int i = 0; i < n; i++) {
result += "a"; // Each += creates a NEW string!
}
return result;
}
// Space complexity analysis:
// Iteration 1: Create new string of length 1 → 1 char allocated
// Iteration 2: Create new string of length 2 → 2 chars allocated (old 1 becomes garbage)
// Iteration 3: Create new string of length 3 → 3 chars allocated (old 2 becomes garbage)
// ...
// Total characters allocated across all iterations: 1+2+3+...+n = O(n²)!
//
// Even after garbage collection, we created O(n²) temporary objects
// Approach 2: StringBuilder (O(n) space)
String buildStringEfficient(int n) {
StringBuilder sb = new StringBuilder(n); // Pre-allocate size n
for (int i = 0; i < n; i++) {
sb.append("a"); // Append to existing buffer
}
return sb.toString();
}
// Space complexity: O(n) - one buffer of size n, that's it!
Why the Difference?
String += (Approach 1): O(n²) characters copied in total, because each += creates a brand new string by copying all existing characters plus the new one.
StringBuilder (Approach 2): O(n), because it has a resizable internal char array and just appends to the end.
Real impact for n = 10,000 characters: roughly 50 million character copies with +=, versus about 10,000 appends (plus occasional resize doubling) with StringBuilder. Thousands of times less work!
Example 3: HashSet vs Array - Space Trade-offs
// Problem: Remove duplicates from array and return new array
// Approach 1: Using HashSet (O(n) time, O(n) space)
int[] removeDuplicatesHashSet(int[] arr) {
Set<Integer> set = new HashSet<>(); // O(n) space worst case
for (int num : arr) {
set.add(num); // Stores each unique element
}
int[] result = new int[set.size()]; // O(k) space where k = unique count
int i = 0;
for (int num : set) {
result[i++] = num;
}
return result;
// Total space: O(n) for HashSet + O(k) for result ≤ O(n) + O(n) = O(n)
}
// Approach 2: Sort in-place (O(n log n) time, O(1) space if you can modify input)
int removeDuplicatesSorted(int[] arr) {
if (arr.length == 0) return 0;
Arrays.sort(arr); // O(log n) space for merge sort, but we can use O(1) heap sort
int j = 1; // Pointer for unique elements
for (int i = 1; i < arr.length; i++) {
if (arr[i] != arr[i - 1]) {
arr[j++] = arr[i]; // Overwrite duplicates in-place
}
}
return j; // Number of unique elements (arr[0..j-1] has uniques)
// Space: O(1) auxiliary - only 2 integer variables (i, j)
}
Space Analysis Comparison

| Approach | Time | Auxiliary Space | Total Memory | Preserves Order? |
|---|---|---|---|---|
| HashSet | O(n) | O(n) | O(n) + O(n) = O(n) | No (unordered) |
| Sort in-place | O(n log n) | O(1)* | O(1) | No (sorted) |

*Assuming heap sort or in-place quick sort. Merge sort would be O(n) space.
When to Choose Which?
HashSet: when speed matters most and you have memory available.
Sort in-place: when memory is very limited (embedded systems, huge datasets).
Key Insight: Sometimes we can trade time for space or space for time. Understand your constraints!
Example 4: Two Pointers - The O(1) Space Hero
// Problem: Reverse a string in-place
// GOOD: Two pointers O(1) space
void reverseStringInPlace(char[] s) {
int left = 0, right = s.length - 1; // Just 2 integer variables!
while (left < right) {
// Swap s[left] and s[right]
char temp = s[left]; // 1 char variable
s[left] = s[right];
s[right] = temp;
left++;
right--;
}
}
// Space: O(1) - only 3 variables (left, right, temp)
// No matter if array has 10 or 10 million characters!
// BAD: Creating new array O(n) space
char[] reverseStringNewArray(char[] s) {
char[] reversed = new char[s.length]; // O(n) space!
for (int i = 0; i < s.length; i++) {
reversed[i] = s[s.length - 1 - i];
}
return reversed;
}
// Space: O(n) - allocates a whole new array
Why Two Pointers are Powerful
Two-Pointer Pattern Recognition:
If you need to process elements from both ends or compare pairs, two pointers can often achieve O(1) space!
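Another instance of the same pattern, sketched here as a palindrome check (a hypothetical example, not from the text above): compare characters from both ends and walk inward, again with only two index variables.

```java
public class PalindromeCheck {
    // Two pointers meet in the middle: O(n) time, O(1) auxiliary space.
    static boolean isPalindrome(String s) {
        int left = 0, right = s.length() - 1;
        while (left < right) {
            if (s.charAt(left) != s.charAt(right)) return false;
            left++;   // walk inward from the front
            right--;  // walk inward from the back
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("racecar")); // prints true
        System.out.println(isPalindrome("java"));    // prints false
    }
}
```

No reversed copy of the string is ever built, which is exactly what the O(n)-space approach above pays for.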
Example 5: Dynamic Programming - Trading Space for Speed
// Problem: Nth Fibonacci number
// Approach 1: Naive recursion (O(2^n) time, O(n) space)
int fibRecursive(int n) {
if (n <= 1) return n;
return fibRecursive(n - 1) + fibRecursive(n - 2);
}
// Time: O(2^n) - exponential! fibRecursive(40) makes over 300 million calls!
// Space: O(n) - recursion depth at most n
// Approach 2: Memoization (O(n) time, O(n) space)
int fibMemo(int n, Map<Integer, Integer> memo) {
if (n <= 1) return n;
if (memo.containsKey(n)) return memo.get(n);
int result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
memo.put(n, result);
return result;
}
// Time: O(n) - each number computed once, then cached
// Space: O(n) for HashMap + O(n) for recursion stack = O(n)
// Approach 3: Tabulation (O(n) time, O(n) space, no recursion)
int fibTable(int n) {
if (n <= 1) return n;
int[] dp = new int[n + 1];
dp[0] = 0;
dp[1] = 1;
for (int i = 2; i <= n; i++) {
dp[i] = dp[i - 1] + dp[i - 2];
}
return dp[n];
}
// Time: O(n) - single loop
// Space: O(n) - array of size n+1
// Approach 4: Optimized iteration (O(n) time, O(1) space!)
int fibOptimized(int n) {
if (n <= 1) return n;
int prev2 = 0, prev1 = 1; // Only track last 2 values!
for (int i = 2; i <= n; i++) {
int current = prev1 + prev2;
prev2 = prev1;
prev1 = current;
}
return prev1;
}
// Time: O(n) - single loop
// Space: O(1) - only 3 variables!
Evolution of space complexity:
| Approach | Time | Space | Memory for n=40 | Notes |
|---|---|---|---|---|
| Naive recursion | O(2^n) | O(n) | ~2 KB stack | SLOW - 102M calls! |
| Memoization | O(n) | O(n) | ~2 KB stack + ~1.6 KB map | Fast, uses memory |
| Tabulation | O(n) | O(n) | ~164 bytes array | Iterative, cleaner |
| Optimized | O(n) | O(1) | ~12 bytes (3 ints) | Best space! |
Key Lesson: DP uses O(n) space to reduce O(2ⁿ) time to O(n) time. But since we only need the last 2 values, not the whole array, we can achieve O(n) time with O(1) space!
Pattern: When DP solution only depends on last k values, you can optimize from O(n) space to O(k) space!
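As a hypothetical illustration of the same trick with k = 3, consider a "tribonacci" recurrence (each term is the sum of the previous three; this example is not from the text above): three rolling variables replace the whole DP array.

```java
public class Tribonacci {
    // T(0)=0, T(1)=1, T(2)=1, T(n)=T(n-1)+T(n-2)+T(n-3)
    // Only the last k = 3 values are kept: O(n) time, O(1) space.
    static long trib(int n) {
        if (n == 0) return 0;
        if (n <= 2) return 1;
        long a = 0, b = 1, c = 1; // T(n-3), T(n-2), T(n-1)
        for (int i = 3; i <= n; i++) {
            long next = a + b + c; // recurrence on the 3-value window
            a = b;                 // slide the window forward
            b = c;
            c = next;
        }
        return c;
    }

    public static void main(String[] args) {
        System.out.println(trib(10)); // prints 149
    }
}
```

The rule of thumb: count how far back the recurrence reaches; that number k is all the state you need to keep.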
- Count all extra data structures: Arrays, HashSets, HashMaps, stacks, queues
- Count recursion depth: Each recursive call = one stack frame
- Watch for hidden copies: String concatenation, array cloning
- Look for in-place opportunities: Two pointers, array modification
- Consider space-time trade-offs: HashMap for speed vs. sorting for space
Practice Questions: Space Complexity
void reverseArray(int[] arr) {
int left = 0, right = arr.length - 1;
while (left < right) {
int temp = arr[left];
arr[left] = arr[right];
arr[right] = temp;
left++;
right--;
}
}
Task: What is the space complexity? What makes this "in-place"?
Show Solution
Space: O(1)
Only 3 variables used: left, right, temp - regardless of array size!
In-place means we modify the original array without creating a new one. Two-pointer technique is a classic O(1) space pattern!
int factorial(int n) {
if (n <= 1) return 1;
return n * factorial(n - 1);
}
Task: What is the space complexity? What would factorial(10000) likely cause?
Show Solution
Space: O(n) - Call stack depth = n
factorial(10000) would likely cause StackOverflowError!
Each recursive call creates a stack frame (~50-100 bytes). 10,000 frames can exceed Java's default stack size (512KB-1MB). Solution: Convert to iterative loop for O(1) space.
// Method A: HashSet
boolean hasDuplicateA(int[] arr) {
Set<Integer> seen = new HashSet<>();
for (int num : arr) {
if (!seen.add(num)) return true;
}
return false;
}
// Method B: Sort first
boolean hasDuplicateB(int[] arr) {
Arrays.sort(arr);
for (int i = 1; i < arr.length; i++) {
if (arr[i] == arr[i-1]) return true;
}
return false;
}
Task: Compare time and space complexity of both methods. When would you choose each?
Show Solution
Method A: Time O(n), Space O(n)
Method B: Time O(n log n), Space O(1)* (in-place sort)
Trade-off: HashSet is faster but uses more memory. Sorting is slower but space-efficient. Choose HashSet when speed matters and memory is available. Choose sorting when memory is limited or you can't modify the original array.
// Fibonacci with full DP array
int fibTable(int n) {
int[] dp = new int[n + 1];
dp[0] = 0; dp[1] = 1;
for (int i = 2; i <= n; i++) {
dp[i] = dp[i-1] + dp[i-2];
}
return dp[n];
}
// Optimized Fibonacci
int fibOptimized(int n) {
if (n <= 1) return n;
int prev2 = 0, prev1 = 1;
for (int i = 2; i <= n; i++) {
int curr = prev1 + prev2;
prev2 = prev1;
prev1 = curr;
}
return prev1;
}
Task: Compare space complexity. What's the key insight that allows optimization?
Show Solution
fibTable: O(n) space
fibOptimized: O(1) space
Key insight: We only need the last 2 values, not the entire array! When a DP solution only depends on the last k values, you can optimize from O(n) to O(k) space. This pattern applies to many DP problems!
int[][] createMatrix(int n) {
return new int[n][n];
}
// How much memory for n = 10,000?
Task: What is the space complexity? Calculate actual memory usage for n = 10,000.
Show Solution
Space: O(n²)
For n = 10,000:
- 10,000 × 10,000 = 100,000,000 integers
- 100M × 4 bytes = 400 MB!
Be careful with 2D arrays! O(n²) space grows very fast. A 100,000 × 100,000 matrix would need 40 GB of RAM!
Analyzing Code Step-by-Step
// Example: Find if array has duplicate
// Method 1: Brute Force - O(n²) time, O(1) space
boolean hasDuplicateBruteForce(int[] arr) {
for (int i = 0; i < arr.length; i++) { // n iterations
for (int j = i + 1; j < arr.length; j++) { // ~n iterations
if (arr[i] == arr[j]) return true;
}
}
return false;
}
// Method 2: Using HashSet - O(n) time, O(n) space
boolean hasDuplicateHashSet(int[] arr) {
Set<Integer> seen = new HashSet<>(); // O(n) space
for (int num : arr) { // O(n) iterations
if (!seen.add(num)) return true; // O(1) add + lookup
}
return false;
}
// Method 3: Sort First - O(n log n) time, O(1) space
boolean hasDuplicateSorted(int[] arr) {
Arrays.sort(arr); // O(n log n)
for (int i = 1; i < arr.length; i++) { // O(n)
if (arr[i] == arr[i - 1]) return true;
}
return false;
}
Three Approaches to the Same Problem
Brute Force
How: For each element, check ALL elements after it.
Why O(n²): Element 1 checks n-1, Element 2 checks n-2... Total ≈ n²/2
HashSet
How: Store seen elements. Check before adding.
Why O(n): HashSet ops are O(1). Do n times → O(n). Uses O(n) memory.
Sort First
How: Sort array. Duplicates now adjacent! Check neighbors.
Why: Sorting O(n log n) + check O(n) = O(n log n)
- Brute Force O(n²): Simple to write, but compares every pair. For 1000 elements = 500,000 comparisons!
- HashSet O(n): Fastest! Uses extra memory to track seen elements. Perfect when you have memory to spare.
- Sort First O(n log n): Middle ground. No extra space (sorts in-place), but modifies the original array and isn't as fast as HashSet.
Comparison Table
| Method | Time | Space | When to Use |
|---|---|---|---|
| Brute Force | O(n²) | O(1) | Small arrays, no extra memory |
| HashSet | O(n) | O(n) | Best for speed, memory available |
| Sorting | O(n log n) | O(1)* | Already sorted or can modify array |
Deep Dive: 15 More Code Analysis Examples
Let's analyze various code patterns you'll encounter. For each example, identify: time complexity, space complexity, and why.
Example 1 Contains Duplicate Value
// Check if array contains any duplicate values
boolean containsDuplicate(int[] nums) {
Set<Integer> seen = new HashSet<>();
for (int num : nums) {
if (seen.contains(num)) {
return true;
}
seen.add(num);
}
return false;
}
- Single loop iterates through all n elements
- HashSet.contains() and add() are O(1) on average
- Total: n iterations × O(1) = O(n)
- HashSet stores up to n elements (all unique)
- If duplicates found early, uses less space
- Worst case is still O(n)
Key Insight: Trading space for time. Without the HashSet, we'd need O(n²) nested loops. The HashSet gives us O(1) lookup, making the whole algorithm O(n)!
Example 2 First Unique Character
// Find index of first non-repeating character
int firstUniqChar(String s) {
// Step 1: Count frequencies
Map<Character, Integer> freq = new HashMap<>();
for (char c : s.toCharArray()) { // O(n)
freq.put(c, freq.getOrDefault(c, 0) + 1);
}
// Step 2: Find first unique
for (int i = 0; i < s.length(); i++) { // O(n)
if (freq.get(s.charAt(i)) == 1) {
return i;
}
}
return -1;
}
- First loop: O(n) to build frequency map
- Second loop: O(n) to find first unique
- Total: O(n) + O(n) = O(2n) → O(n) (drop constant)
- HashMap stores at most 26 characters (lowercase)
- Or 128 if ASCII, or 256 if extended ASCII
- Map size bounded by alphabet (constant) → O(1)!
Surprising! Even though we use a HashMap, it's still O(1) space because the alphabet size is constant and doesn't grow with input. This is a common pattern in string problems!
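Since the alphabet is fixed, the HashMap can even be swapped for a plain count array. This sketch (assuming ASCII input, with an int[128] of counts) makes the O(1) space bound visually obvious:

```java
public class FirstUniqueChar {
    // Fixed-size count array: O(n) time, O(1) space (128 is a constant).
    static int firstUniqChar(String s) {
        int[] freq = new int[128];                // assumes ASCII input
        for (int i = 0; i < s.length(); i++) {
            freq[s.charAt(i)]++;                  // pass 1: count frequencies
        }
        for (int i = 0; i < s.length(); i++) {
            if (freq[s.charAt(i)] == 1) return i; // pass 2: first count-1 char
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(firstUniqChar("leetcode"));     // prints 0
        System.out.println(firstUniqChar("loveleetcode")); // prints 2
        System.out.println(firstUniqChar("aabb"));         // prints -1
    }
}
```

The array version also avoids autoboxing overhead, which is why it is the common choice for fixed-alphabet string problems.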
Example 3 Group Anagrams
// Group strings that are anagrams
List<List<String>> groupAnagrams(String[] strs) {
Map<String, List<String>> map = new HashMap<>();
for (String str : strs) { // O(n) strings
char[] chars = str.toCharArray(); // O(k)
Arrays.sort(chars); // O(k log k) where k = string length
String sorted = new String(chars); // O(k)
if (!map.containsKey(sorted)) {
map.put(sorted, new ArrayList<>());
}
map.get(sorted).add(str); // O(1)
}
return new ArrayList<>(map.values());
}
n = number of strings
k = maximum string length
- Outer loop: n iterations
- For each string: sort takes O(k log k)
- Total: n × (k log k) = O(n × k log k)
- HashMap stores all n strings
- Each sorted key up to k characters
- Result list stores all n strings → O(n × k)
Analysis Tip: When you have two variables (n and k), express complexity using both! Don't just say O(n) - be specific about which dimension.
Example 4 Product of Array Except Self
// Calculate product of all elements except current
int[] productExceptSelf(int[] nums) {
int n = nums.length;
int[] result = new int[n];
// Left products: result[i] = product of all to left of i
result[0] = 1;
for (int i = 1; i < n; i++) { // O(n)
result[i] = result[i - 1] * nums[i - 1];
}
// Right products: multiply by product of all to right
int rightProduct = 1;
for (int i = n - 1; i >= 0; i--) { // O(n)
result[i] *= rightProduct;
rightProduct *= nums[i];
}
return result;
}
- First loop: O(n) to calculate left products
- Second loop: O(n) to multiply by right products
- Total: O(n) + O(n) = O(n)
- Result array doesn't count (it's the output)
- Only extra variable: rightProduct (4 bytes)
- Constant auxiliary space → O(1)
Brilliant technique! Instead of dividing total product by current element (doesn't work with zeros), we calculate left and right products separately. Two passes = O(n), using result array for intermediate storage = O(1) extra space!
Example 5 Maximum Subarray Sum (Kadane's Algorithm)
// Find contiguous subarray with maximum sum
int maxSubArray(int[] nums) {
int maxSoFar = nums[0];
int maxEndingHere = nums[0];
for (int i = 1; i < nums.length; i++) { // O(n)
maxEndingHere = Math.max(nums[i], maxEndingHere + nums[i]);
maxSoFar = Math.max(maxSoFar, maxEndingHere);
}
return maxSoFar;
}
- Single pass through array
- Each iteration:
2 comparisons + arithmetic = O(1)
n × O(1) = O(n)
- Only 2 variables:
maxSoFar, maxEndingHere
- Constant space!
Kadane's Algorithm magic! Naive solution is O(n²) checking all subarrays. Kadane reduces to O(n) by maintaining running sum. Key insight: at each position, either extend previous subarray or start fresh!
Example 6 Spiral Matrix Traversal
// Print matrix in spiral order
List<Integer> spiralOrder(int[][] matrix) {
List<Integer> result = new ArrayList<>();
if (matrix.length == 0) return result;
int top = 0, bottom = matrix.length - 1;
int left = 0, right = matrix[0].length - 1;
while (top <= bottom && left <= right) { // O(m×n)
// Traverse right
for (int i = left; i <= right; i++) result.add(matrix[top][i]);
top++;
// Traverse down
for (int i = top; i <= bottom; i++) result.add(matrix[i][right]);
right--;
// Traverse left
if (top <= bottom) {
for (int i = right; i >= left; i--) result.add(matrix[bottom][i]);
bottom--;
}
// Traverse up
if (left <= right) {
for (int i = bottom; i >= top; i--) result.add(matrix[i][left]);
left++;
}
}
return result;
}
m = rows, n = columns
- Total cells =
m × n - Thus: O(m × n)
- Result list doesn't count (it's the output)
- Only 4 pointer variables:
top, bottom, left, right
Complex but optimal! Even though there are nested loops, each cell is visited exactly once. We cleverly shrink boundaries after each direction. Can't do better than O(m × n) since we must visit all cells!
Example 7 Rotate Matrix 90 Degrees
// Rotate n×n matrix 90 degrees clockwise in-place
void rotate(int[][] matrix) {
int n = matrix.length;
// Step 1: Transpose (swap across diagonal)
for (int i = 0; i < n; i++) { // O(n)
for (int j = i; j < n; j++) { // O(n) but j starts at i
int temp = matrix[i][j];
matrix[i][j] = matrix[j][i];
matrix[j][i] = temp;
}
}
// Step 2: Reverse each row
for (int i = 0; i < n; i++) { // O(n)
for (int j = 0; j < n / 2; j++) { // O(n/2)
int temp = matrix[i][j];
matrix[i][j] = matrix[i][n - 1 - j];
matrix[i][n - 1 - j] = temp;
}
}
}
- Step 1 Transpose:
n(n+1)/2 cells ≈ n²/2 → O(n²)
n × n/2 swaps → O(n²)
O(n²) + O(n²) = O(n²)
- In-place swaps, no extra arrays
- Only 1 variable:
temp - Constant space!
Clever two-step! Transpose flips over main diagonal, then reversing rows completes 90° rotation. This in-place solution beats creating a new matrix (which would be O(n²) space)!
Example 8 Longest Palindromic Substring
// Find longest palindromic substring
String longestPalindrome(String s) {
    if (s == null || s.length() < 1) return "";
    int start = 0, end = 0;
    for (int i = 0; i < s.length(); i++) { // O(n)
        int len1 = expandAroundCenter(s, i, i);     // Odd length
        int len2 = expandAroundCenter(s, i, i + 1); // Even length
        int len = Math.max(len1, len2);
        if (len > end - start) {
            start = i - (len - 1) / 2;
            end = i + len / 2;
        }
    }
    return s.substring(start, end + 1);
}

int expandAroundCenter(String s, int left, int right) {
    while (left >= 0 && right < s.length() &&
           s.charAt(left) == s.charAt(right)) { // O(n) worst case
        left--;
        right++;
    }
    return right - left - 1;
}
Time: O(n²)
- Outer loop: n iterations (one per center)
- expandAroundCenter: O(n) worst case
- Total: n × n = O(n²)

Space: O(1)
- Only a few pointer variables
- No extra data structures
- Constant space!

Better than O(n³)! The naive solution checks all O(n²) substrings and validates each palindrome in O(n), for O(n³) total. This "expand around center" approach reduces that to O(n²) by expanding outward from each center instead of checking all substrings!
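The even-length case (the `i, i + 1` call) is the part people most often get wrong, so here is a self-contained check (class name `PalindromeDemo` and the shortened helper name `expand` are ours) on a string whose longest palindrome sits between two characters:

```java
public class PalindromeDemo {
    static String longestPalindrome(String s) {
        if (s == null || s.length() < 1) return "";
        int start = 0, end = 0;
        for (int i = 0; i < s.length(); i++) {
            int len1 = expand(s, i, i);     // odd-length center at i
            int len2 = expand(s, i, i + 1); // even-length center between i and i+1
            int len = Math.max(len1, len2);
            if (len > end - start) {
                start = i - (len - 1) / 2;
                end = i + len / 2;
            }
        }
        return s.substring(start, end + 1);
    }

    static int expand(String s, int left, int right) {
        while (left >= 0 && right < s.length() && s.charAt(left) == s.charAt(right)) {
            left--;
            right++;
        }
        return right - left - 1; // length of the palindrome found
    }

    public static void main(String[] args) {
        System.out.println(longestPalindrome("cbbd"));    // bb (even-length center)
        System.out.println(longestPalindrome("racecar")); // racecar
    }
}
```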
Example 9 Three Sum
// Find all triplets that sum to zero
List<List<Integer>> threeSum(int[] nums) {
    List<List<Integer>> result = new ArrayList<>();
    Arrays.sort(nums); // O(n log n)
    for (int i = 0; i < nums.length - 2; i++) { // O(n)
        if (i > 0 && nums[i] == nums[i - 1]) continue; // Skip duplicates
        int left = i + 1, right = nums.length - 1;
        while (left < right) { // O(n) worst case
            int sum = nums[i] + nums[left] + nums[right];
            if (sum == 0) {
                result.add(Arrays.asList(nums[i], nums[left], nums[right]));
                while (left < right && nums[left] == nums[left + 1]) left++;
                while (left < right && nums[right] == nums[right - 1]) right--;
                left++;
                right--;
            } else if (sum < 0) {
                left++;
            } else {
                right--;
            }
        }
    }
    return result;
}
Time: O(n²)
- Sort: O(n log n)
- Outer loop: O(n), with an O(n) two-pointer scan per iteration
- Total: O(n log n) + O(n²) = O(n²)

Space: O(1) auxiliary (ignoring the output)
- Output: up to O(n²) triplets
- Sorting: O(log n) to O(n), depending on the implementation

Clever optimization! Brute force is O(n³), checking all triplets. By sorting first and using two pointers, we reduce to O(n²). The sorted array allows us to move pointers intelligently!
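The duplicate-skipping logic is worth exercising on the classic input, which contains a repeated -1. A runnable sketch (class name `ThreeSumDemo` is ours):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ThreeSumDemo {
    // Sort + two pointers, as analyzed above
    static List<List<Integer>> threeSum(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        Arrays.sort(nums);
        for (int i = 0; i < nums.length - 2; i++) {
            if (i > 0 && nums[i] == nums[i - 1]) continue; // skip duplicate anchors
            int left = i + 1, right = nums.length - 1;
            while (left < right) {
                int sum = nums[i] + nums[left] + nums[right];
                if (sum == 0) {
                    result.add(Arrays.asList(nums[i], nums[left], nums[right]));
                    while (left < right && nums[left] == nums[left + 1]) left++;
                    while (left < right && nums[right] == nums[right - 1]) right--;
                    left++;
                    right--;
                } else if (sum < 0) {
                    left++; // sum too small: move left pointer up
                } else {
                    right--; // sum too big: move right pointer down
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Duplicate -1 appears only once per role in the output
        System.out.println(threeSum(new int[]{-1, 0, 1, 2, -1, -4}));
        // [[-1, -1, 2], [-1, 0, 1]]
    }
}
```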
Example 10 Word Ladder (BFS)
// Find shortest transformation from beginWord to endWord
int ladderLength(String beginWord, String endWord, List<String> wordList) {
    Set<String> wordSet = new HashSet<>(wordList); // O(n)
    if (!wordSet.contains(endWord)) return 0;
    Queue<String> queue = new LinkedList<>();
    queue.offer(beginWord);
    int level = 1;
    while (!queue.isEmpty()) { // O(n) words
        int size = queue.size();
        for (int i = 0; i < size; i++) {
            String current = queue.poll();
            if (current.equals(endWord)) return level;
            char[] chars = current.toCharArray();
            for (int j = 0; j < chars.length; j++) { // O(L) word length
                char original = chars[j];
                for (char c = 'a'; c <= 'z'; c++) { // O(26) = O(1)
                    chars[j] = c;
                    String newWord = new String(chars);
                    if (wordSet.contains(newWord)) {
                        queue.offer(newWord);
                        wordSet.remove(newWord); // Mark visited
                    }
                }
                chars[j] = original; // Restore
            }
        }
        level++;
    }
    return 0;
}
Time: O(n × L²), where n = number of words and L = word length
- BFS visits each word at most once: O(n)
- Per word: L positions × 26 letter tries
- Each candidate string costs O(L) to build
- Total: O(n × L × 26 × L) → O(n × L²)

Space: O(n × L)
- HashSet stores n words of L characters each
- Queue can hold up to n words

BFS ensures the shortest path! We explore all words at distance 1, then distance 2, and so on, so the first time we reach endWord is guaranteed to be via a shortest transformation. The complexity looks scary but is optimal for this problem!
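Here is the method wrapped in a runnable harness (class name `LadderDemo` is ours) on the standard "hit" → "cog" example, where the shortest ladder has 5 words:

```java
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class LadderDemo {
    static int ladderLength(String beginWord, String endWord, List<String> wordList) {
        Set<String> wordSet = new HashSet<>(wordList);
        if (!wordSet.contains(endWord)) return 0;
        Queue<String> queue = new LinkedList<>();
        queue.offer(beginWord);
        int level = 1; // length of the ladder so far, counting beginWord
        while (!queue.isEmpty()) {
            int size = queue.size(); // process one BFS level at a time
            for (int i = 0; i < size; i++) {
                String current = queue.poll();
                if (current.equals(endWord)) return level;
                char[] chars = current.toCharArray();
                for (int j = 0; j < chars.length; j++) {
                    char original = chars[j];
                    for (char c = 'a'; c <= 'z'; c++) {
                        chars[j] = c;
                        String next = new String(chars);
                        if (wordSet.contains(next)) {
                            queue.offer(next);
                            wordSet.remove(next); // mark visited
                        }
                    }
                    chars[j] = original; // restore
                }
            }
            level++;
        }
        return 0;
    }

    public static void main(String[] args) {
        List<String> words = List.of("hot", "dot", "dog", "lot", "log", "cog");
        System.out.println(ladderLength("hit", "cog", words)); // 5: hit→hot→dot→dog→cog
    }
}
```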
Practice Questions: Code Analysis
// Method A: Brute Force
int[] twoSumBrute(int[] nums, int target) {
    for (int i = 0; i < nums.length; i++) {
        for (int j = i + 1; j < nums.length; j++) {
            if (nums[i] + nums[j] == target) {
                return new int[] {i, j};
            }
        }
    }
    return new int[] {-1, -1};
}

// Method B: HashMap
int[] twoSumHash(int[] nums, int target) {
    Map<Integer, Integer> map = new HashMap<>();
    for (int i = 0; i < nums.length; i++) {
        int complement = target - nums[i];
        if (map.containsKey(complement)) {
            return new int[] {map.get(complement), i};
        }
        map.put(nums[i], i);
    }
    return new int[] {-1, -1};
}
Task: Analyze both methods' time and space complexity. What's the trade-off between them?
Solution:
Method A (Brute Force):
Time: O(n²) - Nested loops, n*(n-1)/2 comparisons
Space: O(1) - Only uses loop variables
Method B (HashMap):
Time: O(n) - Single pass through array
Space: O(n) - HashMap stores up to n elements
Why? Classic time-space trade-off! We trade O(n) extra space to reduce time from O(n²) to O(n). For n=10,000: Brute Force does ~50M operations, HashMap does ~10K operations - HashMap is 5,000x faster!
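Both methods return the same answer; only the work done differs. A self-contained comparison harness (class name `TwoSumDemo` is ours):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class TwoSumDemo {
    // O(n²) time, O(1) space
    static int[] twoSumBrute(int[] nums, int target) {
        for (int i = 0; i < nums.length; i++) {
            for (int j = i + 1; j < nums.length; j++) {
                if (nums[i] + nums[j] == target) return new int[]{i, j};
            }
        }
        return new int[]{-1, -1};
    }

    // O(n) time, O(n) space
    static int[] twoSumHash(int[] nums, int target) {
        Map<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            Integer j = map.get(target - nums[i]); // complement lookup, O(1) average
            if (j != null) return new int[]{j, i};
            map.put(nums[i], i);
        }
        return new int[]{-1, -1};
    }

    public static void main(String[] args) {
        int[] nums = {2, 7, 11, 15};
        System.out.println(Arrays.toString(twoSumBrute(nums, 9))); // [0, 1]
        System.out.println(Arrays.toString(twoSumHash(nums, 9)));  // [0, 1]
    }
}
```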
boolean isValid(String s) {
    Stack<Character> stack = new Stack<>();
    for (char c : s.toCharArray()) {
        if (c == '(' || c == '[' || c == '{') {
            stack.push(c);
        } else {
            if (stack.isEmpty()) return false;
            char top = stack.pop();
            if (c == ')' && top != '(') return false;
            if (c == ']' && top != '[') return false;
            if (c == '}' && top != '{') return false;
        }
    }
    return stack.isEmpty();
}
Task: Determine time and space complexity. Explain why a stack is the right data structure for this problem.
Solution:
Time: O(n) - Single pass through string, each character processed exactly once. Stack operations (push/pop) are O(1).
Space: O(n) - Worst case: a string of all opening brackets "(((((...", where the stack ends up holding all n characters.
Why? Matching brackets follow LIFO (Last-In-First-Out) order! The most recent opening bracket MUST match the next closing bracket - this is exactly what a stack does.
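A runnable version of the checker (class name `BracketsDemo` is ours; we also swap `Stack` for `ArrayDeque`, the stack implementation recommended in modern Java, though the original `Stack` works too):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BracketsDemo {
    static boolean isValid(String s) {
        Deque<Character> stack = new ArrayDeque<>(); // push/pop are O(1)
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);
            } else {
                if (stack.isEmpty()) return false;
                char top = stack.pop();
                if (c == ')' && top != '(') return false;
                if (c == ']' && top != '[') return false;
                if (c == '}' && top != '{') return false;
            }
        }
        return stack.isEmpty(); // leftover openers mean invalid
    }

    public static void main(String[] args) {
        System.out.println(isValid("{[()]}")); // true: each close matches the latest open
        System.out.println(isValid("([)]"));   // false: ')' arrives while '[' is on top
    }
}
```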
int longestConsecutive(int[] nums) {
    Set<Integer> numSet = new HashSet<>();
    for (int num : nums) numSet.add(num);
    int longest = 0;
    for (int num : numSet) {
        if (!numSet.contains(num - 1)) { // Only start of sequence
            int currentNum = num;
            int streak = 1;
            while (numSet.contains(currentNum + 1)) {
                currentNum++;
                streak++;
            }
            longest = Math.max(longest, streak);
        }
    }
    return longest;
}
// Test case: nums = [100, 4, 200, 1, 3, 2]
// Expected output: 4 (sequence: 1, 2, 3, 4)
Task: This code has nested loops but is still O(n). Explain why it's NOT O(n²). What's the key optimization?
Solution:
Time: O(n) - NOT O(n²)! Each number is visited at most TWICE: once in outer loop, once in while loop.
Space: O(n) - HashSet stores all elements.
Why? The condition if (!numSet.contains(num - 1)) ensures we ONLY start counting from the beginning of a sequence. Numbers like 4, 3, 2 are skipped because they have predecessors. Total operations = 2n = O(n).
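A self-contained version with the test case from the prompt (class name `LongestRunDemo` is ours):

```java
import java.util.HashSet;
import java.util.Set;

public class LongestRunDemo {
    static int longestConsecutive(int[] nums) {
        Set<Integer> numSet = new HashSet<>();
        for (int num : nums) numSet.add(num);
        int longest = 0;
        for (int num : numSet) {
            if (!numSet.contains(num - 1)) { // only expand from sequence starts
                int current = num, streak = 1;
                while (numSet.contains(current + 1)) {
                    current++;
                    streak++;
                }
                longest = Math.max(longest, streak);
            }
        }
        return longest;
    }

    public static void main(String[] args) {
        // 2, 3, 4 are skipped as starts because 1, 2, 3 exist; only 1 seeds the walk
        System.out.println(longestConsecutive(new int[]{100, 4, 200, 1, 3, 2})); // 4
    }
}
```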
ListNode mergeKLists(ListNode[] lists) {
    PriorityQueue<ListNode> minHeap = new PriorityQueue<>(
        (a, b) -> Integer.compare(a.val, b.val) // unlike a.val - b.val, this can't overflow
    );
    for (ListNode list : lists) {
        if (list != null) minHeap.offer(list);
    }
    ListNode dummy = new ListNode(0);
    ListNode current = dummy;
    while (!minHeap.isEmpty()) {
        ListNode smallest = minHeap.poll();
        current.next = smallest;
        current = current.next;
        if (smallest.next != null) {
            minHeap.offer(smallest.next);
        }
    }
    return dummy.next;
}
// Given: k = 3 lists, N = total nodes across all lists
// lists[0]: 1 → 4 → 5
// lists[1]: 1 → 3 → 4
// lists[2]: 2 → 6
Task: Analyze time and space complexity in terms of N (total nodes) and k (number of lists). Compare with the naive approach.
Solution:
Time: O(N log k) - Each of N nodes added/removed from heap once. Heap operations take O(log k).
Space: O(k) - Heap holds at most k nodes (one from each list).
Why? Naive approach compares all k heads each time = O(N × k). Heap maintains sorted order efficiently. For k=100 lists, N=1M nodes: Naive does 100M ops, Heap does 7M ops - Heap is 14x faster!
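Since the original snippet assumes a `ListNode` class, here is a fully self-contained sketch (the class `MergeKDemo` and helpers `fromArray`/`render` are ours) merging the three example lists:

```java
import java.util.PriorityQueue;

public class MergeKDemo {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Build a list from values, e.g. fromArray(1, 4, 5) -> 1->4->5
    static ListNode fromArray(int... vals) {
        ListNode dummy = new ListNode(0), cur = dummy;
        for (int v : vals) { cur.next = new ListNode(v); cur = cur.next; }
        return dummy.next;
    }

    static String render(ListNode head) {
        StringBuilder sb = new StringBuilder();
        for (ListNode n = head; n != null; n = n.next) {
            if (sb.length() > 0) sb.append("->");
            sb.append(n.val);
        }
        return sb.toString();
    }

    static ListNode mergeKLists(ListNode[] lists) {
        PriorityQueue<ListNode> minHeap =
            new PriorityQueue<>((a, b) -> Integer.compare(a.val, b.val));
        for (ListNode list : lists) {
            if (list != null) minHeap.offer(list);
        }
        ListNode dummy = new ListNode(0), current = dummy;
        while (!minHeap.isEmpty()) {
            ListNode smallest = minHeap.poll(); // O(log k)
            current.next = smallest;
            current = current.next;
            if (smallest.next != null) minHeap.offer(smallest.next);
        }
        return dummy.next;
    }

    public static void main(String[] args) {
        ListNode[] lists = {fromArray(1, 4, 5), fromArray(1, 3, 4), fromArray(2, 6)};
        System.out.println(render(mergeKLists(lists))); // 1->1->2->3->4->4->5->6
    }
}
```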
int[] maxSlidingWindow(int[] nums, int k) {
    Deque<Integer> deque = new ArrayDeque<>(); // Stores indices
    int[] result = new int[nums.length - k + 1];
    for (int i = 0; i < nums.length; i++) {
        // Remove indices outside current window
        while (!deque.isEmpty() && deque.peek() < i - k + 1) {
            deque.poll();
        }
        // Remove smaller elements (they can never be max)
        while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) {
            deque.pollLast();
        }
        deque.offer(i);
        if (i >= k - 1) {
            result[i - k + 1] = nums[deque.peek()];
        }
    }
    return result;
}
// Test: nums = [1, 3, -1, -3, 5, 3, 6, 7], k = 3
// Output: [3, 3, 5, 5, 6, 7]
Task: This has nested while loops but is still O(n). Explain why. What is the "monotonic deque" pattern?
Solution:
Time: O(n) - NOT O(n × k)! Each element enters and exits the deque AT MOST ONCE. Total: 2n = O(n).
Space: O(k) - Deque holds at most k elements.
Why? Monotonic deque maintains elements in DECREASING order. Front = index of maximum. We remove smaller elements because they can NEVER be the max (a larger element to their right will always beat them). Naive = O(n×k), Monotonic Deque = O(n).
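The monotonic-deque invariant is easiest to believe after running it on the test case from the prompt. A self-contained harness (class name `WindowDemo` is ours):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class WindowDemo {
    static int[] maxSlidingWindow(int[] nums, int k) {
        Deque<Integer> deque = new ArrayDeque<>(); // indices; their values stay decreasing
        int[] result = new int[nums.length - k + 1];
        for (int i = 0; i < nums.length; i++) {
            // Drop indices that fell out of the window
            while (!deque.isEmpty() && deque.peek() < i - k + 1) deque.poll();
            // Drop smaller elements: a newer, larger element always beats them
            while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) deque.pollLast();
            deque.offer(i);
            if (i >= k - 1) result[i - k + 1] = nums[deque.peek()]; // front = window max
        }
        return result;
    }

    public static void main(String[] args) {
        int[] nums = {1, 3, -1, -3, 5, 3, 6, 7};
        System.out.println(Arrays.toString(maxSlidingWindow(nums, 3)));
        // [3, 3, 5, 5, 6, 7]
    }
}
```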
Key Takeaways & Cheat Sheet
Bookmark This Page!
Your go-to reference for coding interviews
These are the must-know concepts for every coding interview. Come back to this section whenever you need a quick refresher on time & space complexity!
Quick Complexity Cheat Sheet
- arr[i] → O(1) (direct access)
- for i in arr → O(n) (single loop)
- for i, for j → O(n²) (nested loops)
- i = i * 2 → O(log n) (halving/doubling)
The 4 Golden Rules
1. Drop constants: O(2n) → O(n), O(100) → O(1)
2. Drop lower-order terms: O(n² + n) → O(n²)
3. Different inputs get different variables: sequential work on two inputs is O(a + b), not O(n)
4. Nested work multiplies: O(n) × O(m) = O(nm)