Master Theorem
Divide-and-Conquer Recurrences
Many divide-and-conquer algorithms follow the same pattern: split the input into smaller pieces, solve each piece recursively, and combine the results. This shared structure means their running times satisfy recurrences of the same shape.
A divide-and-conquer recurrence has the form:
\[T(n) = aT\!\left(\left\lceil n/b \right\rceil\right) + \Theta(n^d)\]
where:
- \(a \geq 1\) is the number of subproblems (the branching factor),
- \(b > 1\) is the factor by which the input shrinks at each level,
- \(d \geq 0\) is the exponent of the non-recursive cost: the divide and combine steps take \(\Theta(n^d)\) time.
Three forces compete: the branching factor \(a\) creates more work at each level, the shrinkage factor \(b\) makes each subproblem cheaper, and the non-recursive cost \(n^d\) sets the price of splitting and merging. The Master Theorem tells us which force wins.
The Recursion Tree
The key to understanding the Master Theorem is to unroll the recurrence into a recursion tree and tally the work level by level.
The tree has depth \(\log_b n\): each level divides the problem by \(b\), so after \(\log_b n\) levels the subproblems hit the base case. At level \(i\) there are \(a^i\) subproblems, each of size \(n/b^i\), so each does \((n/b^i)^d\) non-recursive work. The total work at level \(i\) is therefore:
\[W(i) = a^i \cdot \left(\frac{n}{b^i}\right)^{\!d} = n^d \cdot \left(\frac{a}{b^d}\right)^{\!i}\]
This is a geometric series with common ratio \(r = a/b^d\): moving one level deeper multiplies the total work by \(r\). Whether the series is decreasing, constant, or increasing determines the sum:
- \(r < 1\) (equivalently \(d > \log_b a\)): work shrinks going deeper — the root dominates.
- \(r = 1\) (equivalently \(d = \log_b a\)): work is constant at every level — balanced.
- \(r > 1\) (equivalently \(d < \log_b a\)): work grows going deeper — the leaves dominate.
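The three regimes can be checked numerically. The sketch below (with parameters chosen purely for illustration) tabulates the per-level work \(W(i) = n^d (a/b^d)^i\) for one sample recurrence in each regime:

```python
def level_work(a, b, d, n, levels=4):
    """Work W(i) = n^d * (a/b^d)^i at levels 0..levels-1 of the recursion tree."""
    return [n**d * (a / b**d) ** i for i in range(levels)]

n = 1024
print(level_work(a=2, b=2, d=2, n=n))  # r = 1/2 < 1: work shrinks, root dominates
print(level_work(a=2, b=2, d=1, n=n))  # r = 1: every level contributes equally
print(level_work(a=3, b=2, d=1, n=n))  # r = 3/2 > 1: work grows, leaves dominate
```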
The Three Cases
If \(T(n) = aT\!\left(\left\lceil n/b \right\rceil\right) + \Theta(n^d)\) where \(a \geq 1\), \(b > 1\), and \(d \geq 0\), then:
\[T(n) = \begin{cases} \Theta(n^d), & \text{if } d > \log_b a \\ \Theta(n^d \log n), & \text{if } d = \log_b a \\ \Theta(n^{\log_b a}), & \text{if } d < \log_b a \end{cases}\]
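The case analysis mechanizes directly: compare \(d\) against the critical exponent \(\log_b a\). The helper below is a hypothetical illustration, not a standard library function:

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Θ(n^d), assuming a >= 1, b > 1, d >= 0,
    and return the resulting Θ bound as a string."""
    crit = math.log(a, b)            # critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"Θ(n^{d} log n)"     # Case 2: balanced
    if d > crit:
        return f"Θ(n^{d})"           # Case 1: root-dominated
    return f"Θ(n^{crit:.2f})"        # Case 3: leaf-dominated

print(master_theorem(2, 2, 1))  # merge sort  → balanced case
print(master_theorem(7, 2, 2))  # Strassen's  → leaf-dominated case
```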
Case 1: Root-Dominated (\(d > \log_b a\))
The ratio \(r = a/b^d < 1\), so work shrinks geometrically down the tree. The geometric series is dominated by its first term — the root level’s \(\Theta(n^d)\) work. The deeper levels are negligible. This case is relatively rare in practice, since it requires the non-recursive divide-and-combine step to be more expensive than the recursive calls.
Case 2: Balanced (\(d = \log_b a\))
The ratio \(r = 1\), so every level contributes exactly \(\Theta(n^d)\) work. With \(\log_b n\) levels the total is \(\Theta(n^d \log n)\). This is the most common case in practice — merge sort, binary search, closest pair, and FFT all fall here.
Case 3: Leaf-Dominated (\(d < \log_b a\))
The ratio \(r = a/b^d > 1\), so work grows geometrically down the tree. The geometric series is dominated by its last term. The \(n^{\log_b a}\) leaves, each doing \(\Theta(1)\) work, determine the total: \(\Theta(n^{\log_b a})\). This arises when the branching factor is aggressive relative to the problem shrinkage — algorithms like Karatsuba and Strassen’s fall here.
Examples
| Algorithm | Recurrence | \(a\) | \(b\) | \(d\) | \(\log_b a\) | Case | Result |
|---|---|---|---|---|---|---|---|
| Linear Select* | \[T(n)=T(7n/10)+T(n/5)+O(n)\] | — | — | 1 | — | 1 | \(\Theta(n)\) |
| Merge Sort | \[T(n)=2T(n/2)+\Theta(n)\] | 2 | 2 | 1 | 1 | 2 | \(\Theta(n\log n)\) |
| Binary Search | \[T(n)=T(n/2)+\Theta(1)\] | 1 | 2 | 0 | 0 | 2 | \(\Theta(\log n)\) |
| FFT | \[T(n)=2T(n/2)+\Theta(n)\] | 2 | 2 | 1 | 1 | 2 | \(\Theta(n\log n)\) |
| Closest Pair | \[T(n)=2T(n/2)+\Theta(n)\] | 2 | 2 | 1 | 1 | 2 | \(\Theta(n\log n)\) |
| Karatsuba | \[T(n)=3T(n/2)+\Theta(n)\] | 3 | 2 | 1 | 1.58 | 3 | \(\Theta(n^{\log_2 3})\) |
| Strassen’s | \[T(n)=7T(n/2)+\Theta(n^2)\] | 7 | 2 | 2 | 2.81 | 3 | \(\Theta(n^{\log_2 7})\) |
| Naive Matrix Mult. | \[T(n)=8T(n/2)+\Theta(n^2)\] | 8 | 2 | 2 | 3 | 3 | \(\Theta(n^3)\) |
Walkthrough: Merge Sort. We have \(a = 2\) subproblems, each of size \(n/b = n/2\), with \(\Theta(n^1)\) merge work so \(d = 1\). Since \(d = 1 = \log_2 2 = \log_b a\), this is Case 2: \(T(n) = \Theta(n \log n)\).
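A textbook merge sort makes the recurrence concrete: two recursive calls on halves, plus a linear-time merge.

```python
def merge_sort(xs):
    """Two subproblems of size n/2 plus Θ(n) merging: T(n) = 2T(n/2) + Θ(n)."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # Θ(n) combine step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```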
Walkthrough: Strassen’s. Here \(a = 7\), \(b = 2\), and the combine step costs \(\Theta(n^2)\) so \(d = 2\). We compare: \(\log_2 7 \approx 2.81 > 2 = d\), so this is Case 3: \(T(n) = \Theta(n^{\log_2 7}) \approx \Theta(n^{2.81})\).
Walkthrough: Binary Search. We recurse on \(a = 1\) subproblem of size \(n/2\), doing \(\Theta(1)\) comparison work so \(d = 0\). Since \(d = 0 = \log_2 1 = \log_b a\), this is Case 2: \(T(n) = \Theta(\log n)\).
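The same recurrence governs binary search whether written recursively or, as below, iteratively: each iteration halves the remaining range at constant cost.

```python
def binary_search(xs, target):
    """One subproblem of size n/2 plus Θ(1) work: T(n) = T(n/2) + Θ(1).
    Returns an index of target in the sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # Θ(1) comparison per halving
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```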
Linear Select (Median of Medians) doesn’t fit the standard Master Theorem form — it makes two recursive calls of different sizes: \(T(7n/10)\) and \(T(n/5)\). The result \(\Theta(n)\) follows from the observation that \(7n/10 + n/5 = 9n/10 < n\), so the total subproblem size shrinks geometrically. Formally this requires the Akra-Bazzi theorem, but the spirit is Case 1: the root’s linear work dominates.
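The geometric-shrinkage argument can be sanity-checked numerically. The sketch below is an empirical check, not a proof: it solves a discretized version of the recurrence exactly and watches the ratio \(T(n)/n\) stay bounded, consistent with \(\Theta(n)\).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Discretized T(n) = T(7n/10) + T(n/5) + n, with T(n) = 1 at the base."""
    if n <= 1:
        return 1
    return T(int(7 * n / 10)) + T(int(n / 5)) + n

for n in (10**3, 10**5, 10**6):
    print(n, T(n) / n)   # ratio stays bounded as n grows
```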
Reducing the branching factor from \(a = 8\) (naive) to \(a = 7\) (Strassen) changes \(\log_2 a\) from \(3\) to \(\approx 2.81\), yielding sub-cubic matrix multiplication. The same principle underlies Karatsuba: reducing integer multiplication’s branching from \(a = 4\) (schoolbook) to \(a = 3\) drops the exponent from \(2\) to \(\log_2 3 \approx 1.58\).
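Karatsuba's three-multiplication trick can be sketched directly. The version below is a didactic sketch for nonnegative integers, splitting by decimal digits; it makes the three recursive calls (rather than four) explicit.

```python
def karatsuba(x, y):
    """Three half-size multiplications instead of four: T(n) = 3T(n/2) + Θ(n),
    giving Θ(n^log2(3)) ≈ Θ(n^1.58)."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10**m)
    high_y, low_y = divmod(y, 10**m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # The third multiplication recovers the cross terms without a fourth call:
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10**(2 * m) + z1 * 10**m + z0
```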
Summary
The Master Theorem reduces divide-and-conquer analysis to a single question: is the geometric series of level-wise work increasing, constant, or decreasing? The ratio \(r = a/b^d\) encodes everything. If the root level dominates (\(r < 1\)), the answer is \(\Theta(n^d)\). If every level contributes equally (\(r = 1\)), the \(\log n\) levels give \(\Theta(n^d \log n)\). If the leaves dominate (\(r > 1\)), the answer is \(\Theta(n^{\log_b a})\). Knowing \(a\), \(b\), and \(d\) for any divide-and-conquer algorithm lets you read off the complexity immediately — no summation or substitution required.