| id | chapter | section | title | source_file | question_markdown | answer_markdown | code_blocks | has_images | image_refs |
|---|---|---|---|---|---|---|---|---|---|
01-1.1-1 | 01 | 1.1 | 1.1-1 | docs/Chap01/1.1.md | Give a real-world example that requires sorting or a real-world example that requires computing a convex hull. | - Sorting: listing the restaurants on NTU street in ascending order of price.
- Convex hull: computing the diameter of a set of points. | [] | false | [] |
01-1.1-2 | 01 | 1.1 | 1.1-2 | docs/Chap01/1.1.md | Other than speed, what other measures of efficiency might one use in a real-world setting? | Memory efficiency and coding efficiency. | [] | false | [] |
01-1.1-3 | 01 | 1.1 | 1.1-3 | docs/Chap01/1.1.md | Select a data structure that you have seen previously, and discuss its strengths and limitations. | Linked-list:
- Strengths: insertion and deletion.
- Limitations: random access. | [] | false | [] |
01-1.1-4 | 01 | 1.1 | 1.1-4 | docs/Chap01/1.1.md | How are the shortest-path and traveling-salesman problems given above similar? How are they different? | - Similar: finding path with shortest distance.
- Different: traveling-salesman has more constraints. | [] | false | [] |
01-1.1-5 | 01 | 1.1 | 1.1-5 | docs/Chap01/1.1.md | Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough. | - Best: finding the GCD of two positive integers.
- Approximately: finding a numerical solution of a differential equation. | [] | false | [] |
01-1.2-1 | 01 | 1.2 | 1.2-1 | docs/Chap01/1.2.md | Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved. | Driving navigation: the application runs a shortest-path algorithm over the road-network graph to compute a route from the user's location to the destination. | [] | false | [] |
01-1.2-2 | 01 | 1.2 | 1.2-2 | docs/Chap01/1.2.md | Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size $n$ , insertion sort runs in $8n^2$ steps, while merge sort runs in $64n\lg n$ steps. For which values of $n$ does insertion sort beat merge sort? | $$
\begin{aligned}
8n^2 & < 64n\lg n \\\\
n & < 8\lg n \\\\
2^n & < n^8 \\\\
2 \le n & \le 43.
\end{aligned}
$$ | [] | false | [] |
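The bound can be checked numerically. Below is a small Python sketch (the function names are illustrative, not from the text) that evaluates both step counts and collects the values of $n$ for which insertion sort wins:

```python
import math

def insertion_steps(n):
    return 8 * n * n               # insertion sort: 8n^2 steps

def merge_steps(n):
    return 64 * n * math.log2(n)   # merge sort: 64n lg n steps

# n for which insertion sort takes strictly fewer steps (start at 2, since lg 1 = 0).
wins = [n for n in range(2, 100) if insertion_steps(n) < merge_steps(n)]
print(wins[0], wins[-1])  # 2 43
```

Running it confirms that insertion sort beats merge sort exactly for $2 \le n \le 43$.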
01-1.2-3 | 01 | 1.2 | 1.2-3 | docs/Chap01/1.2.md | What is the smallest value of $n$ such that an algorithm whose running time is $100n^2$ runs faster than an algorithm whose running time is $2^n$ on the same machine? | $$
\begin{aligned}
100n^2 & < 2^n \\\\
n & \ge 15.
\end{aligned}
$$ | [] | false | [] |
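A quick brute-force scan confirms the crossover point; this is a minimal Python sketch, not part of the original answer:

```python
def smallest_crossover():
    # Smallest n with 100n^2 < 2^n.
    n = 1
    while 100 * n * n >= 2 ** n:
        n += 1
    return n

print(smallest_crossover())  # 15
```

At $n = 14$ we have $100 \cdot 196 = 19600 > 2^{14} = 16384$, while at $n = 15$, $22500 < 32768$.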
01-1-1 | 01 | 1-1 | 1-1 | docs/Chap01/Problems/1-1.md | For each function $f(n)$ and time $t$ in the following table, determine the largest size $n$ of a problem that can be solved in time $t$, assuming that the algorithm to solve the problem takes $f(n)$ microseconds. | $$
\begin{array}{cccccccc}
& \text{1 second} & \text{1 minute} & \text{1 hour} & \text{1 day} & \text{1 month} & \text{1 year} & \text{1 century} \\\\
\hline
\lg n & 2^{10^6} & 2^{6 \times 10^7} & 2^{3.6 \times 10^9} & 2^{8.64 \times 10^{10}} & 2^{2.59 \times... | [] | false | [] |
02-2.1-1 | 02 | 2.1 | 2.1-1 | docs/Chap02/2.1.md | Using Figure 2.2 as a model, illustrate the operation of $\text{INSERTION-SORT}$ on the array $A = \langle 31, 41, 59, 26, 41, 58 \rangle$. | 
The operation of $\text{INSERTION-SORT}$ on the array $A = \langle 31, 41, 59, 26, 41, 58 \rangle$. Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles.
(a)-(e) are iterations of the for loop of lines 1-8.
In each iteration, ... | [] | true | [
"../img/2.1-1-1.png"
] |
02-2.1-2 | 02 | 2.1 | 2.1-2 | docs/Chap02/2.1.md | Rewrite the $\text{INSERTION-SORT}$ procedure to sort into nonincreasing instead of nondecreasing order. | ```cpp
INSERTION-SORT(A)
for j = 2 to A.length
key = A[j]
i = j - 1
while i > 0 and A[i] < key
A[i + 1] = A[i]
i = i - 1
A[i + 1] = key
``` | [
{
"lang": "cpp",
"code": "INSERTION-SORT(A)\n for j = 2 to A.length\n key = A[j]\n i = j - 1\n while i > 0 and A[i] < key\n A[i + 1] = A[i]\n i = i - 1\n A[i + 1] = key"
}
] | false | [] |
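A direct 0-indexed Python translation of the nonincreasing pseudocode above (a sketch; the function name is mine):

```python
def insertion_sort_desc(a):
    """Sort list a in place into nonincreasing order."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift left-neighbors that are smaller than key one slot to the right.
        while i >= 0 and a[i] < key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort_desc([31, 41, 59, 26, 41, 58]))  # [59, 58, 41, 41, 31, 26]
```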
02-2.1-3 | 02 | 2.1 | 2.1-3 | docs/Chap02/2.1.md | Consider the **_searching problem_**:
**Input**: A sequence of $n$ numbers $A = \langle a_1, a_2, \ldots, a_n \rangle$ and a value $v$.
**Output:** An index $i$ such that $v = A[i]$ or the special value $\text{NIL}$ if $v$ does not appear in $A$.
Write pseudocode for **_linear search_**, which scans through the sequ... | ```cpp
LINEAR-SEARCH(A, v)
for i = 1 to A.length
if A[i] == v
return i
return NIL
```
**Loop invariant:** At the start of each iteration of the **for** loop, the subarray $A[1..i - 1]$ consists of elements that are different from $v$.
**Initialization:** Before the first loop iteration ($i ... | [
{
"lang": "cpp",
"code": "LINEAR-SEARCH(A, v)\n for i = 1 to A.length\n if A[i] == v\n return i\n return NIL"
}
] | false | [] |
02-2.1-4 | 02 | 2.1 | 2.1-4 | docs/Chap02/2.1.md | Consider the problem of adding two $n$-bit binary integers, stored in two $n$-element arrays $A$ and $B$. The sum of the two integers should be stored in binary form in an $(n + 1)$-element array $C$. State the problem formally and write pseudocode for adding the two integers. | **Input:** An array of booleans $A = \langle a_1, a_2, \ldots, a_n \rangle$ and an array of booleans $B = \langle b_1, b_2, \ldots, b_n \rangle$, each representing an integer stored in binary format (each digit is a number, either $0$ or $1$, **least-significant digit first**) and each of length $n$.
**Output:** An ar... | [
{
"lang": "cpp",
"code": "ADD-BINARY(A, B)\n carry = 0\n for i = 1 to A.length\n sum = A[i] + B[i] + carry\n C[i] = sum % 2 // remainder\n carry = sum / 2 // quotient\n C[A.length + 1] = carry\n return C"
}
] | false | [] |
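The ADD-BINARY pseudocode translates directly to Python; in this sketch the bits are stored least-significant digit first, as in the answer, and the two input lists are assumed to have equal length:

```python
def add_binary(a, b):
    """Add two n-bit numbers given as equal-length bit lists
    (least-significant digit first); return an (n+1)-bit list."""
    carry = 0
    c = []
    for x, y in zip(a, b):
        total = x + y + carry
        c.append(total % 2)   # remainder -> current output bit
        carry = total // 2    # quotient  -> carry into the next position
    c.append(carry)
    return c

# 3 ([1,1,0]) + 5 ([1,0,1]) = 8 ([0,0,0,1])
print(add_binary([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1]
```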
02-2.2-1 | 02 | 2.2 | 2.2-1 | docs/Chap02/2.2.md | Express the function $n^3 / 1000 - 100n^2 - 100n + 3$ in terms of $\Theta$-notation. | $\Theta(n^3)$. | [] | false | [] |
02-2.2-2 | 02 | 2.2 | 2.2-2 | docs/Chap02/2.2.md | Consider sorting $n$ numbers stored in array $A$ by first finding the smallest element of $A$ and exchanging it with the element in $A[1]$. Then find the second smallest element of $A$, and exchange it with $A[2]$. Continue in this manner for the first $n - 1$ elements of $A$. Write pseudocode for this algorithm, which... | - Pseudocode:
```cpp
n = A.length
for i = 1 to n - 1
minIndex = i
for j = i + 1 to n
if A[j] < A[minIndex]
minIndex = j
swap(A[i], A[minIndex])
```
- Loop invariant:
At the start of the loop in line 1, the subarray $A[1..i - 1]$ consists of the ... | [
{
"lang": "cpp",
"code": " n = A.length\n for i = 1 to n - 1\n minIndex = i\n for j = i + 1 to n\n if A[j] < A[minIndex]\n minIndex = j\n swap(A[i], A[minIndex])"
}
] | false | [] |
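The selection-sort pseudocode above, as a runnable 0-indexed Python sketch:

```python
def selection_sort(a):
    """In-place selection sort: repeatedly swap the smallest remaining
    element into the next position."""
    n = len(a)
    for i in range(n - 1):  # the first n-1 positions suffice
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]
    return a

print(selection_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

After sorting the first $n - 1$ elements, the last position necessarily holds the largest element, which is why the outer loop stops at $n - 1$.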
02-2.2-3 | 02 | 2.2 | 2.2-3 | docs/Chap02/2.2.md | Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear searc... | If the element is present, on average half of the elements are checked before it is found; in the worst case, all of them are checked. That is, $(n + 1) / 2$ checks in the average case and $n$ in the worst case. Both are $\Theta(n)$. | [] | false | [] |
02-2.2-4 | 02 | 2.2 | 2.2-4 | docs/Chap02/2.2.md | How can we modify almost any algorithm to have a good best-case running time? | We can modify almost any algorithm to have a good best-case running time by adding a special-case check at the start: if the input matches the special case (for example, an already-sorted array for a sorting algorithm), return the precomputed answer immediately. | [] | false | [] |
02-2.3-1 | 02 | 2.3 | 2.3-1 | docs/Chap02/2.3.md | Using Figure 2.4 as a model, illustrate the operation of merge sort on the array $A = \langle 3, 41, 52, 26, 38, 57, 9, 49 \rangle$. | $$[3] \quad [41] \quad [52] \quad [26] \quad [38] \quad [57] \quad [9] \quad [49]$$
$$\downarrow$$
$$[3|41] \quad [26|52] \quad [38|57] \quad [9|49]$$
$$\downarrow$$
$$[3|26|41|52] \quad [9|38|49|57]$$
$$\downarrow$$
$$[3|9|26|38|41|49|52|57]$$ | [] | false | [] |
02-2.3-2 | 02 | 2.3 | 2.3-2 | docs/Chap02/2.3.md | Rewrite the $\text{MERGE}$ procedure so that it does not use sentinels, instead stopping once either array $L$ or $R$ has had all its elements copied back to $A$ and then copying the remainder of the other array back into $A$. | ```cpp
MERGE(A, p, q, r)
n1 = q - p + 1
n2 = r - q
let L[1..n1] and R[1..n2] be new arrays
for i = 1 to n1
L[i] = A[p + i - 1]
for j = 1 to n2
R[j] = A[q + j]
i = 1
j = 1
for k = p to r
if i > n1
A[k] = R[j]
j = j + 1
else if j > n2... | [
{
"lang": "cpp",
"code": "MERGE(A, p, q, r)\n n1 = q - p + 1\n n2 = r - q\n let L[1..n1] and R[1..n2] be new arrays\n for i = 1 to n1\n L[i] = A[p + i - 1]\n for j = 1 to n2\n R[j] = A[q + j]\n i = 1\n j = 1\n for k = p to r\n if i > n1\n A[k] = R[j]\n... | false | [] |
02-2.3-3 | 02 | 2.3 | 2.3-3 | docs/Chap02/2.3.md | Use mathematical induction to show that when $n$ is an exact power of $2$, the solution of the recurrence
$$
T(n) =
\begin{cases}
2 & \text{if } n = 2, \\\\
2T(n / 2) + n & \text{if } n = 2^k, \text{for } k > 1
\end{cases}
$$
is $T(n) = n\lg n$. | - Base case
For $n = 2^1$, $T(n) = 2\lg 2 = 2$.
- Suppose $n = 2^k$, $T(n) = n\lg n = 2^k \lg 2^k = 2^kk$.
For $n = 2^{k + 1}$,
$$
\begin{aligned}
T(n) & = 2T(2^{k + 1} / 2) + 2^{k + 1} \\\\
& = 2T(2^k) + 2^{k + 1} \\\\
& = 2 \cdot 2^kk + 2^{k + 1} \\\\
& = 2^{k + 1}(k + 1) \\\\
... | [] | false | [] |
02-2.3-4 | 02 | 2.3 | 2.3-4 | docs/Chap02/2.3.md | We can express insertion sort as a recursive procedure as follows. In order to sort $A[1..n]$, we recursively sort $A[1..n - 1]$ and then insert $A[n]$ into the sorted array $A[1..n - 1]$. Write a recurrence for the running time of this recursive version of insertion sort. | It takes $\Theta(n)$ time in the worst case to insert $A[n]$ into the sorted array $A[1..n - 1]$. Therefore, the recurrence
$$
T(n) = \begin{cases}
\Theta(1) & \text{if } n = 1, \\\\
T(n - 1) + \Theta(n) & \text{if } n > 1.
\end{cases}
$$
The solution of the recurrence is $\Theta(n^2)$. | [] | false | [] |
02-2.3-5 | 02 | 2.3 | 2.3-5 | docs/Chap02/2.3.md | Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence $A$ is sorted, we can check the midpoint of the sequence against $v$ and eliminate half of the sequence from further consideration. The **_binary search_** algorithm repeats this procedure, halving the size of the remaining porti... | - Iterative:
```cpp
ITERATIVE-BINARY-SEARCH(A, v, low, high)
while low ≤ high
mid = floor((low + high) / 2)
if v == A[mid]
return mid
else if v > A[mid]
low = mid + 1
else high = mid - 1
return NIL
```
- Recursive:
```cpp
RECUR... | [
{
"lang": "cpp",
"code": " ITERATIVE-BINARY-SEARCH(A, v, low, high)\n while low ≤ high\n mid = floor((low + high) / 2)\n if v == A[mid]\n return mid\n else if v > A[mid]\n low = mid + 1\n else high = mid - 1\n return NIL"
},
{
... | false | [] |
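Both variants as runnable Python (a 0-indexed sketch; `None` plays the role of $\text{NIL}$):

```python
def iterative_binary_search(a, v):
    """Return an index i with a[i] == v, or None; a must be sorted."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == v:
            return mid
        elif v > a[mid]:
            low = mid + 1
        else:
            high = mid - 1
    return None

def recursive_binary_search(a, v, low, high):
    """Recursive form; call with low=0, high=len(a)-1."""
    if low > high:
        return None
    mid = (low + high) // 2
    if a[mid] == v:
        return mid
    if v > a[mid]:
        return recursive_binary_search(a, v, mid + 1, high)
    return recursive_binary_search(a, v, low, mid - 1)
```

Each comparison halves the remaining range, giving the $\Theta(\lg n)$ worst case.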
02-2.3-6 | 02 | 2.3 | 2.3-6 | docs/Chap02/2.3.md | Observe that the **while** loop of lines 5–7 of the $\text{INSERTION-SORT}$ procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray $A[i..j - 1]$. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to $\Theta(n\lg... | In each iteration, the **while** loop of lines 5-7 of $\text{INSERTION-SORT}$ scans backward through the sorted subarray $A[1..j - 1]$. The loop not only searches for the proper place for $A[j]$, but it also moves each of the array elements that are bigger than $A[j]$ one position to the right (line 6). These movements take $\The... | [] | false | [] |
02-2.3-7 | 02 | 2.3 | 2.3-7 $\star$ | docs/Chap02/2.3.md | Describe a $\Theta(n\lg n)$-time algorithm that, given a set $S$ of $n$ integers and another integer $x$, determines whether or not there exist two elements in $S$ whose sum is exactly $x$. | First, sort $S$, which takes $\Theta(n\lg n)$.
Then, for each element $s_i$ in $S$, $i = 1, \dots, n$, search $A[i + 1..n]$ for $s_i' = x - s_i$ by binary search, which takes $\Theta(\lg n)$.
- If $s_i'$ is found, return its position;
- otherwise, continue for next iteration.
The time complexity of the algorithm is $... | [] | false | [] |
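The answer's two-step algorithm in Python, using the standard `bisect` module for the binary search over the suffix $A[i + 1..n]$ (a sketch; the function name is mine):

```python
import bisect

def has_pair_with_sum(s, x):
    """Return True iff two elements of s (at distinct positions) sum to x.
    Sorting costs Theta(n lg n); the n binary searches cost Theta(n lg n) total."""
    a = sorted(s)
    for i, value in enumerate(a):
        target = x - value
        # Search only a[i+1:] so an element is never paired with itself.
        j = bisect.bisect_left(a, target, i + 1)
        if j < len(a) and a[j] == target:
            return True
    return False

print(has_pair_with_sum([8, 1, 5, 3], 9))  # True  (1 + 8)
print(has_pair_with_sum([8, 1, 5, 3], 7))  # False
```

Restricting the search to the suffix also handles pairs of equal values correctly, e.g. $\{4, 4\}$ with $x = 8$.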
02-2-1 | 02 | 2-1 | 2-1 | docs/Chap02/Problems/2-1.md | Although merge sort runs in $\Theta(n\lg n)$ worst-case time and insertion sort runs in $\Theta(n^2)$ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to **_coarsen_** the leaves of the recursion by using insertion sort... | **a.** The worst-case time to sort a list of length $k$ by insertion sort is $\Theta(k^2)$. Therefore, sorting $n / k$ sublists, each of length $k$ takes $\Theta(k^2 \cdot n / k) = \Theta(nk)$ worst-case time.
**b.** We have $n / k$ sorted sublists each of length $k$. To merge these $n / k$ sorted sublists to a single... | [] | false | [] |
02-2-2 | 02 | 2-2 | 2-2 | docs/Chap02/Problems/2-2.md | Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order.
```cpp
BUBBLESORT(A)
for i = 1 to A.length - 1
for j = A.length downto i + 1
if A[j] < A[j - 1]
exchange A[j] with A[j - 1]
```
**a.** Let $A'$ denote the output of $\text{BUBBLESORT}(... | **a.** $A'$ consists of the elements in $A$ but in sorted order.
**b.** **Loop invariant:** At the start of each iteration of the **for** loop of lines 2-4, the subarray $A[j..n]$ consists of the elements originally in $A[j..n]$ before entering the loop but possibly in a different order and the first element $A[j]$ is... | [
{
"lang": "cpp",
"code": "> BUBBLESORT(A)\n> for i = 1 to A.length - 1\n> for j = A.length downto i + 1\n> if A[j] < A[j - 1]\n> exchange A[j] with A[j - 1]\n>"
}
] | false | [] |
02-2-3 | 02 | 2-3 | 2-3 | docs/Chap02/Problems/2-3.md | The following code fragment implements Horner's rule for evaluating a polynomial
$$
\begin{aligned}
P(x) & = \sum_{k = 0}^n a_k x^k \\\\
& = a_0 + x(a_1 + x (a_2 + \cdots + x(a_{n - 1} + x a_n) \cdots)),
\end{aligned}
$$
given the coefficients $a_0, a_1, \ldots, a_n$ and a value of $x$:
```cpp
y = 0
for i = n downto... | **a.** $\Theta(n)$.
**b.**
```cpp
NAIVE-HORNER()
y = 0
for k = 0 to n
temp = 1
for i = 1 to k
temp = temp * x
y = y + a[k] * temp
```
The running time is $\Theta(n^2)$, because of the nested loop. It is obviously slower.
**c.** **Initialization:** It is pretty trivial, si... | [
{
"lang": "cpp",
"code": "> y = 0\n> for i = n downto 0\n> y = a[i] + x * y\n>"
},
{
"lang": "cpp",
"code": "NAIVE-HORNER()\n y = 0\n for k = 0 to n\n temp = 1\n for i = 1 to k\n temp = temp * x\n y = y + a[k] * temp"
}
] | false | [] |
02-2-4 | 02 | 2-4 | 2-4 | docs/Chap02/Problems/2-4.md | Let $A[1..n]$ be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i, j)$ is called an **_inversion_** of $A$.
**a.** List the five inversions in the array $\langle 2, 3, 8, 6, 1 \rangle$.
**b.** What array with elements from the set $\\{1, 2, \ldots, n\\}$ has the most inversions? How m... | **a.** $(1, 5)$, $(2, 5)$, $(3, 4)$, $(3, 5)$, $(4, 5)$.
**b.** The array $\langle n, n - 1, \dots, 1 \rangle$ has the most inversions $(n - 1) + (n - 2) + \cdots + 1 = n(n - 1) / 2$.
**c.** The running time of insertion sort is a constant times the number of inversions. Let $I(i)$ denote the number of $j < i$ such t... | [
{
"lang": "cpp",
"code": "COUNT-INVERSIONS(A, p, r)\n if p < r\n q = floor((p + r) / 2)\n left = COUNT-INVERSIONS(A, p, q)\n right = COUNT-INVERSIONS(A, q + 1, r)\n inversions = MERGE-INVERSIONS(A, p, q, r) + left + right\n return inversions"
},
{
"lang": "cpp",... | false | [] |
03-3.1-1 | 03 | 3.1 | 3.1-1 | docs/Chap03/3.1.md | Let $f(n)$ and $g(n)$ be asymptotically nonnegative functions. Using the basic definition of $\Theta$-notation, prove that $\max(f(n), g(n)) = \Theta(f(n) + g(n))$. | For asymptotically nonnegative functions $f(n)$ and $g(n)$, we know that
$$
\begin{aligned}
\exists n_1, n_2: & f(n) \ge 0 & \text{for} \, n > n_1 \\\\
& g(n) \ge 0 & \text{for} \, n > n_2.
\end{aligned}
$$
Let $n_0 = \max(n_1, n_2)$ and we know the equations below would be true for $n > n_0$:
$$
\... | [] | false | [] |
03-3.1-2 | 03 | 3.1 | 3.1-2 | docs/Chap03/3.1.md | Show that for any real constants $a$ and $b$, where $b > 0$,
$$(n + a)^b = \Theta(n^b). \tag{3.2}$$ | Expand $(n + a)^b$ by the Binomial Expansion, we have
$$(n + a)^b = C_0^b n^b a^0 + C_1^b n^{b - 1} a^1 + \cdots + C_b^b n^0 a^b.$$
Besides, we know below is true for any polynomial when $x \ge 1$.
$$a_0 x^0 + a_1 x^1 + \cdots + a_n x^n \le (a_0 + a_1 + \cdots + a_n) x^n.$$
Thus,
$$C_0^b n^b \le C_0^b n^b a^0 + C_... | [] | false | [] |
03-3.1-3 | 03 | 3.1 | 3.1-3 | docs/Chap03/3.1.md | Explain why the statement, "The running time of algorithm $A$ is at least $O(n^2)$," is meaningless. | $T(n)$: running time of algorithm $A$. We just care about the upper bound and the lower bound of $T(n)$.
The statement: $T(n)$ is at least $O(n^2)$.
- Upper bound: Because "$T(n)$ is at least $O(n^2)$", there's no information about the upper bound of $T(n)$.
- Lower bound: Assume $f(n) = O(n^2)$, then the statement: ... | [] | false | [] |
03-3.1-4 | 03 | 3.1 | 3.1-4 | docs/Chap03/3.1.md | Is $2^{n + 1} = O(2^n)$? Is $2^{2n} = O(2^n)$? | - True. Note that $2^{n + 1} = 2 \times 2^n$. We can choose $c \ge 2$ and $n_0 = 0$, such that $0 \le 2^{n + 1} \le c \times 2^n$ for all $n \ge n_0$. By definition, $2^{n + 1} = O(2^n)$.
- False. Note that $2^{2n} = 2^n \times 2^n = 4^n$. We can't find any $c$ and $n_0$, such that $0 \le 2^{2n} = 4^n \le c \times 2^n... | [] | false | [] |
03-3.1-5-1 | 03 | 3.1 | 3.1-5 | docs/Chap03/3.1.md | Prove Theorem 3.1. | The theorem states:
> For any two functions $f(n)$ and $g(n)$, we have $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.
From $f = \Theta(g(n))$, we have that
$$0 \le c_1 g(n) \le f(n) \le c_2g(n) \text{ for } n > n_0.$$
We can pick the constants from here and use them in the definiti... | [] | false | [] |
03-3.1-5-2 | 03 | 3.1 | 3.1-5 | docs/Chap03/3.1.md | For any two functions $f(n)$ and $g(n)$, we have $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$. | The theorem states:
> For any two functions $f(n)$ and $g(n)$, we have $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$.
From $f = \Theta(g(n))$, we have that
$$0 \le c_1 g(n) \le f(n) \le c_2g(n) \text{ for } n > n_0.$$
We can pick the constants from here and use them in the definiti... | [] | false | [] |
03-3.1-6 | 03 | 3.1 | 3.1-6 | docs/Chap03/3.1.md | Prove that the running time of an algorithm is $\Theta(g(n))$ if and only if its worst-case running time is $O(g(n))$ and its best-case running time is $\Omega(g(n))$. | If $T_w$ is the worst-case running time and $T_b$ is the best-case running time, we know that
$$
\begin{aligned}
& 0 \le c_1g(n) \le T_b(n) & \text{ for } n > n_b \\\\
\text{and } & 0 \le T_w(n) \le c_2g(n) & \text{ for } n > n_w.
\end{aligned}
$$
Combining them we get
$$0 \le c_1g(n) \le T_b(n) \le T_w(... | [] | false | [] |
03-3.1-7 | 03 | 3.1 | 3.1-7 | docs/Chap03/3.1.md | Prove that $o(g(n)) \cap \omega(g(n))$ is the empty set. | Let $f(n) \in o(g(n)) \cap \omega(g(n))$.
We know that for any $c_1 > 0$, $c_2 > 0$,
$$
\begin{aligned}
& \exists n_1 > 0: 0 \le f(n) < c_1g(n) \\\\
\text{and } & \exists n_2 > 0: 0 \le c_2g(n) < f(n).
\end{aligned}
$$
If we pick $n_0 = \max(n_1, n_2)$, and let $c_1 = c_2$, from the problem definition we get
$$... | [] | false | [] |
03-3.1-8 | 03 | 3.1 | 3.1-8 | docs/Chap03/3.1.md | We can extend our notation to the case of two parameters $n$ and $m$ that can go to infinity independently at different rates. For a given function $g(n, m)$ we denote $O(g(n, m))$ the set of functions:
$$
\begin{aligned}
O(g(n, m)) = \\{f(n, m):
& \text{ there exist positive constants } c, n_0, \text{ and } m_0 \\\\
... | $$
\begin{aligned}
\Omega(g(n, m)) = \\{ f(n, m):
& \text{ there exist positive constants $c$, $n_0$, and $m_0$ such that } \\\\
& \text{ $0 \le cg(n, m) \le f(n, m)$ for all $n \ge n_0$ and $m \ge m_0$}.\\}
\end{aligned}
$$
$$
\begin{aligned}
\Theta(g(n, m)) = \\{ f(n, m):
& \text{ there exist positive constant... | [] | false | [] |
03-3.2-1 | 03 | 3.2 | 3.2-1 | docs/Chap03/3.2.md | Show that if $f(n)$ and $g(n)$ are monotonically increasing functions, then so are the functions $f(n) + g(n)$ and $f(g(n))$, and if $f(n)$ and $g(n)$ are in addition nonnegative, then $f(n) \cdot g(n)$ is monotonically increasing. | $$
\begin{aligned}
f(m) & \le f(n) \quad \text{ for } m \le n \\\\
g(m) & \le g(n) \quad \text{ for } m \le n, \\\\
\to f(m) + g(m) & \le f(n) + g(n),
\end{aligned}
$$
which proves the first function.
Then
$$f(g(m)) \le f(g(n)) \text{ for } m \le n.$$
This is true, since $g(m) \le g(n)$ and $f... | [] | false | [] |
03-3.2-2 | 03 | 3.2 | 3.2-2 | docs/Chap03/3.2.md | Prove equation $\text{(3.16)}$. | $$
\begin{aligned}
a^{\log_b c} = a^{\frac{\log_a c}{\log_a b}} = (a^{\log_a c})^{\frac{1}{\log_a b}} = c^{\frac{1}{\log_a b}} = c^{\log_b a} \quad \left(\text{since } \tfrac{1}{\log_a b} = \log_b a\right)
\end{aligned}
$$ | [] | false | [] |
03-3.2-3 | 03 | 3.2 | 3.2-3 | docs/Chap03/3.2.md | Prove equation $\text{(3.19)}$. Also prove that $n! = \omega(2^n)$ and $n! = o(n^n)$.
$$\lg(n!) = \Theta(n\lg n) \tag{3.19}$$ | We can use **Stirling's approximation** to prove these three equations.
For equation $\text{(3.19)}$,
$$
\begin{aligned}
\lg(n!)
& = \lg\Bigg(\sqrt{2\pi n}\Big(\frac{n}{e}\Big)^n\Big(1 + \Theta(\frac{1}{n})\Big)\Bigg) \\\\
& = \lg\sqrt{2\pi n } + \lg\Big(\frac{n}{e}\Big)^n + \lg\Big(1+\Theta(\frac{1}{n})\Big) \\\... | [] | false | [] |
03-3.2-4 | 03 | 3.2 | 3.2-4 $\star$ | docs/Chap03/3.2.md | Is the function $\lceil \lg n \rceil!$ polynomially bounded? Is the function $\lceil \lg\lg n \rceil!$ polynomially bounded? | Proving that a function $f(n)$ is polynomially bounded is equivalent to proving that $\lg(f(n)) = O(\lg n)$ for the following reasons.
- If $f$ is polynomially bounded, then there exist constants $c$, $k$, $n_0$ such that for all $n \ge n_0$, $f(n) \le cn^k$. Hence, $\lg(f(n)) \le kc\lg n$, which means that $\lg(f(n))... | [] | false | [] |
03-3.2-5 | 03 | 3.2 | 3.2-5 $\star$ | docs/Chap03/3.2.md | Which is asymptotically larger: $\lg(\lg^\*n)$ or $\lg^\*(\lg n)$? | We have $\lg^\* 2^n = 1 + \lg^\* n$,
$$
\begin{aligned}
\lim_{n \to \infty} \frac{\lg(\lg^\*n)}{\lg^\*(\lg n)}
& = \lim_{n \to \infty} \frac{\lg(\lg^\* 2^n)}{\lg^\*(\lg 2^n)} \\\\
& = \lim_{n \to \infty} \frac{\lg(1 + \lg^\* n)}{\lg^\* n} \\\\
& = \lim_{n \to \infty} \frac{\lg(1 + n)}{n} \\\\
& = \lim_... | [] | false | [] |
03-3.2-6 | 03 | 3.2 | 3.2-6 | docs/Chap03/3.2.md | Show that the golden ratio $\phi$ and its conjugate $\hat \phi$ both satisfy the equation $x^2 = x + 1$. | $$
\begin{aligned}
\phi^2 & = \Bigg(\frac{1 + \sqrt 5}{2}\Bigg)^2 = \frac{6 + 2\sqrt 5}{4} = 1 + \frac{1 + \sqrt 5}{2} = 1 + \phi \\\\
\hat\phi^2 & = \Bigg(\frac{1 - \sqrt 5}{2}\Bigg)^2 = \frac{6 - 2\sqrt 5}{4} = 1 + \frac{1 - \sqrt 5}{2} = 1 + \hat\phi.
\end{aligned}
$$ | [] | false | [] |
03-3.2-7 | 03 | 3.2 | 3.2-7 | docs/Chap03/3.2.md | Prove by induction that the $i$th Fibonacci number satisfies the equality
$$F_i = \frac{\phi^i - \hat\phi^i}{\sqrt 5},$$
where $\phi$ is the golden ratio and $\hat\phi$ is its conjugate. | - Base case
For $i = 0$,
$$
\begin{aligned}
\frac{\phi^0 - \hat\phi^0}{\sqrt 5}
& = \frac{1 - 1}{\sqrt 5} \\\\
& = 0 \\\\
& = F_0.
\end{aligned}
$$
For $i = 1$,
$$
\begin{aligned}
\frac{\phi^1 - \hat\phi^1}{\sqrt 5}
& = \frac{(1 + \sqrt 5) - (1... | [] | false | [] |
03-3.2-8 | 03 | 3.2 | 3.2-8 | docs/Chap03/3.2.md | Show that $k\ln k = \Theta(n)$ implies $k = \Theta(n / \lg n)$. | From the symmetry of $\Theta$,
$$k\ln k = \Theta(n) \Rightarrow n = \Theta(k\ln k).$$
Let's find $\ln n$,
$$\ln n = \Theta(\ln(k\ln k)) = \Theta(\ln k + \ln\ln k) = \Theta(\ln k).$$
Let's divide the two,
$$\frac{n}{\ln n} = \frac{\Theta(k\ln k)}{\Theta(\ln k)} = \Theta\Big({\frac{k\ln k}{\ln k}}\Big) = \Theta(k).$... | [] | false | [] |
03-3-1 | 03 | 3-1 | 3-1 | docs/Chap03/Problems/3-1.md | Let
$$p(n) = \sum_{i = 0}^d a_i n^i,$$
where $a_d > 0$, be a degree-$d$ polynomial in $n$, and let $k$ be a constant. Use the definitions of the asymptotic notations to prove the following properties.
**a.** If $k \ge d$, then $p(n) = O(n^k)$.
**b.** If $k \le d$, then $p(n) = \Omega(n^k)$.
**c.** If $k = d$, then... | Let's see that $p(n) = O(n^d)$. We need to pick $c = a_d + b$, such that
$$\sum\limits_{i = 0}^d a_i n^i = a_d n^d + a_{d - 1}n^{d - 1} + \cdots + a_1n + a_0 \le cn^d.$$
When we divide by $n^d$, we get
$$c = a_d + b \ge a_d + \frac{a_{d - 1}}n + \frac{a_{d - 2}}{n^2} + \cdots + \frac{a_0}{n^d}.$$
and
$$b \ge \frac... | [] | false | [] |
03-3-2 | 03 | 3-2 | 3-2 | docs/Chap03/Problems/3-2.md | Indicate for each pair of expressions $(A, B)$ in the table below, whether $A$ is $O$, $o$, $\Omega$, $\omega$, or $\Theta$ of $B$. Assume that $k \ge 1$, $\epsilon > 0$, and $c > 1$ are constants. Your answer should be in the form of the table with "yes" or "no" written in each box. | $$
\begin{array}{ccccccc}
A & B & O & o & \Omega & \omega & \Theta \\\\
\hline
\lg^k n & n^\epsilon & yes & yes & no & no & no \\\\
n^k & c^n & yes & yes & no & no & no \\\\
\sqrt n & n^{\sin n} & no & no & no & no & no \\\\
2^n & 2^{n / ... | [] | false | [] |
03-3-3 | 03 | 3-3 | 3-3 | docs/Chap03/Problems/3-3.md | **a.** Rank the following functions by order of growth; that is, find an arrangement $g_1, g_2, \ldots , g_{30}$ of the functions $g_1 = \Omega(g_2), g_2 = \Omega(g_3), \ldots, g_{29} = \Omega(g_{30})$. Partition your list into equivalence classes such that functions $f(n)$ and $g(n)$ are in the same class if and only ... | $$
\begin{array}{ll}
2^{2^{n + 1}} & \\\\
2^{2^n} & \\\\
(n + 1)! & \\\\
n! & \\\\
e^n & \\\\
n\cdot 2^n & \\\\
2^n & \\\\
(3 / ... | [] | false | [] |
03-3-4 | 03 | 3-4 | 3-4 | docs/Chap03/Problems/3-4.md | Let $f(n)$ and $g(n)$ be asymptotically positive functions. Prove or disprove each of the following conjectures.
**a.** $f(n) = O(g(n))$ implies $g(n) = O(f(n))$.
**b.** $f(n) + g(n) = \Theta(\min(f(n), g(n)))$.
**c.** $f(n) = O(g(n))$ implies $\lg(f(n)) = O(\lg(g(n)))$, where $\lg(g(n)) \ge 1$ and $f(n) \ge 1$ for ... | **a.** Disprove, $n = O(n^2)$, but $n^2 \ne O(n)$.
**b.** Disprove, $n^2 + n \ne \Theta(\min(n^2, n)) = \Theta(n)$.
**c.** Prove, because $f(n) \ge 1$ after a certain $n \ge n_0$.
$$
\begin{aligned}
\exists c, n_0: \forall n \ge n_0, 0 \le f(n) \le cg(n) \\\\
\Rightarrow 0 \le \lg f(n) \le \lg (cg(n)) = \lg c + \lg ... | [] | false | [] |
03-3-5 | 03 | 3-5 | 3-5 | docs/Chap03/Problems/3-5.md | Some authors define $\Omega$ in a slightly different way than we do; let's use ${\Omega}^{\infty}$ (read "omega infinity") for this alternative definition. We say that $f(n) = {\Omega}^{\infty}(g(n))$ if there exists a positive constant $c$ such that $f(n) \ge cg(n) \ge 0$ for infinitely many integers $n$.
**a.** Show... | **a.** We have
$$
f(n) =
\begin{cases}
O(g(n)) \text{ and } {\Omega}^{\infty}(g(n)) & \text{if $f(n) = \Theta(g(n))$}, \\\\
O(g(n)) & \text{if $0 \le f(n) \le cg(n)$}, \\\\
{\Omega}^{\infty}(g(n)) & \text{if $0 \le cg(n) \le f(n)$, for infinitely many integers ... | [] | false | [] |
03-3-6 | 03 | 3-6 | 3-6 | docs/Chap03/Problems/3-6.md | We can apply the iteration operator $^\*$ used in the $\lg^\*$ function to any monotonically increasing function $f(n)$ over the reals. For a given constant $c \in \mathbb R$, we define the iterated function ${f_c}^\*$ by ${f_c}^\*(n) = \min \\{i \ge 0 : f^{(i)}(n) \le c \\}$ which need not be well defined in all cases... | For each of the following functions $f(n)$ and constants $c$, give as tight a bound as possible on ${f_c}^\*(n)$.
$$
\begin{array}{ccl}
f(n) & c & {f_c}^\* \\\\
\hline
n - 1 & 0 & \Theta(n) \\\\
\lg n & 1 & \Theta(\lg^\*{n}) \\\\
n / 2 & 1 & \Theta(\lg n... | [] | false | [] |
04-4.1-1 | 04 | 4.1 | 4.1-1 | docs/Chap04/4.1.md | What does $\text{FIND-MAXIMUM-SUBARRAY}$ return when all elements of $A$ are negative? | It will return the greatest element of $A$. | [] | false | [] |
04-4.1-2 | 04 | 4.1 | 4.1-2 | docs/Chap04/4.1.md | Write pseudocode for the brute-force method of solving the maximum-subarray problem. Your procedure should run in $\Theta(n^2)$ time. | ```cpp
BRUTE-FORCE-FIND-MAXIMUM-SUBARRAY(A)
n = A.length
max-sum = -∞
for l = 1 to n
sum = 0
for h = l to n
sum = sum + A[h]
if sum > max-sum
max-sum = sum
low = l
high = h
return (low, high, max-sum)
``` | [
{
"lang": "cpp",
"code": "BRUTE-FORCE-FIND-MAXIMUM-SUBARRAY(A)\n n = A.length\n max-sum = -∞\n for l = 1 to n\n sum = 0\n for h = l to n\n sum = sum + A[h]\n if sum > max-sum\n max-sum = sum\n low = l\n high = h\n r... | false | [] |
04-4.1-3 | 04 | 4.1 | 4.1-3 | docs/Chap04/4.1.md | Implement both the brute-force and recursive algorithms for the maximum-subarray problem on your own computer. What problem size $n_0$ gives the crossover point at which the recursive algorithm beats the brute-force algorithm? Then, change the base case of the recursive algorithm to use the brute-force algorithm whenev... | On my computer, $n_0$ is $37$.
If the algorithm is modified to use divide and conquer when $n \ge 37$ and the brute-force approach when $n$ is less, the performance at the crossover point almost doubles. The performance at $n_0 - 1$ stays the same, though (or even gets worse, because of the added overhead).
What I f... | [] | false | [] |
04-4.1-4 | 04 | 4.1 | 4.1-4 | docs/Chap04/4.1.md | Suppose we change the definition of the maximum-subarray problem to allow the result to be an empty subarray, where the sum of the values of an empty subarray is $0$. How would you change any of the algorithms that do not allow empty subarrays to permit an empty subarray to be the result? | If the original algorithm would return a negative sum, return an empty subarray (whose sum is $0$) instead. | [] | false | [] |
04-4.1-5 | 04 | 4.1 | 4.1-5 | docs/Chap04/4.1.md | Use the following ideas to develop a nonrecursive, linear-time algorithm for the maximum-subarray problem. Start at the left end of the array, and progress toward the right, keeping track of the maximum subarray seen so far. Knowing a maximum subarray $A[1..j]$, extend the answer to find a maximum subarray ending at in... | ```cpp
ITERATIVE-FIND-MAXIMUM-SUBARRAY(A)
n = A.length
max-sum = -∞
sum = -∞
for j = 1 to n
currentHigh = j
if sum > 0
sum = sum + A[j]
else
currentLow = j
sum = A[j]
if sum > max-sum
max-sum = sum
low = currentL... | [
{
"lang": "cpp",
"code": "ITERATIVE-FIND-MAXIMUM-SUBARRAY(A)\n n = A.length\n max-sum = -∞\n sum = -∞\n for j = 1 to n\n currentHigh = j\n if sum > 0\n sum = sum + A[j]\n else\n currentLow = j\n sum = A[j]\n if sum > max-sum\n ... | false | [] |
04-4.2-1 | 04 | 4.2 | 4.2-1 | docs/Chap04/4.2.md | Use Strassen's algorithm to compute the matrix product
$$
\begin{pmatrix}
1 & 3 \\\\
7 & 5
\end{pmatrix}
\begin{pmatrix}
6 & 8 \\\\
4 & 2
\end{pmatrix}
.
$$
Show your work. | The first matrices are
$$
\begin{array}{ll}
S_1 = 6 & S_6 = 8 \\\\
S_2 = 4 & S_7 = -2 \\\\
S_3 = 12 & S_8 = 6 \\\\
S_4 = -2 & S_9 = -6 \\\\
S_5 = 6 & S_{10} = 14.
\end{array}
$$
The products are
$$
\begin{aligned}
P_1 & = 1 \cdot 6 = 6 \\\\
P_2 & = 4 \cdot 2 = 8 \\\\
P_3 & = 6 \cdot 12 = 7... | [] | false | [] |
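The arithmetic above can be checked mechanically. The sketch below (variable names are mine) applies one level of Strassen's formulas to the $2 \times 2$ example, treating the blocks as scalars:

```python
def strassen_2x2(A, B):
    """One level of Strassen's recursion on 2x2 matrices (blocks are scalars)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    s1, s2, s3, s4, s5 = b12 - b22, a11 + a12, a21 + a22, b21 - b11, a11 + a22
    s6, s7, s8, s9, s10 = b11 + b22, a12 - a22, b21 + b22, a11 - a21, b11 + b12
    p1, p2, p3 = a11 * s1, s2 * b22, s3 * b11
    p4, p5, p6, p7 = a22 * s4, s5 * s6, s7 * s8, s9 * s10
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p5 + p1 - p3 - p7]]

print(strassen_2x2([[1, 3], [7, 5]], [[6, 8], [4, 2]]))  # [[18, 14], [62, 66]]
```

The result agrees with the ordinary matrix product of the two given matrices.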
04-4.2-2 | 04 | 4.2 | 4.2-2 | docs/Chap04/4.2.md | Write pseudocode for Strassen's algorithm. | ```cpp
STRASSEN(A, B)
n = A.rows
if n == 1
return a[1, 1] * b[1, 1]
let C be a new n × n matrix
A[1, 1] = A[1..n / 2][1..n / 2]
A[1, 2] = A[1..n / 2][n / 2 + 1..n]
A[2, 1] = A[n / 2 + 1..n][1..n / 2]
A[2, 2] = A[n / 2 + 1..n][n / 2 + 1..n]
B[1, 1] = B[1..n / 2][1..n / 2]
B[1,... | [
{
"lang": "cpp",
"code": "STRASSEN(A, B)\n n = A.rows\n if n == 1\n return a[1, 1] * b[1, 1]\n let C be a new n × n matrix\n A[1, 1] = A[1..n / 2][1..n / 2]\n A[1, 2] = A[1..n / 2][n / 2 + 1..n]\n A[2, 1] = A[n / 2 + 1..n][1..n / 2]\n A[2, 2] = A[n / 2 + 1..n][n / 2 + 1..n]\n ... | false | [] |
04-4.2-3 | 04 | 4.2 | 4.2-3 | docs/Chap04/4.2.md | How would you modify Strassen's algorithm to multiply $n \times n$ matrices in which $n$ is not an exact power of $2$? Show that the resulting algorithm runs in time $\Theta(n^{\lg7})$. | Pad each input matrix with zeros up to the next exact power of $2$, i.e., to size $m \times m$ with $m = 2^{\lceil \lg n \rceil} < 2n$. Running Strassen's algorithm on the padded matrices takes $\Theta(m^{\lg 7}) = \Theta((2n)^{\lg 7}) = \Theta(n^{\lg 7})$ time, since padding at most doubles the dimension. | [] | false | [] |
04-4.2-4 | 04 | 4.2 | 4.2-4 | docs/Chap04/4.2.md | What is the largest $k$ such that if you can multiply $3 \times 3$ matrices using $k$ multiplications (not assuming commutativity of multiplication), then you can multiply $n \times n$ matrices in time $o(n^{\lg 7})$? What would the running time of this algorithm be? | Assume $n = 3^m$ for some $m$. Then, using block matrix multiplication, we obtain the recursive running time $T(n) = kT(n / 3) + O(1)$.
By the master theorem, the solution is $T(n) = \Theta(n^{\log_3 k})$, so we need $\log_3 k < \lg 7$. The largest such $k$ is $k = 21$, and the resulting running time is $\Theta(n^{\log_3 21}) = o(n^{\lg 7})$. | [] | false | [] |
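The bound on $k$ can be checked numerically (a quick sketch of the master-theorem condition above; the combine cost is low-order):

```python
from math import log, log2

# Splitting into (n/3) x (n/3) blocks with k block multiplications gives
# T(n) = Theta(n^{log_3 k}), so we want the largest k with log_3 k < lg 7.
largest_k = max(k for k in range(1, 28) if log(k, 3) < log2(7))
```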
04-4.2-5 | 04 | 4.2 | 4.2-5 | docs/Chap04/4.2.md | V. Pan has discovered a way of multiplying $68 \times 68$ matrices using $132464$ multiplications, a way of multiplying $70 \times 70$ matrices using $143640$ multiplications, and a way of multiplying $72 \times 72$ matrices using $155424$ multiplications. Which method yields the best asymptotic running time when used ... | **Analyzing Pan's Methods**
Pan has introduced three methods for divide-and-conquer matrix multiplication, each with different parameters. We will analyze the recurrence relations, compute the exponents using the Master Theorem, and compare the resulting asymptotic running times to Strassen’s algorithm.
**Method 1:**... | [] | false | [] |
04-4.2-6 | 04 | 4.2 | 4.2-6 | docs/Chap04/4.2.md | How quickly can you multiply a $kn \times n$ matrix by an $n \times kn$ matrix, using Strassen's algorithm as a subroutine? Answer the same question with the order of the input matrices reversed. | - $(kn \times n)(n \times kn)$ produces a $kn \times kn$ matrix. This produces $k^2$ multiplications of $n \times n$ matrices.
- $(n \times kn)(kn \times n)$ produces an $n \times n$ matrix. This produces $k$ multiplications and $k - 1$ additions. | [] | false | [] |
04-4.2-7 | 04 | 4.2 | 4.2-7 | docs/Chap04/4.2.md | Show how to multiply the complex numbers $a + bi$ and $c + di$ using only three multiplications of real numbers. The algorithm should take $a$, $b$, $c$ and $d$ as input and produce the real component $ac - bd$ and the imaginary component $ad + bc$ separately. | The three matrices are
$$
\begin{aligned}
A & = (a + b)(c + d) = ac + ad + bc + bd \\\\
B & = ac \\\\
C & = bd.
\end{aligned}
$$
The result is
$$(B - C) + (A - B - C)i.$$ | [] | false | [] |
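A direct Python rendering of the three-multiplication scheme (`complex_mul3` is a name introduced here for illustration):

```python
def complex_mul3(a, b, c, d):
    # three real multiplications: A = (a+b)(c+d), B = ac, C = bd
    A = (a + b) * (c + d)      # = ac + ad + bc + bd
    B = a * c
    C = b * d
    return B - C, A - B - C    # (real part ac - bd, imaginary part ad + bc)
```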
04-4.3-1 | 04 | 4.3 | 4.3-1 | docs/Chap04/4.3.md | Show that the solution of $T(n) = T(n - 1) + n$ is $O(n^2)$. | We guess $T(n) \le cn^2$,
$$
\begin{aligned}
T(n) & \le c(n - 1)^2 + n \\\\
& = cn^2 - 2cn + c + n \\\\
& = cn^2 + n(1 - 2c) + c \\\\
& \le cn^2,
\end{aligned}
$$
where the last step holds for $c > \frac{1}{2}$. | [] | false | [] |
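As a numeric sanity check of the bound: the recurrence unrolls to the arithmetic series $n(n + 1)/2$, which even $c = 1$ dominates.

```python
# With T(1) = 1, T(n) = T(n-1) + n unrolls to n(n+1)/2 <= 1 * n^2.
T = {1: 1}
for n in range(2, 201):
    T[n] = T[n - 1] + n
ok = all(T[n] == n * (n + 1) // 2 and T[n] <= n * n for n in T)
```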
04-4.3-2 | 04 | 4.3 | 4.3-2 | docs/Chap04/4.3.md | Show that the solution of $T(n) = T(\lceil n / 2 \rceil) + 1$ is $O(\lg n)$. | We guess $T(n) \le c\lg(n - a)$,
$$
\begin{aligned}
T(n) & \le c\lg(\lceil n / 2 \rceil - a) + 1 \\\\
& \le c\lg((n + 1) / 2 - a) + 1 \\\\
& = c\lg((n + 1 - 2a) / 2) + 1 \\\\
& = c\lg(n + 1 - 2a) - c\lg 2 + 1 & (c \ge 1) \\\\
& \le c\lg(n + 1 - 2a) & (a \ge 1) \\\\
& \le c\lg(n - a),
\end{... | [] | false | [] |
04-4.3-3 | 04 | 4.3 | 4.3-3 | docs/Chap04/4.3.md | We saw that the solution of $T(n) = 2T(\lfloor n / 2 \rfloor) + n$ is $O(n\lg n)$. Show that the solution of this recurrence is also $\Omega(n\lg n)$. Conclude that the solution is $\Theta(n\lg n)$. | First, we guess $T(n) \le cn\lg n$,
$$
\begin{aligned}
T(n) & \le 2c\lfloor n / 2 \rfloor\lg{\lfloor n / 2 \rfloor} + n \\\\
& \le cn\lg(n / 2) + n \\\\
& = cn\lg n - cn\lg 2 + n \\\\
& = cn\lg n + (1 - c)n \\\\
& \le cn\lg n,
\end{aligned}
$$
where the last step holds for $c \ge 1$.
Next, we... | [] | false | [] |
04-4.3-4 | 04 | 4.3 | 4.3-4 | docs/Chap04/4.3.md | Show that by making a different inductive hyptohesis, we can overcome the difficulty with the boundary condition $T(1) = 1$ for recurrence $\text{(4.19)}$ without adjusting the boundary conditions for the inductive proof. | We guess $T(n) \le n\lg n + n$,
$$
\begin{aligned}
T(n) & \le 2(c\lfloor n / 2 \rfloor\lg{\lfloor n / 2 \rfloor} + \lfloor n / 2 \rfloor) + n \\\\
& \le 2c(n / 2)\lg(n / 2) + 2(n / 2) + n \\\\
& = cn\lg(n / 2) + 2n \\\\
& = cn\lg n - cn\lg{2} + 2n \\\\
& = cn\lg n + (2 - c)n \\\\
& \le c... | [] | false | [] |
04-4.3-5-1 | 04 | 4.3 | 4.3-5 | docs/Chap04/4.3.md | Show that $\Theta(n\lg n)$ is the solution to the "exact" recurrence $\text{(4.3)}$ for merge sort. | The recurrence is
> $$T(n) = T(\lceil n / 2 \rceil) + T(\lfloor n / 2 \rfloor) + \Theta(n) \tag{4.3}$$
To show $\Theta$ bound, separately show $O$ and $\Omega$ bounds.
- For $O(n\lg n)$, we guess $T(n) \le c(n - 2)\lg(n - 2) - 2c$,
$$
\begin{aligned}
T(n) & \le c(\lceil n / 2 \rceil -2 )\lg(\lceil n / 2 \rcei... | [] | false | [] |
04-4.3-5-2 | 04 | 4.3 | 4.3-5 | docs/Chap04/4.3.md | $$T(n) = T(\lceil n / 2 \rceil) + T(\lfloor n / 2 \rfloor) + \Theta(n) \tag{4.3}$$ | The recurrence is
> $$T(n) = T(\lceil n / 2 \rceil) + T(\lfloor n / 2 \rfloor) + \Theta(n) \tag{4.3}$$
To show $\Theta$ bound, separately show $O$ and $\Omega$ bounds.
- For $O(n\lg n)$, we guess $T(n) \le c(n - 2)\lg(n - 2) - 2c$,
$$
\begin{aligned}
T(n) & \le c(\lceil n / 2 \rceil -2 )\lg(\lceil n / 2 \rcei... | [] | false | [] |
04-4.3-6 | 04 | 4.3 | 4.3-6 | docs/Chap04/4.3.md | Show that the solution to $T(n) = 2T(\lfloor n / 2 \rfloor + 17) + n$ is $O(n\lg n)$. | We guess $T(n) \le c(n - a)\lg(n - a)$,
$$
\begin{aligned}
T(n) & \le 2c(\lfloor n / 2 \rfloor + 17 - a)\lg(\lfloor n / 2 \rfloor + 17 - a) + n \\\\
& \le 2c(n / 2 + 17 - a)\lg(n / 2 + 17 - a) + n \\\\
& = c(n + 34 - 2a)\lg\frac{n + 34 - 2a}{2} + n \\\\
& = c(n + 34 - 2a)\lg(n + 34 - 2a) - c(n + 34 ... | [] | false | [] |
04-4.3-7 | 04 | 4.3 | 4.3-7 | docs/Chap04/4.3.md | Using the master method in Section 4.5, you can show that the solution to the recurrence $T(n) = 4T(n / 3) + n$ is $T(n) = \Theta(n^{\log_3 4})$. Show that a substitution proof with the assumption $T(n) \le cn^{\log_3 4}$ fails. Then show how to subtract off a lower-order term to make the substitution proof work. | We guess $T(n) \le cn^{\log_3 4}$ first,
$$
\begin{aligned}
T(n) & \le 4c(n / 3)^{\log_3 4} + n \\\\
& = cn^{\log_3 4} + n.
\end{aligned}
$$
We are stuck here: the extra $n$ term cannot be absorbed into $cn^{\log_3 4}$, so this guess fails.
We guess $T(n) \le cn^{\log_3 4} - dn$ again,
$$
\begin{aligned}
T(n) & \le 4(c(n / 3)^{\log_3 4} - dn / 3) + n \\\\
& = 4(cn^{\log_3 4} / 4 - dn ... | [] | false | [] |
04-4.3-8 | 04 | 4.3 | 4.3-8 | docs/Chap04/4.3.md | Using the master method in Section 4.5, you can show that the solution to the recurrence $T(n) = 4T(n / 2) + n$ is $T(n) = \Theta(n^2)$. Show that a substitution proof with the assumption $T(n) \le cn^2$ fails. Then show how to subtract off a lower-order term to make the substitution proof work. | First, let's try the guess $T(n) \le cn^2$. Then, we have
$$
\begin{aligned}
T(n) &= 4T(n / 2) + n \\\\
&\le 4c(n / 2)^2 + n \\\\
&= cn^2 + n.
\end{aligned}
$$
We can't proceed any further from the inequality above to conclude $T(n) \le cn^2$.
Alternatively, let us try the guess
$$T(n) \le cn^2 - cn,$... | [] | false | [] |
04-4.3-9 | 04 | 4.3 | 4.3-9 | docs/Chap04/4.3.md | Solve the recurrence $T(n) = 3T(\sqrt n) + \log n$ by making a change of variables. Your solution should be asymptotically tight. Do not worry about whether values are integral. | First,
$$
\begin{aligned}
T(n) & = 3T(\sqrt n) + \lg n & \text{ let } m = \lg n \\\\
T(2^m) & = 3T(2^{m / 2}) + m \\\\
S(m) & = 3S(m / 2) + m.
\end{aligned}
$$
Now we guess $S(m) \le cm^{\lg 3} + dm$,
$$
\begin{aligned}
S(m) & \le 3\Big(c(m / 2)^{\lg 3} + d(m / 2)\Big) + m \\\\
& \le cm^{\lg 3} + (\frac{3}{... | [] | false | [] |
04-4.4-1 | 04 | 4.4 | 4.4-1 | docs/Chap04/4.4.md | Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 3T(\lfloor n / 2 \rfloor) + n$. Use the substitution method to verify your answer. | - The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $3^{\lg n} = n^{\lg 3}$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $3^i(n / 2^i) = (3 / 2)^i n$.
$$
\begin{aligned}
T(n) & = n + \frac{3}{2}n + \... | [] | false | [] |
04-4.4-2 | 04 | 4.4 | 4.4-2 | docs/Chap04/4.4.md | Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = T(n / 2) + n^2$. Use the substitution method to verify your answer. | - The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $1^{\lg n} = 1$ leaf.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $1^i (n / 2^i)^2 = (1 / 4)^i n^2$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{\lg n - 1}... | [] | false | [] |
04-4.4-3 | 04 | 4.4 | 4.4-3 | docs/Chap04/4.4.md | Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 4T(n / 2 + 2) + n$. Use the substitution method to verify your answer. | - The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $4^{\lg n} = n^2$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $4^i(n / 2^i + 2) = 2^i n + 2 \cdot 4^i$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{... | [] | false | [] |
04-4.4-4 | 04 | 4.4 | 4.4-4 | docs/Chap04/4.4.md | Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = 2T(n - 1) + 1$. Use the substitution method to verify your answer. | - The subproblem size for a node at depth $i$ is $n - i$.
Thus, the tree has $n + 1$ levels ($i = 0, 1, 2, \dots, n$) and $2^n$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, n - 1$, is $2^i$.
The $n$-th level has $2^n$ leaves each with cost $\Theta(1)$, so the total cost o... | [] | false | [] |
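The level-by-level sum can be confirmed against the closed form (taking $T(0) = 1$ as the base case for illustration):

```python
# Unrolling T(n) = 2T(n-1) + 1 gives the tree's total cost
# sum_{i=0}^{n-1} 2^i + 2^n = 2^{n+1} - 1 = Theta(2^n).
T = {0: 1}
for n in range(1, 31):
    T[n] = 2 * T[n - 1] + 1
closed_form_ok = all(T[n] == 2 ** (n + 1) - 1 for n in T)
```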
04-4.4-5 | 04 | 4.4 | 4.4-5 | docs/Chap04/4.4.md | Use a recursion tree to determine a good asymptotic upper bound on the recurrence $T(n) = T(n - 1) + T(n / 2) + n$. Use the substitution method to verify your answer. | This is a curious one. The tree makes it look like it is exponential in the worst case. The tree is not full (not a complete binary tree of height $n$), but it is not polynomial either. It's easy to show $O(2^n)$ and $\Omega(n^2)$.
To justify that this is a pretty tight upper bound, we'll show that we can't have any o... | [] | false | [] |
04-4.4-6 | 04 | 4.4 | 4.4-6 | docs/Chap04/4.4.md | Argue that the solution to the recurrence $T(n) = T(n / 3) + T(2n / 3) + cn$, where $c$ is a constant, is $\Omega(n\lg n)$ by appealing to the recursion tree. | We know that the cost at each level of the tree is $cn$ by examining the tree in figure 4.6. To find a lower bound on the cost of the algorithm, we need a lower bound on the height of the tree.
The shortest simple path from root to leaf is found by following the leftmost child at each node. Since we divide by $3$ at ea...
04-4.4-7 | 04 | 4.4 | 4.4-7 | docs/Chap04/4.4.md | Draw the recursion tree for $T(n) = 4T(\lfloor n / 2 \rfloor) + cn$, where $c$ is a constant, and provide a tight asymptotic bound on its solution. Verify your answer with the substitution method. | - The subproblem size for a node at depth $i$ is $n / 2^i$.
Thus, the tree has $\lg n + 1$ levels and $4^{\lg n} = n^{\lg 4} = n^2$ leaves.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, \lg n - 1$, is $4^i(cn / 2^i) = 2^icn$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{\lg n... | [] | false | [] |
04-4.4-8 | 04 | 4.4 | 4.4-8 | docs/Chap04/4.4.md | Use a recursion tree to give an asymptotically tight solution to the recurrence $T(n) = T(n - a) + T(a) + cn$, where $a \ge 1$ and $c > 0$ are constants. | - The tree has $n / a + 1$ levels.
The total cost over all nodes at depth $i$, for $i = 0, 1, 2, \ldots, n / a - 1$, is $c(n - ia)$.
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{n / a} c(n - ia) + (n / a)ca \\\\
& = \sum_{i = 0}^{n / a} cn - \sum_{i = 0}^{n / a} cia + (n / a)ca \\\\
& =... | [] | false | [] |
04-4.4-9 | 04 | 4.4 | 4.4-9 | docs/Chap04/4.4.md | Use a recursion tree to give an asymptotically tight solution to the recurrence $T(n) = T(\alpha n) + T((1 - \alpha)n) + cn$, where $\alpha$ is a constant in the range $0 < \alpha < 1$, and $c > 0$ is also a constant. | We can assume that $0 < \alpha \le 1 / 2$, since otherwise we can let $\beta = 1 − \alpha$ and solve it for $\beta$.
Thus, the depth of the tree is $\log_{1 / \alpha} n$ and each level costs $cn$. And let's guess that the leaves are $\Theta(n)$,
$$
\begin{aligned}
T(n) & = \sum_{i = 0}^{\log_{1 / \alpha} n} cn + \The... | [] | false | [] |
04-4.5-1 | 04 | 4.5 | 4.5-1 | docs/Chap04/4.5.md | Use the master method to give tight asymptotic bounds for the following recurrences:
**a.** $T(n) = 2T(n / 4) + 1$.
**b.** $T(n) = 2T(n / 4) + \sqrt n$.
**c.** $T(n) = 2T(n / 4) + n$.
**d.** $T(n) = 2T(n / 4) + n^2$. | **a.** $\Theta(n^{\log_4 2}) = \Theta(\sqrt n)$.
**b.** $\Theta(n^{\log_4 2}\lg n) = \Theta(\sqrt n\lg n)$.
**c.** $\Theta(n)$.
**d.** $\Theta(n^2)$. | [] | false | [] |
04-4.5-2 | 04 | 4.5 | 4.5-2 | docs/Chap04/4.5.md | Professor Caesar wishes to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size $n / 4 \times n / 4$, and the divide and combine steps together will take $\Theta(n^2)$ time. He ne... | Strassen's algorithm has running time of $\Theta(n^{\lg 7})$.
The largest integer $a$ such that $\log_4 a < \lg 7$ is $a = 48$. | [] | false | [] |
04-4.5-3 | 04 | 4.5 | 4.5-3 | docs/Chap04/4.5.md | Use the master method to show that the solution to the binary-search recurrence $T(n) = T(n / 2) + \Theta(1)$ is $T(n) = \Theta(\lg n)$. (See exercise 2.3-5 for a description of binary search.) | $$
\begin{aligned}
a & = 1, b = 2, \\\\
f(n) & = \Theta(n^{\lg 1}) = \Theta(1), \\\\
T(n) & = \Theta(\lg n).
\end{aligned}
$$ | [] | false | [] |
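For reference, an iterative binary search whose loop realizes this recurrence (each iteration does constant work and halves the remaining range; a standard sketch, not from the text):

```python
def binary_search(A, v):
    # sorted input A; returns an index of v, or -1 if absent
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # constant work, range halves each pass
        if A[mid] == v:
            return mid
        if A[mid] < v:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```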
04-4.5-4 | 04 | 4.5 | 4.5-4 | docs/Chap04/4.5.md | Can the master method be applied to the recurrence $T(n) = 4T(n / 2) + n^2\lg n$? Why or why not? Give an asymptotic upper bound for this recurrence. | With $a = 4$, $b = 2$, we have $f(n) = n^2\lg n \ne O(n^{2 - \epsilon}) \ne \Omega(n^{2 + \epsilon})$, so we cannot apply the master method.
We guess $T(n) \le cn^2\lg^2 n$, subsituting $T(n/2) \le c(n/2)^2\lg^2 (n/2)$ into the recurrence yields
$$
\begin{aligned}
T(n) & = 4T(n / 2) + n^2\lg n \\\\
& \le 4c(n ... | [] | false | [] |
04-4.5-5 | 04 | 4.5 | 4.5-5 $\star$ | docs/Chap04/4.5.md | Consider the regularity condition $af(n / b) \le cf(n)$ for some constant $c < 1$, which is part of case 3 of the master theorem. Give an example of constants $a \ge 1$ and $b > 1$ and a function $f(n)$ that satisfies all the conditions in case 3 of the master theorem, except the regularity condition. | $a = 1$, $b = 2$ and $f(n) = n(2 - \cos n)$.
If we try to establish the regularity condition, we need
$$\frac{n}{2}\Big(2 - \cos\frac{n}{2}\Big) \le cn(2 - \cos n)$$
for some constant $c < 1$ and all sufficiently large $n$. Consider $n = 2\pi k$ for odd integers $k$: then $\cos n = 1$ and $\cos(n / 2) = \cos(\pi k) = -1$, so the condition becomes
$$\frac{3n}{2} \le cn,$$
which requires $c \ge 3 / 2$. But case 3 needs $c < 1$, so the regularity condition fails. | [] | false | [] |
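The counterexample can be probed numerically (an illustration; the points $n = 2\pi k$ with $k$ odd are where the violation shows up):

```python
from math import cos, pi

# f(n) = n(2 - cos n): at n = 2*pi*k with k odd, cos n = 1 and
# cos(n/2) = -1, so f(n/2) / f(n) = 3/2, and a*f(n/b) <= c*f(n)
# would force c >= 3/2 -- never a constant below 1.
f = lambda n: n * (2 - cos(n))
ratios = [f(pi * k) / f(2 * pi * k) for k in (1, 3, 5, 101)]
```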
04-4.6-1 | 04 | 4.6 | 4.6-1 $\star$ | docs/Chap04/4.6.md | Give a simple and exact expression for $n_j$ in equation $\text{(4.27)}$ for the case in which $b$ is a positive integer instead of an arbitrary real number. | We state that $\forall{j \ge 0}, n_j = \left \lceil \frac{n}{b^j} \right \rceil$.
Indeed, for $j = 0$ we have from the recurrence's base case that $n_0 = n = \left \lceil \frac{n}{b^0} \right \rceil$.
Now, suppose $n_{j - 1} = \left \lceil \frac{n}{b^{j - 1}} \right \rceil$ for some $j > 0$. By definition, $n_j = ... | [] | false | [] |
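The claim $n_j = \lceil n / b^j \rceil$ for integer $b$ can be spot-checked numerically (an illustration; `iceil` is a helper introduced here to avoid floating-point rounding):

```python
def iceil(p, q):
    # ceiling of p / q using integer arithmetic only
    return -(-p // q)

# iterating n_j = ceil(n_{j-1} / b) must agree with ceil(n / b^j)
ok = True
for n in (1, 7, 100, 12345):
    for b in (2, 3, 5):
        nj = n
        for j in range(1, 10):
            nj = iceil(nj, b)
            ok = ok and nj == iceil(n, b ** j)
```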
04-4.6-2 | 04 | 4.6 | 4.6-2 $\star$ | docs/Chap04/4.6.md | Show that if $f(n) = \Theta(n^{\log_b a}\lg^k{n})$, where $k \ge 0$, then the master recurrence has solution $T(n) = \Theta(n^{\log_b a}\lg^{k + 1}n)$. For simplicity, confine your analysis to exact powers of $b$. | $$
\begin{aligned}
g(n) & = \sum_{j = 0}^{\log_b n - 1} a^j f(n / b^j) \\\\
f(n / b^j) & = \Theta\Big((n / b^j)^{\log_b a} \lg^k(n / b^j) \Big) \\\\
g(n) & = \Theta\Big(\sum_{j = 0}^{\log_b n - 1}a^j\big(\frac{n}{b^j}\big)^{\log_b a}\lg^k\big(\frac{n}{b^j}\big)\Big) \\\\
&... | [] | false | [] |
04-4.6-3 | 04 | 4.6 | 4.6-3 $\star$ | docs/Chap04/4.6.md | Show that case 3 of the master method is overstated, in the sense that the regularity condition $af(n / b) \le cf(n)$ for some constant $c < 1$ implies that there exists a constant $\epsilon > 0$ such that $f(n) = \Omega(n^{\log_b a + \epsilon})$. | $$
\begin{aligned}
af(n / b) & \le cf(n) \\\\
\Rightarrow f(n / b) & \le \frac{c}{a} f(n) \\\\
\Rightarrow f(n) & \le \frac{c}{a} f(bn) \\\\
& = \frac{c}{a} \left(\frac{c}{a} f(b^2n)\right) \\\\
& = \frac{c}{a} \left(\frac{c}{a}\left(\frac{c}{a} f(b^3n)\right... | [] | false | [] |
04-4-1 | 04 | 4-1 | 4-1 | docs/Chap04/Problems/4-1.md | Give asymptotic upper and lower bound for $T(n)$ in each of the following recurrences. Assume that $T(n)$ is constant for $n \le 2$. Make your bounds as tight as possible, and justify your answers.
**a.** $T(n) = 2T(n / 2) + n^4$.
**b.** $T(n) = T(7n / 10) + n$.
**c.** $T(n) = 16T(n / 4) + n^2$.
**d.** $T(n) = 7T(n... | **a.** By master theorem, $T(n) = \Theta(n^4)$.
**b.** By master theorem, $T(n) = \Theta(n)$.
**c.** By master theorem, $T(n) = \Theta(n^2\lg n)$.
**d.** By master theorem, $T(n) = \Theta(n^2)$.
**e.** By master theorem, $T(n) = \Theta(n^{\lg 7})$.
**f.** By master theorem, $T(n) = \Theta(\sqrt n \lg n)$.
**g.** ... | [] | false | [] |
04-4-2 | 04 | 4-2 | 4-2 | docs/Chap04/Problems/4-2.md | Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an $N$-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing stra... | **a.**
1. $T(n) = T(n / 2) + c = \Theta(\lg n)$. (master method)
2. $\Theta(n\lg n)$.
$$
\begin{aligned}
T(n) & = T(n / 2) + cN \\\\
& = 2cN + T(n / 4) \\\\
& = 3cN + T(n / 8) \\\\
& = \sum_{i = 0}^{\lg n - 1}(2^icN / 2^i) \\\\
& = cN\lg n \\\\
& = \Theta(n\lg ... | [] | false | [] |
04-4-3 | 04 | 4-3 | 4-3 | docs/Chap04/Problems/4-3.md | Give asymptotic upper and lower bounds for $T(n)$ in each of the following recurrences. Assume that $T(n)$ is constant for sufficiently small $n$. Make your bounds as tight as possible, and justify your answers.
**a.** $T(n) = 4T(n / 3) + n\lg n$.
**b.** $T(n) = 3T(n / 3) + n / \lg n$.
**c.** $T(n) = 4T(n / 2) + n^2... | **a.** By master theorem, $T(n) = \Theta(n^{\log_3 4})$.
**b.**
By the recursion-tree method, we can guess that $T(n) = \Theta(n\log_3\log_3 n)$.
We start by proving the upper bound.
Suppose $k < n \implies T(k) \le ck \log_3\log_3 k - k$, where we subtract a lower order term to strengthen our induction hypothesis.... | [] | false | [] |
04-4-4 | 04 | 4-4 | 4-4 | docs/Chap04/Problems/4-4.md | This problem develops properties of the Fibonacci numbers, which are defined by recurrence $\text{(3.22)}$. We shall use the technique of generating functions to solve the Fibonacci recurrence. Define the **_generating function_** (or **_formal power series_**) $\mathcal F$ as
$$
\begin{aligned}
\mathcal F(z)
& = \sum... | **a.**
$$
\begin{aligned} z + z\mathcal F(z) + z^2\mathcal F(z)
& = z + z\sum_{i = 0}^{\infty} F_iz^i + z^2\sum_{i = 0}^{\infty}F_i z^i \\\\
& = z + \sum_{i = 1}^{\infty} F_{i - 1}z^i + \sum_{i = 2}^{\infty}F_{i - 2} z^i \\\\
& = z + F_1z + \sum_{i = 2}^{\infty}(F_{i - 1} + F_{i - 2})z^i \\\\
& = z + F_1z ... | [] | false | [] |
04-4-5 | 04 | 4-5 | 4-5 | docs/Chap04/Problems/4-5.md | Professor Diogenes has $n$ supposedly identical integrated-circuit chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether ... | **a.** Let's say that there are $g < n / 2$ good chips and $n - g$ bad chips.
From this assumption, we can always find a set of good chips $G$ and a set of bad chips $B$ of equal size $g$ since $n - g \ge g$.
Now, assume that chips in $B$ always conspire to fool the professor in the following:
"for any test made by t... | [] | false | [] |
04-4-6 | 04 | 4-6 | 4-6 | docs/Chap04/Problems/4-6.md | An $m \times n$ array $A$ of real numbers is a **Monge array** if for all $i$, $j$, $k$, and $l$ such that $1 \le i < k \le m$ and $1 \le j < l \le n$, we have
$$A[i, j] + A[k, l] \le A[i, l] + A[k, j].$$
In other words, whenever we pick two rows and two columns of a Monge array and consider the four elements at the ... | **a.** The "only if" part is trivial, it follows form the definition of Monge array.
As for the "if" part, let's first prove that
$$
\begin{aligned}
A[i, j] + A[i + 1, j + 1] & \le A[i, j + 1] + A[i + 1, j] \\\\
\Rightarrow A[i, j] + A[k, j + 1] & \le A[i, j + 1] + A[k, j],
\end{aligned}
$$
where $i < k$.
... | [] | false | [] |
05-5.1-1 | 05 | 5.1 | 5.1-1 | docs/Chap05/5.1.md | Show that the assumption that we are always able to determine which candidate is best in line 4 of procedure $\text{HIRE-ASSISTANT}$ implies that we know a total order on the ranks of the candidates. | A total order is a partial order that is a total relation $(\forall a, b \in A:aRb \text{ or } bRa)$.
A relation is a partial order if it is reflexive, antisymmetric and transitive.
Assume that the relation is good or better.
- **Reflexive:** This is a bit trivial, but everybody is as good as or better than themselves.
- ... | [] | false | [] |
05-5.1-2 | 05 | 5.1 | 5.1-2 $\star$ | docs/Chap05/5.1.md | Describe an implementation of the procedure $\text{RANDOM}(a, b)$ that only makes calls to $\text{RANDOM}(0, 1)$. What is the expected running time of your procedure, as a function of $a$ and $b$? | As $(b - a)$ could be any number, we need at least $\lceil \lg(b - a) \rceil$ bits to represent the number. We set $\lceil \lg(b - a) \rceil$ as $k$. Basically, we need to call $\text{RANDOM}(0, 1)$ $k$ times. If the number represented by binary is bigger than $b - a$, it's not valid number and we give it another try, ... | [
{
"lang": "cpp",
"code": "RANDOM(a, b)\n range = b - a\n bits = ceil(log(2, range))\n result = 0\n for i = 0 to bits - 1\n r = RANDOM(0, 1)\n result = result + r << i\n if result > range\n return RANDOM(a, b)\n else return a + result"
}
] | false | [] |
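A possible Python rendering of this rejection-sampling scheme (hypothetical names; it draws enough bits to cover $b - a$ inclusive, which also handles the edge case where $b - a$ is an exact power of $2$):

```python
import random

def random01():
    # stand-in for RANDOM(0, 1): one unbiased random bit
    return random.getrandbits(1)

def random_ab(a, b):
    # assemble enough bits to cover 0..(b - a); reject out-of-range
    # values and retry, so every value in [a, b] is equally likely
    span = b - a
    bits = max(1, span.bit_length())
    while True:
        result = 0
        for i in range(bits):
            result |= random01() << i
        if result <= span:
            return a + result

random.seed(1)  # reproducible demo
samples = [random_ab(3, 10) for _ in range(2000)]
```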
05-5.1-3 | 05 | 5.1 | 5.1-3 $\star$ | docs/Chap05/5.1.md | Suppose that you want to output $0$ with probability $1 / 2$ and $1$ with probability $1 / 2$. At your disposal is a procedure $\text{BIASED-RANDOM}$, that outputs either $0$ or $1$. It outputs $1$ with some probability $p$ and $0$ with probability $1 - p$, where $0 < p < 1$, but you do not know what $p$ is. Give an al... | There are 4 outcomes when we call $\text{BIASED-RANDOM}$ twice, i.e., $00$, $01$, $10$, $11$.
The strategy is as following:
- $00$ or $11$: call $\text{BIASED-RANDOM}$ twice again
- $01$: output $0$
- $10$: output $1$
We can calculate the probability of each outcome:
- $\Pr\\{00 \text{ or } 11\\} = p^2 + (1 - p)^2$
- $\Pr\\{... | [
{
"lang": "cpp",
"code": "UNBIASED-RANDOM\n while true\n x = BIASED-RANDOM\n y = BIASED-RANDOM\n if x != y\n return x"
}
] | false | [] |
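The strategy above (von Neumann's trick) in runnable form (hypothetical names; $p$ is passed around only so the demo can simulate the biased source):

```python
import random

def biased_random(p):
    # BIASED-RANDOM: 1 with probability p, 0 otherwise; p is unknown to
    # the unbiasing procedure and appears here only for simulation
    return 1 if random.random() < p else 0

def unbiased_random(p):
    # draw pairs until they differ; 01 and 10 are equally likely
    # (each has probability p(1 - p)), so the first bit is unbiased
    while True:
        x = biased_random(p)
        y = biased_random(p)
        if x != y:
            return x

random.seed(0)  # reproducible demo
flips = [unbiased_random(0.8) for _ in range(10000)]
```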
05-5.2-1 | 05 | 5.2 | 5.2-1 | docs/Chap05/5.2.md | In $\text{HIRE-ASSISTANT}$, assuming that the candidates are presented in a random order, what is the probability that you hire exactly one time? What is the probability you hire exactly $n$ times? | You will hire exactly one time if the best candidate is at first. There are $(n − 1)!$ orderings with the best candidate being at first, so the probability that you hire exactly one time is $\frac{(n - 1)!}{n!} = \frac{1}{n}$.
You will hire exactly $n$ times if the candidates are presented in increasing order. There i... | [] | false | [] |
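Both probabilities can be confirmed by brute-force enumeration for a small $n$ (hires correspond to left-to-right maxima of the candidate order):

```python
from itertools import permutations

def hires(ranks):
    # HIRE-ASSISTANT hires whenever the current candidate beats everyone
    # seen so far, i.e., once per left-to-right maximum
    best = 0
    count = 0
    for r in ranks:
        if r > best:
            best = r
            count += 1
    return count

n = 5
perms = list(permutations(range(1, n + 1)))
exactly_once = sum(1 for p in perms if hires(p) == 1)   # best candidate first
exactly_n = sum(1 for p in perms if hires(p) == n)      # increasing order
```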
CLRS Solutions QA
Short description.
A compact Q&A dataset distilled from the community-maintained CLRS solutions project. Each row contains:
- the exercise question (markdown),
- the answer (markdown),
- book chapter/section metadata,
- optional code blocks (language-tagged),
- optional image references (relative paths from the source repo).
This set is useful for building retrieval, RAG, tutoring, and evaluation pipelines for classic algorithms & data structures topics.
⚠️ Attribution: This dataset is derived from the open-source repository walkccc/CLRS (MIT license). Credit belongs to @walkccc and all contributors. This packaging only restructures their content into a machine-friendly format.
Contents & Stats
- Split(s): train
- Rows: ~1,016
- Source: Parsed from markdown files in walkccc/CLRS (third-edition exercises/solutions)
Note: A small number of rows reference images present in the original repo (docs/img/...). This dataset includes the image references (paths) as metadata; actual image files are not bundled here.
Also available (human-readable copies):
from datasets import load_dataset

# JSONL
ds_json = load_dataset(
    "json",
    data_files="hf://datasets/Siddharth899/clrs-qa/data/train.jsonl.gz",
    token=True,  # needed if the repo is private
)

# CSV
ds_csv = load_dataset(
    "csv",
    data_files="hf://datasets/Siddharth899/clrs-qa/data/train.csv.gz",
    token=True,
)
Data Fields
| Field | Type | Description |
|---|---|---|
| `id` | string | Stable row id composed from chapter/section/title (e.g., `02-2.3-5`). |
| `chapter` | string | Chapter number as a zero-padded string (e.g., `"02"`). |
| `section` | string | Section identifier as in the source (e.g., `"2.3"` or `"2-1"`). |
| `title` | string | Exercise/problem label (e.g., `"2.3-5"` or `"2-1"`). |
| `source_file` | string | Original markdown relative path in the source repo. |
| `question_markdown` | string | Exercise prompt in markdown. |
| `answer_markdown` | string | Solution/answer in markdown (often includes LaTeX). |
| `code_blocks` | list of objects `{lang, code}` | Zero or more language-tagged code snippets extracted from the answer. |
| `has_images` | bool | Whether this item references images. |
| `image_refs` | list[string] | Relative paths to referenced images in the original repo. |
Example code_blocks entry:
[
{"lang": "cpp", "code": "INSERTION-SORT(A)\n ..."},
{"lang": "python", "code": "def merge(...):\n ..."}
]
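For instance, one might pull out all snippets of a given language from loaded rows (a hypothetical helper; the row shape matches the `code_blocks` field documented above):

```python
def extract_snippets(rows, lang="cpp"):
    # collect all code snippets of one language from rows shaped like
    # the code_blocks example above
    return [
        block["code"]
        for row in rows
        for block in (row.get("code_blocks") or [])
        if block.get("lang") == lang
    ]

# tiny stand-in rows for illustration (not real dataset content):
rows = [
    {"code_blocks": [{"lang": "cpp", "code": "INSERTION-SORT(A)"}]},
    {"code_blocks": [{"lang": "python", "code": "def merge(): pass"}]},
    {"code_blocks": []},
]
```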
Data Construction
- Source: walkccc/CLRS
- License upstream: MIT
- Method: A small script parses chapter/section markdown files, extracts headings, prompts, answers, fenced code blocks, and image references, and emits JSONL → uploaded to the Hub (Parquet auto-materialized).
Known quirks:
- Some answers are brief/telegraphic (mirroring the original).
- Image references point to paths in the upstream repo; not all images are bundled here.
- Math is plain markdown with LaTeX snippets (`$...$`, `$$...$$`); rendering depends on your viewer.
License
- This dataset (packaging): MIT
- Upstream content: MIT (from walkccc/CLRS)
You must preserve the original MIT license notice and attribute @walkccc and contributors when using this dataset.
MIT License
Copyright (c) walkccc
... (see upstream repository for the full license text)
Additionally, include attribution similar to:
“Portions of the content are derived from walkccc/CLRS (MIT). © The respective contributors.”
Citation
If you use this dataset, please cite both the dataset and the upstream project:
Dataset (this repo):
@misc{clrs_qa_dataset_2025,
title = {CLRS Solutions QA (walkccc-derived)},
author = {Siddharth899},
year = {2025},
howpublished = {\url{https://huggingface.co/datasets/Siddharth899/clrs-qa}},
note = {Derived from walkccc/CLRS (MIT)}
}
Upstream CLRS solutions:
@misc{walkccc_clrs,
title = {Solutions to Introduction to Algorithms (Third Edition)},
author = {walkccc and contributors},
howpublished = {\url{https://github.com/walkccc/CLRS}},
license = {MIT}
}
Contact & Maintenance
- Maintainer of this dataset packaging: @Siddharth899
- Issues / requests: open an issue on the HF dataset repo.