01 Vectors and matrices, part 1
Exercise 1: Vector algebra
A vector in \(\mathbb{R}^n\) is an ordered list of \(n\) real numbers, which we typically write in boldface, such as \(\mathbf{v}\). For example, when \(n=2\), we might have the \(2\)-dimensional vector
\[
\mathbf{v} = \begin{bmatrix} 3 \\ -2 \end{bmatrix},
\]
which we visualize as an arrow with its tail at the origin and its head at the point \((3, -2)\). Technically, though, a vector can be moved and still be the same vector, as long as its direction and length stay the same. So, while it is often convenient to draw a vector with its tail at the origin, it could have its tail anywhere and still be the same vector.
Similarly, when \(n=3\), a \(3\)-dimensional vector might look like
\[
\mathbf{w} = \begin{bmatrix} 1 \\ 4 \\ -3 \end{bmatrix}.
\]
Vectors can be added, subtracted, and scaled (multiplied by real numbers) componentwise. For example, if
\[
\mathbf{u} = \begin{bmatrix} 2 \\ 5 \end{bmatrix} \quad \text{and} \quad \mathbf{v} = \begin{bmatrix} 3 \\ -1 \end{bmatrix},
\]
then
\[
\mathbf{u} + \mathbf{v} = \begin{bmatrix} 2 + 3 \\ 5 + (-1) \end{bmatrix} = \begin{bmatrix} 5 \\ 4 \end{bmatrix},
\]
\[
\mathbf{u} - \mathbf{v} = \begin{bmatrix} 2 - 3 \\ 5 - (-1) \end{bmatrix} = \begin{bmatrix} -1 \\ 6 \end{bmatrix},
\]
and for a scalar \(c = 3\),
\[
3\mathbf{u} = \begin{bmatrix} 3 \cdot 2 \\ 3 \cdot 5 \end{bmatrix} = \begin{bmatrix} 6 \\ 15 \end{bmatrix}.
\]
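These componentwise operations translate directly into code. Below is a minimal sketch in plain Python; the helper names vadd, vsub, and vscale are our own, purely illustrative.

```python
def vadd(u, v):
    """Add two vectors componentwise."""
    return [a + b for a, b in zip(u, v)]

def vsub(u, v):
    """Subtract v from u componentwise."""
    return [a - b for a, b in zip(u, v)]

def vscale(c, v):
    """Multiply every component of v by the scalar c."""
    return [c * a for a in v]

u = [2, 5]
v = [3, -1]
print(vadd(u, v))    # [5, 4]
print(vsub(u, v))    # [-1, 6]
print(vscale(3, u))  # [6, 15]
```

The same three functions work unchanged for vectors in \(\mathbb{R}^3\) (or any \(\mathbb{R}^n\)), since zip pairs up however many components are present.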
Let \(\mathbf{u} = \begin{bmatrix} 4 \\ -1 \end{bmatrix}\), \(\mathbf{v} = \begin{bmatrix} -2 \\ 3 \end{bmatrix}\), \(\mathbf{a} = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}\), and \(\mathbf{b} = \begin{bmatrix} 4 \\ 0 \\ -2 \end{bmatrix}\).
Compute \(\mathbf{u} + \mathbf{v}\).
Compute \(\mathbf{u} - \mathbf{v}\).
Compute \(2\mathbf{u} + 3\mathbf{v}\).
Compute \(-\mathbf{u}\).
Compute \(\mathbf{a} + \mathbf{b}\).
Compute \(\mathbf{a} - 2\mathbf{b}\).
\(\mathbf{u} + \mathbf{v} = \begin{bmatrix} 4 + (-2) \\ -1 + 3 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}\)
\(\mathbf{u} - \mathbf{v} = \begin{bmatrix} 4 - (-2) \\ -1 - 3 \end{bmatrix} = \begin{bmatrix} 6 \\ -4 \end{bmatrix}\)
\(2\mathbf{u} + 3\mathbf{v} = 2\begin{bmatrix} 4 \\ -1 \end{bmatrix} + 3\begin{bmatrix} -2 \\ 3 \end{bmatrix} = \begin{bmatrix} 8 \\ -2 \end{bmatrix} + \begin{bmatrix} -6 \\ 9 \end{bmatrix} = \begin{bmatrix} 2 \\ 7 \end{bmatrix}\)
\(-\mathbf{u} = \begin{bmatrix} -4 \\ 1 \end{bmatrix}\)
\(\mathbf{a} + \mathbf{b} = \begin{bmatrix} 2 + 4 \\ -1 + 0 \\ 3 + (-2) \end{bmatrix} = \begin{bmatrix} 6 \\ -1 \\ 1 \end{bmatrix}\)
\(\mathbf{a} - 2\mathbf{b} = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix} - 2\begin{bmatrix} 4 \\ 0 \\ -2 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix} - \begin{bmatrix} 8 \\ 0 \\ -4 \end{bmatrix} = \begin{bmatrix} -6 \\ -1 \\ 7 \end{bmatrix}\)
Exercise 2: Dot products, angles, and orthogonality
The dot product (also called the inner product or scalar product) of two vectors \(\mathbf{u}\) and \(\mathbf{v}\) in \(\mathbb{R}^n\) is a real number obtained by multiplying corresponding components and summing the results. For example, if
\[
\mathbf{u} = \begin{bmatrix} 2 \\ 3 \\ -1 \end{bmatrix} \quad \text{and} \quad \mathbf{v} = \begin{bmatrix} 4 \\ 0 \\ 5 \end{bmatrix},
\]
then the dot product of \(\mathbf{u}\) and \(\mathbf{v}\) is
\[
\mathbf{u} \cdot \mathbf{v} = (2)(4) + (3)(0) + (-1)(5) = 8 + 0 - 5 = 3.
\]
We also write this using angle bracket notation as
\[
\langle \mathbf{u}, \mathbf{v} \rangle = 3.
\]
The length (or magnitude, or norm) of a vector \(\mathbf{v}\) is defined as
\[
\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}.
\]
For example, if \(\mathbf{u} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}\), then
\[
\|\mathbf{u}\| = \sqrt{3^2 + 4^2} = \sqrt{9 + 16} = \sqrt{25} = 5.
\]
The dot product is closely related to the angle between two vectors. Specifically, if \(\mathbf{u}\) and \(\mathbf{v}\) are two nonzero vectors in \(\mathbb{R}^n\) and \(\theta\) is the angle between them (with \(0 \leq \theta \leq \pi\)), then
\[
\mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\| \|\mathbf{v}\| \cos{\theta}.
\]
Rearranging, we get
\[
\cos{\theta} = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|}.
\]
Two vectors \(\mathbf{u}\) and \(\mathbf{v}\) are said to be orthogonal (or perpendicular) if their dot product is zero, i.e., \(\mathbf{u} \cdot \mathbf{v} = 0\). This corresponds to the angle between them being \(\theta = \frac{\pi}{2}\).
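These formulas are easy to check numerically. The sketch below, in plain Python using only the standard math module, implements the dot product, the norm, and the angle formula (the function names are our own):

```python
import math

def dot(u, v):
    """Sum of products of corresponding components."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """Length of v: the square root of v . v."""
    return math.sqrt(dot(v, v))

def angle(u, v):
    """Angle between two nonzero vectors, in radians."""
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

print(dot([2, 3, -1], [4, 0, 5]))   # 3
print(norm([3, 4]))                 # 5.0
print(math.isclose(angle([1, 1], [1, 0]), math.pi / 4))  # True
print(dot([2, 3], [6, -4]))         # 0, so these vectors are orthogonal
```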
Let \(\mathbf{u} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\) and \(\mathbf{v} = \begin{bmatrix} 3 \\ -4 \end{bmatrix}\). Compute \(\mathbf{u} \cdot \mathbf{v}\).
Compute \(\langle \mathbf{u}, \mathbf{u} \rangle\) for \(\mathbf{u} = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}\).
Find the length of the vector \(\mathbf{v} = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}\).
Let \(\mathbf{a} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\) and \(\mathbf{b} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\). Find the angle \(\theta\) between \(\mathbf{a}\) and \(\mathbf{b}\).
Determine whether the vectors \(\mathbf{p} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}\) and \(\mathbf{q} = \begin{bmatrix} 6 \\ -4 \end{bmatrix}\) are orthogonal.
Find a vector in \(\mathbb{R}^2\) that is orthogonal to \(\mathbf{w} = \begin{bmatrix} 5 \\ 2 \end{bmatrix}\).
\(\mathbf{u} \cdot \mathbf{v} = (1)(3) + (2)(-4) = 3 - 8 = -5\)
\(\langle \mathbf{u}, \mathbf{u} \rangle = 2^2 + (-1)^2 + 3^2 = 4 + 1 + 9 = 14\)
\(\|\mathbf{v}\| = \sqrt{1^2 + 2^2 + 2^2} = \sqrt{1 + 4 + 4} = \sqrt{9} = 3\)
First, we compute the dot product and the magnitudes: \[
\mathbf{a} \cdot \mathbf{b} = (1)(1) + (1)(0) = 1,
\] \[
\|\mathbf{a}\| = \sqrt{1^2 + 1^2} = \sqrt{2},
\] \[
\|\mathbf{b}\| = \sqrt{1^2 + 0^2} = 1.
\] Therefore, \[
\cos{\theta} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\| \|\mathbf{b}\|} = \frac{1}{\sqrt{2} \cdot 1} = \frac{1}{\sqrt{2}} = \frac{\sqrt{2}}{2}.
\] Thus, \(\theta = \arccos\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4}\).
We compute \(\mathbf{p} \cdot \mathbf{q} = (2)(6) + (3)(-4) = 12 - 12 = 0\). Since the dot product is zero, the vectors are orthogonal.
There are infinitely many correct answers. One simple choice is \(\begin{bmatrix} -2 \\ 5 \end{bmatrix}\), since \[
\begin{bmatrix} 5 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} -2 \\ 5 \end{bmatrix} = (5)(-2) + (2)(5) = -10 + 10 = 0.
\] Another choice is \(\begin{bmatrix} 2 \\ -5 \end{bmatrix}\).
Exercise 3: Matrix algebra
A matrix is a rectangular array of numbers arranged in rows and columns. For example, the matrix
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
\]
has 2 rows and 3 columns, so we say \(A\) is a \(2 \times 3\) matrix. The entry in row \(i\) and column \(j\) is often denoted \(a_{ij}\).
Matrices can be added and subtracted (if they have the same number of rows and columns) by adding or subtracting corresponding entries. For example,
\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}.
\]
Matrices can also be multiplied by scalars by multiplying each entry:
\[
3\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 9 & 12 \end{bmatrix}.
\]
Matrix multiplication is more intricate. If \(A\) is an \(m \times n\) matrix and \(B\) is an \(n \times p\) matrix, then their product \(AB\) is an \(m \times p\) matrix. The entry in row \(i\) and column \(j\) of \(AB\) is obtained by taking the dot product of the \(i\)-th row of \(A\) with the \(j\)-th column of \(B\). For example,
\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} (1)(5) + (2)(7) & (1)(6) + (2)(8) \\ (3)(5) + (4)(7) & (3)(6) + (4)(8) \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}.
\]
Note: Matrix multiplication is not commutative in general: even when both products \(AB\) and \(BA\) are defined, they are usually not equal.
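The row-by-column rule can be written out directly in code. The following plain-Python sketch (with illustrative helper names mat_add, mat_scale, and mat_mul) reproduces the \(2 \times 2\) example above, where matrices are lists of rows:

```python
def mat_add(A, B):
    """Entrywise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * a for a in row] for row in A]

def mat_mul(A, B):
    """Product of an m x n matrix A with an n x p matrix B.

    Entry (i, j) is the dot product of row i of A with column j of B.
    """
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(p)] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))   # [[6, 8], [10, 12]]
print(mat_scale(3, A)) # [[3, 6], [9, 12]]
print(mat_mul(A, B))   # [[19, 22], [43, 50]]
print(mat_mul(B, A))   # [[23, 34], [31, 46]]  -- AB and BA differ
```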
Let \(A = \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix}\) and \(B = \begin{bmatrix} 4 & 1 \\ -2 & 5 \end{bmatrix}\). Compute \(A + B\).
Compute \(2A - B\).
Compute the matrix product \(AB\).
Compute the matrix product \(BA\).
Does \(AB = BA\) for these matrices? What does this tell you about matrix multiplication?
Let \(C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\). Compute \(AC\) and \(CA\). What do you notice about the matrix \(C\)?
\(A + B = \begin{bmatrix} 2 + 4 & -1 + 1 \\ 0 + (-2) & 3 + 5 \end{bmatrix} = \begin{bmatrix} 6 & 0 \\ -2 & 8 \end{bmatrix}\)
\(2A - B = 2\begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} - \begin{bmatrix} 4 & 1 \\ -2 & 5 \end{bmatrix} = \begin{bmatrix} 4 & -2 \\ 0 & 6 \end{bmatrix} - \begin{bmatrix} 4 & 1 \\ -2 & 5 \end{bmatrix} = \begin{bmatrix} 0 & -3 \\ 2 & 1 \end{bmatrix}\)
\(AB = \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} 4 & 1 \\ -2 & 5 \end{bmatrix} = \begin{bmatrix} (2)(4) + (-1)(-2) & (2)(1) + (-1)(5) \\ (0)(4) + (3)(-2) & (0)(1) + (3)(5) \end{bmatrix} = \begin{bmatrix} 10 & -3 \\ -6 & 15 \end{bmatrix}\)
\(BA = \begin{bmatrix} 4 & 1 \\ -2 & 5 \end{bmatrix} \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} (4)(2) + (1)(0) & (4)(-1) + (1)(3) \\ (-2)(2) + (5)(0) & (-2)(-1) + (5)(3) \end{bmatrix} = \begin{bmatrix} 8 & -1 \\ -4 & 17 \end{bmatrix}\)
No, \(AB \neq BA\). This demonstrates that matrix multiplication is not commutative—the order of multiplication matters.
\(AC = \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} = A\)
\(CA = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix} = A\)
The matrix \(C\) is the identity matrix, which has the property that \(CA = AC = A\) for any matrix \(A\) (of compatible shape). It acts as the “multiplicative identity” for matrices, analogous to the number 1 for real numbers.
Exercise 4: Determinants and areas
For a \(2 \times 2\) matrix
\[
A = \begin{bmatrix} a & b \\ c & d \end{bmatrix},
\]
the determinant of \(A\) is defined as
\[
\det(A) = ad - bc.
\]
The determinant has many important interpretations and applications in mathematics. One of the most useful is geometric: \(|\det(A)|\) (the absolute value of the determinant) is the area of the parallelogram formed by the column vectors of \(A\).
Specifically, if we think of the matrix \(A\) above as having column vectors
\[
\mathbf{v}_1 = \begin{bmatrix} a \\ c \end{bmatrix} \quad \text{and} \quad \mathbf{v}_2 = \begin{bmatrix} b \\ d \end{bmatrix},
\]
then the parallelogram with sides \(\mathbf{v}_1\) and \(\mathbf{v}_2\) (with one vertex at the origin) has area \(|\det(A)|\).
For example, the matrix
\[
A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}
\]
has determinant \(\det(A) = (3)(2) - (1)(0) = 6\), so the parallelogram formed by the vectors \(\begin{bmatrix} 3 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} 1 \\ 2 \end{bmatrix}\) has area 6.
Note: If \(\det(A) = 0\), the matrix is called singular, and geometrically this means the two column vectors are parallel (or one is the zero vector), so they don’t span a two-dimensional region.
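A quick numerical check of the \(2 \times 2\) determinant formula, sketched in plain Python (det2 is our own name):

```python
def det2(A):
    """Determinant ad - bc of a 2x2 matrix A = [[a, b], [c, d]]."""
    (a, b), (c, d) = A
    return a * d - b * c

# The worked example: parallelogram spanned by [3, 0] and [1, 2].
print(det2([[3, 1], [0, 2]]))  # 6, so the parallelogram has area |6| = 6

# A singular matrix: its columns are parallel, so the "parallelogram" is flat.
print(det2([[6, 2], [9, 3]]))  # 0
```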
Compute the determinant of \(A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}\).
Compute the determinant of \(B = \begin{bmatrix} 5 & -2 \\ 3 & 1 \end{bmatrix}\).
Find the area of the parallelogram formed by the vectors \(\mathbf{v}_1 = \begin{bmatrix} 4 \\ 1 \end{bmatrix}\) and \(\mathbf{v}_2 = \begin{bmatrix} 2 \\ 3 \end{bmatrix}\).
Determine whether the matrix \(C = \begin{bmatrix} 6 & 2 \\ 9 & 3 \end{bmatrix}\) is singular.
Find a value of \(k\) such that the matrix \(D = \begin{bmatrix} k & 4 \\ 3 & 6 \end{bmatrix}\) is singular.
\(\det(A) = (2)(4) - (3)(1) = 8 - 3 = 5\)
\(\det(B) = (5)(1) - (-2)(3) = 5 + 6 = 11\)
The area is the absolute value of the determinant of the matrix with columns \(\mathbf{v}_1\) and \(\mathbf{v}_2\): \[
\left|\det\begin{bmatrix} 4 & 2 \\ 1 & 3 \end{bmatrix}\right| = |(4)(3) - (2)(1)| = |12 - 2| = 10.
\]
\(\det(C) = (6)(3) - (2)(9) = 18 - 18 = 0\). Since the determinant is zero, the matrix is singular. (Note: The second column is exactly \(\frac{1}{3}\) times the first column, so the column vectors are parallel.)
For \(D\) to be singular, we need \(\det(D) = 0\): \[
(k)(6) - (4)(3) = 0
\] \[
6k - 12 = 0
\] \[
k = 2.
\]
Exercise 5: The derivative as a difference quotient and linear approximation
Recall from single-variable calculus that the derivative of a function \(f\) at a point \(x = a\) is defined as the limit
\[
f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h},
\]
provided this limit exists. This ratio \(\frac{f(a + h) - f(a)}{h}\) is called a difference quotient, and it represents the slope of the secant line connecting the points \((a, f(a))\) and \((a + h, f(a + h))\) on the graph of \(f\). As \(h\) approaches zero, this secant line approaches the tangent line to the graph of \(f\) at the point \((a, f(a))\).
The tangent line at \(x = a\) has equation
\[
y = f(a) + f'(a)(x - a).
\]
This is called the linear approximation (or tangent line approximation) to \(f\) at \(x = a\). For values of \(x\) close to \(a\), we have
\[
f(x) \approx f(a) + f'(a)(x - a).
\]
This approximation says that, near the point \(x = a\), the function \(f\) is approximately equal to a linear function with slope \(f'(a)\) passing through the point \((a, f(a))\). In fact, the tangent line is the best linear approximation to \(f\) near \(x = a\).
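Both ideas can be explored numerically. In the plain-Python sketch below (diff_quotient is an illustrative name), the difference quotient for \(f(x) = x^2\) at \(a = 3\) approaches \(6\) as \(h\) shrinks, and the tangent line to \(x^3\) at \(x = 1\) estimates \(f(1.1)\):

```python
def diff_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2
for h in [0.1, 0.01, 0.001]:
    print(diff_quotient(f, 3, h))  # approaches f'(3) = 6 as h -> 0

# Tangent-line estimate of 1.1**3, using f(x) ~ f(1) + f'(1)(x - 1) for f(x) = x**3.
estimate = 1 + 3 * (1.1 - 1)
print(estimate, 1.1 ** 3)  # roughly 1.3 versus 1.331
```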
Let \(f(x) = x^2\). Write out the difference quotient \(\frac{f(3 + h) - f(3)}{h}\) explicitly (without taking the limit).
Simplify your expression from part 1 as much as possible.
Now compute \(f'(3)\) by taking the limit as \(h \to 0\) of your simplified expression.
Use the definition of the derivative as a limit to compute \(f'(2)\) for \(f(x) = \frac{1}{x}\).
Find the equation of the tangent line to \(f(x) = x^3\) at the point \(x = 1\).
Use the linear approximation to estimate \(f(1.1)\) for \(f(x) = x^3\) at \(x = 1\).
Compare your estimate from part 6 to the actual value \(f(1.1) = (1.1)^3 = 1.331\).
We have \[
\frac{f(3 + h) - f(3)}{h} = \frac{(3 + h)^2 - 3^2}{h} = \frac{(3 + h)^2 - 9}{h}.
\]
Expanding the numerator: \[
\frac{(3 + h)^2 - 9}{h} = \frac{9 + 6h + h^2 - 9}{h} = \frac{6h + h^2}{h} = \frac{h(6 + h)}{h} = 6 + h.
\]
Taking the limit: \[
f'(3) = \lim_{h \to 0} (6 + h) = 6.
\]
We compute the difference quotient: \[
\frac{f(2 + h) - f(2)}{h} = \frac{\frac{1}{2 + h} - \frac{1}{2}}{h} = \frac{\frac{2 - (2 + h)}{2(2 + h)}}{h} = \frac{-h}{2h(2 + h)} = \frac{-1}{2(2 + h)}.
\] Taking the limit: \[
f'(2) = \lim_{h \to 0} \frac{-1}{2(2 + h)} = \frac{-1}{2(2)} = -\frac{1}{4}.
\]
First, we need \(f(1) = 1^3 = 1\) and \(f'(1) = 3(1)^2 = 3\). The tangent line has equation \[
y = f(1) + f'(1)(x - 1) = 1 + 3(x - 1) = 1 + 3x - 3 = 3x - 2.
\]
Using the linear approximation at \(x = 1\): \[
f(1.1) \approx f(1) + f'(1)(1.1 - 1) = 1 + 3(0.1) = 1 + 0.3 = 1.3.
\]
The actual value is \(f(1.1) = 1.331\), so our linear approximation gave \(1.3\), which is quite close. The error is \(|1.331 - 1.3| = 0.031\).
Exercise 6: Integrals and signed area
Recall from single-variable calculus that the definite integral of a function \(f\) over an interval \([a, b]\) is defined as
\[
\int_a^b f(x)\, dx,
\]
and it represents the signed area between the graph of \(f\) and the \(x\)-axis over the interval \([a, b]\). Here, “signed” means that:
- Areas above the \(x\)-axis (where \(f(x) > 0\)) contribute positively to the integral.
- Areas below the \(x\)-axis (where \(f(x) < 0\)) contribute negatively to the integral.
The Fundamental Theorem of Calculus states that if \(F\) is an antiderivative of \(f\) (meaning \(F'(x) = f(x)\)), then
\[
\int_a^b f(x)\, dx = F(b) - F(a).
\]
This theorem connects the two main concepts of calculus: derivatives and integrals.
For example, if \(f(x) = x\) and we integrate from \(x = 0\) to \(x = 2\), we get
\[
\int_0^2 x\, dx = \left[\frac{x^2}{2}\right]_0^2 = \frac{4}{2} - \frac{0}{2} = 2.
\]
Geometrically, this represents the area of a triangle with base 2 and height 2, which is \(\frac{1}{2}(2)(2) = 2\).
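Since the definite integral is a limit of sums of rectangle areas, a Riemann sum with many rectangles should land close to the value given by the Fundamental Theorem. A plain-Python sketch (riemann is our own name):

```python
import math

def riemann(f, a, b, n=10_000):
    """Left-endpoint Riemann sum approximating the signed area under f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

print(riemann(lambda x: x, 0, 2))        # close to 2
print(riemann(math.sin, 0, math.pi))     # close to 2
print(riemann(lambda x: x ** 3, -1, 1))  # close to 0: the signed areas cancel
```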
Compute \(\displaystyle\int_1^3 2x\, dx\) using the Fundamental Theorem of Calculus.
Verify your answer from part 1 by interpreting the integral as the area of a trapezoid.
Compute \(\displaystyle\int_0^\pi \sin{x}\, dx\).
What is the geometric interpretation of your answer from part 3?
Compute \(\displaystyle\int_{-1}^1 x^3\, dx\).
Explain why the answer to part 5 is zero using the concept of signed area.
An antiderivative of \(f(x) = 2x\) is \(F(x) = x^2\). Therefore, \[
\int_1^3 2x\, dx = F(3) - F(1) = 3^2 - 1^2 = 9 - 1 = 8.
\]
The region under the curve \(y = 2x\) from \(x = 1\) to \(x = 3\) is a trapezoid with parallel sides of heights \(f(1) = 2\) and \(f(3) = 6\), and base width \(3 - 1 = 2\). The area is \[
\frac{1}{2}(2 + 6)(2) = \frac{1}{2}(8)(2) = 8.
\] This matches our answer from part 1.
An antiderivative of \(\sin{x}\) is \(-\cos{x}\). Therefore, \[
\int_0^\pi \sin{x}\, dx = [-\cos{x}]_0^\pi = -\cos{\pi} - (-\cos{0}) = -(-1) - (-1) = 1 + 1 = 2.
\]
This represents the total area between the sine curve and the \(x\)-axis from \(x = 0\) to \(x = \pi\). Since \(\sin{x} > 0\) for all \(x \in (0, \pi)\), the entire region is above the \(x\)-axis, so the signed area equals the actual geometric area.
An antiderivative of \(x^3\) is \(\frac{x^4}{4}\). Therefore, \[
\int_{-1}^1 x^3\, dx = \left[\frac{x^4}{4}\right]_{-1}^1 = \frac{1^4}{4} - \frac{(-1)^4}{4} = \frac{1}{4} - \frac{1}{4} = 0.
\]
The function \(f(x) = x^3\) is an odd function, meaning \(f(-x) = -f(x)\). This means the graph is symmetric about the origin. For \(x \in (0, 1)\), we have \(f(x) > 0\), contributing positive area. For \(x \in (-1, 0)\), we have \(f(x) < 0\), contributing negative area. Because of the symmetry, these two areas are equal in magnitude but opposite in sign, so they cancel out, giving a total signed area of zero.
Exercise 7: Rectangular boxes
In this problem, we will work with rectangular boxes in \(\mathbb{R}^3\). We assume that all boxes have eight vertices (i.e., corners), and that their edges are aligned with the coordinate axes. In other words, every edge of a box is parallel to either the \(x\)-, \(y\)-, or \(z\)-axis.
A rectangular box has two opposite corners at \(P = (0, 0, 0)\) and \(Q = (3, 2, 4)\). Find the coordinates of all eight vertices of the box.
A rectangular box has three vertices at \(A = (1, 1, 1)\), \(B = (5, 1, 1)\), and \(C = (1, 3, 4)\). Find the coordinates of all eight vertices of the box.
The four vertices of the bottom of the box are at \[
(0,0,0), \quad (3,0,0), \quad (0,2,0), \quad (3,2,0).
\] The four vertices of the top of the box are at \[
(0,0,4), \quad (3,0,4), \quad (0,2,4), \quad (3,2,4).
\]
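The vertex enumeration above can be sketched in plain Python: for an axis-aligned box, each vertex is built by choosing, independently for each axis, one of the two coordinates of the opposite corners. Here box_vertices is our own name, and it assumes the two corners differ in every coordinate:

```python
from itertools import product

def box_vertices(P, Q):
    """All 8 vertices of the axis-aligned box with opposite corners P and Q.

    zip(P, Q) pairs the two choices for each axis; product picks one per axis.
    """
    return sorted(product(*zip(P, Q)))

for v in box_vertices((0, 0, 0), (3, 2, 4)):
    print(v)  # the eight vertices listed in the solution above
```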
The four vertices of the bottom of the box are at \[
(1, 1, 1), \quad (5, 1, 1), \quad (5, 3, 1), \quad (1, 3, 1).
\] The four vertices of the top of the box are at \[
(1, 1, 4), \quad (5, 1, 4), \quad (5, 3, 4), \quad (1, 3, 4).
\]
Exercise 8: Practice moving in \(\mathbb{R}^3\)
Recall that in \(\mathbb{R}^3\), there are three coordinate planes:
- The \(xy\)-plane, where \(z = 0\).
- The \(xz\)-plane, where \(y = 0\).
- The \(yz\)-plane, where \(x = 0\).
Other planes parallel to one of these coordinate planes can be described by equations of the form:
- \(x = k\) for planes parallel to the \(yz\)-plane,
- \(y = k\) for planes parallel to the \(xz\)-plane,
- \(z = k\) for planes parallel to the \(xy\)-plane,
where \(k\) is a constant.
Similarly, the three coordinate axes are:
- The \(x\)-axis, where \(y = 0\) and \(z = 0\).
- The \(y\)-axis, where \(x = 0\) and \(z = 0\).
- The \(z\)-axis, where \(x = 0\) and \(y = 0\).
Other lines parallel to one of these coordinate axes can be described by pairs of equations of the form:
- \(y = k_1\) and \(z = k_2\) for lines parallel to the \(x\)-axis,
- \(x = k_1\) and \(z = k_2\) for lines parallel to the \(y\)-axis,
- \(x = k_1\) and \(y = k_2\) for lines parallel to the \(z\)-axis,
where \(k_1\) and \(k_2\) are constants.
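Walking along the coordinate directions is just adding a displacement to each coordinate. A plain-Python sketch (walk is an illustrative name):

```python
def walk(point, dx=0, dy=0, dz=0):
    """Translate a point of R^3 by the given displacements along each axis."""
    x, y, z = point
    return (x + dx, y + dy, z + dz)

# Starting from the origin: +3 in x, then -2 in y, then +4 in z.
print(walk(walk(walk((0, 0, 0), dx=3), dy=-2), dz=4))  # (3, -2, 4)

# Working backwards: undo a walk of 3 units in the negative x-direction
# that ended at (-3, 1, 2).
print(walk((-3, 1, 2), dx=3))  # (0, 1, 2), a point in the yz-plane
```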
You are standing at a point in the \(yz\)-plane and you walk \(3\) units in the negative \(x\)-direction to arrive at the point \((-3, 1, 2)\). What point were you at to begin with?
You start at a point on the \(xy\)-plane and walk \(4\) units in the positive \(z\)-direction to arrive at \((2, -1, 4)\). Find your starting point.
You begin at a point on the \(x\)-axis and walk \(2\) units in the positive \(y\)-direction and \(3\) units in the negative \(z\)-direction to arrive at \((5, 2, -3)\). Where did you start?
You start at the point \((1, 4, 2)\) and walk \(5\) units in the negative \(y\)-direction. At what point do you arrive?
Starting from the origin, you walk \(3\) units in the positive \(x\)-direction, then \(2\) units in the negative \(y\)-direction, and finally \(4\) units in the positive \(z\)-direction. What is your final position?
You begin at a point on the plane \(x=2\) and walk \(4\) units in the negative \(x\)-direction, \(2\) units in the positive \(y\)-direction, and \(3\) units in the negative \(z\)-direction to arrive at \((-2, 4, -2)\). What was your starting point?
You begin at a point on the line defined by \(y=3\) and \(z=-1\). You walk \(1\) unit in the negative \(y\)-direction, then \(4\) units in the negative \(x\)-direction, and finally \(6\) units in the positive \(z\)-direction to arrive at \((-1, 2, 5)\). Where did you start?
\((0, 1, 2)\)
\((2, -1, 0)\)
\((5, 0, 0)\)
\((1, -1, 2)\)
\((3, -2, 4)\)
\((2, 2, 1)\)
\((3, 3, -1)\)
Exercise 9: Basis vectors and vector decomposition
Recall from class that the standard basis vectors in \(\mathbb{R}^2\) are
\[
\mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad \mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
\]
Similarly, in \(\mathbb{R}^3\), the standard basis vectors are
\[
\mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{e}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\]
We saw that any vector in \(\mathbb{R}^n\) can be written as a linear combination of the standard basis vectors. For example, in \(\mathbb{R}^3\),
\[
\mathbf{v} = \begin{bmatrix} 2 \\ -3 \\ 5 \end{bmatrix} = 2\mathbf{e}_1 - 3\mathbf{e}_2 + 5\mathbf{e}_3.
\]
One important property of the standard basis vectors is that they are orthonormal, meaning:
- They are all unit vectors: \(\|\mathbf{e}_i\| = 1\) for all \(i\).
- They are mutually orthogonal: \(\mathbf{e}_i \cdot \mathbf{e}_j = 0\) whenever \(i \neq j\).
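Orthonormality can be verified mechanically: \(\mathbf{e}_i \cdot \mathbf{e}_j\) should equal \(1\) when \(i = j\) and \(0\) otherwise. A plain-Python check:

```python
def dot(u, v):
    """Dot product: sum of products of corresponding components."""
    return sum(a * b for a, b in zip(u, v))

# The standard basis of R^3.
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# e_i . e_j is 1 on the diagonal (unit length) and 0 off it (orthogonality).
for i in range(3):
    for j in range(3):
        assert dot(e[i], e[j]) == (1 if i == j else 0)
print("standard basis is orthonormal")
```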
Write the vector \(\mathbf{v} = \begin{bmatrix} 4 \\ -2 \\ 7 \end{bmatrix}\) as a linear combination of the standard basis vectors \(\mathbf{e}_1\), \(\mathbf{e}_2\), and \(\mathbf{e}_3\).
Let \(\mathbf{w} = 3\mathbf{e}_1 - 5\mathbf{e}_2 + 2\mathbf{e}_3\). Write \(\mathbf{w}\) in column vector form.
Verify that the standard basis vectors in \(\mathbb{R}^3\) are orthonormal by computing all pairwise dot products and all norms.
Let \(\mathbf{u} = \begin{bmatrix} -1 \\ 6 \\ 2 \end{bmatrix}\). Compute \(\mathbf{u} \cdot \mathbf{e}_1\), \(\mathbf{u} \cdot \mathbf{e}_2\), and \(\mathbf{u} \cdot \mathbf{e}_3\). What do you notice?
Express the zero vector \(\mathbf{0}\) in \(\mathbb{R}^3\) as a linear combination of the standard basis vectors.
\(\mathbf{v} = 4\mathbf{e}_1 - 2\mathbf{e}_2 + 7\mathbf{e}_3\)
\(\mathbf{w} = \begin{bmatrix} 3 \\ -5 \\ 2 \end{bmatrix}\)
Computing the norms: \[
\|\mathbf{e}_1\| = \sqrt{1^2 + 0^2 + 0^2} = 1, \quad \|\mathbf{e}_2\| = \sqrt{0^2 + 1^2 + 0^2} = 1, \quad \|\mathbf{e}_3\| = \sqrt{0^2 + 0^2 + 1^2} = 1.
\] Computing the pairwise dot products: \[
\mathbf{e}_1 \cdot \mathbf{e}_2 = (1)(0) + (0)(1) + (0)(0) = 0,
\] \[
\mathbf{e}_1 \cdot \mathbf{e}_3 = (1)(0) + (0)(0) + (0)(1) = 0,
\] \[
\mathbf{e}_2 \cdot \mathbf{e}_3 = (0)(0) + (1)(0) + (0)(1) = 0.
\] Since all the norms are 1 and all pairwise dot products are 0, the vectors are orthonormal.
\(\mathbf{u} \cdot \mathbf{e}_1 = (-1)(1) + (6)(0) + (2)(0) = -1\)
\(\mathbf{u} \cdot \mathbf{e}_2 = (-1)(0) + (6)(1) + (2)(0) = 6\)
\(\mathbf{u} \cdot \mathbf{e}_3 = (-1)(0) + (6)(0) + (2)(1) = 2\)
The dot products give exactly the components of \(\mathbf{u}\)! This shows that \(\mathbf{u} = (\mathbf{u} \cdot \mathbf{e}_1)\mathbf{e}_1 + (\mathbf{u} \cdot \mathbf{e}_2)\mathbf{e}_2 + (\mathbf{u} \cdot \mathbf{e}_3)\mathbf{e}_3\).
\(\mathbf{0} = 0\mathbf{e}_1 + 0\mathbf{e}_2 + 0\mathbf{e}_3\)
Exercise 10: Unit vectors and scaling
Recall that a unit vector is a vector with length (norm) equal to 1. Given any nonzero vector \(\mathbf{v}\), we can always construct a unit vector in the same direction, often denoted \(\mathbf{\hat{v}}\), by dividing \(\mathbf{v}\) by its length:
\[
\mathbf{\hat{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}.
\]
This process is called normalization of \(\mathbf{v}\).
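Normalization is a one-line computation. A plain-Python sketch (normalize is our own name; it assumes v is nonzero):

```python
import math

def normalize(v):
    """Unit vector in the same direction as a nonzero vector v."""
    length = math.sqrt(sum(a * a for a in v))
    return [a / length for a in v]

print(normalize([3, -4]))  # [0.6, -0.8], a unit vector

# A vector of length 10 in the same direction: scale the unit vector by 10.
print([10 * a for a in normalize([3, -4])])
```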
Find the length of the vector \(\mathbf{v} = \begin{bmatrix} 3 \\ -4 \end{bmatrix}\).
Find a unit vector pointing in the same direction as \(\mathbf{v} = \begin{bmatrix} 3 \\ -4 \end{bmatrix}\).
Find a vector with length \(10\) pointing in the same direction as \(\mathbf{v} = \begin{bmatrix} 3 \\ -4 \end{bmatrix}\).
Find a vector with length \(2\) pointing in the same direction as \(\mathbf{u} = \mathbf{e}_1 - \mathbf{e}_2 + 2\mathbf{e}_3\).
Let \(\mathbf{a} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\). Find a unit vector in the direction of \(\mathbf{a}\).
Find a vector of length \(5\) pointing in the opposite direction of \(\mathbf{b} = \begin{bmatrix} 2 \\ -1 \\ 2 \end{bmatrix}\).
\(\|\mathbf{v}\| = \sqrt{3^2 + (-4)^2} = \sqrt{9 + 16} = \sqrt{25} = 5\)
\(\mathbf{\hat{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|} = \frac{1}{5}\begin{bmatrix} 3 \\ -4 \end{bmatrix} = \begin{bmatrix} 3/5 \\ -4/5 \end{bmatrix}\)
The vector is \(10 \cdot \mathbf{\hat{v}} = 10 \cdot \frac{1}{5}\begin{bmatrix} 3 \\ -4 \end{bmatrix} = 2\begin{bmatrix} 3 \\ -4 \end{bmatrix} = \begin{bmatrix} 6 \\ -8 \end{bmatrix}\)
First, we write \(\mathbf{u}\) in column form: \(\mathbf{u} = \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}\). Then we compute its length: \[
\|\mathbf{u}\| = \sqrt{1^2 + (-1)^2 + 2^2} = \sqrt{1 + 1 + 4} = \sqrt{6}.
\] The desired vector is \[
2 \cdot \frac{\mathbf{u}}{\|\mathbf{u}\|} = \frac{2}{\sqrt{6}}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2/\sqrt{6} \\ -2/\sqrt{6} \\ 4/\sqrt{6} \end{bmatrix}.
\]
\(\|\mathbf{a}\| = \sqrt{1^2 + 1^2 + 1^2} = \sqrt{3}\). The unit vector is \[
\mathbf{\hat{a}} = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix} = \begin{bmatrix} \sqrt{3}/3 \\ \sqrt{3}/3 \\ \sqrt{3}/3 \end{bmatrix}.
\]
First, \(\|\mathbf{b}\| = \sqrt{2^2 + (-1)^2 + 2^2} = \sqrt{4 + 1 + 4} = \sqrt{9} = 3\). To point in the opposite direction, we need a negative scalar: \[
-5 \cdot \frac{\mathbf{b}}{\|\mathbf{b}\|} = -\frac{5}{3}\begin{bmatrix} 2 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} -10/3 \\ 5/3 \\ -10/3 \end{bmatrix}.
\]
Exercise 11: Parallel vectors
Two nonzero vectors \(\mathbf{u}\) and \(\mathbf{v}\) are said to be parallel if one is a scalar multiple of the other. That is, \(\mathbf{u}\) and \(\mathbf{v}\) are parallel if there exists a scalar \(c\) such that \(\mathbf{u} = c\mathbf{v}\) (or, equivalently for nonzero vectors, \(\mathbf{v} = c\mathbf{u}\)).
Geometrically, parallel vectors point in the same direction (if \(c > 0\)) or in opposite directions (if \(c < 0\)).
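One way to test for parallelism without dividing (and so without special-casing zero components) is to check that every \(2 \times 2\) determinant \(u_i v_j - u_j v_i\) formed from a pair of components vanishes. A plain-Python sketch (is_parallel is our own name):

```python
def is_parallel(u, v):
    """Check whether u and v are scalar multiples of each other.

    u = c v for some scalar c exactly when every 2x2 determinant
    u_i v_j - u_j v_i of corresponding component pairs is zero.
    """
    n = len(u)
    return all(u[i] * v[j] - u[j] * v[i] == 0
               for i in range(n) for j in range(i + 1, n))

print(is_parallel([2, 4], [3, 6]))           # True  (c = 3/2)
print(is_parallel([4, -2, 6], [-6, 3, -9]))  # True  (c = -3/2)
print(is_parallel([1, 0], [0, 1]))           # False
```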
Determine whether the vectors \(\mathbf{u} = \begin{bmatrix} 2 \\ 4 \end{bmatrix}\) and \(\mathbf{v} = \begin{bmatrix} 3 \\ 6 \end{bmatrix}\) are parallel.
Find a vector parallel to \(\mathbf{w} = \begin{bmatrix} 1 \\ -3 \\ 2 \end{bmatrix}\) with length \(7\).
Find the values of \(a\) that make \(\mathbf{v} = 5a\mathbf{e}_1 - 3\mathbf{e}_2\) parallel to \(\mathbf{w} = a^2\mathbf{e}_1 + 6\mathbf{e}_2\).
Let \(\mathbf{p} = \begin{bmatrix} 4 \\ -2 \\ 6 \end{bmatrix}\) and \(\mathbf{q} = \begin{bmatrix} -6 \\ 3 \\ k \end{bmatrix}\). Find the value of \(k\) such that \(\mathbf{p}\) and \(\mathbf{q}\) are parallel.
Explain why the zero vector \(\mathbf{0}\) is considered parallel to every vector.
We check if \(\mathbf{v} = c\mathbf{u}\) for some scalar \(c\): \[
\begin{bmatrix} 3 \\ 6 \end{bmatrix} = c\begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 2c \\ 4c \end{bmatrix}.
\] From the first component, \(3 = 2c\), so \(c = 3/2\). From the second component, \(6 = 4c\), so \(c = 6/4 = 3/2\). Since both components give the same value of \(c\), the vectors are parallel.
First, we need the length of \(\mathbf{w}\): \(\|\mathbf{w}\| = \sqrt{1^2 + (-3)^2 + 2^2} = \sqrt{1 + 9 + 4} = \sqrt{14}\). A parallel vector with length \(7\) is \[
7 \cdot \frac{\mathbf{w}}{\|\mathbf{w}\|} = \frac{7}{\sqrt{14}}\begin{bmatrix} 1 \\ -3 \\ 2 \end{bmatrix}.
\]
Writing in column form: \(\mathbf{v} = \begin{bmatrix} 5a \\ -3 \end{bmatrix}\) and \(\mathbf{w} = \begin{bmatrix} a^2 \\ 6 \end{bmatrix}\). For these to be parallel, we need \(\mathbf{v} = c\mathbf{w}\) for some scalar \(c\): \[
\begin{bmatrix} 5a \\ -3 \end{bmatrix} = c\begin{bmatrix} a^2 \\ 6 \end{bmatrix} = \begin{bmatrix} ca^2 \\ 6c \end{bmatrix}.
\] From the first component: \(5a = ca^2\), so \(c = 5a/a^2 = 5/a\) (assuming \(a \neq 0\)).
From the second component: \(-3 = 6c\), so \(c = -1/2\).
Setting these equal: \(5/a = -1/2\), which gives \(a = -10\).
We should also check the case \(a = 0\): if \(a = 0\), then \(\mathbf{v} = \begin{bmatrix} 0 \\ -3 \end{bmatrix}\) and \(\mathbf{w} = \begin{bmatrix} 0 \\ 6 \end{bmatrix}\), which are parallel (one is \(-1/2\) times the other). So \(a = 0\) and \(a = -10\) both work.
For \(\mathbf{p}\) and \(\mathbf{q}\) to be parallel, we need \(\mathbf{q} = c\mathbf{p}\) for some scalar \(c\): \[
\begin{bmatrix} -6 \\ 3 \\ k \end{bmatrix} = c\begin{bmatrix} 4 \\ -2 \\ 6 \end{bmatrix} = \begin{bmatrix} 4c \\ -2c \\ 6c \end{bmatrix}.
\] From the first component: \(-6 = 4c\), so \(c = -3/2\).
From the second component: \(3 = -2c\), so \(c = -3/2\). (Consistent!)
From the third component: \(k = 6c = 6(-3/2) = -9\).
Although the definition above was stated for nonzero vectors, the zero vector can be written as \(\mathbf{0} = 0 \cdot \mathbf{v}\) for any vector \(\mathbf{v}\). Since \(\mathbf{0}\) is always a scalar multiple of any vector (with scalar \(c = 0\)), it is conventionally considered parallel to every vector.
Exercise 12: Orthogonal projections
Given two vectors \(\mathbf{u}\) and \(\mathbf{v}\) (with \(\mathbf{v} \neq \mathbf{0}\)), we can decompose \(\mathbf{u}\) into two parts:
- A component parallel to \(\mathbf{v}\), called the projection of \(\mathbf{u}\) onto \(\mathbf{v}\)
- A component perpendicular (orthogonal) to \(\mathbf{v}\)
The projection of \(\mathbf{u}\) onto \(\mathbf{v}\) is denoted \(\text{proj}_{\mathbf{v}}\mathbf{u}\) and is defined as
\[
\text{proj}_{\mathbf{v}}\mathbf{u} = \frac{\mathbf{u} \cdot \mathbf{v}}{\mathbf{v} \cdot \mathbf{v}} \mathbf{v} = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{v}\|^2} \mathbf{v}.
\]
The component of \(\mathbf{u}\) perpendicular to \(\mathbf{v}\) is the vector
\[
\mathbf{u}_{\perp} = \mathbf{u} - \text{proj}_{\mathbf{v}}\mathbf{u}.
\]
This gives us the decomposition: \(\mathbf{u} = \text{proj}_{\mathbf{v}}\mathbf{u} + \mathbf{u}_{\perp}\).
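The projection formula and the decomposition can be checked numerically. A plain-Python sketch (proj is our own name; it assumes \(\mathbf{v}\) is nonzero):

```python
def dot(u, v):
    """Dot product: sum of products of corresponding components."""
    return sum(a * b for a, b in zip(u, v))

def proj(u, v):
    """Orthogonal projection of u onto a nonzero vector v."""
    c = dot(u, v) / dot(v, v)
    return [c * a for a in v]

u, v = [3, 1], [2, 1]
p = proj(u, v)
perp = [a - b for a, b in zip(u, p)]   # u_perp = u - proj_v(u)
print(p)              # [2.8, 1.4], i.e. [14/5, 7/5]
print(perp)           # close to [1/5, -2/5], up to floating point
print(dot(perp, v))   # approximately 0: perp is orthogonal to v
```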
Let \(\mathbf{u} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}\) and \(\mathbf{v} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\). Compute the projection of \(\mathbf{u}\) onto \(\mathbf{v}\).
For the same vectors as in part a, find the component of \(\mathbf{u}\) perpendicular to \(\mathbf{v}\).
Verify that \(\mathbf{u} = \text{proj}_{\mathbf{v}}\mathbf{u} + \mathbf{u}_{\perp}\) for the vectors in parts a and b.
Let \(\mathbf{u} = \begin{bmatrix} 3 \\ 1 \end{bmatrix}\) and \(\mathbf{v} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}\). Compute \(\text{proj}_{\mathbf{v}}\mathbf{u}\) and \(\mathbf{u}_{\perp}\). Then draw all four vectors (\(\mathbf{u}\), \(\mathbf{v}\), \(\text{proj}_{\mathbf{v}}\mathbf{u}\), and \(\mathbf{u}_{\perp}\)) in the plane, and verify geometrically that \(\mathbf{u} = \text{proj}_{\mathbf{v}}\mathbf{u} + \mathbf{u}_{\perp}\).
Let \(\mathbf{a} = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}\) and \(\mathbf{b} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\). Find \(\text{proj}_{\mathbf{b}}\mathbf{a}\).
For the vectors in part e, verify that \(\mathbf{u}_{\perp} = \mathbf{a} - \text{proj}_{\mathbf{b}}\mathbf{a}\) is orthogonal to \(\mathbf{b}\).
Compute \(\text{proj}_{\mathbf{e}_1}\mathbf{u}\) for an arbitrary vector \(\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}\). What do you notice?
We compute: \[
\mathbf{u} \cdot \mathbf{v} = (3)(1) + (4)(0) = 3,
\] \[
\mathbf{v} \cdot \mathbf{v} = 1^2 + 0^2 = 1.
\] Therefore, \[
\text{proj}_{\mathbf{v}}\mathbf{u} = \frac{3}{1}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix}.
\]
\(\mathbf{u}_{\perp} = \mathbf{u} - \text{proj}_{\mathbf{v}}\mathbf{u} = \begin{bmatrix} 3 \\ 4 \end{bmatrix} - \begin{bmatrix} 3 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 4 \end{bmatrix}\)
\(\text{proj}_{\mathbf{v}}\mathbf{u} + \mathbf{u}_{\perp} = \begin{bmatrix} 3 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 4 \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \end{bmatrix} = \mathbf{u}\)
We compute: \[
\mathbf{u} \cdot \mathbf{v} = (3)(2) + (1)(1) = 6 + 1 = 7,
\] \[
\mathbf{v} \cdot \mathbf{v} = 2^2 + 1^2 = 4 + 1 = 5.
\] Therefore, \[
\text{proj}_{\mathbf{v}}\mathbf{u} = \frac{7}{5}\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 14/5 \\ 7/5 \end{bmatrix}.
\] And \[
\mathbf{u}_{\perp} = \mathbf{u} - \text{proj}_{\mathbf{v}}\mathbf{u} = \begin{bmatrix} 3 \\ 1 \end{bmatrix} - \begin{bmatrix} 14/5 \\ 7/5 \end{bmatrix} = \begin{bmatrix} 15/5 - 14/5 \\ 5/5 - 7/5 \end{bmatrix} = \begin{bmatrix} 1/5 \\ -2/5 \end{bmatrix}.
\]
In a diagram, you should see that:
- \(\mathbf{u}\) and \(\mathbf{v}\) both start at the origin
- \(\text{proj}_{\mathbf{v}}\mathbf{u}\) lies along the line defined by \(\mathbf{v}\)
- \(\mathbf{u}_{\perp}\) is perpendicular to \(\mathbf{v}\)
- When you add \(\text{proj}_{\mathbf{v}}\mathbf{u}\) and \(\mathbf{u}_{\perp}\) using the “tip-to-tail” rule, you get \(\mathbf{u}\)
We compute: \[
\mathbf{a} \cdot \mathbf{b} = (2)(1) + (1)(1) + (3)(1) = 2 + 1 + 3 = 6,
\] \[
\mathbf{b} \cdot \mathbf{b} = 1^2 + 1^2 + 1^2 = 3.
\] Therefore, \[
\text{proj}_{\mathbf{b}}\mathbf{a} = \frac{6}{3}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = 2\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \\ 2 \end{bmatrix}.
\]
First, we compute: \[
\mathbf{u}_{\perp} = \mathbf{a} - \text{proj}_{\mathbf{b}}\mathbf{a} = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix} - \begin{bmatrix} 2 \\ 2 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}.
\] Now we check orthogonality: \[
\mathbf{u}_{\perp} \cdot \mathbf{b} = (0)(1) + (-1)(1) + (1)(1) = 0 - 1 + 1 = 0.
\] Since the dot product is zero, \(\mathbf{u}_{\perp}\) is indeed orthogonal to \(\mathbf{b}\).
We have: \[
\mathbf{u} \cdot \mathbf{e}_1 = u_1(1) + u_2(0) + u_3(0) = u_1,
\] \[
\mathbf{e}_1 \cdot \mathbf{e}_1 = 1.
\] Therefore, \[
\text{proj}_{\mathbf{e}_1}\mathbf{u} = \frac{u_1}{1}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} u_1 \\ 0 \\ 0 \end{bmatrix}.
\] The projection onto \(\mathbf{e}_1\) simply extracts the first component of \(\mathbf{u}\) and zeros out the other components. This makes sense geometrically: we’re finding the part of \(\mathbf{u}\) that lies along the \(x\)-axis.