The sum of subsets defines an addition operation on subsets of a vector space.
The union of subspaces is almost never a subspace (it generally fails closure under addition), so we prefer to work with sums of subsets instead.
Remember that the definition allows arbitrarily (finitely) many subsets! When stating it, keep that possibility open.
constituents
Subsets of \(V\) named \(U_1, U_2, \dots, U_{m}\)
requirements
The sum of subsets \(U_1, \dots, U_{m}\) is defined as:
\begin{equation} U_1 + \dots + U_{m} = \{u_1+\dots+u_{m}: u_1\in U_1, \dots, u_{m} \in U_{m}\} \end{equation}
“all elements formed by taking one element from each subset and adding them.”
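For a concrete instance (a standard example, not from the notes above): in \(\mathbb{R}^3\), take \(U_1 = \{(x,0,0) : x \in \mathbb{R}\}\) and \(U_2 = \{(0,y,0) : y \in \mathbb{R}\}\). Then
\begin{equation} U_1 + U_2 = \{(x,y,0) : x, y \in \mathbb{R}\}, \end{equation}
while the union \(U_1 \cup U_2\) is not a subspace: \((1,0,0)+(0,1,0)=(1,1,0)\) lies in the sum but not in the union.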
additional information
sum of subspaces is the smallest subspace containing them
Suppose \(U_1, \dots, U_{m}\) are subspaces of \(V\); then \(U_1+\dots +U_{m}\) is the smallest subspace of \(V\) containing \(U_1, \dots, U_{m}\).
Proof:
It is a subspace:
- clearly \(0\) is in the sum (take \(0\) from each subspace and add).
- closure under addition and scalar multiplication is inherited: adding two sums (or scaling one) regroups componentwise, each component stays in its subspace, and reapplying the definition of the sum of subsets puts the result back in the sum (written out just below).
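Written out, the closure step is just regrouping (my own elaboration of the bullet above):
\begin{equation} (u_1+\dots+u_m) + (u_1'+\dots+u_m') = (u_1+u_1') + \dots + (u_m+u_m'), \qquad \lambda(u_1+\dots+u_m) = \lambda u_1 + \dots + \lambda u_m, \end{equation}
and each \(u_i+u_i'\) and \(\lambda u_i\) lands back in \(U_i\) because \(U_i\) is a subspace, so both results are again in the sum.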
It is the smallest containing subspace:
Because a subspace is closed under addition, any subspace that contains \(U_{1}, \dots, U_{m}\) must contain every sum \(u_1+\dots+u_{m}\), and hence contains all of \(U_1+\dots+U_{m}\).
Conversely, the subspace \(U_1+\dots +U_{m}\) contains each \(U_{i}\): simply set every summand except the one you are interested in to \(0\).
Therefore, \(U_1+\dots+U_{m}\) contains all of the \(U_{i}\) and sits inside every subspace that does, so it is the smallest such subspace.
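To see “smallest” concretely (my own illustration): in \(\mathbb{R}^3\), the sum of the \(x\)-axis and the \(y\)-axis is the \(xy\)-plane. Any subspace containing both axes must contain every sum \((x,0,0)+(0,y,0)=(x,y,0)\), hence the whole plane; and the plane itself already contains both axes, so nothing smaller works.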
dimension of sums
Let \(U_1\) and \(U_2\) be two finite-dimensional subspaces. Then:
\begin{equation} \dim(U_1+U_2)=\dim U_1+\dim U_{2} - \dim(U_1 \cap U_2) \end{equation}
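For a quick sanity check of the formula (my own example): let \(U_1\) be the \(xy\)-plane and \(U_2\) the \(xz\)-plane in \(\mathbb{R}^3\). Their intersection is the \(x\)-axis and their sum is all of \(\mathbb{R}^3\), so
\begin{equation} \dim(U_1+U_2) = 3 = 2 + 2 - 1 = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2). \end{equation}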
Proof:
Let us choose a basis \(u_1, \dots, u_{m}\) of \(U_1 \cap U_{2}\); this tells us that \(\dim(U_1 \cap U_{2}) = m\). Being a basis of \(U_1 \cap U_{2}\), this list is linearly independent in \(U_1\) (the intersection is contained in \(U_1\)).
Any linearly independent list in \(U_1\) can be extended to a basis of \(U_1\), say by some vectors \(v_1, \dots, v_{j}\). Therefore, we have that:
The new basis is \(u_1, \dots, u_{m}, v_1, \dots, v_{j}\), and so:
\begin{equation} \dim U_1 = m+j \end{equation}
By the same token, some \(w_1, \dots, w_{k}\) can be used to extend \(u_1, \dots, u_{m}\) into a basis of \(U_2\) (as \(u_1, \dots, u_{m}\) is also a linearly independent list in \(U_2\)). So:
\begin{equation} \dim U_{2} = m+k \end{equation}
We want to show that \(\dim(U_1+U_2)=\dim U_1+\dim U_{2} - \dim(U_1 \cap U_2)\). Having computed the three terms on the right, it suffices to find a list of length \((m+j)+(m+k)-m = m+j+k\) that forms a basis of \(U_1+U_2\); this will complete the proof.
Conveniently, \(u_1, \dots, u_{m}, v_1, \dots, v_{j}, w_1, \dots, w_{k}\) is a list of length \(m+j+k\), so we aim to show that this list is a basis of \(U_1+U_{2}\).
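(Keeping the symbols grounded with the plane example above: \(m = 1\) for the shared \(x\)-axis, \(j = k = 1\) for the two extensions, and one candidate list of length \(m+j+k = 3\) is \(e_1, e_2, e_3\).)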
This list contains a basis of \(U_1\) (drop the \(w\)'s) and a basis of \(U_2\) (drop the \(v\)'s), so it spans both \(U_1\) and \(U_2\). Since every element of \(U_1+U_2\) is obtained by picking one element from \(U_1\) and one from \(U_2\) and adding them, and each of those two pieces can be written in terms of this list, the list spans \(U_1+U_2\).
The only thing left is to show that this list is linearly independent. Suppose:
\begin{equation} a_1u_1+ \dots + a_{m}u_{m} + b_1v_1 + \dots + b_{j}v_{j} + c_1w_1 + \dots + c_{k}w_{k} = 0 \end{equation}
To demonstrate linear independence, we must show that every scalar above is \(0\).
Moving the \(w\) terms to the right, we have that:
\begin{equation} a_1u_1+ \dots + a_{m}u_{m} + b_1v_1 + \dots + b_{j}v_{j} =-(c_1w_1 + \dots + c_{k}w_{k}) \end{equation}
Recall that \(u_1, \dots, u_{m}, v_1, \dots, v_{j}\) are all vectors in \(U_1\). Having written \(-(c_1w_1 + \dots + c_{k}w_{k})\) as a linear combination of them, we conclude that \(-(c_1w_1 + \dots + c_{k}w_{k}) \in U_1\) by closure. But also \(w_1, \dots, w_{k} \in U_2\), since they belong to a basis of \(U_2\); hence \(-(c_1w_1 + \dots + c_{k}w_{k}) \in U_2\). So \(-(c_1w_1 + \dots + c_{k}w_{k}) \in U_1 \cap U_2\).
And we said that \(u_1, \dots, u_{m}\) is a basis of \(U_1 \cap U_{2}\). Therefore, we can write \(c_1w_1 + \dots + c_{k}w_{k}\) as a linear combination of the \(u\)'s:
\begin{equation} d_1u_1 + \dots + d_{m}u_{m} = c_1w_1 + \dots + c_{k}w_{k} \end{equation}
Now, moving the right-hand side back to the left:
\begin{equation} d_1u_1 + \dots + d_{m}u_{m} - (c_1w_1 + \dots + c_{k}w_{k}) = 0 \end{equation}
We have established before that \(u_1, \dots, u_{m}, w_1, \dots, w_{k}\) is a linearly independent list (it is a basis of \(U_2\)). So, to write \(0\), we need \(d_1 = \dots = d_{m} = c_1 = \dots = c_{k} = 0\).
Substituting \(c_1 = \dots = c_{k} = 0\) back into the original equation:
\begin{align} a_1u_1+ \dots + a_{m}u_{m} + b_1v_1 + \dots + b_{j}v_{j} &=-(c_1w_1 + \dots + c_{k}w_{k}) \\ &= 0 \end{align}
Recall that \(u_1, \dots, u_{m}, v_1, \dots, v_{j}\) is a basis of \(U_1\), so it is linearly independent. The expression above then forces \(a_1 = \dots = a_{m} = b_1 = \dots = b_{j} = 0\). Having shown that writing \(0\) out of the \(u\)'s, \(v\)'s, and \(w\)'s requires all of the scalars \(a, b, c\) to be \(0\), the list is linearly independent.
Having shown that the list \(u_1, \dots, u_{m}, v_1, \dots, v_{j}, w_1, \dots, w_{k}\) spans \(U_1+U_2\) and is linearly independent, it is a basis of \(U_1+U_2\).
It does indeed have length \(m+j+k\), completing the proof. \(\blacksquare\)
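If a numerical double-check helps, here is a minimal sketch (my own addition; it assumes `numpy` and `scipy` are available, and the matrices and variable names are illustrative, not from the notes). Columns of `A` span \(U_1\) and columns of `B` span \(U_2\); \(\dim(U_1+U_2)\) is the rank of the stacked columns, and a spanning set for \(U_1 \cap U_2\) comes from the null space of \([A \mid -B]\).

```python
import numpy as np
from scipy.linalg import null_space

# Columns of A span U_1 (the xy-plane); columns of B span U_2 (the xz-plane).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])

dim_U1 = np.linalg.matrix_rank(A)                    # 2
dim_U2 = np.linalg.matrix_rank(B)                    # 2
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))   # dim(U_1 + U_2) = 3

# Intersection: A x = B y exactly when (x, y) is in the null space of [A | -B],
# and the corresponding vectors A x span U_1 ∩ U_2.
N = null_space(np.hstack([A, -B]))
dim_cap = np.linalg.matrix_rank(A @ N[: A.shape[1], :]) if N.size else 0  # 1

assert dim_sum == dim_U1 + dim_U2 - dim_cap  # 3 == 2 + 2 - 1
```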