I've been presented with a proof that, starting from $Ax=b$, the elementary row operations can be viewed as multiplication by certain special matrices $E_i$, and that by applying the same sequence of products to the identity we obtain the inverse. Suppose that $A$ is invertible:
\begin{eqnarray*}
{AB}&=&{I} \\
{E_n\dots E_2E_1AB}&=&{E_n\dots E_2E_1I} \\
{IB}&=&{E_n\dots E_2E_1I} \\
{B}&=&{E_n\dots E_2E_1}
\end{eqnarray*}
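To see the mechanism concretely, here is a small $2\times 2$ example of my own (not part of the original proof). Take $A=\begin{pmatrix}1&2\\3&4\end{pmatrix}$ and row reduce it to $I$ with
$$E_1=\begin{pmatrix}1&0\\-3&1\end{pmatrix},\qquad E_2=\begin{pmatrix}1&0\\0&-\tfrac12\end{pmatrix},\qquad E_3=\begin{pmatrix}1&-2\\0&1\end{pmatrix},$$
so that $E_3E_2E_1A=I$ and
$$B=E_3E_2E_1=\begin{pmatrix}-2&1\\\tfrac32&-\tfrac12\end{pmatrix}=A^{-1}.$$
Each $E_i$ encodes one row operation (subtract $3R_1$ from $R_2$, scale $R_2$ by $-\tfrac12$, subtract $2R_2$ from $R_1$), and their product, applied to $I$, is exactly the inverse of $A$.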
For each $E_i$, there is the requirement that it be invertible. Why is that needed? My guess is that if one of the $E_i$ were not invertible, then we could go back to different $A$'s, and then, applying the $E_i$'s again, we could arrive at a different $B$?
Answer
If we can use elementary matrices to start from $A$ and find $B=A^{-1}$, we should be able to reverse the process starting with $B$ and finding $A=B^{-1}$. The reverse of each step in the process is just applying the inverse elementary matrix. If an elementary matrix is not invertible, then we cannot reverse the step.
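To make "reverse the step" explicit, here is a sketch (my own, for the $2\times 2$ case) of the inverse of each type of elementary matrix: a row swap undoes itself, scaling a row by $c\neq 0$ is undone by scaling by $1/c$, and adding $c$ times one row to another is undone by subtracting it again:
$$\begin{pmatrix}0&1\\1&0\end{pmatrix}^{-1}=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad
\begin{pmatrix}c&0\\0&1\end{pmatrix}^{-1}=\begin{pmatrix}\tfrac1c&0\\0&1\end{pmatrix},\qquad
\begin{pmatrix}1&0\\c&1\end{pmatrix}^{-1}=\begin{pmatrix}1&0\\-c&1\end{pmatrix}.$$
The only row-operation-like move with no inverse is scaling a row by $0$, which is precisely why it is excluded from the elementary operations.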
Another reason that each elementary matrix must be invertible involves determinants: a noninvertible matrix has determinant zero, while an invertible matrix has nonzero determinant. Therefore, if even one of the $E_i$ were not invertible, then $$\det(B)=\det(E_n\dots E_1 I)=\det(E_n)\dots\det(E_1)\det(I)=0.$$ Thus $B$ would not be invertible. But we know that $B^{-1}=A$, which is a contradiction.
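As an illustration (again my own, not from the original argument): the one row-scaling matrix that fails to be invertible is the one that scales a row by $0$, e.g. $E=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ with $\det E=0$. Multiplying by it wipes out the second row, so no later $E_i$ can recover that information, and any product containing it has determinant $0$, exactly as in the computation above.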