Where do matrices come into play? Well, as you know (or maybe not, I
don't know) a linear system can be seen in matrix-vector form as
$$A\mathbf{x} = \mathbf{b}$$
where $\mathbf{x}$ contains the unknowns, $A$ the coefficients of the equations, and $\mathbf{b}$ the values of the right-hand sides of the equations.
For instance for the system
$$\begin{cases} 2x_1 + x_2 = 3 \\ x_1 - x_2 = 1 \end{cases}$$
we have
$$A = \begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 3 \\ 1 \end{bmatrix}$$
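Just to make the matrix-vector notation concrete, here is a small sketch (my own illustration, not part of the original argument) that solves this particular $2 \times 2$ system with Cramer's rule:

```python
# The 2x2 system above in matrix-vector form: A x = b.
A = [[2.0, 1.0],
     [1.0, -1.0]]
b = [3.0, 1.0]

# Cramer's rule for a 2x2 system: x_i = det(A_i) / det(A),
# where A_i is A with column i replaced by b.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det(A) = -3
x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det  # column 1 replaced by b
x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det  # column 2 replaced by b
# x1 = 4/3, x2 = 1/3 satisfy both equations.
```

Cramer's rule is fine for tiny systems like this one, but, as discussed below, it is exactly the kind of approach that becomes hopeless for large systems; there the structure of $A$ matters.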
From what I have said so far, in this context matrices look like just a
fancy and compact way to write down a system of equations, mere tables
of numbers.
However, solving such a system fast is not just a matter of using a computer with a lot of RAM and/or a high clock rate. Of course, the more powerful the computer, the faster you will get the solution. But sometimes faster might still mean days (or more) if you tackle the problem the wrong way, even on a Blue Gene.
So, to reduce the computational cost, you have to come up with a good algorithm, a smart idea. But in order to do so, you need to exploit some property or some structure of your linear system. These properties are somehow encoded in the coefficients of the matrix $A$. Therefore, studying matrices and their properties is of crucial importance for improving the efficiency of linear solvers. Recognizing that the matrix enjoys a particular property can be the key to developing a fast algorithm, or even to proving that a solution exists, or that the solution has some nice property.
For instance, consider the linear system
$$\begin{bmatrix} 2 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$$
which corresponds (in equation form) to
$$\begin{cases} 2x_1 - x_2 = 1 \\ -x_1 + 2x_2 - x_3 = 1 \\ -x_2 + 2x_3 - x_4 = 1 \\ -x_3 + 2x_4 = 1 \end{cases}$$
Just by taking a quick look at the matrix, I can claim that this system
has a solution and, moreover, that the solution is non-negative (meaning that
all the components of the solution are non-negative). I'm pretty sure
you wouldn't be able to draw this conclusion just by looking at the system
without trying to solve it. I can also claim that to solve this system
you need only 25 operations (one operation being a single
addition/subtraction/division/multiplication). If you construct a larger
system with the same pattern (2 on the diagonal, -1 on the upper and
lower diagonals) and put a right-hand side with only positive entries, I
can still claim that the solution exists, that it's positive, and that the
number of operations needed to solve it is only $8n-7$, where $n$ is the size of the system.
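The linear operation count comes from exploiting the tridiagonal structure: a forward elimination sweep followed by back substitution, often called the Thomas algorithm. Here is a minimal sketch of it (my own illustration in Python; the original answer only states the operation count):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n) operations.

    a: sub-diagonal (a[0] is unused), b: diagonal,
    c: super-diagonal (c[-1] is unused), d: right-hand side.
    """
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination sweep.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# The 4x4 system above: 2 on the diagonal, -1 above and below,
# right-hand side of all ones.
x = thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 1, 1, 1])
```

For the $4 \times 4$ example this returns $x = (2, 3, 3, 2)$, which is indeed non-negative, as claimed. Note that no pivoting is performed; this is safe here because the matrix is diagonally dominant, which is part of the structure being exploited.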
Moreover, people have already pointed out other fields where matrices are important building blocks and play an important role. I hope this thread gave you an idea of why it is worth studying matrices. =)