Matrix Algebra: System of Equations

Many university STEM major programs have reduced the credit hours for a course in Matrix Algebra or have simply dropped the course from their curriculum. The content of Matrix Algebra is, in many cases, taught just in time where needed. This approach can leave a student with many conceptual holes in the required knowledge of matrix algebra. In this series of blogs, we bring to you ten topics that are of immediate and intermediate interest for Matrix Algebra. Here is the fifth topic, where we talk about setting up simultaneous linear equations in matrix form, consistent and inconsistent systems of equations, the rank of a matrix, conditions when a unique solution, an infinite number of solutions, or no solution exists, and finding the inverse of a square matrix. Get all the resources in the form of textbook content, lecture videos, a multiple-choice test, a problem set, and a PowerPoint presentation.
So what does it mean that the computational time is proportional to some power of n in the Gaussian Elimination method?

In a previous post, we talked about why LU Decomposition is computationally more efficient than Gaussian Elimination in some cases. The argument was based on how much computational time each of the methods takes. For example, we said that for back substitution in both methods, the computational time is approximately proportional to $latex n^2/2$.

How did we find that for back substitution the computational time is approximately proportional to $latex n^2/2$?

The amount of time it takes to conduct back substitution depends on the number of floating point operations (FLOPs) needed. The number of FLOPs the computer can execute in a second, called FLOPS (note the uppercase S, which distinguishes FLOPS from FLOPs), will then determine the actual computational time. (A typical Pentium 4 PC conducts on the order of $latex 10^{9}$ FLOPS; a state-of-the-art supercomputer conducts on the order of $latex 10^{15}$ FLOPS; in 1983, a PC with an 8087 chip may have conducted on the order of $latex 10^{5}$ FLOPS.)
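To make the work-versus-speed relationship concrete, here is a small Python sketch (not from the original post; the matrix size $latex n$ and the FLOPS rates are illustrative assumptions based on the figures above) that estimates the wall time for the roughly $latex n^2/2$ multiplication/division FLOPs of back substitution on each class of machine:

```python
# Illustrative estimate: time = work (FLOPs) / speed (FLOPS).
# Machine speeds are the rough orders of magnitude quoted above.

def estimated_time_seconds(flops_needed, machine_flops):
    """Seconds needed to execute flops_needed operations at machine_flops per second."""
    return flops_needed / machine_flops

n = 10_000               # assumed matrix size (hypothetical example)
work = n**2 / 2          # approximate multiplication/division FLOPs in back substitution

for name, rate in [("1983 PC with 8087", 1e5),
                   ("Pentium 4 PC", 1e9),
                   ("supercomputer", 1e15)]:
    print(f"{name}: {estimated_time_seconds(work, rate):.2e} s")
```

Even for $latex n = 10{,}000$, the back substitution step alone is cheap on a modern machine; it is the forward elimination step that dominates for large systems.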

To keep things simple, let’s only count the multiplication/division FLOPs in back substitution, as the time used by multiplication and division is higher than that of addition and subtraction (multiplication may take twice, and division as much as thrice, the time it takes for an addition or subtraction).

In back substitution, we start with the last equation. The last equation involves one division, the second-to-last equation involves one multiplication and one division, the third-to-last equation involves two multiplications and one division, and so on. So the number of multiplication/division FLOPs is 1 for the last equation, 2 for the second-to-last equation, 3 for the third-to-last equation, and so on; that is, for all $latex n$ equations, $latex 1+2+\cdots+n=n^2/2+n/2$. For large $latex n$, this number is approximately $latex n^2/2$.
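The count above can be checked directly. Below is a minimal Python sketch (an illustration, not code from the post) of back substitution on an upper-triangular system $latex [U][x]=[b]$ that tallies only the multiplication/division FLOPs as it solves:

```python
# Back substitution on an upper-triangular system U x = b,
# counting only multiplication/division FLOPs.

def back_substitution(U, b):
    n = len(b)
    x = [0.0] * n
    flops = 0
    for i in range(n - 1, -1, -1):   # last equation first
        s = b[i]
        for j in range(i + 1, n):
            s -= U[i][j] * x[j]      # one multiplication per already-solved unknown
            flops += 1
        x[i] = s / U[i][i]           # one division per equation
        flops += 1
    return x, flops

# Hypothetical 3x3 example: last equation costs 1 FLOP, second-to-last 2, first 3
U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 4.0]]
b = [9.0, 11.0, 8.0]
x, flops = back_substitution(U, b)
print(x, flops)   # flops = 1 + 2 + 3 = 6, matching n^2/2 + n/2 for n = 3
```

For any $latex n$, the counter returns exactly $latex n^2/2+n/2$, matching the sum derived above.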

This post is brought to you by Holistic Numerical Methods: Numerical Methods for the STEM undergraduate at http://nm.mathforcollege.com