In a previous post, we talked about why LU Decomposition is computationally more efficient than Gaussian Elimination in some cases. The argument was based on how much computational time each of the methods takes. For example, we said that for back substitution in both methods, the computational time is approximately proportional to $latex n^2/2$.
How did we find that for back substitution the computational time is approximately proportional to $latex n^2/2$?
The amount of time it takes to conduct back substitution depends on the number of floating point operations (FLOPs) needed. The number of FLOPs a computer can execute in a second, called FLOPS (note the uppercase S, to distinguish FLOPS from FLOPs), then determines the actual computational time. (A typical Pentium 4 PC executes on the order of $latex 10^{9}$ FLOPS; a state-of-the-art supercomputer executes on the order of $latex 10^{15}$ FLOPS; in 1983, a PC with an 8087 chip may have executed on the order of $latex 10^{5}$ FLOPS.)
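For a rough sense of scale, here is a small Python sketch (ours, not from the original post) that divides the back substitution FLOP count by the FLOPS rates quoted above to estimate the wall time:

```python
# Rough time estimate: time ~ FLOPs needed / FLOPS the machine executes per second.
n = 1000                      # number of equations (assumed for illustration)
flops_needed = n**2 / 2       # approximate multiplication/division FLOPs for back substitution

for name, rate in [("Pentium 4 PC", 1e9),
                   ("state-of-the-art supercomputer", 1e15),
                   ("1983 PC with 8087 chip", 1e5)]:
    print(f"{name}: about {flops_needed / rate:.2e} seconds")
```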
To keep things simple, let’s count only the multiplication/division FLOPs in back substitution, since multiplication and division take longer than addition and subtraction (multiplication may take about twice, and division as much as thrice, the time of an addition or subtraction).
In back substitution, we start with the last equation. The last equation involves one division, the second-to-last equation involves one multiplication and one division, the third-to-last equation involves two multiplications and one division, and so on. So the number of multiplication/division FLOPs is 1 for the last equation, 2 for the second-to-last equation, 3 for the third-to-last equation, and so on, giving a total over all $latex n$ equations of $latex 1+2+\cdots+n=\frac{n(n+1)}{2}=\frac{n^2}{2}+\frac{n}{2}$. For large $latex n$, this number is approximately $latex n^2/2$.
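As a check, here is a minimal Python sketch (not from the original post; the function name and the random test matrix are ours for illustration) that counts multiplication/division FLOPs while solving an upper-triangular system, and compares the count against $latex n^2/2+n/2$:

```python
import numpy as np

def back_substitution_with_flop_count(U, b):
    """Solve Ux = b for upper-triangular U, counting
    multiplication/division FLOPs along the way."""
    n = len(b)
    x = np.zeros(n)
    flops = 0
    for i in range(n - 1, -1, -1):   # start with the last equation
        s = b[i]
        for j in range(i + 1, n):
            s -= U[i, j] * x[j]      # one multiplication per already-known x[j]
            flops += 1
        x[i] = s / U[i, i]           # one division per equation
        flops += 1
    return x, flops

n = 1000
# Random upper-triangular matrix with a boosted diagonal to keep it well-conditioned
U = np.triu(np.random.rand(n, n)) + n * np.eye(n)
b = np.random.rand(n)
x, flops = back_substitution_with_flop_count(U, b)
print(flops, n**2 / 2 + n / 2)       # prints 500500 and 500500.0 for n = 1000
```

The counted total matches $latex n(n+1)/2$ exactly, which is what the sum $latex 1+2+\cdots+n$ above predicts.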
This post brought to you by Holistic Numerical Methods: Numerical Methods for the STEM undergraduate at http://nm.mathforcollege.com