
Complexity of linear algebra

We want to understand how many additions and multiplications are required to solve linear systems. We will not count the cost of conditional statements that check entries of the matrix (for instance, testing whether a pivot is zero); we regard these operations as necessary and unavoidable.

Matrix-vector multiplication

Recall that matrix-vector multiplication y=Ax (here A is an n×m matrix and x is an m-dimensional column vector) is given as

y_j = \sum_{i=1}^{m} a_{ji} x_i, \quad 1 \le j \le n.

For simplicity, let's assume that m=n (A is square). Then the above linear combination requires n multiplications and n-1 additions. We have to do this for each of the n entries in y:

  • n^2 multiplications, and
  • n(n-1) additions.
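
A short Python sketch (the function name is ours) that carries out y = Ax with explicit loops and counts exactly these operations:

```python
def matvec_count(A, x):
    """Multiply A (n x n, list of lists) by x, counting operations."""
    n = len(A)
    mults = adds = 0
    y = []
    for j in range(n):
        total = A[j][0] * x[0]
        mults += 1
        for i in range(1, n):
            total += A[j][i] * x[i]
            mults += 1
            adds += 1
        y.append(total)
    return y, mults, adds

y, mults, adds = matvec_count([[2, 0], [1, 3]], [4, 5])
n = 2
assert (mults, adds) == (n * n, n * (n - 1))  # n^2 mults, n(n-1) adds
assert y == [8, 19]
```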

Naive row operations

Assume that A is an n×(n+1) augmented matrix and we want to perform the row operation R_2 - \alpha R_1 \to R_2. The multiplication \alpha R_1 takes n+1 multiplications and then we must add two vectors: n+1 additions.

  • n+1 multiplications, and
  • n+1 additions.

Row operations with triangular structure

Now, let us consider what happens in the course of Gaussian elimination. Consider the first step:

R_2 - \frac{a_{21}}{a_{11}} R_1 \to R_2.

It takes one division to compute the fraction a_{21}/a_{11}. From there, you might say that it takes n+1 multiplications and n+1 additions, as in the previous example. This is almost correct, but we KNOW that everything is chosen so that the (2,1) entry of the new matrix will be zero. We do not have to do this computation:

  • n+1 multiplications/divisions, and
  • n additions.

Now, assume R1 and R2 begin with k zeros (we know this and don't have to check). We want to perform a row operation that leaves R1 alone but makes it so that R2 begins with k+1 zeros. We would perform

R_2 - \frac{a_{2,k+1}}{a_{1,k+1}} R_1 \to R_2.

Again, we have to perform one division to compute the ratio. But then we KNOW that the (k+1)-th entry in the second row will be zero, and we only need to compute the remaining n-k entries:

  • n-k+1 multiplications/divisions, and
  • n-k additions.

I will call this a k-sparse row operation.
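
A minimal Python sketch of a k-sparse row operation on augmented rows with n+1 entries, counting the n-k+1 multiplications/divisions and n-k additions (the function name and list representation are our choices):

```python
def sparse_row_op(R1, R2, k):
    """Given rows R1, R2 that both start with k zeros, return a new R2
    starting with k+1 zeros, plus (mult/div count, addition count)."""
    ratio = R2[k] / R1[k]      # 1 division
    mult_div, adds = 1, 0
    new_R2 = R2[:]
    new_R2[k] = 0.0            # known to vanish: no arithmetic needed
    for j in range(k + 1, len(R2)):
        new_R2[j] = R2[j] - ratio * R1[j]
        mult_div += 1
        adds += 1
    return new_R2, mult_div, adds

# n = 3 (augmented rows have n+1 = 4 entries), k = 0
row2, md, ad = sparse_row_op([2.0, 1.0, 1.0, 4.0], [4.0, 3.0, 1.0, 2.0], 0)
n, k = 3, 0
assert (md, ad) == (n - k + 1, n - k)
assert row2[0] == 0.0
```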

Side calculations:

\sum_{j=1}^{n-1} 1 = n-1, \qquad \sum_{j=0}^{n-1} x^j = \frac{1-x^n}{1-x}.\\ \text{Differentiating:}\quad \sum_{j=1}^{n-1} j x^{j-1} = \frac{d}{dx}\frac{1-x^n}{1-x} = \frac{-n x^{n-1}(1-x) + 1 - x^n}{(1-x)^2} = \frac{1 - n x^{n-1} + (n-1) x^n}{(1-x)^2}.

Use l'Hospital's Rule twice to evaluate at x=1

\sum_{j=1}^{n-1} j = \lim_{x\to 1} \frac{1 - n x^{n-1} + (n-1) x^n}{(1-x)^2} = \frac{n(n-1)}{2}.

Differentiating again:

\sum_{j=2}^{n-1} j(j-1) x^{j-2} = \frac{(- n(n-1) x^{n-2} + n(n-1) x^{n-1})(1-x) + 2(1 - n x^{n-1} + (n-1) x^n) }{(1-x)^3}.

Using l'Hospital's Rule three times to evaluate at x = 1

\sum_{j=2}^{n-1} j(j-1) =\frac{n(n-1)(n-2)}{3}.

Also, note that

\sum_{j=1}^{n-1} j = 1 + \sum_{j=2}^{n-1} j.
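
These closed forms are easy to spot-check numerically, e.g. in Python:

```python
# Verify the side calculations for a range of n.
for n in range(2, 20):
    assert sum(j for j in range(1, n)) == n * (n - 1) // 2
    assert sum(j * (j - 1) for j in range(2, n)) == n * (n - 1) * (n - 2) // 3
```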

Complexity of Gaussian elimination

In the first round of Gaussian elimination, we (typically) perform n-1 0-sparse row operations, eliminating all entries below the (1,1) entry. Then we perform n-2 1-sparse row operations, eliminating all entries below the (2,2) entry. These operations continue until we perform one (n-2)-sparse row operation to eliminate the entry below the (n-1,n-1) entry. The n-k-1 k-sparse row operations therefore require:

  • (n-k-1)(n-k+1) multiplications/divisions, and
  • (n-k-1)(n-k) additions.

Then, the total number of operations is the sum of this from k = 0,1,\ldots,n-2. We use j = n-k to simplify the sums (and apply the side calculations with n replaced by n+1):

\sum_{k=0}^{n-2} (n-k-1)(n-k+1) = \sum_{k=0}^{n-2} (n-k-1)(n-k) + \sum_{k=0}^{n-2} (n-k-1) \\ = \sum_{j=2}^{n} j(j-1) + \sum_{\ell=1}^{n-1} \ell \\ = \frac{(n+1)n(n-1)}{3} + \frac{n(n-1)}{2} \\ = \frac{2n^3 + 3n^2 - 5n}{6} \quad\text{multiplications/divisions, and}\\ \sum_{k=0}^{n-2} (n-k-1)(n-k) = \sum_{j=2}^{n}j(j-1) = \frac{(n+1)n(n-1)}{3} \quad \text{additions.}
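
The elimination count can be verified empirically with a small Python sketch (the function name is ours; the random matrix is made diagonally dominant so no pivoting or zero checks are needed):

```python
import random

def eliminate_count(n):
    """Reduce a random n x (n+1) augmented matrix to upper-triangular form,
    counting multiplications/divisions and additions."""
    A = [[random.random() + (n + 1.0 if i == j else 0.0)
          for j in range(n + 1)] for i in range(n)]
    mult_div = adds = 0
    for k in range(n - 1):              # pivot column k
        for i in range(k + 1, n):       # rows below the pivot
            ratio = A[i][k] / A[k][k]
            mult_div += 1
            A[i][k] = 0.0               # known to vanish: no arithmetic
            for j in range(k + 1, n + 1):
                A[i][j] -= ratio * A[k][j]
                mult_div += 1
                adds += 1
    return mult_div, adds

for n in range(2, 10):
    md, ad = eliminate_count(n)
    assert md == (2 * n**3 + 3 * n**2 - 5 * n) // 6
    assert ad == (n + 1) * n * (n - 1) // 3
```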

Complexity of backward substitution

Assume we are given a system of the form

u_{11} x_1 + u_{12} x_2 + u_{13} x_3 + \cdots + u_{1,n-1} x_{n-1} + u_{1n} x_n = y_1\\ 0~~ x_1 + u_{22} x_2 + u_{23} x_3 + \cdots + u_{2,n-1} x_{n-1} + u_{2n} x_n = y_2\\ 0~~ x_1 + 0~~ x_2 + u_{33} x_3 + \cdots + u_{3,n-1} x_{n-1} + u_{3n} x_n = y_3\\ \vdots ~~~~~~~~~~~~~~~ \vdots\\ ~0~~ x_1 + ~0~~ x_2 + ~0~~ x_3 + \cdots + ~~0~~ x_{n-1} + u_{nn} x_n = y_n

The solution is then given by:

x_i = \frac{y_i - \sum_{j = i+1}^n u_{ij}x_j}{u_{ii}},\quad i = n, n-1, \ldots, 2,1

Computing y_i - \sum_{j = i+1}^n u_{ij}x_j requires n - i multiplications and n - i additions. Once we account for the final division, we have n-i+1 multiplications/divisions and n-i additions:

\sum_{i=1}^n (n-i+1) = \sum_{j = 1}^n j = \frac{n(n+1)}{2} = \frac{n^2+n}{2} \quad \text{multiplications/divisions, and}\\ \sum_{i=1}^n (n-i) = \sum_{j = 0}^{n-1} j = \frac{n(n-1)}{2} = \frac{n^2-n}{2} \quad \text{additions.}
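
A direct Python implementation of backward substitution with operation counters (names are ours):

```python
def back_sub_count(U, y):
    """Solve Ux = y for upper-triangular U, counting operations."""
    n = len(y)
    x = [0.0] * n
    mult_div = adds = 0
    for i in range(n - 1, -1, -1):
        s = y[i]
        for j in range(i + 1, n):
            s -= U[i][j] * x[j]     # 1 multiplication, 1 addition
            mult_div += 1
            adds += 1
        x[i] = s / U[i][i]          # 1 division
        mult_div += 1
    return x, mult_div, adds

U = [[2.0, 1.0, 1.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 4.0]]
x, md, ad = back_sub_count(U, [5.0, 7.0, 8.0])
n = 3
assert (md, ad) == ((n * n + n) // 2, (n * n - n) // 2)
assert x == [1.0, 1.0, 2.0]
```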

Therefore the total operation count for Gaussian elimination with backward substitution is

\frac{2n^3 + 3n^2 - 5n}{6} + \frac{n^2+n}{2} = \frac{2n^3+6n^2 - 2n}{6} \quad \text{multiplications/divisions, and}\\ \frac{(n+1)n(n-1)}{3}+ \frac{n^2-n}{2} = \frac{2n^3 + 3n^2 - 5n }{6}\quad \text{additions.}

Complexity of solving an equation with matrix inversion

To invert a matrix, we need to reduce the augmented matrix [A,I] to [I,B]. First, we reduce [A,I] \to [U,K] where U is upper triangular. Following the calculations for Gaussian elimination, each k-sparse row operation will require 2n-k multiplications/divisions and 2n-k-1 additions (the rows now have 2n entries). So the complexity for this first step is

\sum_{k=0}^{n-2} (n-k-1)(2n-k) = \sum_{k=0}^{n-2} (n-k-1)(n-k) + n\sum_{k=0}^{n-2} (n-k-1) \\ = \sum_{j=2}^{n} j(j-1) + n\sum_{\ell=1}^{n-1} \ell \\ = \frac{(n+1)n(n-1)}{3} + \frac{n^2(n-1)}{2} \\ = \frac{5n^3-3n^2-2n}{6} \quad\text{multiplications/divisions, and}\\ \sum_{k=0}^{n-2} (n-k-1)(2n-k-1) = \sum_{k=0}^{n-2} (n-k-1)(n-k) + (n-1)\sum_{k=0}^{n-2} (n-k-1) \\ = \frac{(n+1)n(n-1)}{3} + (n-1)\frac{n(n-1)}{2} \\ = \frac{5n^3 - 6n^2 + n}{6}\quad \text{additions.}

Then we must multiply each row by a constant to turn all of the diagonal entries of U into ones. Since we KNOW the diagonal entries become one, we do not need to compute them explicitly. This requires

\sum_{k=1}^n (n+k -1 ) = n^2 + \sum_{k=1}^n k -n = n^2 + \frac{n(n+1)}{2} -n = \frac{3 n^2 - n}{2} \quad \text{multiplications}.

At this point our system is of the form [I + U', M], where U' is strictly upper triangular. We then use row operations to eliminate the entries in the last column of I + U' above the (n,n) entry. Aside from the intended elimination (which we KNOW produces a zero), each such row operation only affects M: we multiply the last row of M by a constant and add it to the row being updated (n multiplications and n additions). Doing this for all n-1 rows above the last requires n(n-1) multiplications and n(n-1) additions.

Continuing, we must eliminate the elements above the (n-1,n-1) entry, requiring n(n-2) multiplications and n(n-2) additions, because (aside from the intended elimination) each operation only affects the entries of M in the rows above the second-to-last.

The number of operations for this portion is (for both multiplications and additions)

\sum_{k=1}^{n-1} n(n-k) = n^2(n-1) - n \sum_{k=1}^{n-1} k = n^2 (n-1) - n \frac{n(n-1)}{2} = \frac{n^2(n-1)}{2}.

Finally, after computing A^{-1} we need to compute the product A^{-1}b = x which, as calculated above, requires n^2 multiplications and n(n-1) additions.

So, altogether, solving an equation by inverting a matrix (including the final product A^{-1}b) requires

\frac{5n^3-3n^2-2n}{6} + \frac{3 n^2 - n}{2} + \frac{n^2(n-1)}{2} + n^2 = \frac{8n^3 + 9n^2 - 5n}{6} \quad \text{multiplications/divisions, and}\\ \frac{5 n^3 - 6n^2 + n}{6} + \frac{n^2(n-1)}{2} + n(n-1) = \frac{8n^3 - 3n^2 - 5n}{6}\quad\text{additions.}
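
As a check, here is a Python sketch that carries out the whole inversion pipeline with operation counters: reduce [A,I] to [U,K], scale the diagonal to one, eliminate above the diagonal, then form A^{-1}b. The matrix is made diagonally dominant to avoid pivoting, which these counts ignore.

```python
import random

def invert_and_solve_count(n):
    """Solve Ax = b by Gauss-Jordan inversion of A, counting operations."""
    A = [[random.random() + (n if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    b = [random.random() for _ in range(n)]
    # Augment: M = [A | I], an n x 2n matrix.
    M = [A[i][:] + [1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    mult_div = adds = 0
    # 1) Reduce [A, I] -> [U, K].
    for k in range(n - 1):
        for i in range(k + 1, n):
            ratio = M[i][k] / M[k][k]; mult_div += 1
            M[i][k] = 0.0                       # known to vanish
            for j in range(k + 1, 2 * n):
                M[i][j] -= ratio * M[k][j]; mult_div += 1; adds += 1
    # 2) Scale each row so the diagonal becomes 1 (diagonal itself is free).
    for i in range(n):
        d = M[i][i]; M[i][i] = 1.0
        for j in range(i + 1, 2 * n):
            M[i][j] /= d; mult_div += 1
    # 3) Eliminate above the diagonal; only the right block M changes.
    for c in range(n - 1, 0, -1):
        for r in range(c):
            ratio = M[r][c]; M[r][c] = 0.0      # pivot is 1: no division
            for j in range(n, 2 * n):
                M[r][j] -= ratio * M[c][j]; mult_div += 1; adds += 1
    # 4) x = A^{-1} b.
    for i in range(n):
        s = M[i][n] * b[0]; mult_div += 1
        for j in range(1, n):
            s += M[i][n + j] * b[j]; mult_div += 1; adds += 1
    return mult_div, adds

for n in range(2, 8):
    md, ad = invert_and_solve_count(n)
    assert md == (8 * n**3 + 9 * n**2 - 5 * n) // 6
    assert ad == (8 * n**3 - 3 * n**2 - 5 * n) // 6
```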

The leading coefficient of n^3 for solution via matrix inversion is 4/3 and the leading coefficient for Gaussian elimination with backward substitution is 1/3, so for large n, matrix inversion requires roughly a factor of 4 more operations. DON'T COMPUTE THE INVERSE OF A MATRIX TO SOLVE SYSTEMS!
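
The same advice applies in practice: NumPy's np.linalg.solve factors A (LU with partial pivoting, via LAPACK) instead of inverting it. A quick sketch (the matrix construction and size here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shift keeps A well-conditioned
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)   # LU factorization + substitutions: ~n^3/3 multiplications
x_inv = np.linalg.inv(A) @ b      # forms A^{-1} first: ~4n^3/3 multiplications

# Both give the same answer here; solve does far less work (and is more accurate).
assert np.allclose(A @ x_solve, b)
assert np.allclose(x_solve, x_inv)
```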
