An n×n matrix A is said to be diagonally dominant if
|a_{ii}| > \sum_{j=1,\, j \neq i}^{n} |a_{ij}|, \quad \text{for all } 1 \leq i \leq n.
Jacobi's method and the Gauss-Seidel method converge if A is diagonally dominant.
We know that for an iterative method with x^{(k)} = g(x^{(k-1)}) = Tx^{(k-1)} + c, it follows that
\|x^{(k)} - x\| \leq \|T\|^k \|x^{(0)} - x\|
in any norm. We choose the l_\infty norm. We prove this only for the Gauss-Seidel method (the Jacobi method is easier). We show that
\|(D-L)^{-1}U \|_\infty < 1.
Let \|x\|_\infty = 1 and let y solve
(D-L)y = Ux, \quad y = (D-L)^{-1}Ux.
Then we need to show that \|y\|_\infty < 1 (which implies that \|(D-L)^{-1}U \|_\infty < 1). The components of x and y solve
a_{ii} y_i + \sum_{j = 1}^{i-1} a_{ij} y_j = -\sum_{j=i+1}^n a_{ij} x_j, \quad i = 1,2,\ldots,n.
There exists (at least) one component y_p of y such that |y_p| = \|y\|_\infty; we may assume \|y\|_\infty \neq 0, since otherwise the bound is immediate. We consider this equation for i = p:
a_{pp} y_p + \sum_{j = 1}^{p-1} a_{pj} y_j = -\sum_{j=p+1}^n a_{pj} x_j.
We use the (reverse) triangle inequality:
|a_{pp} y_p| - \left| \sum_{j = 1}^{p-1} a_{pj} y_j \right| \leq \left| \sum_{j=p+1}^n a_{pj} x_j \right|.
Since |y_p| = \|y\|_\infty, |y_j| \leq \|y\|_\infty, and |x_j| \leq \|x\|_\infty = 1, this gives
|a_{pp}| \|y\|_\infty - \sum_{j = 1}^{p-1} |a_{pj}| \|y\|_\infty \leq |a_{pp} y_p| - \left| \sum_{j = 1}^{p-1} a_{pj} y_j \right| \leq \sum_{j=p+1}^n |a_{pj}|,
and therefore
\|y\|_\infty \leq \frac{\sum_{j=p+1}^n |a_{pj}|}{|a_{pp}| - \sum_{j = 1}^{p-1} |a_{pj}|}.
Because of diagonal dominance,
|a_{pp}| = \sum_{j=p+1}^n |a_{pj}| + \sum_{j = 1}^{p-1} |a_{pj}| + c_p, \quad c_p > 0,
so that
|a_{pp}| - \sum_{j = 1}^{p-1} |a_{pj}| = \sum_{j=p+1}^n |a_{pj}| + c_p.
This implies
\|y\|_\infty \leq \frac{\sum_{j=p+1}^n |a_{pj}|}{|a_{pp}| - \sum_{j = 1}^{p-1} |a_{pj}|} = \frac{\sum_{j=p+1}^n |a_{pj}|}{\sum_{j=p+1}^n |a_{pj}| + c_p} < 1,
showing that \|(D-L)^{-1}U\|_\infty < 1 and convergence follows.
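As a quick numerical check of this result, the sketch below builds a small strictly diagonally dominant matrix (an illustrative choice, not one from the text) and evaluates \|(D-L)^{-1}U\|_\infty directly with NumPy.

```python
import numpy as np

# Small strictly diagonally dominant test matrix (an illustrative choice).
A = np.array([[ 4.0, -1.0,  1.0],
              [ 2.0,  5.0,  1.0],
              [ 1.0,  2.0,  6.0]])

# Splitting A = D - L - U: D diagonal, L strictly lower, U strictly upper.
D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)
U = -np.triu(A, k=1)

# Gauss-Seidel iteration matrix T = (D - L)^{-1} U and its infinity norm.
T = np.linalg.solve(D - L, U)
print(np.linalg.norm(T, np.inf))   # expected to be strictly less than 1
```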
For an n \times n linear system Ax = b, if
c = \min_{1 \leq i \leq n} \left( 1 - \sum_{j=1,\, j \neq i}^n \left|\frac{a_{ij}}{a_{ii}} \right| \right) > 0,
then the iterates \{x^{(k)}\} of either Jacobi's method or the Gauss-Seidel method satisfy
\|x^{(k)}-x\|_\infty \leq (1+c)^{-k} \|x^{(0)} - x \|_\infty.
From the proof of the theorem above (choosing p in the same way), it follows that
1 = \sum_{j=p+1}^n \left| \frac{a_{pj}}{a_{pp}} \right| + \sum_{j = 1}^{p-1} \left| \frac{a_{pj}}{a_{pp}} \right| + c_p, \quad c_p \geq c > 0.
Use the notation
S_p^+ = \sum_{j=p+1}^n \left| \frac{a_{pj}}{a_{pp}} \right|, \quad S_p^- = \sum_{j = 1}^{p-1} \left| \frac{a_{pj}}{a_{pp}} \right|.
Then
\|(D-L)^{-1}U\|_\infty \leq \frac{S_p^+}{S_p^+ + c_p} \leq \frac{S_p^+}{S_p^+ + c}.
For any c > 0, the function x/(x+c) is monotone increasing for x > 0. Because S_p^+ < 1, we have
\|(D-L)^{-1}U\|_\infty < \frac{1}{1+c}.
The same calculations can be performed for Jacobi's method to show
\|D^{-1}(L+U)\|_\infty \leq \frac{S_p^+ + S_p^-}{S_p^+ + S_p^- + c_p} \leq \frac{S_p^+ + S_p^-}{S_p^+ + S_p^- + c} < \frac{1}{1+c}.
We note that
\frac{S_p^+}{S_p^+ + c} \leq \frac{S_p^+ + S_p^-}{S_p^+ + S_p^- + c},
which demonstrates that for diagonally dominant matrices the Gauss-Seidel method converges more rapidly than does Jacobi's method.
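To see the theorem and this comparison in action, the sketch below runs a few Jacobi and Gauss-Seidel sweeps on an illustrative diagonally dominant system (the matrix, right-hand side, and iteration count are assumptions, not from the text) and prints both errors alongside the (1+c)^{-k}\|x^{(0)}-x\|_\infty bound.

```python
import numpy as np

# Illustrative diagonally dominant system (matrix and right-hand side are assumptions).
A = np.array([[ 4.0, -1.0,  1.0],
              [ 2.0,  5.0,  1.0],
              [ 1.0,  2.0,  6.0]])
b = np.array([1.0, 2.0, 3.0])
x_exact = np.linalg.solve(A, b)

# c = min_i (1 - sum_{j != i} |a_ij| / |a_ii|), as in the theorem above.
off_diag = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
c = np.min(1.0 - off_diag / np.abs(np.diag(A)))

def jacobi_step(x):
    d = np.diag(A)
    return (b - (A @ x - d * x)) / d          # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii

def gauss_seidel_step(x):
    x = x.copy()
    for i in range(len(b)):                   # use already-updated components for j < i
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

x_j = np.zeros(3)
x_gs = np.zeros(3)
e0 = np.linalg.norm(x_exact, np.inf)          # error of the zero initial guess
for k in range(1, 11):
    x_j, x_gs = jacobi_step(x_j), gauss_seidel_step(x_gs)
    print(k,
          np.linalg.norm(x_j - x_exact, np.inf),    # Jacobi error
          np.linalg.norm(x_gs - x_exact, np.inf),   # Gauss-Seidel error
          e0 / (1.0 + c) ** k)                      # theoretical bound
```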
Suppose that \hat x is an approximation to the solution of the linear system Ax = b. The corresponding residual vector is \hat r = b - A \hat x.
The goal of any iterative method that generates a sequence x^{(0)}, x^{(1)},\ldots, x^{(k)}, \ldots is to make the residual vectors
r^{(k)} = b - A x^{(k)} \to 0
rapidly as k \to \infty. The relaxation methods we now describe introduce a parameter \omega that gives flexibility to increase this convergence rate.
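In practice the residual is the quantity we can actually measure, so stopping tests are usually phrased in terms of it. Below is a minimal sketch of such a test; the function name and default tolerance are illustrative choices, not from the text.

```python
import numpy as np

# A minimal sketch of a relative-residual stopping test (illustrative name and tolerance).
def small_residual(A, b, x_k, tol=1e-8):
    r_k = b - A @ x_k                                   # residual of the current iterate
    return np.linalg.norm(r_k, np.inf) <= tol * np.linalg.norm(b, np.inf)
```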
Recall that the iteration for the Gauss-Seidel method is given by
g_{\mathrm{GS}}(x) = (D - L)^{-1}U x + (D - L)^{-1}b.
We can choose our iteration to be a weighted average of the current iterate and the next Gauss-Seidel iterate. Let \omega > 0, and consider
x^{(k)} = \omega\, g_{\mathrm{GS}} (x^{(k-1)}) + (1-\omega) x^{(k-1)}.
Applying this averaging component by component, and using the most recently updated components as in the Gauss-Seidel iteration, it follows that
x_i^{(k)} = (1-\omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left(b_i - \sum_{j < i} a_{ij} x_j^{(k)} - \sum_{j > i} a_{ij} x_j^{(k-1)} \right), \quad i = 1,2,\ldots,n.
To understand the convergence of the method we need it in the form x^{(k)} = T x^{(k-1)} + c, so we rearrange:
x_i^{(k)} + \frac{\omega}{a_{ii}} \sum_{j < i} a_{ij} x_j^{(k)} = (1-\omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left(b_i - \sum_{j > i} a_{ij} x_j^{(k-1)} \right), \quad i = 1,2,\ldots,n,
and, multiplying through by a_{ii},
a_{ii} x_i^{(k)} + \omega \sum_{j < i} a_{ij} x_j^{(k)} = (1-\omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j > i} a_{ij} x_j^{(k-1)} + \omega b_i, \quad i = 1,2,\ldots,n.
In matrix form this is
(D - \omega L) x^{(k)} = [(1-\omega) D + \omega U] x^{(k-1)} + \omega b.
Therefore, the iteration is given by
x^{(k)} = (D - \omega L)^{-1}[(1-\omega) D + \omega U] x^{(k-1)} + \omega (D - \omega L)^{-1}b.
If \omega = 0, then x^{(k)} = x^{(k-1)} and the iteration does nothing. If \omega = 1, the iteration is just the Gauss-Seidel method. For 0 < \omega < 1, the iteration is a more conservative approach to a Gauss-Seidel-type iteration; this is called an under-relaxation method. If \omega > 1 then, in some sense, the method is a more ambitious Gauss-Seidel-type method, and it is called an over-relaxation method. When 1 < \omega < 2 we call the resulting method Successive Over-Relaxation, or SOR.
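A minimal sketch of the componentwise update derived above follows; the matrix, right-hand side, value of \omega, and iteration count are illustrative choices only.

```python
import numpy as np

# Componentwise SOR update from the derivation above; matrix, right-hand side,
# omega, and iteration count here are illustrative choices.
def sor(A, b, omega, x0, num_iters):
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(num_iters):
        for i in range(n):
            # sum_{j<i} a_ij x_j^{(k)} + sum_{j>i} a_ij x_j^{(k-1)} (x is updated in place)
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b, omega=1.1, x0=np.zeros(3), num_iters=25)
print(x, np.linalg.norm(b - A @ x, np.inf))   # approximate solution and residual norm
```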
Choosing \omega is a difficult task. A good value is often found by experimentation. Some theorems about choosing \omega can be derived in very special cases:
If A is symmetric (A^T = A) and tridiagonal (a_{ij} = 0 if |i-j| > 1) with positive eigenvalues, then one should choose
\omega = \frac{2}{1 + \sqrt{1 - r^2}},
where r = \rho(D^{-1}(L+U)) is the spectral radius of the Jacobi iteration matrix.
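The sketch below applies this formula to a small symmetric, tridiagonal, positive-definite test matrix (an illustrative choice): it forms the Jacobi iteration matrix D^{-1}(L+U), estimates its spectral radius numerically, and evaluates \omega.

```python
import numpy as np

# Illustrative symmetric, tridiagonal matrix with positive eigenvalues.
A = np.array([[ 4.0, -1.0,  0.0,  0.0],
              [-1.0,  4.0, -1.0,  0.0],
              [ 0.0, -1.0,  4.0, -1.0],
              [ 0.0,  0.0, -1.0,  4.0]])

D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)
U = -np.triu(A, k=1)

# Spectral radius r of the Jacobi iteration matrix D^{-1}(L + U).
T_J = np.linalg.solve(D, L + U)
r = np.max(np.abs(np.linalg.eigvals(T_J)))

omega = 2.0 / (1.0 + np.sqrt(1.0 - r**2))
print(r, omega)   # here r is about 0.40, so omega is only slightly larger than 1
```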