Before we discuss our first numerical algorithm, we have to give a definition that will allow us to understand accuracy.
Suppose \{\beta_n\}_{n=1}^\infty, with \beta_n > 0, is a sequence known to converge to zero, and \{\alpha_n\}_{n=1}^\infty converges to a number \alpha. If a positive constant K exists such that
|\alpha_n - \alpha| \leq K \beta_n,
then we say that \{\alpha_n\}_{n=1}^\infty converges to \alpha with rate (or order) O(\beta_n).
In this case we also write
\alpha_n = \alpha + O(\beta_n)
to indicate that the error decays like \beta_n.
Consider \alpha_n = \frac{n+1}{n+2}. Then we know that \alpha_n \to 1 as n \to \infty. Find the rate of convergence. Note: when we want to find the rate, we really want the "best possible rate".
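One way to work this example is a direct computation of the error:
\left| \frac{n+1}{n+2} - 1 \right| = \frac{1}{n+2} \leq \frac{1}{n},
so \alpha_n = 1 + O(1/n), with constant K = 1. This is the best possible rate since 1/(n+2) is not bounded by a constant times any faster-decaying power of 1/n.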
Consider the sum \alpha_n = \sum_{k=0}^n \frac{1}{2^k}. Using that \alpha_n \to 2 as n \to \infty, find the rate of convergence of \alpha_n.
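For this example, the geometric sum formula gives the error exactly:
\alpha_n = \sum_{k=0}^n \frac{1}{2^k} = \frac{1 - (1/2)^{n+1}}{1 - 1/2} = 2 - 2^{-n},
so |\alpha_n - 2| = 2^{-n} and \alpha_n = 2 + O(2^{-n}).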
We can also talk about order of convergence when we do not have a sequence:
Suppose that for a positive function g(h) > 0 we have \lim_{h \to 0} g(h) = 0. We say that f(h) converges to L at rate O(g(h)) (equivalently, f(h) = L + O(g(h))) if
|f(h) - L| \leq K g(h)
for some positive constant K.
An approximation to the derivative of \sin(x) at x = 0 is given by
f(h) = \frac{\sin(h) - \sin(0)}{h}.
Show that f(h) = 1 + O(h^2).
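A sketch of the argument, using Taylor's theorem with remainder:
\sin(h) = h - \frac{h^3}{6}\cos(\xi) \quad \text{for some } \xi \text{ between } 0 \text{ and } h,
so
f(h) = \frac{\sin(h)}{h} = 1 - \frac{h^2}{6}\cos(\xi), \qquad |f(h) - 1| \leq \frac{1}{6} h^2,
which is the statement f(h) = 1 + O(h^2) with K = 1/6.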
We now consider the root-finding problem. Given a continuous function f, we want to find points x (called roots) where f(x) = 0. The Bisection method is one method for doing this.
Assume f \in C[a,b] and f(a) < 0 and f(b) > 0. The case where f(a) > 0 and f(b) < 0 is analogous. What we really require here is that f(a)f(b) < 0.
From the intermediate value theorem, we are guaranteed to have at least one (could be many!) value c such that f(c) = 0.
We have that f(a) < 0 and f(b) > 0. Set a_1 = a and b_1 = b. Let p_1 = (a_1+b_1)/2.
Then we look at the sign of f(p_1).
After this process, we've narrowed our search for a root from the interval [a,b] (which has length b-a) to either the interval
\left[a, \frac{a+b}{2} \right] \quad \text{or} \quad \left[ \frac{a+b}{2}, b \right].
Both have length (b-a)/2. Assume f(p_1) > 0. Then set a_2 = a_1, b_2 = p_1, p_2 = (a_2+b_2)/2 and repeat.
f = @(x) exp(-x) - x; % f(-1) = e + 1 > 0 and f(1) = 1/e - 1 < 0
x = linspace(-1,1,100);
plot(x,f(x)); % the plot shows a single sign change in [-1,1]
a = -1; b = 1;
p = (a+b)/2; % p = 0 and f(0) = 1 > 0, so the root lies in [0, 1]
a = p;
p = (a+b)/2; % p = 0.5 and f(0.5) > 0, so the root lies in [0.5, 1]
a = p;
p = (a+b)/2; % p = 0.75 and f(0.75) < 0, so the root lies in [0.5, 0.75]
b = p;
p % display the latest approximation
INPUT endpoints a,b; tolerance TOL; maximum number of iterations Nmax
OUTPUT approximate solution p of f(x) = 0; or failure message
Step 1: Set n = 1;
A = f(a);
Step 2: While n <= Nmax do Steps 3-6:
Step 3: Set p = (a+b)/2; P = f(p);
Step 4: If P = 0 or (b-a)/2 < TOL then
OUTPUT(p);
STOP.
Step 5: Set n = n + 1;
Step 6: If A*P > 0 then set a = p;
A = P;
Else set b = p;
Step 7: OUTPUT('Method failed after Nmax iterations, Nmax = ', Nmax);
STOP.
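The `bisection` function called in the MATLAB snippets below is assumed to implement exactly this pseudocode. For concreteness, here is a minimal Python sketch of the same algorithm (the function and variable names here are my own, chosen to mirror the pseudocode):

```python
import math

def bisection(f, a, b, tol, nmax):
    """Bisection method: approximate a root of f in [a, b].

    Assumes f is continuous on [a, b] and f(a)*f(b) < 0.
    """
    A = f(a)                             # Step 1
    for n in range(1, nmax + 1):         # Step 2
        p = (a + b) / 2                  # Step 3
        P = f(p)
        if P == 0 or (b - a) / 2 < tol:  # Step 4: exact root or small interval
            return p
        if A * P > 0:                    # Step 6: f(a) and f(p) share a sign,
            a, A = p, P                  #   so the root lies in [p, b]
        else:
            b = p                        #   otherwise it lies in [a, p]
    raise RuntimeError("Method failed after %d iterations" % nmax)

# Same example as in the MATLAB session below: f(x) = exp(-x) - x on [-1, 1]
root = bisection(lambda x: math.exp(-x) - x, -1, 1, 1e-9, 100)
```

Note that the sign test A*P > 0 works regardless of whether f(a) is the negative or the positive endpoint value.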
We will now determine the order of convergence of the Bisection method. In the algorithm we take p_n (the value of p in the nth iteration) to be the approximation of the root:
p_1 = \frac{a+b}{2}.
Let p^* be the actual root. We then estimate
|p_1 - p^*| \leq \frac{b-a}{2}.
So, the error after one step of the bisection algorithm is bounded by half the length of the interval being searched.
In the second step of the Bisection method we search an interval of length (b-a)/2 and so
|p_2 - p^*| \leq \frac{b-a}{4} = \frac{b-a}{2^2}.
From this, the general formula follows:
Suppose that f \in C[a,b] and f(a)\cdot f(b) < 0. The Bisection method generates a sequence \{p_n\}_{n=1}^\infty, approximating a root p^* such that
|p_n - p^*| \leq \frac{b-a}{2^n}.
Said another way, p_n = p^* + O(2^{-n}).
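A useful consequence of this bound is that we can predict in advance how many iterations guarantee a given tolerance:
\frac{b-a}{2^n} \leq \mathrm{TOL} \quad\Longleftrightarrow\quad n \geq \log_2\!\left(\frac{b-a}{\mathrm{TOL}}\right).
For example, on [a,b] = [-1,1] with \mathrm{TOL} = 10^{-9}, any n \geq \log_2(2 \times 10^9) \approx 31 iterations suffice, so the choice Nmax = 100 below is more than enough.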
f = @(x) exp(-x) - x;
p = bisection(f,-1,1,1e-9,100)
f(p)
Consider the function
f(x) = \frac{1}{4} - \frac{\sin(4x)}{x}.
We want to compute all the roots of this function. From the plot below, it appears there are 10.
f = @(x) 1/4 - sin(4*x)./x;
x = linspace(-6,6,100);
plot(x,f(x))
hold on
plot(x,0*x)
f = @(x) 1/4 - sin(4*x)./x; % our function
x = linspace(-6,6,100);     % place a grid on [-6,6]
nroots = 0;                 % counter for the number of roots found
                            % (avoid the name "roots", a MATLAB built-in)
p = [];                     % vector to collect the roots as they are computed
for i = 1:length(x)-1       % loop through the grid
    if f(x(i))*f(x(i+1)) < 0 % if f changes sign on [x(i), x(i+1)],
                             % there must be a root, and we use bisection
        nroots = nroots + 1;
        p(nroots) = bisection(f,x(i),x(i+1),1e-8,100);
    end
end
p
max(abs(f(p)))