SECANT METHOD
The Newton-Raphson algorithm requires the evaluation of two
functions (the function and its derivative) per iteration. If these
are complicated expressions, each iteration takes a considerable amount
of effort in hand calculation or a large amount of CPU time in machine computation.
Hence it is desirable to have a method that converges (please see the section
Order of the Numerical Methods for theoretical details) nearly as fast as Newton's
method yet involves the evaluation of only the function. Let
x_{0} and x_{1} be two initial approximations to the
root 's' of f(x) = 0, and let f(x_{0}) and f(x_{1})
be the corresponding function values. If x_{2} is the point of intersection
of the x-axis and the line joining the points (x_{0}, f(x_{0}))
and (x_{1}, f(x_{1})), then x_{2} is closer to 's'
than x_{0} and x_{1}. The equation relating x_{0},
x_{1} and x_{2} is found by considering the slope 'm':
m = (f(x_{1}) - f(x_{0})) / (x_{1} - x_{0}) = (0 - f(x_{1})) / (x_{2} - x_{1})

Cross-multiplying and solving for x_{2} - x_{1} gives

x_{2} - x_{1} = - f(x_{1}) * (x_{1} - x_{0}) / (f(x_{1}) - f(x_{0}))

x_{2} = x_{1} - f(x_{1}) * (x_{1} - x_{0}) / (f(x_{1}) - f(x_{0}))
or, in general, the iterative process can be written as

x_{i+1} = x_{i} - f(x_{i}) * (x_{i} - x_{i-1}) / (f(x_{i}) - f(x_{i-1})),   i = 1, 2, 3, ...
This formula is the same as that of the Regula-falsi scheme of root-bracketing
methods, but the two differ in implementation. The Regula-falsi method begins
with two initial approximations 'a' and 'b' such that a < s < b, where s is
the root of f(x) = 0. It proceeds to the next iteration by calculating c
(the x_{2} above) using the formula and then chooses one of the intervals
(a, c) or (c, b), depending on whether f(a) * f(c) < 0 or > 0 respectively.
The secant method, on the other hand, starts with two initial approximations
x_{0} and x_{1} (which need not bracket the root), calculates x_{2} by
the same formula as the Regula-falsi method, and then proceeds to the next
iteration without any root bracketing.
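The contrast can be sketched in code. The following Regula-falsi routine is a minimal sketch (the function name, tolerance, and iteration cap are illustrative choices, not from the text); it uses the same update formula but adds the bracket-selection step that the secant method skips:

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Regula-falsi: same update formula as the secant method,
    but keeps the root bracketed between a and b at every step."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        # Same formula as the secant method
        c = b - f(b) * (b - a) / (f(b) - f(a))
        if abs(f(c)) < tol:
            return c
        # Root bracketing: keep the sub-interval whose endpoints
        # still have opposite signs.
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
    return c
```

Because of the sign test, Regula-falsi can never lose the root, while the secant method trades that guarantee for simpler bookkeeping and (usually) faster convergence.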
Algorithm - Secant Method

Given an equation f(x) = 0
Let the initial guesses be x_{0} and x_{1}
Do
    x_{i+1} = x_{i} - f(x_{i}) * (x_{i} - x_{i-1}) / (f(x_{i}) - f(x_{i-1})),   i = 1, 2, 3, ...
while (neither convergence criterion C1 nor C2 is met)

C1. Fixing a priori the total number of iterations N.
C2. Testing whether |x_{i+1} - x_{i}| (where i is the iteration number) is
less than some tolerance limit, say epsilon, fixed a priori.
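The algorithm above can be written as a short routine. This is a minimal sketch (names and the defaults `eps` and `N` are illustrative), with both stopping criteria C1 and C2:

```python
import math

def secant(f, x0, x1, eps=1e-6, N=100):
    """Secant iteration x_{i+1} = x_i - f(x_i)*(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1})),
    stopping on C1 (N iterations exhausted) or C2 (|x_{i+1} - x_i| < eps)."""
    for _ in range(N):                  # C1: at most N iterations
        denom = f(x1) - f(x0)
        if denom == 0.0:                # guard: the formula is undefined here
            break
        x2 = x1 - f(x1) * (x1 - x0) / denom
        if abs(x2 - x1) < eps:          # C2: successive iterates agree
            return x2
        x0, x1 = x1, x2
    return x1
```

For instance, `secant(lambda x: math.cos(x) - x, 0.0, 1.0)` returns a value near 0.739, the root of cos(x) = x.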
Numerical Example:

Find the root of 3x + sin(x) - exp(x) = 0.

[Graph of f(x) = 3x + sin(x) - exp(x)]

Let the initial guesses be 0.0 and 1.0.

f(x) = 3x + sin(x) - exp(x)
i        0    1    2        3        4        5       6
x_{i}    0    1    0.471    0.308    0.363    0.36    0.36
So the iterative process converges to 0.36 in six iterations.
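The table can be reproduced with a short script (a sketch; variable names are illustrative):

```python
import math

def f(x):
    return 3 * x + math.sin(x) - math.exp(x)

x_prev, x_curr = 0.0, 1.0      # the initial guesses x_0 and x_1
iterates = [x_prev, x_curr]
for i in range(1, 6):          # compute x_2 through x_6
    x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
    iterates.append(x_next)
    x_prev, x_curr = x_curr, x_next

for i, x in enumerate(iterates):
    print(f"x_{i} = {x:.3f}")   # x_2 = 0.471, ..., x_6 = 0.360
```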
Worked out problems

Example 1: Find a root of cos(x) - x * exp(x) = 0    Solution
Example 2: Find a root of x^{4} - x - 10 = 0    Solution
Example 3: Find a root of x - exp(-x) = 0    Solution
Example 4: Find a root of exp(x) * (x^{2} - 5x + 2) + 1 = 0    Solution
Example 5: Find a root of x - sin(x) - (1/2) = 0    Solution
Example 6: Find a root of exp(-x) = 3 log(x)    Solution