NEWTON'S DIVIDED DIFFERENCE FORMULA
Let us assume that the function f(x) is linear. Then the ratio

    (f(x_{i}) - f(x_{j})) / (x_{i} - x_{j}),

where x_{i} and x_{j} are any two tabular points, is independent of x_{i} and x_{j}. This ratio is called the first divided difference of f(x) relative to x_{i} and x_{j} and is denoted by f [x_{i}, x_{j}]. That is,

    f [x_{i}, x_{j}] = (f(x_{i}) - f(x_{j})) / (x_{i} - x_{j}) = f [x_{j}, x_{i}]
Since the ratio is independent of x_{i} and x_{j}, we can write f [x_{0}, x] = f [x_{0}, x_{1}], i.e.

    (f(x) - f(x_{0})) / (x - x_{0}) = f [x_{0}, x_{1}]

⇒  f(x) = f(x_{0}) + (x - x_{0}) f [x_{0}, x_{1}]

         = [ (f_{1} - f_{0}) x + (f_{0}x_{1} - f_{1}x_{0}) ] / (x_{1} - x_{0})

         = f(x_{0}) (x_{1} - x)/(x_{1} - x_{0}) + f(x_{1}) (x - x_{0})/(x_{1} - x_{0})
So if f(x) is approximated with a linear polynomial, then the function value at any point x can be calculated by using

    f(x) ≈ P_{1}(x) = f(x_{0}) + (x - x_{0}) f [x_{0}, x_{1}]

where f [x_{0}, x_{1}] is the first divided difference of f relative to x_{0} and x_{1}.
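The linear case can be sketched directly in code. A minimal illustration (the function names and sample points here are my own, not from the text):

```python
def first_divided_difference(x0, f0, x1, f1):
    """f[x0, x1] = (f(x0) - f(x1)) / (x0 - x1); symmetric in its arguments."""
    return (f0 - f1) / (x0 - x1)

def p1(x, x0, f0, x1, f1):
    """Linear approximation P1(x) = f(x0) + (x - x0) * f[x0, x1]."""
    return f0 + (x - x0) * first_divided_difference(x0, f0, x1, f1)

# For a linear function the approximation is exact everywhere,
# e.g. f(x) = 2x + 1 sampled at x = 0 and x = 4:
print(p1(2.5, 0.0, 1.0, 4.0, 9.0))   # 6.0, which equals 2(2.5) + 1
```

Note that swapping the two points leaves the slope unchanged, matching f [x_{0}, x_{1}] = f [x_{1}, x_{0}].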
Similarly, if f(x) is a second degree polynomial, then the secant slope defined above is not constant but a linear function of x. Hence the ratio

    (f [x_{1}, x_{2}] - f [x_{0}, x_{1}]) / (x_{2} - x_{0})

is independent of x_{0}, x_{1} and x_{2}. This ratio is defined as the second divided difference of f relative to x_{0}, x_{1} and x_{2}. The second divided difference is denoted as

    f [x_{0}, x_{1}, x_{2}] = (f [x_{1}, x_{2}] - f [x_{0}, x_{1}]) / (x_{2} - x_{0})
Now again, since f [x_{0}, x_{1}, x_{2}] is independent of x_{0}, x_{1} and x_{2}, we have f [x_{1}, x_{0}, x] = f [x_{0}, x_{1}, x_{2}], i.e.

    (f [x_{0}, x] - f [x_{1}, x_{0}]) / (x - x_{1}) = f [x_{0}, x_{1}, x_{2}]

⇒  f [x_{0}, x] = f [x_{0}, x_{1}] + (x - x_{1}) f [x_{0}, x_{1}, x_{2}]

i.e.

    (f [x] - f [x_{0}]) / (x - x_{0}) = f [x_{0}, x_{1}] + (x - x_{1}) f [x_{0}, x_{1}, x_{2}]

⇒  f(x) = f [x_{0}] + (x - x_{0}) f [x_{0}, x_{1}] + (x - x_{0}) (x - x_{1}) f [x_{0}, x_{1}, x_{2}]
This is equivalent to the second degree polynomial approximation passing through the three data points

    x :  x_{0}   x_{1}   x_{2}
    f :  f_{0}   f_{1}   f_{2}
So whenever f(x) is approximated with a second degree polynomial, the value of f(x) at any point x can be computed using the above polynomial.
In the same way, the k^{th} divided difference is defined recursively by the relation

    f [x_{0}, x_{1}, . . ., x_{k}] = (f [x_{1}, x_{2}, . . ., x_{k}] - f [x_{0}, x_{1}, . . ., x_{k-1}]) / (x_{k} - x_{0})
The k^{th} degree polynomial approximation to f(x) can be written as

    f(x) = f [x_{0}] + (x - x_{0}) f [x_{0}, x_{1}] + (x - x_{0}) (x - x_{1}) f [x_{0}, x_{1}, x_{2}] + . . . + (x - x_{0}) (x - x_{1}) . . . (x - x_{k-1}) f [x_{0}, x_{1}, . . ., x_{k}].

This formula is called Newton's Divided Difference Formula. Once we have the divided differences of the function f relative to the tabular points, we can use the above formula to compute f(x) at any non-tabular point.
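Newton's formula translates directly into code. A sketch (the routine names are mine; the coefficients come from the recursive relation above, and the evaluation uses Horner-like nesting):

```python
def divided_differences(xs, fs):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]] using the recursion
    f[x0,...,xk] = (f[x1,...,xk] - f[x0,...,x_{k-1}]) / (xk - x0)."""
    n = len(xs)
    col = list(fs)                      # zeroth divided differences
    coeffs = [col[0]]
    for order in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + order] - xs[i])
               for i in range(n - order)]
        coeffs.append(col[0])
    return coeffs

def newton_eval(x, xs, coeffs):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = coeffs[i] + (x - xs[i]) * result
    return result

# Sanity check on f(x) = x^2 sampled at 0, 1, 2: interpolation is exact.
c = divided_differences([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
print(newton_eval(1.5, [0.0, 1.0, 2.0], c))   # 2.25
```

The nested evaluation avoids recomputing the products (x - x_{0})(x - x_{1}) . . . for every term.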
Computing divided differences using a divided difference table:
Let us consider the points (x_{1}, f_{1}), (x_{2}, f_{2}), (x_{3}, f_{3}) and (x_{4}, f_{4}), where x_{1}, x_{2}, x_{3} and x_{4} are not necessarily equidistant points. Then the divided difference table can be written as

    x_{i}   f_{i}   f [x_{i}, x_{j}]                                      f [x_{i}, x_{j}, x_{k}]                                                              f [x_{i}, x_{j}, x_{k}, x_{l}]

    x_{1}   f_{1}
                    f [x_{1}, x_{2}] = (f_{2} - f_{1})/(x_{2} - x_{1})
    x_{2}   f_{2}                                                         f [x_{1}, x_{2}, x_{3}] = (f [x_{2}, x_{3}] - f [x_{1}, x_{2}])/(x_{3} - x_{1})
                    f [x_{2}, x_{3}] = (f_{3} - f_{2})/(x_{3} - x_{2})                                                                                         f [x_{1}, x_{2}, x_{3}, x_{4}] = (f [x_{2}, x_{3}, x_{4}] - f [x_{1}, x_{2}, x_{3}])/(x_{4} - x_{1})
    x_{3}   f_{3}                                                         f [x_{2}, x_{3}, x_{4}] = (f [x_{3}, x_{4}] - f [x_{2}, x_{3}])/(x_{4} - x_{2})
                    f [x_{3}, x_{4}] = (f_{4} - f_{3})/(x_{4} - x_{3})
    x_{4}   f_{4}
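The table can also be built programmatically. A sketch (the function name is mine), storing the j-th divided differences in column j; the sanity check uses four of the points from the worked example that follows:

```python
def divided_difference_table(xs, fs):
    """Upper-triangular table with table[i][j] = f[x_i, ..., x_{i+j}];
    column 0 holds the tabulated values f_i."""
    n = len(xs)
    table = [[0.0] * n for _ in range(n)]
    for i in range(n):
        table[i][0] = fs[i]
    for j in range(1, n):
        for i in range(n - j):
            table[i][j] = ((table[i + 1][j - 1] - table[i][j - 1])
                           / (xs[i + j] - xs[i]))
    return table

# Four points of the worked example (data from f(x) = 3x^3 - 5x^2 + 4x + 1):
t = divided_difference_table([0.0, 1.0, 3.0, 4.0], [1.0, 3.0, 49.0, 129.0])
print(t[0])   # [1.0, 2.0, 7.0, 3.0] -- the top row gives the Newton coefficients
```

The top row of the table is exactly the sequence f [x_{0}], f [x_{0}, x_{1}], f [x_{0}, x_{1}, x_{2}], . . . needed by Newton's formula.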
Example : Compute f(0.3) for the data

    x :   0    1    3    4    7
    f :   1    3   49  129  813

using Newton's divided difference formula.
Solution : Divided difference table

    x_{i}   f_{i}
      0       1
                      2
      1       3               7
                     23               3
      3      49              19
                     80               3
      4     129              37
                    228
      7     813
Now Newton's divided difference formula is

    f(x) = f [x_{0}] + (x - x_{0}) f [x_{0}, x_{1}] + (x - x_{0}) (x - x_{1}) f [x_{0}, x_{1}, x_{2}] + (x - x_{0}) (x - x_{1}) (x - x_{2}) f [x_{0}, x_{1}, x_{2}, x_{3}]

    f(0.3) = 1 + (0.3 - 0) 2 + (0.3)(0.3 - 1) 7 + (0.3)(0.3 - 1)(0.3 - 3) 3
           = 1.831

Since the given data is for the polynomial f(x) = 3x^{3} - 5x^{2} + 4x + 1, the analytical value is f(0.3) = 1.831. The computed value matches the analytical value because the given data come from a third degree polynomial, and with five data points one can reproduce exactly any polynomial of degree up to four.
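The hand computation above can be checked mechanically. A throwaway script (the coefficients are read off the top diagonal of the divided difference table):

```python
xs = [0, 1, 3, 4, 7]
coeffs = [1, 2, 7, 3]     # f[x0], f[x0,x1], f[x0,x1,x2], f[x0,x1,x2,x3]

x = 0.3
value = (coeffs[0]
         + (x - xs[0]) * coeffs[1]
         + (x - xs[0]) * (x - xs[1]) * coeffs[2]
         + (x - xs[0]) * (x - xs[1]) * (x - xs[2]) * coeffs[3])
print(round(value, 3))                            # 1.831

# Cross-check against the source polynomial f(x) = 3x^3 - 5x^2 + 4x + 1:
print(round(3 * x**3 - 5 * x**2 + 4 * x + 1, 3))  # 1.831
```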
Properties :
1. If f(x) is a polynomial of degree N, then the N^{th} divided difference of f(x) is a constant.
Proof : Consider the first divided difference of x^{n} between the points x and x + h:

    D x^{n} = ((x + h)^{n} - x^{n}) / ((x + h) - x) = (n h x^{n-1} + . . .) / h = a polynomial of degree (n - 1)

Also, since the divided difference operator D is a linear operator, D of any N^{th} degree polynomial is an (N-1)^{th} degree polynomial, the second D is an (N-2)^{th} degree polynomial, and so on; hence the N^{th} divided difference of an N^{th} degree polynomial is a constant.
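This property is easy to check numerically. An illustration (the point sets are arbitrary choices of mine) using the cubic from the example above, whose third divided difference should always equal its leading coefficient 3:

```python
def divided_difference(xs, fs):
    """f[x_0, ..., x_k] computed by the recursive definition."""
    if len(xs) == 1:
        return fs[0]
    return ((divided_difference(xs[1:], fs[1:])
             - divided_difference(xs[:-1], fs[:-1]))
            / (xs[-1] - xs[0]))

p = lambda x: 3 * x**3 - 5 * x**2 + 4 * x + 1   # degree 3, leading coefficient 3
for pts in ([0, 1, 3, 4], [1, 2, 5, 9], [-2.0, 0.0, 0.5, 6.0]):
    print(round(divided_difference(pts, [p(t) for t in pts]), 6))   # 3.0 each time
```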
2. If x_{0}, x_{1}, x_{2}, . . ., x_{n} are (n+1) discrete points, then the n^{th} divided difference is equal to

    f [x_{0}, x_{1}, x_{2}, . . ., x_{n}] = f_{0} / ((x_{0} - x_{1}) . . . (x_{0} - x_{n})) + . . . + f_{n} / ((x_{n} - x_{0}) . . . (x_{n} - x_{n-1}))
Proof : If n = 0, then f [x_{0}] = f(x_{0}), hence the result is true. Let us assume that the result is valid up to n = k:

    f [x_{0}, x_{1}, . . ., x_{k}] = f_{0} / ((x_{0} - x_{1}) . . . (x_{0} - x_{k})) + . . . + f_{k} / ((x_{k} - x_{0}) . . . (x_{k} - x_{k-1}))

Consider the case n = k + 1:

    f [x_{0}, x_{1}, . . ., x_{k}, x_{k+1}] = (f [x_{1}, x_{2}, . . ., x_{k+1}] - f [x_{0}, x_{1}, . . ., x_{k}]) / (x_{k+1} - x_{0})

    = 1/(x_{k+1} - x_{0}) [ f_{1} / ((x_{1} - x_{2}) . . . (x_{1} - x_{k+1})) + . . . + f_{k+1} / ((x_{k+1} - x_{1}) . . . (x_{k+1} - x_{k})) ]
    - 1/(x_{k+1} - x_{0}) [ f_{0} / ((x_{0} - x_{1}) . . . (x_{0} - x_{k})) + . . . + f_{k} / ((x_{k} - x_{0}) . . . (x_{k} - x_{k-1})) ]

    = f_{0} / ((x_{0} - x_{1}) . . . (x_{0} - x_{k+1})) + f_{1} / ((x_{1} - x_{2}) . . . (x_{1} - x_{k})(x_{k+1} - x_{0})) [ 1/(x_{1} - x_{k+1}) - 1/(x_{1} - x_{0}) ] + . . . + f_{k+1} / ((x_{k+1} - x_{0}) . . . (x_{k+1} - x_{k}))

    = f_{0} / ((x_{0} - x_{1}) . . . (x_{0} - x_{k+1})) + f_{1} / ((x_{1} - x_{0})(x_{1} - x_{2}) . . . (x_{1} - x_{k+1})) + . . . + f_{k+1} / ((x_{k+1} - x_{0}) . . . (x_{k+1} - x_{k}))

since 1/(x_{1} - x_{k+1}) - 1/(x_{1} - x_{0}) = (x_{k+1} - x_{0}) / ((x_{1} - x_{k+1})(x_{1} - x_{0})), and similarly for the other middle terms. This proves the result for n = k + 1.
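Property 2 can also be verified numerically. A sketch (the helper names are mine) comparing the symmetric sum with the recursive definition on data from the earlier worked example:

```python
from math import prod

def dd_recursive(xs, fs):
    """f[x_0, ..., x_k] by the recursive definition."""
    if len(xs) == 1:
        return fs[0]
    return ((dd_recursive(xs[1:], fs[1:]) - dd_recursive(xs[:-1], fs[:-1]))
            / (xs[-1] - xs[0]))

def dd_symmetric(xs, fs):
    """Property 2: f[x_0,...,x_n] = sum_i f_i / prod_{j != i} (x_i - x_j)."""
    return sum(fs[i] / prod(xs[i] - xs[j] for j in range(len(xs)) if j != i)
               for i in range(len(xs)))

xs = [0.0, 1.0, 3.0, 4.0]
fs = [1.0, 3.0, 49.0, 129.0]    # four points of the earlier worked example
print(round(dd_recursive(xs, fs), 6), round(dd_symmetric(xs, fs), 6))   # 3.0 3.0
```

A consequence of the symmetric form is that a divided difference does not depend on the order of its arguments, which is what the next property exploits.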
3. Sheppard's zigzag rule :
Consider the divided difference table for the data points (x_{0}, f_{0}), (x_{1}, f_{1}), (x_{2}, f_{2}) and (x_{3}, f_{3}). In the difference table, the dotted line and the solid line give two different paths starting from the function values and leading to the highest divided difference. Sheppard's zigzag rule says that the function value at any non-tabulated point computed along the dotted line or along the solid line is the same, provided the order of the x_{i} is taken in the direction of the zigzag line. That is, f(x) along the dotted line can be approximated as

    f(x) = f_{0} + (x - x_{0}) f [x_{0}, x_{1}] + (x - x_{0}) (x - x_{1}) f [x_{0}, x_{1}, x_{2}] + (x - x_{0}) (x - x_{1}) (x - x_{2}) f [x_{0}, x_{1}, x_{2}, x_{3}].

Similarly, f(x) along the solid line is equivalent to

    f(x) = f_{2} + (x - x_{2}) f [x_{1}, x_{2}] + (x - x_{2}) (x - x_{1}) f [x_{1}, x_{2}, x_{3}] + (x - x_{2}) (x - x_{1}) (x - x_{3}) f [x_{0}, x_{1}, x_{2}, x_{3}].
Example : Find f(1.5) from the data points

    x    :   0      0.5      1        2
    f(x) :   1    1.8987   3.7183   11.3891
f(1.5) along the dotted line is

    f(1.5) = 1 + (1.5 - 0) × 1.7974 + (1.5 - 0)(1.5 - 0.5) × 1.8418 + (1.5 - 0)(1.5 - 0.5)(1.5 - 1) × 0.4229
           = 6.776

Similarly, f(1.5) along the solid line is

    f(1.5) = 3.7183 + (1.5 - 1) × 3.6392 + (1.5 - 1)(1.5 - 0.5) × 2.6877 + (1.5 - 1)(1.5 - 0.5)(1.5 - 2) × 0.4229
           = 6.776

The data is given for f(x) = x^{2} + e^{x}, and the analytical value is f(1.5) = 6.7317.
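The zigzag rule reflects the fact that the Newton form produces the same interpolating polynomial for any ordering of the points. A quick check on this example's data (the routine name is mine):

```python
def newton_value(x, xs, fs):
    """Build Newton coefficients for the points in the given order, then evaluate."""
    n = len(xs)
    col = list(fs)
    coeffs = [col[0]]
    for order in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + order] - xs[i])
               for i in range(n - order)]
        coeffs.append(col[0])
    result = coeffs[-1]
    for i in range(n - 2, -1, -1):
        result = coeffs[i] + (x - xs[i]) * result
    return result

xs = [0.0, 0.5, 1.0, 2.0]
fs = [1.0, 1.8987, 3.7183, 11.3891]
natural  = newton_value(1.5, xs, fs)
shuffled = newton_value(1.5, [1.0, 0.5, 2.0, 0.0], [3.7183, 1.8987, 11.3891, 1.0])
print(round(natural, 3), round(shuffled, 3))   # 6.776 6.776
```

Both orderings give the same value because the divided differences are symmetric functions of their arguments.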
