Zeroing in on Zeros

Given a function {f(x)} of a real variable, we often have to find the values of {x} for which the function is zero. A simple iterative method was devised by Isaac Newton and refined by Joseph Raphson. It is known either as Newton’s method or as the Newton-Raphson method. It usually produces highly accurate approximations to the roots of the equation {f(x) = 0}.


A rational function with five real zeros and a pole at x = 1.

We start with {f(x)}, a function of a single variable {x}. We assume that its derivative {f'(x)} is known or can be calculated. An initial value, or first guess, {x_0} is selected. This may be a completely arbitrary value, but it is better to use any available knowledge of the function to make an informed guess. Provided the initial value {x_0} is close to a root, the value {x_1} given by

\displaystyle x_{1} = x_{0} - \frac {f(x_{0})}{f'(x_{0})}

is a better approximation of the root than {x_0}. Geometrically, {(x_1, 0)} is the point where the tangent to the graph of {f(x)} at {(x_0, f(x_0))} intersects the {x}-axis (see figure).


The Newton-Raphson method: the first estimate is {x_0}. The tangent at {(x_0,f(x_0))} cuts the {x}-axis at {x_1}, the next estimate. The process is then iterated to convergence.

This process may be iterated to obtain successively better approximations:

\displaystyle x_{n+1} = x_{n} - \frac {f(x_{n})}{f'(x_{n})}

until the required accuracy is achieved. Normally, the procedure converges very quickly. Indeed, convergence is quadratic, with the number of accurate digits doubling at each stage.
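The iteration above takes only a few lines of code. Here is a minimal Python sketch; the function name `newton` and the test problem {\cos x = x} are illustrative choices, not part of the article.

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Solve cos(x) = x, i.e. find the zero of f(x) = cos(x) - x, starting from x0 = 1.
root = newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 1.0)
print(root)  # ~0.7390851332151607
```

Only a handful of iterations are needed here, reflecting the quadratic convergence described above.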

A Simple Example: Roots of Numbers

Suppose we want the square root of a number {N}. We consider the function

\displaystyle f(x) = x^2 - N

and seek the value of {x} for which this vanishes. Let us put {x_0 = 1}. The derivative of {f(x)} is immediately available: {f^\prime(x) = 2x}. Then the iterative procedure is

\displaystyle x_{n+1} = x_{n} - \frac{f(x_{n})}{f'(x_{n})} = x_{n} - \frac{x_n^2-N}{2 x_n} \,.

Simplifying, we get the algorithm

\displaystyle x_{n+1} = \frac{1}{2}\left(x_{n} + \frac{N}{x_n}\right) \,.

We can easily iterate this.

For example, if we take {N=17}, the successive approximations, to five decimal places, are {\{1.00000, 9.00000, 5.44444, 4.28345, 4.12611, 4.12311, 4.12311 \}}. The exact value is {\sqrt{17}=4.123105626}.
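The square-root iteration is easily checked in Python; the helper name `sqrt_newton` is an illustrative choice.

```python
def sqrt_newton(N, x0=1.0, n_steps=6):
    """Newton iteration x_{n+1} = (x_n + N/x_n)/2 for the square root of N."""
    xs = [x0]
    x = x0
    for _ in range(n_steps):
        x = 0.5 * (x + N / x)
        xs.append(x)
    return xs

# Reproduce the sequence of approximations to sqrt(17), to five decimal places.
for x in sqrt_newton(17):
    print(f"{x:.5f}")
# 1.00000, 9.00000, 5.44444, 4.28345, 4.12611, 4.12311, 4.12311
```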

Wider Applications

Newton’s method can be applied to find the zeros of functions of a complex variable. Each zero has a basin of attraction in the complex plane, comprising the set of all initial values for which the method converges to that zero. For many complex functions, the boundaries of the basins of attraction are fractal in nature.


Basins of attraction for Newton-Raphson method for the mapping {f : z \rightarrow z^3-1}.

This figure shows the three basins of attraction for the function {f:z\rightarrow z^3 - 1}. The equation {z^3 - 1 = 0} has three roots, at {z_1 = 1}, {z_2 = -0.5 + i \, 0.866} and {z_3 = -0.5 - i \, 0.866}. If the initial value {z_0} is in the red region, the Newton-Raphson iterations converge to {z_1}. The red region is the basin of attraction for {z_1}. Likewise, initial values in the blue and green regions converge to the two complex roots. The figure indicates the intricate nature of the interfaces between the three regions: the interfaces are fractals.
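A picture like this is generated by classifying each starting point by the root it converges to. The following Python sketch (the function name `newton_basin` is an illustrative choice) applies the Newton iteration for {z^3 - 1} to a complex starting value and reports which cube root of unity is reached; scanning a grid of starting values and colouring each by the returned root reproduces the fractal image.

```python
import cmath

def newton_basin(z0, n_iter=60):
    """Run Newton's method for f(z) = z^3 - 1 from z0 and return the
    cube root of unity it converges to, or None if none is reached."""
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # 1, e^{2pi i/3}, e^{-2pi i/3}
    z = z0
    for _ in range(n_iter):
        if z == 0:            # derivative 3z^2 vanishes; iteration undefined
            return None
        z = z - (z**3 - 1) / (3 * z**2)
    for r in roots:
        if abs(z - r) < 1e-6:
            return r
    return None

print(newton_basin(2.0))            # converges to the real root 1
print(newton_basin(-0.5 + 0.9j))    # converges to the root near -0.5 + 0.866i
```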

Newton’s method can also be used to solve systems of equations. Suppose {\mathbf{F} : \mathbb{R}^k \rightarrow \mathbb{R}^k} and we seek {\mathbf{x}} such that {\mathbf{F}(\mathbf{x}) = \mathbf{0}}. The inverse of the {k\times k} Jacobian matrix {\boldsymbol{\mathsf{J}} = \partial(F_1, F_2, \dots ,F_k)/\partial(x_1, x_2, \dots ,x_k)} is then used in place of the reciprocal of the derivative, so that

\displaystyle \mathbf{x}_{n+1} = \mathbf{x}_{n} - (\boldsymbol{\mathsf{J}}_{n})^{-1} \mathbf{F}(\mathbf{x}_{n})

Rather than computing the inverse of the Jacobian matrix, we solve the system of linear equations

\displaystyle \boldsymbol{\mathsf{J}}_n (\mathbf{x}_{n+1} - \mathbf{x}_{n} ) = - \mathbf{F}(\mathbf{x}_{n})

for the change {(\mathbf{x}_{n+1}-\mathbf{x}_{n})}.
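Here is a minimal two-dimensional sketch in Python; the example system {x^2 + y^2 = 4}, {xy = 1} and the function name `newton2d` are illustrative choices, not from the article. Each step solves the linear system {\boldsymbol{\mathsf{J}}\,\Delta = -\mathbf{F}} (here via Cramer’s rule for the {2\times 2} case) instead of inverting the Jacobian.

```python
def newton2d(F, J, x0, y0, n_iter=20):
    """Newton's method for a 2x2 system: at each step solve
    J . (dx, dy) = -F via Cramer's rule, then update (x, y)."""
    x, y = x0, y0
    for _ in range(n_iter):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c          # assumed nonzero along the iteration
        dx = (-f1 * d + f2 * b) / det
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
    return x, y

# Illustrative system: x^2 + y^2 = 4, xy = 1, with Jacobian [[2x, 2y], [y, x]].
F = lambda x, y: (x**2 + y**2 - 4, x * y - 1)
J = lambda x, y: ((2 * x, 2 * y), (y, x))
x, y = newton2d(F, J, 2.0, 0.5)
```

Solving the linear system rather than forming {\boldsymbol{\mathsf{J}}^{-1}} is both cheaper and numerically safer, which is why the article recommends it.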

For several other applications of the Newton-Raphson method, see the Wikipedia articles referenced below.

Origins of the Method

A special case of Newton’s method, the Babylonian root-finding method, has been known since ancient times. It was used for calculating square roots; for {\sqrt{2}}, it reads

\displaystyle x_{n+1} = \frac{x_{n}}{2} + \frac{1}{x_n} \,.

which is the iteration derived above with {N = 2}.

Newton’s method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. Earlier, in 1669, Isaac Newton had considered a special case of the method.

In 1690, Joseph Raphson published a simplified description in his Analysis Aequationum Universalis. Raphson described the method in terms of the successive approximations {x_n} instead of the more complicated sequence of polynomials used by Newton. In 1740, Thomas Simpson described Newton’s method as an iterative method for solving general nonlinear equations — not just polynomials — essentially in the form presented above (see the Wikipedia article “Newton’s Method”).

Joseph Raphson

Very few details about Joseph Raphson are available. The following remarks are based on a cursory glance at a few websites, and may not be completely accurate. The Mathematical Gazetteer of the British Isles states that Raphson studied at Jesus College, Cambridge, and was awarded a Master of Arts degree in 1692. His writings indicate a high level of scholarship. He seems to have been wealthy, but no will has been found on record. There are some indications that he may have had Irish origins. Raphson published his Analysis Aequationum Universalis in 1690; in this work he presented his simplification of Newton’s iterative method for finding the roots of equations.


{\bullet} Wikipedia article “Newton’s Method”.

{\bullet} Wikipedia article “Newton Fractal”.

