In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials.[1][2] The method was published by Charles William Clenshaw in 1955. It is a generalization of Horner's method for evaluating a linear combination of monomials.
It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.[3]
Clenshaw algorithm
In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions $\phi_{k}(x)$:

$$S(x)=\sum _{k=0}^{n}a_{k}\phi _{k}(x),$$

where $\phi _{k},\;k=0,1,\ldots$ is a sequence of functions that satisfy the linear recurrence relation

$$\phi _{k+1}(x)=\alpha _{k}(x)\,\phi _{k}(x)+\beta _{k}(x)\,\phi _{k-1}(x),$$

where the coefficients $\alpha _{k}(x)$ and $\beta _{k}(x)$ are known in advance.
The algorithm is most useful when the $\phi _{k}(x)$ are functions that are complicated to compute directly, but $\alpha _{k}(x)$ and $\beta _{k}(x)$ are particularly simple. In the most common applications, $\alpha (x)$ does not depend on $k$, and $\beta$ is a constant that depends on neither $x$ nor $k$.
To perform the summation for a given series of coefficients $a_{0},\ldots ,a_{n}$, compute the values $b_{k}(x)$ by the "reverse" recurrence formula:

$${\begin{aligned}b_{n+1}(x)&=b_{n+2}(x)=0,\\b_{k}(x)&=a_{k}+\alpha _{k}(x)\,b_{k+1}(x)+\beta _{k+1}(x)\,b_{k+2}(x).\end{aligned}}$$

Note that this computation makes no direct reference to the functions $\phi _{k}(x)$. After computing $b_{2}(x)$ and $b_{1}(x)$, the desired sum can be expressed in terms of them and the simplest functions $\phi _{0}(x)$ and $\phi _{1}(x)$:

$$S(x)=\phi _{0}(x)\,a_{0}+\phi _{1}(x)\,b_{1}(x)+\beta _{1}(x)\,\phi _{0}(x)\,b_{2}(x).$$
See Fox and Parker[4] for more information and stability analyses.
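In code, the reverse recurrence is a single backward loop. A minimal Python sketch of the general algorithm (the function name and the convention of passing $\alpha _{k}$ and $\beta _{k}$ as callables are illustrative choices, not a standard library interface):

```python
def clenshaw(a, alpha, beta, phi0, phi1, x):
    """Evaluate S(x) = sum_k a[k] * phi_k(x) by the Clenshaw recurrence.

    a          : coefficients a_0 .. a_n
    alpha, beta: callables (k, x) -> alpha_k(x), beta_k(x)
    phi0, phi1 : the values phi_0(x) and phi_1(x)
    """
    n = len(a) - 1
    b1 = b2 = 0.0                       # b_{n+1} = b_{n+2} = 0
    for k in range(n, 0, -1):           # reverse recurrence down to b_1
        # b_k = a_k + alpha_k(x) b_{k+1} + beta_{k+1}(x) b_{k+2}
        b1, b2 = a[k] + alpha(k, x) * b1 + beta(k + 1, x) * b2, b1
    # S(x) = phi_0 a_0 + phi_1 b_1 + beta_1 phi_0 b_2
    return phi0 * a[0] + phi1 * b1 + beta(1, x) * phi0 * b2
```

For example, with $\alpha =2x$, $\beta =-1$, $\phi _{0}=1$, $\phi _{1}=x$ this evaluates a Chebyshev series.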
Examples
Horner as a special case of Clenshaw
A particularly simple case occurs when evaluating a polynomial of the form

$$S(x)=\sum _{k=0}^{n}a_{k}x^{k}.$$

The functions are simply

$$\phi _{0}(x)=1,\quad \phi _{k}(x)=x^{k}=x\,\phi _{k-1}(x),$$

and are produced by the recurrence coefficients $\alpha (x)=x$ and $\beta =0$.
In this case, the recurrence formula to compute the sum is

$$b_{k}(x)=a_{k}+x\,b_{k+1}(x),$$

and the sum is simply

$$S(x)=a_{0}+x\,b_{1}(x)=b_{0}(x),$$

which is exactly the usual Horner's method.
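This collapse to Horner's rule can be sketched in a few lines of Python (the function name is illustrative):

```python
def horner(a, x):
    """Horner's method: Clenshaw with alpha(x) = x and beta = 0,
    so the recurrence reduces to b_k = a_k + x * b_{k+1}."""
    b = 0.0                      # b_{n+1} = 0
    for ak in reversed(a):       # run the recurrence down to b_0
        b = ak + x * b
    return b                     # b_0(x) = S(x)
```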
Special case for Chebyshev series
Consider a truncated Chebyshev series

$$p_{n}(x)=a_{0}+a_{1}T_{1}(x)+a_{2}T_{2}(x)+\cdots +a_{n}T_{n}(x).$$

The coefficients in the recursion relation for the Chebyshev polynomials are

$$\alpha (x)=2x,\quad \beta =-1,$$

with the initial conditions

$$T_{0}(x)=1,\quad T_{1}(x)=x.$$

Thus, the recurrence is

$$b_{k}(x)=a_{k}+2x\,b_{k+1}(x)-b_{k+2}(x),$$

and the final sum is

$$p_{n}(x)=a_{0}+x\,b_{1}(x)-b_{2}(x).$$

One way to evaluate this is to continue the recurrence one more step, and compute

$$b_{0}(x)=a_{0}+2x\,b_{1}(x)-b_{2}(x),$$

followed by

$$p_{n}(x)={\tfrac {1}{2}}\left[a_{0}+b_{0}(x)-b_{2}(x)\right],$$

in which the coefficient $a_{0}$ effectively appears doubled inside the bracket.
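A direct transcription of the Chebyshev recurrence (a minimal Python sketch; `chebyshev_clenshaw` is an illustrative name, not a library routine):

```python
def chebyshev_clenshaw(a, x):
    """Evaluate p_n(x) = a_0 + sum_{k=1}^n a_k T_k(x) by Clenshaw's recurrence."""
    b1 = b2 = 0.0                          # b_{n+1} = b_{n+2} = 0
    for k in range(len(a) - 1, 0, -1):     # b_k = a_k + 2x b_{k+1} - b_{k+2}
        b1, b2 = a[k] + 2 * x * b1 - b2, b1
    return a[0] + x * b1 - b2              # p_n(x) = a_0 + x b_1 - b_2
```

For a quick check, $p_{3}(0.3)$ with coefficients $(1,2,3,4)$ equals $1+2T_{1}(0.3)+3T_{2}(0.3)+4T_{3}(0.3)=-4.028$.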
Meridian arc length on the ellipsoid
Clenshaw summation is extensively used in geodetic applications.[2] A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form

$$m(\theta )=C_{0}\,\theta +C_{1}\sin \theta +C_{2}\sin 2\theta +\cdots +C_{n}\sin n\theta .$$

Leaving off the initial $C_{0}\,\theta$ term, the remainder is a summation of the appropriate form. There is no leading term because $\phi _{0}(\theta )=\sin 0=0$.
The recurrence relation for $\sin k\theta$ is

$$\sin(k+1)\theta =2\cos \theta \sin k\theta -\sin(k-1)\theta ,$$

making the coefficients in the recursion relation

$$\alpha _{k}(\theta )=2\cos \theta ,\quad \beta _{k}=-1,$$

and the evaluation of the series is given by

$${\begin{aligned}b_{n+1}(\theta )&=b_{n+2}(\theta )=0,\\b_{k}(\theta )&=C_{k}+2\cos \theta \,b_{k+1}(\theta )-b_{k+2}(\theta ),\quad {\text{for }}n\geq k\geq 1.\end{aligned}}$$
The final step is made particularly simple because $\phi _{0}(\theta )=\sin 0=0$, so the end of the recurrence is simply $b_{1}(\theta )\sin \theta$; the $C_{0}\,\theta$ term is added separately:

$$m(\theta )=C_{0}\,\theta +b_{1}(\theta )\sin \theta .$$

Note that the algorithm requires only the evaluation of two trigonometric quantities, $\cos \theta$ and $\sin \theta$.
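The meridian-arc recurrence above translates directly into a short loop; a minimal Python sketch (the function name is illustrative):

```python
import math

def sine_series(C, theta):
    """Evaluate m(theta) = C[0]*theta + sum_{k=1}^n C[k] sin(k*theta)
    by Clenshaw summation. Only cos(theta) and sin(theta) are
    evaluated, regardless of the number of terms n."""
    c = math.cos(theta)
    b1 = b2 = 0.0                          # b_{n+1} = b_{n+2} = 0
    for k in range(len(C) - 1, 0, -1):     # b_k = C_k + 2 cos(theta) b_{k+1} - b_{k+2}
        b1, b2 = C[k] + 2 * c * b1 - b2, b1
    return C[0] * theta + b1 * math.sin(theta)
```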
Difference in meridian arc lengths
Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write

$$m(\theta _{1})-m(\theta _{2})=C_{0}(\theta _{1}-\theta _{2})+\sum _{k=1}^{n}2C_{k}\sin {\bigl (}{\tfrac {1}{2}}k(\theta _{1}-\theta _{2}){\bigr )}\cos {\bigl (}{\tfrac {1}{2}}k(\theta _{1}+\theta _{2}){\bigr )}.$$
Clenshaw summation can be applied in this case,[5] provided we simultaneously compute $m(\theta _{1})+m(\theta _{2})$ and perform a matrix summation,

$${\mathsf {M}}(\theta _{1},\theta _{2})={\begin{bmatrix}(m(\theta _{1})+m(\theta _{2}))/2\\(m(\theta _{1})-m(\theta _{2}))/(\theta _{1}-\theta _{2})\end{bmatrix}}=C_{0}{\begin{bmatrix}\mu \\1\end{bmatrix}}+\sum _{k=1}^{n}C_{k}{\mathsf {F}}_{k}(\theta _{1},\theta _{2}),$$

where

$${\begin{aligned}\delta &={\tfrac {1}{2}}(\theta _{1}-\theta _{2}),\\\mu &={\tfrac {1}{2}}(\theta _{1}+\theta _{2}),\\{\mathsf {F}}_{k}(\theta _{1},\theta _{2})&={\begin{bmatrix}\cos k\delta \sin k\mu \\{\dfrac {\sin k\delta }{\delta }}\cos k\mu \end{bmatrix}}.\end{aligned}}$$
The first element of ${\mathsf {M}}(\theta _{1},\theta _{2})$ is the average value of $m$ and the second element is the average slope. ${\mathsf {F}}_{k}(\theta _{1},\theta _{2})$ satisfies the recurrence relation

$${\mathsf {F}}_{k+1}(\theta _{1},\theta _{2})={\mathsf {A}}(\theta _{1},\theta _{2})\,{\mathsf {F}}_{k}(\theta _{1},\theta _{2})-{\mathsf {F}}_{k-1}(\theta _{1},\theta _{2}),$$

where

$${\mathsf {A}}(\theta _{1},\theta _{2})=2{\begin{bmatrix}\cos \delta \cos \mu &-\delta \sin \delta \sin \mu \\-{\dfrac {\sin \delta }{\delta }}\sin \mu &\cos \delta \cos \mu \end{bmatrix}}$$

takes the place of $\alpha$ in the recurrence relation, and $\beta =-1$. The standard Clenshaw algorithm can now be applied to yield
$${\begin{aligned}{\mathsf {B}}_{n+1}&={\mathsf {B}}_{n+2}={\mathsf {0}},\\{\mathsf {B}}_{k}&=C_{k}{\mathsf {I}}+{\mathsf {A}}\,{\mathsf {B}}_{k+1}-{\mathsf {B}}_{k+2},\qquad {\text{for }}n\geq k\geq 1,\\{\mathsf {M}}(\theta _{1},\theta _{2})&=C_{0}{\begin{bmatrix}\mu \\1\end{bmatrix}}+{\mathsf {B}}_{1}{\mathsf {F}}_{1}(\theta _{1},\theta _{2}),\end{aligned}}$$

where the ${\mathsf {B}}_{k}$ are 2×2 matrices. Finally, we have

$${\frac {m(\theta _{1})-m(\theta _{2})}{\theta _{1}-\theta _{2}}}={\mathsf {M}}_{2}(\theta _{1},\theta _{2}).$$
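The matrix form of the recurrence can be sketched as follows, representing the 2×2 matrices as plain nested lists (the function name and the list-based representation are illustrative choices, not part of the cited presentation):

```python
import math

def meridian_pair(C, t1, t2):
    """Return (mean, slope) = ((m(t1)+m(t2))/2, (m(t1)-m(t2))/(t1-t2))
    for m(t) = C[0]*t + sum_k C[k] sin(k t), using the 2x2 matrix
    Clenshaw recurrence B_k = C_k I + A B_{k+1} - B_{k+2}."""
    d = 0.5 * (t1 - t2)                        # delta
    mu = 0.5 * (t1 + t2)
    sdd = math.sin(d) / d if d != 0 else 1.0   # sin(delta)/delta -> 1 as delta -> 0
    A = [[2 * math.cos(d) * math.cos(mu), -2 * d * math.sin(d) * math.sin(mu)],
         [-2 * sdd * math.sin(mu),         2 * math.cos(d) * math.cos(mu)]]
    B1 = [[0.0, 0.0], [0.0, 0.0]]              # B_{n+1}
    B2 = [[0.0, 0.0], [0.0, 0.0]]              # B_{n+2}
    for k in range(len(C) - 1, 0, -1):
        # B_k = C_k I + A B_{k+1} - B_{k+2}
        Bk = [[C[k] * (i == j) + A[i][0] * B1[0][j] + A[i][1] * B1[1][j] - B2[i][j]
               for j in range(2)] for i in range(2)]
        B1, B2 = Bk, B1
    F1 = [math.cos(d) * math.sin(mu), sdd * math.cos(mu)]
    mean = C[0] * mu + B1[0][0] * F1[0] + B1[0][1] * F1[1]
    slope = C[0] + B1[1][0] * F1[0] + B1[1][1] * F1[1]
    return mean, slope
```

Because $\sin \delta /\delta$ is replaced by its limiting value 1 when $\delta =0$, the same sketch with $\theta _{1}=\theta _{2}=\mu$ returns $m(\mu )$ and the derivative $dm(\mu )/d\mu$.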
This technique can be used in the limit $\theta _{2}=\theta _{1}=\mu$ and $\delta =0$ to simultaneously compute $m(\mu )$ and the derivative $dm(\mu )/d\mu$, provided that, in evaluating ${\mathsf {F}}_{1}$ and ${\mathsf {A}}$, we take $\lim _{\delta \to 0}(\sin k\delta )/\delta =k$.
References