Introduction to Taylor series

Approximating a function is an interesting problem with many applications. The Taylor series approximates a function $f$ around a given point $a$ using the (infinite) sum:

$$f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \frac{f'''(a)}{3!}(x - a)^3 + \dots = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n$$

Basic example

For example, check the function:

Once we expand the Taylor series, we get:

If we assume $a = 0$, the Taylor expansion evaluates to:

If we evaluate it for different values of $x$, we get:

| $x$  | Taylor Approximation | Actual Value |
| ---- | -------------------- | ------------ |
| -1   | 18                   | 12           |
| -0.5 | 11.25                | 9.75         |
| 0    | 9                    | 9            |
| 0.5  | 11.25                | 9.75         |
| 1    | 18                   | 12           |

As we can clearly see, the approximation keeps getting better as $x \to a$.
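
To make this comparison concrete in code, here is a minimal sketch in Python. Note that $\cos(x)$ is just a stand-in function of our own choosing (not the function from the table above); we compare its second-order Taylor approximation around $a = 0$ with the actual value:

import math

# Second-order Taylor (Maclaurin) approximation of cos(x) around a = 0:
# cos(x) ≈ 1 - x**2 / 2
def taylor_cos(x):
    return 1 - x**2 / 2

for x in [-1, -0.5, 0, 0.5, 1]:
    print(x, taylor_cos(x), math.cos(x))  # approximation vs. actual value

The two values agree exactly at $x = 0$ and drift apart as $x$ moves away from $a$, which is the same pattern the table shows.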

Some interesting applications

Taylor series finds numerous applications. Let’s check one.

Euler’s number

We apply the Taylor expansion to the function $e^x$ at $a = 0$:

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$

Note: The Taylor series for the case $a = 0$ is also known as the Maclaurin series.

We know that $e \approx 2.7182818$. Now, let's check how well the Taylor expansion approximates it.

Putting $x = 1$: with just one term, the approximation is $1$; with the first two terms, it is $2$; with three terms, it gives $2.5$; and it keeps converging to the exact value as the number of terms increases.

import math

def CalculateExponentialApproximation(noOfTerms, x):
    finalSum = 0
    for i in range(noOfTerms):
        # print((x**i) / math.factorial(i))  # feel free to uncomment this line to check the series progressively
        finalSum += (x**i) / math.factorial(i)
        print("Taylor approximation of e after", i + 1, "term(s) is:", finalSum)
    return finalSum

ans = CalculateExponentialApproximation(10, 1)
# print("Taylor approximation of e after 10 terms is:", ans)

Feel free to try this code with different values of noOfTerms and x.
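
For example, CalculateExponentialApproximation(5, 2) sums the first five terms of the series for $e^2 \approx 7.389$ and gives approximately $7.0$; adding more terms closes the gap quickly.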

Euler’s formula

Let's replace $x$ with $ix$ in the above formula. Using the periodicity of $i$ (its powers cycle through $i, -1, -i, 1$), we get:

$$e^{ix} = 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \dots$$

It would be nice to segregate the real and imaginary terms:

$$e^{ix} = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots\right) + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots\right) \qquad (1)$$

Now, let's keep equation $(1)$ aside for a while and expand $\sin(x)$, again at $a = 0$:

$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots$$

Similarly, for $\cos(x)$, it will be:

$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots$$

Now, both of these expansions look exactly like the two groups of terms in equation $(1)$ above. Substituting them, we get Euler's formula:

$$e^{ix} = \cos(x) + i\sin(x)$$
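
As a quick numerical sanity check (this snippet is our addition, not part of the original derivation), we can verify the identity with Python's built-in cmath module:

import cmath
import math

# Numerically verify e^(ix) = cos(x) + i*sin(x) for a sample value of x.
x = 0.7
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
print(lhs, rhs)  # both sides agree up to floating-point rounding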

Taylor approximation for vectors

We rarely deal with scalar values in real-world problems. Just consider the example of a neural network with only a couple of layers; it would still have a considerable number of inputs and weights. In such cases, it's useful to apply the Taylor expansion to vectors. It takes the vector form:

$$f(\mathbf{x}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a})^{\top}(\mathbf{x} - \mathbf{a}) + \frac{1}{2}(\mathbf{x} - \mathbf{a})^{\top}\nabla^2 f(\mathbf{a})(\mathbf{x} - \mathbf{a}) + \dots$$

Here, both $\mathbf{x}$ and $\mathbf{a}$ are vectors. Therefore, we use the gradient vector for the first-order term, the Hessian for the second, and so on. Computing the Hessian (and higher-order derivatives) is computationally expensive, especially as the number of parameters grows. So, we usually prefer the first-order approximation as a tradeoff between accuracy and performance. All SGD-based optimizers are good examples of first-order methods.
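
To make the first-order idea concrete, here is a minimal sketch of a plain gradient-descent update using jax's grad. The toy quadratic loss and the learning rate are our own assumptions, not part of the lesson; the point is that each step uses only the gradient, never the Hessian:

import jax.numpy as jnp
from jax import grad

# Toy quadratic loss with its minimum at w = [3.0, 3.0] (a made-up example).
loss = lambda w: jnp.sum((w - 3.0) ** 2)

w = jnp.array([0.0, 0.0])
learning_rate = 0.1
for _ in range(20):
    # First-order update: follow the negative gradient; no second-order information.
    w = w - learning_rate * grad(loss)(w)
print(w)  # approaches [3.0, 3.0]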

Example

Let's round things off with another example. Here, we will use the jax library for its automatic differentiation (autograd) feature.

Let's suppose we have the function $f(\mathbf{x}) = 2x_0 + x_1^2$ (defined as FuncA below) and approximate it to first order around the point z.

import jax
import jax.numpy as jnp
from jax import grad

# The function we want to approximate and its gradient (computed via jax's autograd).
FuncA = lambda x: 2*x[0] + x[1]**2
Gradient = lambda x: grad(FuncA)(x)
# First-order Taylor approximation of FuncA around the point z, evaluated at x.
TaylorApprox = lambda x: FuncA(z) + Gradient(z) @ (x - z)

z = jnp.array([0.0, 0.0])
# The expansion point itself – the approximation is exact here.
print("Actual function value at z is:", FuncA(z))
print("Taylor approximation at z is:", TaylorApprox(z))

z1 = jnp.array([0.1, 0.1])
# Point z1 is close to z.
print("Actual function value at z1 is:", FuncA(z1))
print("Taylor approximation at z1 is:", TaylorApprox(z1))

z2 = jnp.array([0.7, 1.1])
# Point z2 is farther away from z.
print("Actual function value at z2 is:", FuncA(z2))
print("Taylor approximation at z2 is:", TaylorApprox(z2))

Explanation

  • FuncA, Gradient, and TaylorApprox: We define the function, use jax's autograd feature (grad) to automatically calculate its gradient, and build the first-order approximation around z.

  • The first pair of print statements: We approximate the function at the expansion point z itself.

  • The second pair of print statements: We approximate it at the nearby point z1.

  • The third pair of print statements: We approximate it at the more distant point z2.
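
Working through the arithmetic by hand: at z $= (0, 0)$, both the actual value and the approximation are $0$, since the first-order approximation is exact at the expansion point. At z1 $= (0.1, 0.1)$, the actual value is $2(0.1) + 0.1^2 = 0.21$, while the approximation is $\nabla f(\mathbf{z}) \cdot (\mathbf{z_1} - \mathbf{z}) = 0.2$. At z2 $= (0.7, 1.1)$, the actual value is $2(0.7) + 1.1^2 = 2.61$, while the approximation is only $1.4$. The farther the point is from z, the worse the first-order approximation becomes.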

