Approximating a function is an interesting problem with a lot of applications. The Taylor series approximates a function around a given point $a$ using the function's derivatives at that point:

$$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots$$
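As a quick illustration (a minimal sketch of my own, not part of the original text), SymPy can generate such an expansion symbolically:

```python
import sympy as sp

x = sp.symbols('x')

# Maclaurin expansion of sin(x), keeping terms up to x**5.
print(sp.series(sp.sin(x), x, 0, 6))  # x - x**3/6 + x**5/120 + O(x**6)
```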
For example, consider the following function:
Once we expand the Taylor series, we get:
If we assume the expansion point to be $a = 0$ and keep only the first few terms, we get the following approximation:
If we evaluate it for different values of $x$, we get:
| x | Taylor Approximation | Actual Value |
|------|----------------------|--------------|
| -1   | 18                   | 12           |
| -0.5 | 11.25                | 9.75         |
| 0    | 9                    | 9            |
| 0.5  | 11.25                | 9.75         |
| 1    | 18                   | 12           |
As we can clearly see, the approximation keeps getting better as $x$ moves closer to the expansion point, $x = 0$.
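To see this behaviour concretely, here is a minimal sketch (using $\cos x$ as a stand-in function, since its second-order expansion around $0$ is simply $1 - x^2/2$) that compares the truncated Taylor polynomial with the actual value at increasing distances from the expansion point:

```python
import math

# Second-order Taylor (Maclaurin) polynomial of cos(x) around 0: 1 - x**2/2
taylor_cos = lambda x: 1 - x**2 / 2

for x in [0.0, 0.5, 1.0, 2.0]:
    approx, actual = taylor_cos(x), math.cos(x)
    print(f"x={x}: Taylor={approx:.4f}, actual={actual:.4f}, error={abs(approx - actual):.4f}")
```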
The Taylor series finds numerous applications. Let's look at one.
We apply the Taylor expansion to the exponential function, $f(x) = e^x$.
Note: The Taylor series for the case where the expansion point is $0$ is also known as the Maclaurin series.
We all know that the derivative of $e^x$ is $e^x$ itself, and that $e^0 = 1$. Plugging these derivatives into the Maclaurin series gives:

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$

Putting $x = 1$ gives us an approximation of $e$ itself. The following code adds up the first few terms of this series:
```python
import math

def CalculateExponentialApproximation(noOfTerms, x):
    finalSum = 0
    for i in range(noOfTerms):
        # print((x**i)/math.factorial(i))  # feel free to uncomment this line to check the series term by term
        finalSum += (x**i) / math.factorial(i)
        print("Taylor approximation of e after", i+1, "term(s) is:", finalSum)
    return finalSum

ans = CalculateExponentialApproximation(10, 1)
# print("Taylor approximation of e after 10 terms is:", ans)
```
Feel free to try this code with different values of `noOfTerms` and `x`.
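For instance, with `noOfTerms = 10` and `x = 1`, the running sum settles at roughly 2.71828, which is already very close to the true value of $e \approx 2.71828$.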
Let's replace $x$ with $i\theta$ in the expansion above, where $i = \sqrt{-1}$:

$$e^{i\theta} = 1 + i\theta + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + \frac{(i\theta)^5}{5!} + \cdots$$

It would be nice to segregate the real and imaginary terms:

$$e^{i\theta} = \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\right) + i\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right)$$

Now, let's keep this equation aside and write the Maclaurin expansion of $\cos\theta$:

$$\cos\theta = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots$$

Similarly, for $\sin\theta$:

$$\sin\theta = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots$$

Both of these expansions look exactly like the real and imaginary parts we separated above. Substituting them back gives us Euler's formula:

$$e^{i\theta} = \cos\theta + i\sin\theta$$
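As a quick sanity check (a minimal sketch of my own, not from the original text), we can sum the first few terms of the series for $e^{i\theta}$ numerically and compare the result with $\cos\theta + i\sin\theta$:

```python
import cmath
import math

def series_exp(z, n_terms=20):
    """Sum the first n_terms of the Taylor series of e**z."""
    total = 0
    for n in range(n_terms):
        total += z**n / math.factorial(n)
    return total

theta = 0.75  # an arbitrary angle
print("Series approximation of e^(i*theta):", series_exp(1j * theta))
print("cos(theta) + i*sin(theta):          ", complex(math.cos(theta), math.sin(theta)))
print("cmath.exp(1j*theta) for reference:  ", cmath.exp(1j * theta))
```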
We rarely deal with scalar values in real-world problems. Just consider the example of a neural network with only a couple of layers; it would still have a considerable number of inputs and weights. In such cases, it's useful to apply the Taylor expansion to vectors. Truncated at first order, it takes the vector form:

$$f(\mathbf{x}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a})^\top (\mathbf{x} - \mathbf{a})$$
Here, both $\mathbf{x}$ and $\mathbf{a}$ are vectors, and $\nabla f(\mathbf{a})$ is the gradient of $f$ evaluated at $\mathbf{a}$.
Let's round it off with another example. Here, we will use the JAX library and its autograd (automatic differentiation) feature.
Let's suppose we have a function, $f(x_0, x_1) = 2x_0 + x_1^2$.
```python
import jax
import jax.numpy as jnp
from jax import grad

FuncA = lambda x: 2*x[0] + x[1]**2
Gradient = lambda x: grad(FuncA)(x)
TaylorApprox = lambda x: FuncA(z) + Gradient(z) @ (x - z)

z = jnp.array([0.0, 0.0])
# Point z is the expansion point itself: nothing special.

print("Actual function value at z is:", FuncA(z))
print("Taylor approximation at the same point is:", TaylorApprox(z))

z1 = jnp.array([0.1, 0.1])
# Point z1 is close to z.
print("Actual function value at z1 is:", FuncA(z1))
print("Taylor approximation at z1 is:", TaylorApprox(z1))

z2 = jnp.array([0.7, 1.1])
# Point z2 is farther away from z.
print("Actual function value at z2 is:", FuncA(z2))
print("Taylor approximation at z2 is:", TaylorApprox(z2))
```
Lines 6–7: We use JAX's autograd feature to automatically compute the function's gradient and define the first-order Taylor approximation around `z`.
Lines 12–13: We evaluate the actual function and its approximation at the expansion point itself.
Lines 17–18: We evaluate them at a point close to the expansion point.
Lines 22–23: We evaluate them at a more distant point, where a first-order approximation is naturally less accurate (see the sketch below).
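If we wanted the approximation to hold up better at distant points such as `z2`, one possible extension (my own sketch, not part of the original lesson) is to include the second-order term via JAX's `hessian` transform:

```python
import jax.numpy as jnp
from jax import grad, hessian

# A hedged sketch of a second-order Taylor approximation around z.
FuncA = lambda x: 2 * x[0] + x[1] ** 2
z = jnp.array([0.0, 0.0])

def SecondOrderApprox(x):
    d = x - z
    return FuncA(z) + grad(FuncA)(z) @ d + 0.5 * d @ hessian(FuncA)(z) @ d

z2 = jnp.array([0.7, 1.1])
print("Actual function value at z2 is:", FuncA(z2))
print("Second-order Taylor approximation at z2 is:", SecondOrderApprox(z2))
```

Because the function here is quadratic, its second-order expansion reproduces it exactly, so both printed values should agree.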