A Simple Method to Approximate the Gradient

applied.math.coding
Nov 30, 2022

No doubt, the gradient is a mathematical construct with many applications to real-world problems. Among others, it is the core tool behind optimization techniques like steepest descent, and it can also be used to detect edges in images or to describe the dynamics of heat flow.

In this short article I want to show you a way to efficiently approximate the gradient in scenarios where it is not possible to compute it explicitly, or in other words, where we cannot obtain the gradient in closed form. This situation arises more quickly than we might think. Especially those who work in data science and analysis have probably faced this problem often enough.

A typical example is edge detection in an image, where you only have access to the pixel values. Another is a function that is known only at discrete points obtained from a simulation.

If the function is given in closed form, there is actually never a reason to merely approximate the gradient instead of computing it exactly. Differentiation can be done either by hand or, if the expression is too complex, by using a system capable of symbolic differentiation.
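For instance, here is a minimal sketch using SymPy (the article itself does not prescribe any particular tool; the example function is my own) that computes a gradient symbolically:

```python
import sympy as sp

# A function given in closed form: f(x, y) = sin(x) * exp(y) + x^2 * y
x, y = sp.symbols('x y')
f = sp.sin(x) * sp.exp(y) + x**2 * y

# The exact gradient, obtained by symbolic differentiation.
gradient = [sp.diff(f, var) for var in (x, y)]
print(gradient)  # the exact partial derivatives with respect to x and y
```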

To obtain good approximations of the gradient we are going to use Taylor's theorem. You can find more details on the latter in this story.
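To give a flavour of the idea (a sketch in my own notation, with a small step size $h > 0$ and the $i$-th unit vector $e_i$), Taylor's theorem applied along the $i$-th coordinate direction yields

$$f(x + h e_i) = f(x) + h\,\frac{\partial f}{\partial x_i}(x) + \mathcal{O}(h^2),$$

which rearranges into the forward difference

$$\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x + h e_i) - f(x)}{h},$$

and subtracting the expansion at $x - h e_i$ from the one at $x + h e_i$ cancels the quadratic terms and gives the central difference

$$\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x + h e_i) - f(x - h e_i)}{2h},$$

whose error behaves like $h^2$ rather than $h$.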

Although there exist higher-order approximations, the last one is especially desirable since it requires only two evaluations of f. Moreover, in case the second…
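As a minimal sketch of how such a central-difference approximation of the gradient could look in code (the function name approx_gradient, the example function, and the default step size are my own choices, not the author's):

```python
import numpy as np

def approx_gradient(f, x, h=1e-5):
    """Approximate the gradient of f at x via central differences.

    Each partial derivative costs two evaluations of f.
    """
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad

# Example: f(x, y) = x^2 + 3y has gradient (2x, 3).
f = lambda v: v[0] ** 2 + 3.0 * v[1]
print(approx_gradient(f, [1.0, 2.0]))  # roughly [2. 3.]
```

The step size h trades off truncation error against floating-point cancellation; for central differences a value around the cube root of machine precision is a common choice.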
