When modeling phenomena by means of physical laws, as is the case in many applications from engineering or numerical weather forecasting, the quantity of interest often depends not only on time but also on space.

These models typically lead to partial differential equations, which are themselves notoriously hard to deal with, in particular when they are nonlinear. Such hard-to-solve equations arise quickly. An example is the famous Navier-Stokes equation, which is intended to describe the evolution of a velocity field. It has the form:
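For an incompressible fluid — writing $u$ for the velocity field, $p$ for the pressure, $\rho$ for the (constant) density, $\nu$ for the kinematic viscosity and $g$ for external forces; these symbol names are my choice, since the formula is not reproduced above — a standard form is:

$$\frac{\partial u}{\partial t} + (u \cdot \nabla)\, u = -\frac{1}{\rho} \nabla p + \nu\, \Delta u + g, \qquad \nabla \cdot u = 0$$

The second equation expresses incompressibility; the nonlinearity sits in the transport term $(u \cdot \nabla)\, u$.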

Before continuing, let us take a **rough** look at how this equation is derived…

As you probably know or have heard, differential equations are everywhere. Typical examples are the laws of Newtonian mechanics or evolution equations arising in biology.

Differential equations form a huge and active field in mathematics, and this article by no means aims to be an introduction to the whole field. My intention is rather to give an overview of the methods and ideas used for solving ordinary differential equations numerically.

So if you already use the solvers provided in various libraries, you may find it interesting to understand in a little more detail which ideas they are built on. …
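To give a first taste of what such solvers do internally, here is a minimal sketch of the simplest scheme, the explicit Euler method. The function name and the sample equation y' = y are my own choices for illustration, not taken from any particular library:

```go
package main

import "fmt"

// euler integrates y' = f(t, y) from t0 to t1 in n equal steps,
// starting at y0 — the simplest one-step scheme for an ODE.
func euler(f func(t, y float64) float64, t0, y0, t1 float64, n int) float64 {
	h := (t1 - t0) / float64(n)
	t, y := t0, y0
	for i := 0; i < n; i++ {
		y += h * f(t, y) // follow the tangent line for one small step
		t += h
	}
	return y
}

func main() {
	// y' = y with y(0) = 1 has the exact solution y(t) = e^t,
	// so we can check the numerical answer against e.
	y := euler(func(t, y float64) float64 { return y }, 0, 1, 1, 100000)
	fmt.Printf("y(1) ≈ %.4f (exact: e ≈ 2.7183)\n", y)
}
```

Real library solvers use far more refined schemes (higher order, adaptive step sizes), but the basic loop structure is the same.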

Coordinate descent is an optimization algorithm closely modeled on the idea of gradient descent, but it gets by without computing the gradient. At first this seems like a big advantage, but be aware that convergence can only be assured when the function is differentiable — in other words, in cases where gradient descent could have been used as well. Still, depending on the problem, coordinate descent sometimes turns out to be the faster variant.
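A minimal derivative-free sketch of the idea: cycle through the coordinates, probe a step in both directions along each axis, keep whatever improves the objective, and shrink the step size when nothing does. The probing scheme and the toy quadratic below are my own choices for illustration:

```go
package main

import "fmt"

// f is the objective to minimize — a toy quadratic with its
// minimum at (1, -2), chosen purely for illustration.
func f(x []float64) float64 {
	return (x[0]-1)*(x[0]-1) + (x[1]+2)*(x[1]+2)
}

// coordinateDescent cycles through the coordinates and probes a
// step of size h in both directions along each axis. If no probe
// improves f, the step size is halved until it falls below tol.
func coordinateDescent(x []float64, h, tol float64) []float64 {
	fx := f(x)
	for h > tol {
		improved := false
		for i := range x {
			for _, step := range []float64{h, -h} {
				x[i] += step
				if fNew := f(x); fNew < fx {
					fx = fNew
					improved = true
				} else {
					x[i] -= step // undo the unsuccessful probe
				}
			}
		}
		if !improved {
			h /= 2 // refine the search resolution
		}
	}
	return x
}

func main() {
	x := coordinateDescent([]float64{0, 0}, 1.0, 1e-6)
	fmt.Printf("minimum near (%.3f, %.3f)\n", x[0], x[1])
}
```

Note that no derivative of f appears anywhere — only function evaluations.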

To better understand this article, it is best to have already read my former article on line search: (but…

This article is a follow-up in my series on basic nonlinear optimization techniques with sample implementations. Again, we don’t aim to cover all the deeper aspects, but focus on giving an overview and introduction.

This account introduces another very popular, if not the most popular, algorithm for finding minima of real-valued functions in several variables — **Gradient Descent**.

Our main task can be formulated as:
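Assuming the usual unconstrained setting over $\mathbb{R}^n$ — an assumption on my part, since the formula is not reproduced above — the task reads:

$$\min_{x \in \mathbb{R}^n} f(x), \qquad f: \mathbb{R}^n \to \mathbb{R}$$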

The main idea behind the algorithm is to start at an arbitrary point and to take a small step in the direction in which the function promises to decrease most quickly. …
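The steps above can be sketched in a few lines. This is a bare-bones version with a fixed step size; the toy quadratic and its hand-derived gradient are my own choices for illustration:

```go
package main

import "fmt"

// grad returns the gradient of the example objective
// f(x, y) = (x-1)^2 + (y+2)^2, whose minimum is at (1, -2).
func grad(x []float64) []float64 {
	return []float64{2 * (x[0] - 1), 2 * (x[1] + 2)}
}

// gradientDescent repeatedly steps against the gradient with a
// fixed step size (learning rate) — the basic scheme described above.
func gradientDescent(x []float64, rate float64, steps int) []float64 {
	for k := 0; k < steps; k++ {
		g := grad(x)
		for i := range x {
			x[i] -= rate * g[i] // move downhill along each coordinate
		}
	}
	return x
}

func main() {
	x := gradientDescent([]float64{0, 0}, 0.1, 100)
	fmt.Printf("minimum near (%.3f, %.3f)\n", x[0], x[1])
}
```

In practice the fixed step size is replaced by a line search along the descent direction, which is where the earlier article comes in.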

This article is a follow-up of my introduction to methods of non-linear optimization. The previous one gave an overview of heuristic methods, which you can read here

Now we will turn towards frequently used standard algorithms which you can find in action in many applications … especially in our much-loved subject ‘machine learning’.

Let’s start easy:

The main task of line search is this:
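One common way to write it down — assuming, in line with the description that follows, a continuous function on a closed interval:

$$\text{find } x^{*} \in [a, b] \text{ with } f(x^{*}) = \min_{x \in [a, b]} f(x), \qquad f: [a, b] \to \mathbb{R} \text{ continuous}$$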

In words, we are given a function **f** which is continuous and maps an interval of the real line into the real line. …
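If we additionally assume f is unimodal on the interval (a single valley — my assumption for this sketch), the interval can be narrowed down by ternary search; the function name and the sample objective are my own choices:

```go
package main

import "fmt"

// f is a sample objective; on [0, 4] its minimum lies at x = 2.
func f(x float64) float64 {
	return (x - 2) * (x - 2)
}

// ternarySearch narrows down the minimizer of a unimodal function
// on [a, b] by repeatedly discarding a third of the interval.
func ternarySearch(a, b, tol float64) float64 {
	for b-a > tol {
		m1 := a + (b-a)/3
		m2 := b - (b-a)/3
		if f(m1) < f(m2) {
			b = m2 // the minimum cannot lie in (m2, b]
		} else {
			a = m1 // the minimum cannot lie in [a, m1)
		}
	}
	return (a + b) / 2
}

func main() {
	fmt.Printf("minimizer near %.4f\n", ternarySearch(0, 4, 1e-6))
}
```

Each iteration keeps two thirds of the interval, so the bracket shrinks geometrically until it is smaller than the tolerance.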

Machine Learning … is probably the biggest tech and research trend of this century. This article is not about Machine Learning itself, but about one specific subject of immense importance to it: Optimization.

One should know that almost every Machine Learning model ultimately relies on some sort of optimization to calibrate the model parameters.

There exists a huge number of different optimization techniques, each applying to different problem settings. …

No question, Go is on the rise and getting super trendy. There are several good reasons for this, and one of them I want to point out in this article: running code in parallel across multiple cores.

Especially in scientific computing, or whenever a complex CPU-intensive algorithm has to be run, it is always worthwhile to ask oneself whether the work can be distributed across all available CPUs.

But when it comes to actually coding the parallelization, many languages are not very supportive. Some of them even require the part you want to run in parallel…