# Computing the Determinant in Rust: Part 2.

This story is a direct follow-up to the previous one about computing a determinant numerically. As we pointed out there, or as you may have observed when running the code, the given method quickly ceases to finish in a feasible amount of time for larger matrices.

This story is about providing a method, and an implementation, to compute determinants effectively even for larger matrices. The theoretical tools which we will utilize are in essence given here and here. In particular, Propositions 11 and 12 will provide the tools for us.

At this point it might be a good opportunity to offer some advice for the less mathematically and more practically oriented reader:

For purely practical purposes, it is by no means essential to understand all the details that lead to a result, but it is essential to understand the result itself and the conditions under which it may be applied. The main focus should therefore be on the implementation part, for which I always offer a solution in Rust, since I am a strong believer that this language will become the new standard for scientific computation in production.

## Effective computation:

In order to be capable of computing the determinant of a matrix of size, say, `1000 x 1000`, we need to find a smarter method than the one considered here.

One simple alternative is to use Propositions 11 and 12 (here) and the fact that the determinant of a matrix in upper triangular form is just the product of its diagonal elements (see the example in the same story).
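To make the second fact concrete, here is a small sketch (the helper name `diagonal_product` and the `Vec<Vec<f64>>` matrix representation are my own assumptions, not taken from the article's code):

```rust
/// Determinant of an upper triangular matrix: the product of its
/// diagonal entries (a hypothetical helper for illustration).
fn diagonal_product(a: &[Vec<f64>]) -> f64 {
    (0..a.len()).map(|i| a[i][i]).product()
}

fn main() {
    // An upper triangular 3x3 matrix; its determinant is 2 * 3 * 4 = 24.
    let a = vec![
        vec![2.0, 1.0, 5.0],
        vec![0.0, 3.0, -2.0],
        vec![0.0, 0.0, 4.0],
    ];
    println!("{}", diagonal_product(&a)); // prints 24
}
```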

These propositions tell us that we may add linear combinations of other rows to a targeted row without changing the value of the determinant. Moreover, we may swap rows with each other, but must then take care to multiply by a factor of `-1` for each such swap.
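Both rules can be checked numerically on a `2 x 2` matrix, where the determinant `ad - bc` is easy to compute directly (a small illustration of mine, not part of the article's code):

```rust
/// Direct 2x2 determinant: ad - bc.
fn det2(a: [[f64; 2]; 2]) -> f64 {
    a[0][0] * a[1][1] - a[0][1] * a[1][0]
}

fn main() {
    let m = [[1.0, 2.0], [3.0, 4.0]]; // det = 1*4 - 2*3 = -2
    // Adding 2 * (row 0) to row 1 leaves the determinant unchanged:
    let added = [[1.0, 2.0], [3.0 + 2.0 * 1.0, 4.0 + 2.0 * 2.0]];
    // Swapping the two rows flips the sign:
    let swapped = [[3.0, 4.0], [1.0, 2.0]];
    assert_eq!(det2(m), det2(added));
    assert_eq!(det2(m), -det2(swapped));
}
```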

The strategy for bringing the matrix into upper triangular form is to first swap rows so that the first row does not start with a zero, that is, `A[0][0] != 0`.
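Putting the pieces together, the whole procedure might be sketched as follows. This is a minimal sketch of Gaussian elimination with partial pivoting, not the article's final implementation; the matrix is assumed to be square and stored as `Vec<Vec<f64>>`:

```rust
/// Minimal sketch: reduce the matrix to upper triangular form by row
/// operations, tracking swaps, and return the product of the diagonal.
fn determinant(mut a: Vec<Vec<f64>>) -> f64 {
    let n = a.len();
    let mut det = 1.0;
    for col in 0..n {
        // Partial pivoting: pick the row (from `col` downward) with the
        // largest absolute entry in this column as the pivot row.
        let pivot = (col..n)
            .max_by(|&i, &j| a[i][col].abs().partial_cmp(&a[j][col].abs()).unwrap())
            .unwrap();
        if a[pivot][col] == 0.0 {
            return 0.0; // whole column is zero: the matrix is singular
        }
        if pivot != col {
            a.swap(pivot, col);
            det = -det; // each row swap flips the sign of the determinant
        }
        // Eliminate entries below the pivot; adding multiples of the
        // pivot row to other rows does not change the determinant.
        for row in col + 1..n {
            let factor = a[row][col] / a[col][col];
            for k in col..n {
                a[row][k] -= factor * a[col][k];
            }
        }
        det *= a[col][col]; // accumulate the diagonal product
    }
    det
}
```

Choosing the largest pivot (rather than just any non-zero one) also keeps the elimination numerically stable, which matters at sizes like `1000 x 1000`.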