In .NET, floating-point values can easily be converted to int by applying an (int) type cast. This operation simply truncates the fractional part. What can go wrong with such a basic approach? As it turns out, sometimes even very basic math gives unexpected results.

Quite recently, during a debugging session, I saw a very basic unit-conversion operation behave incorrectly for some numbers:

public string ToCentimeters(float number) {
    return $"{(int)(number * 100)}cm";
}


For 5 it returned 500cm, and for 10 it returned 1000cm. But surprisingly, for 1.8 it returned 179cm. What?

The answer, of course, lies in how floating-point numbers are represented in computer memory: not every decimal number has an exact binary representation. 1.8 is one of them — the nearest float is slightly less than 1.8, so multiplying it by 100 produces a value just under 180. When the stored value is equal to or slightly greater than the exact number, the (int) cast approach works well. When it is slightly less, truncation makes the result smaller than expected.
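This is easy to see by forcing the multiplication into double precision, which has enough digits to expose the error hiding in the float. A minimal sketch (Math.Round as the fix is my suggestion, not the only option):

```csharp
using System;

class Program {
    static void Main() {
        // 1.8 has no exact binary representation; the nearest float
        // sits slightly below it, so the product lands just under 180.
        float number = 1.8f;
        double product = (double)number * 100;  // widen to double to see the error

        Console.WriteLine(product);             // prints a value slightly below 180
        Console.WriteLine((int)product);        // 179 — the cast truncates toward zero

        // Rounding to the nearest integer instead of truncating avoids the surprise:
        Console.WriteLine((int)Math.Round(product));  // 180
    }
}
```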

To make things worse, floats and doubles behave slightly differently. If, during the computation, floats are converted to doubles and only then to integers, the result may differ. So the actual result can depend on the compiler, the target architecture, and optimisations.
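The float/double difference is visible even for the same literal 1.8: the nearest float sits slightly below 1.8, while the nearest double sits slightly above it, so the two multiplication paths land on opposite sides of 180. A small sketch, assuming standard IEEE 754 round-to-nearest behaviour:

```csharp
using System;

class Program {
    static void Main() {
        float f = 1.8f;   // nearest float is a bit below 1.8
        double d = 1.8;   // nearest double is a bit above 1.8

        Console.WriteLine((double)f);        // ≈ 1.79999995..., the exact value stored in the float

        Console.WriteLine((int)(f * 100.0)); // 179 — the widened product stays below 180
        Console.WriteLine((int)(d * 100));   // 180 — the double product rounds to 180
    }
}
```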

There are a few things to remember when dealing with floating-point numbers:

• any result of a computation can be a bit less or a bit greater than the "perfect" result;
• math operations like Math.Ceiling and Math.Floor, and fraction-truncating operations like casting to (int), amplify this effect by turning a tiny representation error into a whole unit;
• adding a very small number to a float or double, or subtracting one from it, may not change it at all;
• because computation errors accumulate, the same computations performed in a different order can produce slightly different values.
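The last two points can be sketched together. At 1e8 the gap between adjacent floats is 8, so adding 1 is silently lost; summing the small values first lets them survive. The numbers below are illustrative values I picked, not anything specific to .NET:

```csharp
using System;

class Program {
    static void Main() {
        float big = 1e8f;

        // The gap between adjacent floats near 1e8 is 8, so +1 is absorbed:
        Console.WriteLine(big + 1f == big);   // True

        // Order matters: adding sixteen 1s one at a time loses all of them...
        float a = big;
        for (int i = 0; i < 16; i++) a += 1f; // a is still 1e8

        // ...but summing them first produces 16, which is a multiple of the
        // gap and therefore survives the final addition:
        float small = 0f;
        for (int i = 0; i < 16; i++) small += 1f;  // exactly 16
        float b = big + small;                     // 100000016

        Console.WriteLine(a == b);  // False
    }
}
```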