In .NET, a floating-point value can be easily converted to an int by applying an (int) type cast. This operation simply discards the fractional part, truncating toward zero rather than rounding. What can go wrong with this approach? The fact is, sometimes even very basic math can give unexpected results. Quite recently, during one of my debugging sessions, I