Feb 01 2006

Double Imprecision In .Net

Published at 7:26 pm under .NET

I’ve spouted on about this previously on my blog, but I thought it was about time I posted some code that demonstrates some of the problems we’re up against. The problem is that under certain circumstances the .Net Double data type loses precision and gives you a number you weren’t expecting. Typically the result will be out by something in the order of 0.00000000000001, but even that can be enough to cause an error.

The easiest way to demonstrate the problem is with a simple loop:

Dim x As Double = 0.00
While (x < 7.00)
    Console.WriteLine(x)
    x += 0.01 ' 0.01 has no exact binary representation, so tiny errors accumulate
End While

What you would normally expect is a steady progression from 0.00 up to (but not including) 7.00:

0.00
0.01
...
6.98
6.99

However, when the code is run you will find results like this somewhere in the output:

2.28
2.29
2.29999999999999
2.30999999999999
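
As an aside (this snippet is my own illustration rather than code from the post), the drift is actually present in every value; on the .Net Framework the Double’s default ToString rounds to 15 significant digits and hides the error until it grows big enough to show. The round-trip ("R") format reveals what is really stored:

Dim z As Double = 0.1 + 0.2
Console.WriteLine(z)               ' prints 0.3 - the default format hides the error
Console.WriteLine(z.ToString("R")) ' prints 0.30000000000000004 - the stored value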

This, as I am sure you will agree, is an unexpected result and could lead to errors. If you try the exercise again using Decimal instead of Double, you will find that you get the results you expect (a sketch of the Decimal version appears after the rounding example below). I’m sure there are people out there who will say that all you need to do is a little rounding to get rid of the error. Sorry, but it doesn’t quite work like that, and the following code proves it:

Dim x As Double = 0.00
Dim y As Decimal = 0D
While (x < 7.00)
    If (Math.Round(x) <> Math.Round(y)) Then
        Console.WriteLine("Error found")
    End If
    x += 0.01
    y += 0.01D ' the D suffix keeps the Decimal running total exact
End While

If you run this code you should get at least one error message, and probably more. The mismatches turn up around the midpoint values: at a nominal 3.5, for instance, the Double has drifted just below the halfway mark and rounds down to 3, while the exact Decimal 3.5 rounds up to the even value 4 under Math.Round’s default banker’s rounding.
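
As promised above, here is a sketch of what the Decimal version of the first loop looks like (my reconstruction; the exact code isn’t in the post). Because the Decimal literals are exact, it prints the clean progression with no drift:

Dim d As Decimal = 0D
While (d < 7D)
    Console.WriteLine(d) ' prints 0, then 0.01 up to 6.99, with no drift
    d += 0.01D           ' the D suffix makes the literal an exact Decimal
End While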

Is there a solution to this problem? Of a sort, yes. I have found that the Decimal data type does not suffer from the same rounding problems. Decimal also solves problems I have had in the past with the Math.Floor() function (see the sketch below). However, several people have told me that Decimal incurs a memory cost over Double: a Decimal takes 16 bytes where a Double takes 8. The question you need to ask yourself is “What is more important: memory or precision?”. Personally, I now always use the Decimal data type. Maybe one day I’ll get around to doing some benchmarks to find out whether there is a performance hit when using Decimal rather than Double… but that’s a job for another day.
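
To illustrate the Math.Floor() point, here is a minimal sketch of my own (the post doesn’t say which calculation caused the original trouble). The Double sum lands just below the true value, so flooring after scaling loses a whole unit, while the Decimal version stays exact:

Dim d As Double = (0.1 + 0.7) * 10     ' actually just under 8
Dim m As Decimal = (0.1D + 0.7D) * 10D ' exactly 8.0
Console.WriteLine(Math.Floor(d)) ' prints 7
Console.WriteLine(Math.Floor(m)) ' prints 8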
