Feb 01 2006

## Double Imprecision In .Net

I’ve spouted on about this previously on my blog, but I thought it was about time I posted some code that demonstrates some of the problems we’re up against. The problem is that under certain circumstances the .Net Double data type loses its precision and gives you a number that you weren’t expecting. Chances are the result will only be off by something like 0.00000000000001, but that might be enough to cause an error.

The simplest way to demonstrate the error is with a small loop:

```
Dim x As Double = 0.00

While (x < 7.00)
    Console.WriteLine(x)
    x += 0.01
End While
```

What you would normally expect is a steady progression from 0.00 up to 6.99 (the loop stops before printing 7.00):

```
0.00
0.01
...
6.98
6.99
```

However, when the code is actually run, you will find results like this somewhere in the output:

```
2.28
2.29
2.29999999999999
2.30999999999999
```

This, as I am sure you will agree, is an unexpected result and could easily lead to bugs. If you try the same exercise using Decimal instead of Double, you will find that you get the results you expect. I’m sure there are people out there who will say that all you need to do is some simple rounding to get rid of the error. Sorry, but it doesn’t quite work like that, and the following code proves it:

```
Dim x As Double = 0.00
Dim y As Decimal = 0D

While (x < 7.00)
    ' If rounding really fixed the drift, the two rounded values
    ' would always agree. They don't.
    If (Math.Round(x) <> Math.Round(y)) Then
        Console.WriteLine("Error found")
    End If
    x += 0.01
    y += 0.01D
End While
```

If you run this code you should get at least one error message, and probably more.

Is there a solution to this problem? Of a sort, yes. I have found that the Decimal data type does not suffer from the same rounding problems. Decimal also solves problems that I have had in the past with the Math.Floor() function. However, several people have told me that using the Decimal data type does incur a slight memory cost over using the Double data type. The question you need to ask yourself is “*What is more important: memory or precision?*“. Personally, I now always use the Decimal data type. Maybe one day I’ll get around to doing some benchmarks to find out whether there is a performance hit when using Decimal rather than Double… but that’s a job for another day.
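
As an aside, here is the sort of Math.Floor() problem I mean. This is just a minimal sketch rather than the exact code that bit me, but it shows the principle:

```
' 4.35 cannot be stored exactly as a Double; the nearest Double is
' slightly *below* 4.35, so 4.35 * 100 comes out just under 435 and
' Math.Floor() drops it to 434.
Dim d As Double = 4.35
Console.WriteLine(Math.Floor(d * 100))     ' prints 434

' The Decimal version stores 4.35 exactly and gives the expected 435.
Dim dec As Decimal = 4.35D
Console.WriteLine(Math.Floor(dec * 100D))  ' prints 435
```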

This is in fact not such an “unexpected result”, even if it is always surprising the first time you encounter it.

It comes from the internal representation of Doubles in memory, and this is not specific to .Net: all real programming languages have a data type for binary floating point numbers that exhibits this behavior.

The technical standard for representing these numbers (IEEE 754) defines floating point numbers as, essentially, signed integers multiplied by powers of two. When they are fractional, you can also think of them as integers divided by powers of two. Of course, only some decimal numbers have an exact representation in this scheme.
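
You can see the approximation directly in .Net. A small sketch (the values in the comments are what the standard IEEE 754 double format produces):

```
' 0.1 has no finite binary representation, so the nearest Double is
' stored instead. Asking for 17 significant digits reveals it.
Dim d As Double = 0.1
Console.WriteLine(d.ToString("G17"))  ' prints 0.10000000000000001

' The raw bits show the repeating pattern in the fraction.
Console.WriteLine(BitConverter.DoubleToInt64Bits(d).ToString("X"))  ' 3FB999999999999A
```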

Therefore such data types should not be used for financial applications, or more generally whenever decimal calculations must be performed and displayed to the user.

This is also the reason why programming languages usually offer a more suitable decimal representation for such numbers (Decimal in .NET, BigDecimal in Java, etc.). These alternative data types work with a power of ten and a given “precision”, which allows them to do decimal calculations the way a human would.
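
For example, here is a quick sketch of the difference:

```
' With Double, 0.1 + 0.2 is not exactly 0.3: none of the three values
' has an exact binary representation.
Console.WriteLine((0.1 + 0.2).ToString("G17"))  ' 0.30000000000000004
Console.WriteLine(0.1 + 0.2 = 0.3)              ' False

' With Decimal, the same sum is exact, because the type scales by
' powers of ten, just as decimal arithmetic on paper does.
Console.WriteLine(0.1D + 0.2D)                  ' 0.3
Console.WriteLine(0.1D + 0.2D = 0.3D)           ' True
```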

However, these data types are more complex, and operations on them are usually not implemented in hardware (while IEEE 754 arithmetic is).

That is why there is a (small) trade-off, in both memory and time, in using them.
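
The memory half of the trade-off is easy to check (a quick sketch using Marshal.SizeOf, which reports the unmanaged size of each type):

```
' A Decimal occupies twice the space of a Double: 128 bits versus 64.
Console.WriteLine(System.Runtime.InteropServices.Marshal.SizeOf(GetType(Double)))   ' 8
Console.WriteLine(System.Runtime.InteropServices.Marshal.SizeOf(GetType(Decimal)))  ' 16
```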

You can find lots of information and examples about this by googling a bit. Or read the great website about decimal arithmetic maintained by Mike Cowlishaw, which explains it all:

http://www2.hursley.ibm.com/decimal/

(Use google’s cache if it is down)

You said: “It comes from the internal representation of Doubles in memory, and this is not specific to .Net: all real programming languages have a data type for binary floating point numbers that exhibits this behavior.”

But if you take a sample that shows the Double problem in .Net and run it in VB6, for example:

4.4 * 3 = 13.2 in VB6 but 13.20000000000000001 in VB.Net

132 * 0.1 = 13.2 in VB6 but 13.20000000000000001 in VB.Net

The Double type has the same precision in VB6 and VB.Net, so this means the problem comes from .Net.

Just because both VB6 and VB.Net implement IEEE 754 doesn’t mean that they will produce the same error on the same calculation, because their implementations of IEEE 754 differ.

What it means is that at some point in VB6 you will find behaviour similar to what I have outlined above in VB.Net.
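
It may also partly be a question of how many digits each runtime chooses to display. A sketch of what I mean (assuming the default Double formatting in .Net, which shows at most 15 significant digits):

```
' The same stored Double can look clean or dirty depending purely on
' the number of digits the formatting routine is asked for.
Dim d As Double = 4.4 * 3
Console.WriteLine(d.ToString())       ' 13.2
Console.WriteLine(d.ToString("G17"))  ' 13.200000000000001
```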

I think.