Pages

December 9, 2009

A Float is NOT a Currency

Do not use floats to represent currency. This is just asking for trouble.

Any time you deal with floating-point representations, you can get unpredictable results. I cannot count the number of times I have seen the "phantom penny" situation: balances are a penny off, and you end up spending half a day with your accountant hat on, trying to find where the penny went. You look and look, and then you realize that somewhere in the code someone used a float inside the calculation. You change the variable type and your penny "magically" reappears.

Modern programming languages offer many alternatives for representing currency, and they should be used. If none is available, you can always go old school and use the int primitive type to represent currency in whole units of the smallest denomination.
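Here is a minimal Java sketch of both the problem and the old-school fix (the class and variable names are mine, for illustration):

```java
public class CurrencyDemo {
    public static void main(String[] args) {
        // Summing ten cents a hundred times with a float: the rounding
        // error accumulates, and the result drifts away from 10.00.
        float floatTotal = 0.0f;
        for (int i = 0; i < 100; i++) {
            floatTotal += 0.10f;
        }
        System.out.println(floatTotal);          // slightly off from 10.0
        System.out.println(floatTotal == 10.0f); // false

        // The same sum using whole cents in an int: exact every time.
        int cents = 0;
        for (int i = 0; i < 100; i++) {
            cents += 10; // ten cents
        }
        System.out.println(String.format("%d.%02d", cents / 100, cents % 100)); // 10.00
    }
}
```

The integer version never loses a penny, because every intermediate value is an exact whole number of cents.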

6 comments:

  1. Java has the BigDecimal type.
    .NET has the Decimal type.
    Or you could use integers to represent the currency. For example, if your currency is US Dollars you could store the values as a number of whole pennies and divide by 100 to convert to dollars.

  2. Totally agree, just be careful anywhere you're converting from a float type to a BigDecimal in Java (e.g. external interfaces to other unenlightened systems). Use BigDecimal.valueOf(myFloat) rather than new BigDecimal(myFloat). The valueOf method converts the float to the value a human expects; the constructor gives you the true binary value of the float (out to the full BigDecimal precision, anyway). It's in the Javadoc - but nobody seems to read it!

    Jon

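Jon's point is easy to see directly. A small sketch (the long decimal below is the well-known exact expansion of the double closest to 0.1):

```java
import java.math.BigDecimal;

public class ValueOfVsConstructor {
    public static void main(String[] args) {
        double amount = 0.1;

        // The double constructor preserves the exact binary value stored in the double.
        System.out.println(new BigDecimal(amount));
        // 0.1000000000000000055511151231257827021181583404541015625

        // valueOf goes through Double.toString, producing the value a human expects.
        System.out.println(BigDecimal.valueOf(amount)); // 0.1
    }
}
```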
  3. I think more people would use BigDecimal if they could use the normal arithmetic operators instead of having to call add(), subtract(), multiply(), divide(), etc.

  4. Yeah, perhaps. It would be a funny conversation with a client though:

    "Ahh, well, it's a bit annoying to use methods when adding and subtracting - so I went with a float datatype so I could use + and -, but of course that means there will be inaccuracies in your data."

    Still, you'd hope most developers would care about the quality of the finished product. I find most people I work with do care (you might think otherwise given the amount of flaming that goes on around the dev blogging space - mostly unfair). Juniors are generally the culprits of this kind of error, and that's fine - they aren't to know - but a mentor should pick it up.

    NB: You can use +, - etc in Scala with BigDecimal.

    Jon

  5. Um, what's wrong with Double?
