# Ask HN: How to handle financial calculations precisely?

I am talking about the Java world, but I think this extends to any language.

Basically the problem is the following: given a set of (positive) floating point numbers which sum to x, how do I rescale them into the range [0, 1] so that this new set, when multiplied by x, gives back the original set, up to the second decimal place? The max x I expect to encounter is probably on the order of 10000000000 (10^10), just to be safe.

I've run a quick property based test, and the results are precise only if I use BigDecimal with a MathContext with precision of 8, rounding half up.

Is there some mathematical proof of how much precision I need for this problem?

Edit: sorry the precision was 14, 8 was for a later test with smaller amounts
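A property test along those lines can be sketched like this (a minimal illustration, not the original test; the amounts and the precision of 14 are chosen to match the numbers mentioned above):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class RescaleTest {
    public static void main(String[] args) {
        // Precision 14: amounts up to ~10^10 with two decimals need 12
        // significant digits, plus a little slack for the division step.
        MathContext mc = new MathContext(14, RoundingMode.HALF_UP);
        BigDecimal[] amounts = {
                new BigDecimal("1234567.89"),
                new BigDecimal("0.01"),
                new BigDecimal("9999999999.99"),
        };
        BigDecimal total = BigDecimal.ZERO;
        for (BigDecimal a : amounts) total = total.add(a);

        for (BigDecimal a : amounts) {
            BigDecimal weight = a.divide(total, mc);       // rescaled into [0, 1]
            BigDecimal back = weight.multiply(total, mc)
                    .setScale(2, RoundingMode.HALF_UP);    // back to two decimals
            if (back.compareTo(a) != 0) {
                throw new AssertionError(a + " -> " + weight + " -> " + back);
            }
        }
        System.out.println("round-trip ok");
    }
}
```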

I would encapsulate the set of supported operations in a class along with the rounding mode and scale.

Something like: ------

----- You can then either create a new instance when you need to perform a math operation, or just create a singleton instance if the scale and rounding mode are always the same. For financial calculations in the US, I would recommend using RoundingMode.HALF_EVEN. From the docs: "Rounding mode to round towards the "nearest neighbor" unless both neighbors are equidistant, in which case, round towards the even neighbor. Behaves as for RoundingMode.HALF_UP if the digit to the left of the discarded fraction is odd; behaves as for RoundingMode.HALF_DOWN if it's even. Note that this is the rounding mode that statistically minimizes cumulative error when applied repeatedly over a sequence of calculations. It is sometimes known as "Banker's rounding," and is chiefly used in the USA."
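The elided class might look something like this minimal sketch (the MoneyMath name and API are hypothetical, just to show the encapsulation idea):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

/** Hypothetical wrapper that fixes scale and rounding mode in one place. */
public final class MoneyMath {
    private final int scale;
    private final RoundingMode rounding;

    public MoneyMath(int scale, RoundingMode rounding) {
        this.scale = scale;
        this.rounding = rounding;
    }

    public BigDecimal add(BigDecimal a, BigDecimal b) {
        return a.add(b).setScale(scale, rounding);
    }

    public BigDecimal multiply(BigDecimal a, BigDecimal b) {
        return a.multiply(b).setScale(scale, rounding);
    }

    public BigDecimal divide(BigDecimal a, BigDecimal b) {
        // divide needs an explicit scale and rounding mode here; otherwise
        // non-terminating expansions (e.g. 1/3) throw ArithmeticException
        return a.divide(b, scale, rounding);
    }
}
```

Usage: `new MoneyMath(2, RoundingMode.HALF_EVEN).divide(new BigDecimal("1.00"), new BigDecimal("3"))` yields 0.33 instead of throwing.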

Dinero stores three pieces of data with a dynamic scale: https://v2.dinerojs.com/docs/core-concepts/scale

- amount: expressed in the smallest subdivision of the currency, as an integer.

- currency: unique code, base, and exponent

- scale: the scale is essential to accurately represent monetary values without losing precision. It automatically adapts as needed to ensure you always retain accurate amounts.

---

Multiply[1]: If you need to multiply by a fractional multiplier, you shouldn't use floats, but scaled amounts instead. For example, instead of passing 2.1, you should pass { amount: 21, scale: 1 }. When using scaled amounts, the function converts the returned objects to the safest scale.

Allocate[2] (alternative to dividing): Distribute the amount of a Dinero object across a list of ratios.

[1]: https://v2.dinerojs.com/docs/api/mutations/multiply

[2]: https://v2.dinerojs.com/docs/api/mutations/allocate
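The scaled-amount idea transplanted into Java terms (this sketch is illustrative, not Dinero's actual API): represent the multiplier as an integer plus a scale, do exact integer/decimal arithmetic, and round back to minor units at the end.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaledMultiply {
    // Multiply an amount in cents by a scaled multiplier: (21, scale 1)
    // means 2.1. The intermediate product is exact; we only round once,
    // back to whole cents. Names are hypothetical, not Dinero's API.
    static long multiply(long cents, long multiplier, int scale) {
        BigDecimal product = BigDecimal.valueOf(cents)
                .multiply(BigDecimal.valueOf(multiplier, scale)); // exact
        return product.setScale(0, RoundingMode.HALF_EVEN).longValueExact();
    }

    public static void main(String[] args) {
        // $4.01 * 2.1 = $8.421 -> 842 cents under HALF_EVEN
        System.out.println(multiply(401, 21, 1)); // prints 842
    }
}
```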

What's wrong with UNLIMITED?

https://docs.oracle.com/javase/7/docs/api/java/math/MathCont...

That fails often because the precision is set to 0, which means unlimited. As per the BigDecimal javadoc, results that are not exactly representable then throw an exception, which can happen in division. You don't want that with money, since it always gets rounded to 2 decimal places in the end. The goal is to accumulate as few rounding errors as possible along the way.

I think there are mainly 2 scenarios for strict rules. Either you deal with actual financial data and there will be laws around how to do the rounding correctly for each case, or you're doing some analytics and you can just agree on what kind of error is acceptable. You can do the analysis of what the error in the result may be and check with the people who use the result.

So basically there are either rules around the calculation that you're not aware of yet, or you're doing it for some ad-hoc graphs where the answer is "Yolo, just choose some large number for precision".

I'm dealing with actual financial data and I don't think there are laws regarding exactly how you should deal with money in terms of specific algorithms. If you have any pointers...

Talk to a local accountant. Even if there's no law for your specific situation, there will be one applicable for the country, or at least a known rule everyone follows that's accepted by the local tax org.

The answer is never use floating point. Use BigDecimal for everything.

Sure thing. But how should I use BigDecimal optimally when performing operations between them? Which MathContext should I set when dealing with money, specifically when converting between amounts and percentages? I can't seem to find references online. Ideally I'd like some theorem that proves I need a precision of up to n digits for, say, amounts on the order of magnitude of ten billion.

Might just be me, but I always use integers. In calculations with multiplication, I always round immediately to integers again.

I save financial data in cents (or whatever the lowest is), and only show it as a formatted number in templates. Stripe and other payment providers seem to do the same.

How do you handle divisions? Do you have any sources for what payment providers do? Genuinely interested.

There are some libraries for dealing with currency. I don't remember the names.

But more importantly, what exactly are you trying to do? It sounds like you want to convert floats to percents and back again to get the same exact values. Double or BigDecimal should be fine to get back to the second decimal place in your scenario. I don't have a proof for that, but you could generate tests and data to exercise your system. If you're really concerned about getting the original values back, you can create an audit table to preserve them, depending on how this scenario is applied in real life.

Division is always going to give you something fiddly, if it really divides and stores the result (integer cents) rather than keeping the fractions around.

(a/3)*3 + (b/3)*3 + (c/3)*3 is not going to be the same as a+b+c. (The *3 and the summing-up of the parts can happen years after the a/3 parts.)

So financially, the last part of a sum is calculated as a remainder, not additively. E.g. the (c/3)*3 is "calculated" as a+b+c minus the other two. I.e. 33+33+33... doesn't make 100, so the last 33 is actually 100-33-33 = 34. Which is technically incorrect, but... that's life.

Now, which one to choose to be the victim.. there are variants..
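The remainder trick above can be sketched like this (an illustrative allocator; here the last share is chosen as the victim):

```java
import java.util.Arrays;

public class Allocate {
    // Split `total` cents across integer ratios; the last share absorbs
    // the rounding remainder, so the parts always sum back to `total`.
    static long[] allocate(long total, int[] ratios) {
        long ratioSum = Arrays.stream(ratios).sum();
        long[] shares = new long[ratios.length];
        long assigned = 0;
        for (int i = 0; i < ratios.length - 1; i++) {
            shares[i] = total * ratios[i] / ratioSum; // truncating division
            assigned += shares[i];
        }
        shares[ratios.length - 1] = total - assigned; // the "victim" gets the rest
        return shares;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(allocate(100, new int[]{1, 1, 1})));
        // prints [33, 33, 34]
    }
}
```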

This I knew as well and have used in the past. It's perfectly fine if you only do calculations in one direction, say from amounts to percentages. This algorithm guarantees that the sum of the percentages (in this case) is 1 regardless of division errors. It doesn't work if you want coherence between the amounts and the amounts recomputed from the percentages derived from them, because you're accumulating errors.

Edit: never mind, the code I tested with was wrong; this way works. However, the advantage over just increasing the precision is unclear to me (in the back-and-forth coherence scenario).

So you need 10 digits to the left of the decimal and 2 to the right, for 12 total. A Double should be fine.

This seems about right. Work with pennies why not. My very basic test suggests that 1 - 2^(-53) is distinct from 1 in double-land. So if x is the total number of pennies and 2^(-53) * x < 0.5 I can't see why any problem would arise. That's assuming a proper rounding back to integers at the end.
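A quick check of those claims (a sketch, assuming amounts are kept as whole pennies in a double):

```java
public class DoublePennies {
    public static void main(String[] args) {
        // The gap between adjacent doubles just below 1.0 is 2^-53,
        // so 1 - 2^-53 is exactly representable and distinct from 1.0.
        double eps = Math.pow(2, -53);
        System.out.println(1.0 - eps != 1.0); // prints true
        System.out.println(1.0 + eps == 1.0); // prints true (rounds back to 1.0)

        // Whole-penny counts are exact in a double up to 2^53 (~9e15),
        // far above the 10^12 pennies of a ten-billion-dollar amount.
        double pennies = 1_000_000_000_000.0; // 10^12
        System.out.println(pennies + 1.0 - pennies == 1.0); // prints true
    }
}
```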

I usually used Binary Coded Decimal.