> At the end of the day, 0.1 + 0.2 != 0.3 is a fact you have to live with
That is the one example that floats around a lot, but imho it's also not a very good one: '0.1', '0.2', and '0.3' are not floating point values, so the premise is flawed.
Also, `round(0.1 + 0.2, 15) == 0.3` is true (in Python), so being conscious about rounding things appropriately goes a long way. And I imagine that correct rounding is relevant in monetary calculations no matter what sort of numbers you are using, so while with floats the situation might be more pronounced, I don't see it being such a fundamental problem.
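To make the claim concrete, here is a minimal Python sketch of the classic artifact and how a single rounding step absorbs it (standard CPython, IEEE 754 doubles):

```python
# The sum picks up a tiny representational error...
a = 0.1 + 0.2
print(a)                    # 0.30000000000000004
print(a == 0.3)             # False

# ...but the error sits beyond 15 decimal digits, so one rounding
# step to 15 places lands exactly on the float nearest to 0.3.
print(round(a, 15) == 0.3)  # True
```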
> That is the one example that floats around a lot, but its also imho not very good one. '0.1', '0.2', and '0.3' are not floating point values, so the premise is flawed.
No, it's the entire point. None of the values we deal with day to day are binary floating point, and certainly not currencies. So this sort of representational approximation is a major and constant issue with using floats.
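You can inspect the approximation directly: in Python, constructing a `Decimal` from a float exposes the exact binary64 value the literal was stored as, which is not the decimal value you wrote.

```python
from decimal import Decimal

# Decimal(float) shows the exact value the binary64 double holds:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # slightly above 0.2
print(Decimal(0.3))  # slightly below 0.3
```

None of the three literals round-trips exactly, which is why `0.1 + 0.2` and `0.3` land on different nearby doubles.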
> Also `round(0.1 + 0.2, 15) == 0.3` is true (in python), so being conscious about rounding things appropriately goes long way.
See above: rounding off to correct the error after every arithmetic operation is not the expected norm, nor what developers are taught.
> And I imagine that correct rounding is relevant in monetary calculations no matter what sort of numbers you are using
While that is true, it does not normally need to be done after every arithmetic operation, and especially not after adding values that have already been rounded.
> See above, rounding off and collecting error after every arithmetic operation is not the expected norm and what developers are taught.
Question is, is that a problem with developers or floats? :)
Ecosystem and tooling might help here; iirc that is something Kahan himself has complained about a lot. For example, hypothetically you could have something like FP contexts or specialized high-level types where you can easily express how many digits you are expecting, and the runtime/compiler would manage rounding etc. so that you'd get correct results more often. Tbh that's just off the top of my head, and I haven't thought too much about it.
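For what it's worth, Python's `decimal` module is one existing shape of that idea (though for decimal rather than binary FP): a context object carries the precision and rounding mode, and the runtime applies them to every operation inside the block. A minimal sketch:

```python
from decimal import Decimal, localcontext, ROUND_HALF_EVEN

# The context, not each call site, decides how results are rounded.
with localcontext() as ctx:
    ctx.prec = 2                    # significant digits for every operation
    ctx.rounding = ROUND_HALF_EVEN  # banker's rounding
    total = Decimal("0.1") + Decimal("0.2")

print(total)  # 0.3 -- exact, no per-operation cleanup needed
```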
But I think the question remains: how much of the problem is actually intrinsic to FP, and if you did cross your t's and dot your i's, would there still be some intractable problems in using FP with money? So is the problem "just" that FP can easily be misused, or that it's impossible to use correctly?
I want to emphasize that I do not recommend anyone go using FP for money now. I'm just curious, because it's something I don't fully understand and, well, HN has smart people who can hopefully help me there.
> Question is, is that a problem with developers or floats?
Of course with floats. Requirements come from decimal-expecting people, and developers have to convert those requirements into an algorithm. If there's a fundamental semantic, or at least syntactic, obstacle, it's not a problem with developers.
In other words, if a language/system only has floats as "numbers", it sucks for most business-level calculations.