examples of floating point problems https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/
@b0rk Maybe it's useful to mention what that problem is? The way I phrase it is that FP is intended for computations where you care about relative accuracy.
@robryk @b0rk My take is that floating point is for when you want to do non-integer arithmetic fast, without using much memory, and you're willing to trade off some accuracy to do that. And that it used to be necessary for many everyday computations when computers were much slower and had kilobytes of RAM, but these days should really only be used for special purposes like 3D graphics and neural net simulations where performance is still critical.
Note that FP is a suboptimal choice for things where your inaccuracy is predominantly absolute (e.g. time-since-epoch). It shines when your inaccuracy is predominantly relative. That is usually a good assumption if your measurements are done in units appropriate for the scale you're measuring.
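A quick sketch of the time-since-epoch point: a float64's absolute resolution degrades as the magnitude grows, so a timestamp around "now" (~1.7e9 seconds, an assumed illustrative value) can't resolve increments that the same type handles easily near zero.

```python
# Sketch: why float64 is awkward for time-since-epoch.
# The spacing between adjacent doubles (the ulp) grows with magnitude,
# so absolute accuracy shrinks even though relative accuracy stays ~2^-53.
import math

t = 1_700_000_000.0          # a plausible Unix timestamp, in seconds
print(math.ulp(t))           # ~2.4e-07: adjacent doubles are ~240 ns apart here
print(t + 1e-7 == t)         # True: a 100 ns increment rounds away entirely

# Near zero the very same type resolves far finer steps:
print(math.ulp(1.0))         # ~2.2e-16
```

This is why fixed-point (e.g. integer nanoseconds) is the usual choice for timestamps: the absolute resolution is uniform across the whole range.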
That doesn't avoid most of the surprises:
- adding lots of small values is still tricky,
- still ~nothing is associative,
- still 1.0/3.0 + 1.0/3.0 + 1.0/3.0 != 1.0,
...
The only one it seems to avoid is that numbers entered as decimal fractions are represented precisely. However, that is, arguably, harmful: it makes it less apparent that other numbers are not, and encourages doing things in a way that relies on some outputs being exact.
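Those surprises are easy to reproduce with Python's `decimal` module (an IEEE-style decimal floating point with adjustable precision); the tiny precision here is just an assumption to make the rounding visible:

```python
# Sketch: decimal floating point keeps the classic FP surprises.
from decimal import Decimal, getcontext

getcontext().prec = 4  # 4 significant digits, so rounding is easy to see

# 1/3 has no finite decimal expansion either, so it rounds on entry...
third = Decimal(1) / Decimal(3)   # 0.3333
print(third + third + third)      # 0.9999, not 1

# ...and addition is still not associative once rounding kicks in:
a, b, c = Decimal("1000."), Decimal("-1000."), Decimal("0.0001")
print((a + b) + c)   # the tiny term survives this grouping
print(a + (b + c))   # but is absorbed by this one: the results differ
```

The one thing decimal does fix: `Decimal("0.1") + Decimal("0.2") == Decimal("0.3")` holds, because those literals are exact in base 10.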
@robryk @JuergenStrobel @b0rk Right, which is why I think floating point should be avoided as much as possible, and when you have to use it, you should try to use decimal floats, and ideally not ones with fixed precision.
If I were designing a programming language I'd call the data type "approximate" rather than "float", to act as a red flag to programmers.
@mathew @robryk @b0rk There's no way to do automatic variable-precision FP; it would devolve to rationals and become intractable in general. Also, decimal FP is less dense, less accurate, and slower than binary. Many scientists and other users will prefer speed and accuracy over being able to represent 1/10 exactly, especially in the common case where they don't see any individual numbers at all.
@robryk @JuergenStrobel @b0rk True, but I still think it's time we routinely did floating point calculations in decimal, rather than binary, to reduce the number of surprises. Apparently this is finally moving towards being a mainstream opinion these days, because IEEE 754-2008 is a thing.
https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_754-2008_encoding