examples of floating point problems https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/
I disagree that this is the only case.
If (a) you actually care about the relative precision of results and (b) you can structure the whole computation so that it's well behaved (i.e. the derivative of the log of the result with respect to the log of any intermediate value is bounded by a reasonably small constant), then floating point is doing precisely what you want. Fixed point would _not_ be doing what you want, because it has fixed absolute precision (so it would have worse relative precision when values are small).
This is not as contrived a setup as it sounds. Many physical processes can be described by such computations (because all intermediate values are noisy, the process becomes chaotic unless it's well behaved in the sense above). It's also how people do computations by hand (e.g. carrying everything to some number of significant digits), so it's a model that's very familiar to many.
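To make the float-vs-fixed contrast concrete, here's a small sketch (my illustration, not from the thread): approximate 2/3 at several scales with binary floating point and with a toy fixed-point format carrying 6 decimal digits after the point, and compare relative errors.

```python
# Float keeps roughly constant *relative* error across scales, while
# fixed point keeps roughly constant *absolute* error, so its relative
# error blows up for small values. to_fixed is a toy model.
from fractions import Fraction

def rel_err(approx, true):
    return abs(Fraction(approx) - true) / true

def to_fixed(true, digits=6):
    # Round to the nearest multiple of 10**-digits (fixed absolute step).
    q = 10 ** digits
    return Fraction(round(true * q), q)

for scale in [Fraction(1), Fraction(1, 10**6), Fraction(1, 10**12)]:
    true = Fraction(2, 3) * scale
    f_err = rel_err(float(true), true)     # stays tiny at every scale
    x_err = rel_err(to_fixed(true), true)  # grows as the scale shrinks
    print(float(f_err), float(x_err))
```

At the smallest scale the fixed-point value rounds all the way to 0 (relative error 1), while the float's relative error is still around machine epsilon.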
I don't see any alternative. When I do computations on paper that involve values measured with some uncertainty, I essentially use base 10 floating point.
_If you're in the right situation_ those eccentricities do not matter. It doesn't matter to me that I can't represent 2/3 exactly in decimal floating point -- the value I'm going to multiply that 2/3 by in a moment comes from a measurement with some relative error anyway, so I can just choose an appropriately accurate representation of 2/3.
@mathew @robryk @b0rk Not very specialized though. It probably includes the majority of all computing happening in the world. Take games for instance; many games are, at heart, just science or engineering simulators with a beautiful interface.
But games show how to move forward: those simulators are called "physics engines" or "spatial sound libraries" and most game creators don't roll their own. Similarly you should use a numerical library whenever possible rather than do it yourself.
@jannem @robryk @b0rk The game programming that requires floating point is a highly specialized area, handled by a few programmers. As you say, everyone else just uses the engines they build.
Numerical libraries are the same deal. A very specialized area few programmers are involved in.
Most programming isn’t in areas like that where floating point is necessary.
Note that FP is a suboptimal choice for things where your inaccuracy is predominantly absolute (e.g. time-since-epoch). It shines when your inaccuracy is predominantly relative. That is usually a good assumption if your measurements are done using a measure appropriate for the scale you're measuring.
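To put numbers on the time-since-epoch point (my sketch, not from the thread): a binary64 timestamp near 1.7e9 seconds has an absolute resolution of a couple hundred nanoseconds, so smaller increments simply vanish, whereas an integer count of nanoseconds (fixed point) keeps uniform absolute precision.

```python
# Around "now" (~1.7e9 s since the epoch) a binary64 ulp is
# 2**30 * 2**-52 = 2**-22 ≈ 2.4e-7 s, so adding a nanosecond is a no-op:
t = 1.7e9                          # seconds since epoch, as a float
print(t + 1e-9 == t)               # True: the nanosecond is absorbed

# Fixed point (integer nanoseconds) has uniform absolute precision:
t_ns = 1_700_000_000_000_000_000   # nanoseconds since epoch, as an int
print(t_ns + 1 == t_ns)            # False
```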
@robryk @JuergenStrobel @b0rk True, but I still think it's time we routinely did floating point calculations in decimal, rather than binary, to reduce the number of surprises. Apparently this is finally moving towards being a mainstream opinion these days, because IEEE 754-2008 is a thing.
https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_754-2008_encoding
That doesn't avoid most of the surprises:
- adding lots of small values is still tricky,
- still ~nothing is associative,
- still 1.0/3.0 + 1.0/3.0 + 1.0/3.0 != 1.0,
...
The only one it seems to avoid is that numbers entered as decimal fractions are represented precisely. However, that is, arguably, harmful: it makes it less apparent that other numbers are not, and encourages doing things in a way that relies on some outputs being exact.
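A quick check with Python's `decimal` module (an IEEE 754-2008-style decimal format, 28 significant digits by default) shows those surprises surviving the move to base 10; this is my sketch, not something from the thread:

```python
# Decimal floating point, 28 significant digits by default.
from decimal import Decimal

# 1/3 still isn't representable, so the classic identity still fails:
third = Decimal(1) / Decimal(3)
print(third + third + third == 1)   # False

# And addition still isn't associative once magnitudes differ enough:
a, b, c = Decimal("1e29"), Decimal("-1e29"), Decimal(1)
print((a + b) + c == a + (b + c))   # False: 1 on the left, 0 on the right
```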
@robryk @JuergenStrobel @b0rk Right, which is why I think floating point should be avoided as much as possible, and when you have to do it, you should try to use decimal floats, and ideally not ones with fixed precision.
If I were designing a programming language I'd call the data type "approximate" rather than "float", to act as a red flag to programmers.
@mathew @robryk @b0rk There's no way to do automatic variable-precision FP; it would devolve to rationals, which is intractable in general. Also, decimal FP is less dense, less accurate, and less performant than binary. Many scientists and other users will prefer speed and accuracy over being able to represent 1/10 exactly, especially in the common case where they never look at the numbers at all.
@robryk @b0rk My take is that floating point is for when you want to do non-integer arithmetic fast, without using much memory, and you're willing to trade off some accuracy to do that. And that it used to be necessary for many everyday computations when computers were much slower and had kilobytes of RAM, but these days should really only be used for special purposes like 3D graphics and neural net simulations where performance is still critical.