examples of floating point problems https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/
@b0rk Maybe it's useful to mention what that problem is? The way I phrase it is that FP is intended for computations where you care about relative accuracy.
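A quick Python sketch of what "relative accuracy" means here: the gap between adjacent doubles (`math.ulp`, in Python's standard library) grows with magnitude, so absolute rounding error grows with the value, but the *relative* error stays roughly constant at about 2^-52.

```python
import math

# The spacing between adjacent doubles near x is math.ulp(x).
# Absolute spacing grows with x, relative spacing stays ~2^-52.
for x in [1.0, 1e6, 1e12]:
    gap = math.ulp(x)   # absolute precision near x
    rel = gap / x       # relative precision near x
    print(f"x={x:g}  gap={gap:.3g}  relative={rel:.3g}")
```

The relative column barely moves across twelve orders of magnitude, which is exactly the "you care about relative accuracy" property.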
@robryk @b0rk My take is that floating point is for when you want to do non-integer arithmetic fast, without using much memory, and you're willing to trade some accuracy for that. It used to be necessary for many everyday computations when computers were much slower and had kilobytes of RAM, but these days it should really only be used for special purposes like 3D graphics and neural net simulations where performance is still critical.
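The accuracy trade-off shows up even in trivial arithmetic. The classic example: 0.1 and 0.2 have no exact binary representation, so their sum picks up a tiny rounding error.

```python
import math

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# The usual workaround is a tolerance-based comparison instead of ==:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```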
I disagree that this is the only case.
If (a) you actually care about the relative precision of results and (b) you can structure the intermediate computation so that it's well behaved (i.e. the derivative of the log of the result with respect to the log of any intermediate value is bounded by a reasonably small constant), then floating point is doing precisely what you want. Fixed point would _not_ be doing what you want there, because it has fixed absolute precision (so it would have worse relative precision when the values are small).
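A minimal sketch of that contrast, simulating fixed point by rounding every value to 6 decimal places (an arbitrary choice for illustration): a float keeps ~16 significant digits at any scale, while the fixed-point version loses relative precision as the values shrink.

```python
# "Fixed point" simulated by rounding to a fixed number of decimal places.
def fixed(x, places=6):
    return round(x, places)

for scale in [1.0, 1e-4, 1e-8]:
    a = 1.2345678901234567 * scale
    rel_err = abs(fixed(a) - a) / a
    # A float's relative representation error is ~1e-16 at every scale;
    # the fixed-point relative error blows up as the value gets small.
    print(f"scale={scale:g}  fixed-point relative error={rel_err:.2e}")
```

At the smallest scale the fixed-point value rounds all the way to zero (relative error of 1), which is exactly the "worse precision when the values are small" failure mode.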
This is not as contrived a setup as it sounds. Many physical processes can be described by such computations (all their intermediate values are noisy, so the process becomes chaotic unless it's well behaved in the sense above). It's also how people do computations by hand (e.g. carrying some fixed number of significant digits), so it's a model that's very familiar to many.
@robryk @b0rk https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
For example, https://gmplib.org
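GMP itself is a C library; as a self-contained stand-in for the same idea, Python's `fractions` module gives exact rational arithmetic on top of Python's arbitrary-precision integers, so the 0.1 + 0.2 rounding problem simply doesn't arise:

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding at any step.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# The trade-off: numerators/denominators can grow without bound,
# and every operation is far slower than hardware floating point.
```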