examples of floating point problems https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/
@b0rk Maybe it's useful to mention what that problem is? The way I phrase it is that FP is intended for computations where you care about relative accuracy.
I disagree that this is the only case.
If (a) you actually care about the relative precision of the results and (b) you can structure the whole intermediate computation so that it's well behaved (i.e. the derivative of the log of the result with respect to the log of any intermediate value is bounded by a reasonably small constant), then floating point is doing precisely what you want. Fixed point would _not_ be doing what you want there, because it has fixed absolute precision (and so has worse relative precision when the values are small).
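A minimal sketch of the relative-vs-absolute-precision point (not from the thread; the six-decimal-place fixed-point scale and the helper names are made up for illustration): binary floats carry roughly the same number of significant digits at any magnitude, while a fixed-point scheme carries the same number of decimal places, so its relative precision collapses for small values.

```python
# Hypothetical fixed-point representation: integers scaled by 10**6,
# i.e. exactly six decimal places of absolute precision.
FIXED_SCALE = 10**6

def to_fixed(x: float) -> int:
    return round(x * FIXED_SCALE)

def from_fixed(n: int) -> float:
    return n / FIXED_SCALE

for x in (1.0, 1e-3, 1e-9):
    float_third = x / 3                          # ~16 significant digits at any scale
    fixed_third = from_fixed(to_fixed(x) // 3)   # always 6 decimal places
    print(f"x={x:g}: float x/3 = {float_third!r}, fixed x/3 = {fixed_third!r}")

# For x = 1e-9 the fixed-point value rounds all the way to 0:
# 100% relative error, while the float result is still good to
# about 16 significant digits.
```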
This is not as contrived a setup as it sounds. Many physical processes can be described by such computations (all intermediate values are noisy, so the process becomes chaotic if it's not well behaved in the sense above). It's also how people do computations by hand (e.g. carrying some fixed number of significant digits), so it's a model that's familiar to many.
@robryk @b0rk I wouldn't suggest fixed point as the alternative, except perhaps for things like currencies. I would suggest variable-size decimal floats and doing the computation the way people expect.
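Python's `decimal` module is one concrete stand-in for this idea (my example, not something the thread names; its precision is set per-context rather than being truly variable-size): decimal inputs round-trip the way people expect, where binary floats don't.

```python
from decimal import Decimal

# Classic binary-float surprise: 0.1, 0.2 and 0.3 are not exactly
# representable in base 2, and the rounding errors don't cancel.
print(0.1 + 0.2 == 0.3)  # False

# Decimal floats represent these values exactly, so the arithmetic
# matches what a person doing it by hand would get.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```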