Edited the post to add a "floating point isn’t bad or random" section because I really did not want people to take away "floating point is bad". Floating point is amazing! It's just solving an inherently very hard problem.

@b0rk Maybe it's useful to mention what that problem is? The way I phrase it is that FP is intended for computations where you care about relative accuracy.
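
(A minimal Python sketch of what "relative accuracy" means here: the gap between adjacent doubles grows with the magnitude of the value, so the relative rounding error stays roughly constant.)

```python
# The spacing between adjacent doubles (the "ulp") scales with the value,
# so the *relative* rounding error is roughly constant (~2**-53) at every
# magnitude -- that's the "relative accuracy" contract of floating point.
import math

for x in (1e-10, 1.0, 1e10):
    ulp = math.ulp(x)  # absolute spacing between doubles near x
    print(f"x={x:.0e}  ulp={ulp:.3e}  relative spacing={ulp / x:.3e}")
```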

@robryk @b0rk My take is that floating point is for when you want to do non-integer arithmetic fast, without using much memory, and you're willing to trade off some accuracy to do that. And that it used to be necessary for many everyday computations when computers were much slower and had kilobytes of RAM, but these days should really only be used for special purposes like 3D graphics and neural net simulations where performance is still critical.

@mathew @b0rk

I disagree that this is the only case.

If (a) you actually care about the relative precision of results, and (b) you can structure all the intermediate steps of the computation so that they're well behaved (i.e. the derivative of the log of the result with respect to the log of any intermediate value is bounded by a reasonably small constant), then floating point is actually doing precisely what you want. Fixed point would _not_ be doing what you want there, because it has fixed absolute precision (so it would have worse relative precision when the output values are small).
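
(A toy Python sketch of that contrast; the "fixed point" here is simply rounding to six decimal places, chosen for illustration:)

```python
# Fixed point keeps a constant *absolute* precision, so its relative error
# blows up as values shrink; a binary double keeps ~16 significant digits
# at any scale. Toy fixed point: round to 6 fractional decimal digits.
def to_fixed(x, places=6):
    return round(x, places)

for scale in (1.0, 1e-3, 1e-6):
    exact = (1 / 3) * scale            # the double serves as the reference
    fixed = to_fixed(exact)
    print(f"scale={scale:.0e}  fixed-point relative error="
          f"{abs(fixed - exact) / exact:.1e}")
# prints relative errors of roughly 1e-6, 1e-3, and 1e+0: the smaller the
# value, the worse fixed point does, while the double stays accurate.
```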

This is not as contrived a setup as it sounds. Many physical processes can be described by such computations (all the intermediate values are noisy, so the process becomes chaotic unless it's well behaved in the sense above). It's also how people do computations by hand (e.g. carrying everything to some number of significant digits), so it's a model that's already familiar to many.
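
(The by-hand model maps directly onto decimal floating point at low precision; a minimal sketch with Python's decimal module:)

```python
# Working to 3 significant figures by hand is exactly decimal floating
# point at precision 3: every intermediate result is rounded to 3 digits.
from decimal import Decimal, getcontext

getcontext().prec = 3
result = Decimal("9.81") * Decimal("1.25") / Decimal("3.7")
print(result)  # 3.32 -- each intermediate step rounded to 3 sig. digits
```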

@robryk @b0rk I wouldn't suggest fixed point as the alternative, except for things like currencies perhaps. I would suggest variable size decimal floats and doing the computation the way people expect.

@mathew @b0rk

What do you mean by variable size floats? Which size varies and what controls how it varies (e.g. what is the size of a result of an arithmetic operation)?

@mathew @b0rk

Do you mean arbitrary precision rationals, or arbitrary precision values that are rationals with a power of 2 in the denominator? Note that the latter (the type called `mpf` in gmp) can _not_ represent e.g. 1/3.
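
(Concretely: converting the double nearest to 1/3 into an exact rational exposes the power-of-two denominator a binary float is stuck with.)

```python
# The binary double closest to 1/3, written as an exact rational: the
# denominator is 2**54, not 3, so 1/3 itself is not representable.
from fractions import Fraction

print(Fraction(1 / 3))  # 6004799503160661/18014398509481984
```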

@robryk @b0rk I was thinking arbitrary precision decimal floating point numbers, but it’s nice if languages also have support for fractions so they can compute with 1/3 accurately, yes.
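
(One existing realization of the fractions idea, though not necessarily what's meant here, is Python's fractions module:)

```python
# Exact rational arithmetic: 1/3 is stored as a numerator/denominator
# pair, so summing three of them gives exactly 1 with no rounding.
from fractions import Fraction

third = Fraction(1, 3)
print(third + third + third)  # 1
print(third * 3 == 1)         # True
```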

@mathew @b0rk

Arbitrary precision decimal or binary floating point either:
- requires you to actually specify the precision (so it's "normal" FP, just wider), or
- doesn't support division (because a result like 1/3 has no finite decimal or binary expansion, so no choice of precision makes it exact).
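
(Python's decimal module, an arbitrary-precision decimal floating point type, is a concrete instance of the first horn: you pick a precision, and division rounds to it.)

```python
# Arbitrary-precision decimal FP still makes you pick a precision, and
# 1/3 is then rounded to that many digits -- wider, but still "normal" FP.
from decimal import Decimal, getcontext

getcontext().prec = 30
print(Decimal(1) / Decimal(3))  # 0.333333333333333333333333333333
getcontext().prec = 60
print(Decimal(1) / Decimal(3))  # more digits, still not exact
```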

@robryk @b0rk Yes. But with arbitrary precision BCD you can specify the precision in base 10 decimals, and the calculation is carried out in base 10, so the effect of precision limits is much easier to understand. Plus you’re not limited to whatever precision your binary (hardware) floats give you.
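
(A small demonstration of the base-10 point, with Python's decimal standing in for BCD arithmetic:)

```python
# Decimal arithmetic is carried out in base 10, so 0.1 is exact and any
# rounding happens at a decimal digit you chose -- unlike binary doubles.
from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2")) # 0.3
```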
