
@b0rk

Nit:
> if you add very big values to very small values, you can get inaccurate results (the small numbers get lost!)

This is either not true or misleading. The result is the closest representable value to the exact result of the computation. The exact result isn't representable, but as soon as you consider multiplication, that's the case for fixed-point values too.
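For instance, a quick Python sketch of the "small addend disappears, yet the result is still correctly rounded" point:

```python
import math

# A float sum returns the representable value closest to the exact result.
# When one addend is smaller than the spacing (ulp) around the other, the
# closest representable value is just the big addend, so the small one
# "disappears" -- but the answer is still correctly rounded.
big = 2.0 ** 53           # above this, consecutive doubles are 2 apart
print(math.ulp(big))      # 2.0: gap between neighboring doubles here
print(big + 1.0 == big)   # True: exact result 2**53 + 1 rounds back to 2**53
print(big + 2.0 == big)   # False: 2**53 + 2 is exactly representable
```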

I'm not sure if you intentionally mentioned only some of the solutions for the odometer. An atypical one that's missing is to introduce randomness: if the addend is smaller than the minimum representable difference, increment by min_repr_diff with probability addend/min_repr_diff. Apart from that, there are standard approaches for summing long sequences of fp values.
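A minimal sketch of that randomized idea (stochastic rounding), using `math.ulp` for the minimum representable difference at the current total; the function name and structure are made up for illustration:

```python
import math
import random

def add_stochastic(total, addend):
    """Add `addend` to `total`, preserving tiny increments on average.

    If `addend` is smaller than the gap between `total` and its float
    neighbor, plain addition can silently drop it. Instead, bump `total`
    by one ulp with probability addend / ulp(total), so the expected
    value of the running sum stays correct.
    """
    step = math.ulp(total)
    if addend >= step:
        return total + addend       # big enough to survive normal addition
    if random.random() < addend / step:
        return total + step         # occasionally round up by one ulp
    return total                    # usually leave the total unchanged
```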

> example 4: different languages sometimes do the same floating point calculation differently

GPUs often implement some FP functions so that they give slightly different results. I vaguely recall that libm's trig functions can also give different results depending on which exact CPU libm is compiled for (because it will or won't use particular intrinsics for trig functions depending on that).

> example 5: the deep space kraken

A similar cool-looking issue from Outer Wilds: youtube.com/watch?v=ewSgPdBjNB (very minor spoilers for OW).

> I think the lesson here is to never calculate the same thing in 2 different ways with floats.

If you squint, this looks like the lesson from, I think, most of the examples (all but the odometer ones?).
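The classic demonstration of why "the same calculation" two ways can disagree is that float addition isn't associative:

```python
# Two algebraically equivalent ways of computing the same sum can give
# different floats, because rounding happens after every operation and
# float addition is not associative.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False
print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
```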
