@danluu The main issue with decimal arithmetic in general is that it's trying to address a PR problem that non-practitioners _think_ is a numerical problem when it's really not.

The one justification you ever see for it is the "0.1 + 0.2 != 0.3" issue (usually those exact numbers).

In some cases that matters, e.g. certain financial computations, but honestly those are usually better off either using doubles scaled by 10000 (to meet the common 4-decimal-digit requirement) or fixed-point arithmetic, depending on the application.
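As a rough illustration of the scaled-integer approach (all names here are hypothetical, not from any real library): store amounts as plain integers scaled by 10000, so additions are exact.

```python
# Illustrative sketch: currency amounts as integers scaled by 10_000
# (four fractional decimal digits). Integer addition is exact, so the
# "0.1 + 0.2" problem simply does not arise in this representation.

SCALE = 10_000  # four fixed decimal digits

def to_fixed(s: str) -> int:
    """Parse a decimal string like '0.1' into a scaled integer."""
    whole, _, frac = s.partition(".")
    frac = (frac + "0000")[:4]          # pad/truncate to 4 digits
    sign = -1 if whole.startswith("-") else 1
    return sign * (abs(int(whole)) * SCALE + int(frac))

# 0.1 + 0.2 == 0.3 exactly here:
assert to_fixed("0.1") + to_fixed("0.2") == to_fixed("0.3")
```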

@danluu Anyway, re: the PR problem: it really offends people that 0.1 + 0.2 != 0.3, but the real issue is that any finite representation must eventually lose precision, and while losing individual bits does not look "nice" for decimal fractions, it is still numerically much preferable to losing base-10 digits.

It is telling that nobody ever seems to give examples involving multiplication, much less something actually tricky like power functions.
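For the record, multiplication rounds in decimal too; with a deliberately small precision it's easy to see (again just a sketch with Python's decimal module):

```python
from decimal import Decimal, getcontext

# Products generally need twice the digits of their operands, so
# decimal arithmetic must round them, exactly like binary does.
getcontext().prec = 4
a = Decimal("1.005")
print(a * a)   # 1.010, not the exact product 1.010025
```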


@rygorous @danluu

If you process floating point values, they often come from some noisy measurement, so you can't avoid understanding numerical stability no matter how you process them. From there it is only a small step to thinking about the numerical stability of each step of a computation separately, as if noise were added everywhere in between (which is often a good enough approximation of what FP actually does).
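A minimal sketch of that "noise added at every step" mental model: each FP operation behaves like the exact result perturbed by a tiny relative error, so information below the noise floor at a given magnitude is simply lost, and reordering steps changes what survives.

```python
import sys

# Each double operation carries relative error up to machine epsilon:
eps = sys.float_info.epsilon   # about 2.2e-16

# The '1' is below the noise floor at magnitude 1e16, so it vanishes:
print((1e16 + 1) - 1e16)   # 0.0

# Reordering the same computation keeps it:
print(1e16 - 1e16 + 1)     # 1.0
```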

Qoto Mastodon