@danluu The main issue with decimal arithmetic in general is that it's trying to address a PR problem that non-practitioners _think_ is a numerical problem when it's really not.
The one justification you ever see for it is the "0.1 + 0.2 != 0.3" issue (usually those exact numbers).
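For concreteness, here's that classic example in Python (binary64 doubles, same as most languages):

```python
# 0.1 and 0.2 have no exact binary64 representation, so their
# computed sum differs from the double nearest to 0.3 by one ulp.
a = 0.1 + 0.2
print(a)        # 0.30000000000000004
print(a == 0.3) # False
# The error is tiny: roughly one part in 10^16.
print(abs(a - 0.3))
```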
In some cases that matters, e.g. certain financial computations, but honestly those are usually better off either using doubles scaled by 10000 (to meet the 4-decimal-digit requirement) or fixed-point, depending.
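A minimal sketch of the fixed-point variant (the names and the parse helper are illustrative, not from the post): represent amounts as integer counts of 1/10000 of a unit, so addition and comparison are exact.

```python
from decimal import Decimal

# Hypothetical scale for the 4-decimal-digit requirement:
# amounts are stored as integer multiples of 1/10000.
SCALE = 10_000

def to_fixed(s: str) -> int:
    """Parse a decimal string like '0.10' into scaled integer units."""
    return int(Decimal(s) * SCALE)

price = to_fixed("0.10")  # 1000 units
tax = to_fixed("0.20")    # 2000 units
total = price + tax       # exact integer addition
assert total == to_fixed("0.30")  # no 0.1 + 0.2 surprise
print(total / SCALE)      # 0.3
```

The scaled-doubles version is the same idea, except the scaled values live in doubles, which stay exact for integers up to 2^53.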
If you have floating point values that you process, they often come from some noisy measurement, and then you can't avoid understanding numerical stability regardless of how you process them. From there it's only a small step to thinking about the numerical stability of each step of a computation separately, as if noise were added everywhere in between (which is often a good enough approximation of what FP actually does).
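That noise model can be sketched directly (the bounds and helper here are illustrative, assuming binary64 with unit roundoff 2^-53): treat each operation as exact plus a small relative perturbation, and cancellation between nearly equal values falls out as amplified relative error.

```python
import random

U = 2.0 ** -53  # unit roundoff for doubles (assumption: binary64)

def noisy(x: float) -> float:
    """Exact value perturbed by random relative noise bounded by U."""
    return x * (1.0 + random.uniform(-U, U))

a = 1.0
b = 1.0 + 2.0 ** -40
exact = b - a  # true difference is 2^-40

# Each operand carries ~2^-53 relative noise, but the difference is
# only ~2^-40, so the relative error of (b - a) can grow to ~2^-13:
# classic cancellation, independent of decimal vs. binary format.
approx = noisy(b) - noisy(a)
print(abs(approx - exact) / exact)
```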