we all know floating point numbers can be treacherous, but what are specific examples of when they've betrayed you?
so far I have:
* money calculations where you need to get the exact correct answer
* twitter's tweet IDs are bigger than 2^53 and can't be represented exactly as a JavaScript number; you have to use a string (see the sketch after this list)
* implementing numerical algorithms (like matrix multiplication) (I've never done this)
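A quick TypeScript sketch of the first two items (the 19-digit ID below is made up for illustration, not a real tweet ID):

```typescript
// Money: 0.1 and 0.2 have no exact binary representation,
// so their sum isn't exactly 0.3.
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.1 + 0.2);         // 0.30000000000000004

// IDs above 2^53: integers past Number.MAX_SAFE_INTEGER (2^53 - 1)
// silently lose precision when parsed as a number.
console.log(Number("1234567890123456789")); // 1234567890123456800
console.log(BigInt("1234567890123456789")); // 1234567890123456789n — keep it as a string or BigInt
```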
> problem: adding very small numbers to large numbers results in no change
Or, the flip side: the ability to represent numbers that are too small to make a change when added to, e.g., 1, yet are nonzero. (The reason I want to point out this viewpoint is that the way we fix this by moving to fixed point is in large part by _removing the ability to represent those values_.)
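Both directions of that, sketched in TypeScript (1e16 is just a convenient "large" value: the gap between adjacent doubles there is 2, so an addend of 1 rounds away):

```typescript
// The small addend is absorbed entirely.
console.log(1e16 + 1 === 1e16); // true

// The flip side: nonzero values too small to change 1.0 are
// still representable — fixed point "fixes" the first problem
// largely by removing values like this one.
const tiny = Number.EPSILON / 2; // 2^-53
console.log(tiny > 0);           // true
console.log(1 + tiny === 1);     // true
```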
> problem: +0 != -0
I don't think this is much of a problem; the real problem is usually trying to use equality comparisons on FP values at all. It's sometimes possible to do that in a way that makes sense, but it's very fiddly (and once you use anything complicated enough, like trig functions, the guarantees library functions give you no longer uniquely identify the result).
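Concretely, in JS/TypeScript semantics (IEEE 754 comparison itself says the two zeros are equal; they're only distinguishable through operations like division — and `approxEqual` below is a hypothetical helper, with the tolerance choice being exactly the fiddly part):

```typescript
console.log(+0 === -0);          // true — IEEE equality treats them as equal
console.log(Object.is(+0, -0)); // false — they are distinct values
console.log(1 / +0, 1 / -0);     // Infinity -Infinity

// The actual fiddliness: "equal" FP results usually means an
// explicit tolerance, and picking relTol is the hard part.
const approxEqual = (a: number, b: number, relTol = 1e-9): boolean =>
  Math.abs(a - b) <= relTol * Math.max(Math.abs(a), Math.abs(b));

console.log(0.1 + 0.2 === 0.3);           // false
console.log(approxEqual(0.1 + 0.2, 0.3)); // true
```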
> example: using NaN as a key in a hashmap (https://research.swtch.com/randhash)
The only case where I've seen an FP-keyed hash table or map that wasn't an obvious mistake was memoizing the results of a function; I don't recall any such structure that made sense and wasn't serving as a cache.
I'd say the more general problem here is the lack of common data structures that represent mappings from FP values in a way that actually fits the use cases.
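A sketch of both halves of that — why NaN keys break `===`-based lookups, and the memoization case that does make sense. Note that JS's built-in Map happens to sidestep the NaN problem by using SameValueZero, which treats NaN as equal to itself; other languages' hash maps may not, and `memoize` here is a hypothetical helper for illustration:

```typescript
// NaN is the one value not equal to itself, so any lookup
// built on === can never find a NaN key again.
console.log(NaN === NaN); // false

// JS's Map uses SameValueZero rather than ===, so NaN keys
// actually work here — behavior varies by language.
const m = new Map<number, string>();
m.set(NaN, "found");
console.log(m.get(NaN)); // "found"

// The one FP-keyed map that made sense: a memoization cache.
function memoize(f: (x: number) => number): (x: number) => number {
  const memo = new Map<number, number>();
  return (x) => {
    if (!memo.has(x)) memo.set(x, f(x));
    return memo.get(x)!;
  };
}
const cachedSqrt = memoize(Math.sqrt);
console.log(cachedSqrt(2), cachedSqrt(2)); // second call served from the cache
```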
@b0rk
Hm~ another way of looking at the "adding small and large numbers" problem.
It's not a problem by itself that large + small = the_same_large. The problem is that large + small + small + ... + small = the_same_large != large + (small + small + ... + small).
So the problem we have is that FP addition isn't associative.
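A sketch of exactly that, again near 1e16 where the gap between adjacent doubles is 2:

```typescript
// Each individual +1 rounds straight back to 1e16, so a
// million of them accomplish nothing...
let sum = 1e16;
for (let i = 0; i < 1_000_000; i++) sum += 1;
console.log(sum === 1e16); // true

// ...while grouping the small terms first gives the exact answer.
console.log(1e16 + 1_000_000);                  // 10000000001000000
console.log((1e16 + 1) + 1 === 1e16 + (1 + 1)); // false — not associative
```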