we all know floating point numbers can be treacherous, but what are specific examples of when they've betrayed you?
so far I have:
* money calculations, where you need an exactly correct answer (binary floats can't represent most decimal fractions)
* Twitter's tweet IDs are bigger than 2^53, so they can't be represented exactly as a JavaScript number; you have to use a string
* implementing numerical algorithms (like matrix multiplication) (I've never done this)
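A quick sketch of the first two bullets in Python (the tweet ID here is made up, just something bigger than 2^53):

```python
# Money: 0.1 and 0.2 have no exact binary representation.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False

# The usual fix: decimal arithmetic (or integer cents).
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # Decimal('0.3')

# IDs: integers above 2**53 silently lose precision as float64.
tweet_id = 1234567890123456789  # hypothetical ID, > 2**53
print(int(float(tweet_id)) == tweet_id)  # False — the float rounds it
```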
trying to summarize a bit so far:
problem: NaN exists
example: using NaN as a key in a hashmap (https://research.swtch.com/randhash)
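The hashmap weirdness comes from NaN not being equal to itself. Python's dicts show the same thing (a small sketch; the linked post is about Go's maps):

```python
nan = float("nan")
print(nan == nan)  # False — NaN isn't equal to itself

# Two distinct NaN objects become two distinct dict entries,
# because the equality check between keys always fails.
d = {float("nan"): "first", float("nan"): "second"}
print(len(d))  # 2

# Looking up with the *same* object works only because dicts
# short-circuit on identity before trying ==.
d2 = {nan: "found"}
print(d2[nan])  # found
```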
problem: adding very small numbers to large numbers results in no change
example: vanishing gradient in machine learning
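A tiny demo of the absorption problem (the "gradient update" framing here is my own illustration, not from any ML library):

```python
# float64 carries ~15-16 significant decimal digits,
# so adding 1 to 1e16 does literally nothing.
big = 1e16
print(big + 1 == big)  # True

# Same shape as a vanishing update: a tiny step added to a
# large value rounds away entirely.
weight = 1e8
gradient = 1e-9
print(weight + gradient == weight)  # True — the update vanished
```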
problem: arithmetic on large numbers in floats is inaccurate
example 1: subtracting timestamps https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
example 2: implementing a game engine / physics simulation, things get jittery far away from the origin
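The timestamp example can be simulated in pure Python by round-tripping through a 4-byte float, which is what storing a value in a float32 does to it (a sketch; the constants are arbitrary):

```python
import struct

def to_float32(x):
    # Round-trip through 4 bytes to simulate storing x in a float32.
    return struct.unpack("f", struct.pack("f", x))[0]

# A current-ish Unix timestamp is ~1.7e9 seconds. At that magnitude
# a float32's resolution is 128 seconds, so a whole minute disappears:
t = 1700000000.0
print(to_float32(t + 60) == to_float32(t))  # True
print(to_float32(t + 60) - to_float32(t))   # 0.0 — the subtraction sees nothing
```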
problem: +0 and -0 are distinct values (they compare equal with ==, but some operations can tell them apart)
example: ???
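I don't have a real-world war story for this one yet, but here's a sketch of how the two zeros behave differently even though they compare equal:

```python
import math

pos, neg = 0.0, -0.0
print(pos == neg)  # True — they compare equal...

# ...but they carry different signs, and some functions care:
print(math.copysign(1.0, pos))  # 1.0
print(math.copysign(1.0, neg))  # -1.0
print(math.atan2(0.0, -1.0))    # 3.141592653589793
print(math.atan2(-0.0, -1.0))   # -3.141592653589793 — branch cut flips
```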
Hm~ another way of looking at the "adding small and large numbers" problem.
It's not a problem by itself that large + small == large. The problem is that (large + small + small + ... + small) == large, while large + (small + small + ... + small) != large — you get different answers depending on the order you add things in.
So, the problem we have is that floating point addition isn't associative.
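The non-associativity is easy to see in a few lines of Python:

```python
large = 1e16
small = 1.0

# Adding the smalls one at a time: every single addition is absorbed.
left = large
for _ in range(1000):
    left += small
print(left == large)  # True — a thousand additions, no change

# Summing the smalls first, then adding them all at once:
right = large + small * 1000
print(left == right)  # False — same numbers, different order, different answer
```

This is also why naive summation of a long list of floats drifts, and why tricks like Kahan summation (or Python's math.fsum) exist.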