today I'm thinking about why it's useful for someone who mostly works in higher-level languages (python, JS, scala, etc) to understand how computers represent things in bytes.
some ideas I have so far:
- reading the output of strace
- doing back-of-the-envelope storage calculations
- choosing the right size for a DB primary key (to prevent overflow)
- knowing the limits of JS numbers
- optimizing algorithms (like knowing that multiplying by 2^n is fast)
- debugging encoding issues
what else?
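a few of these are easy to poke at directly. here's a quick Python sketch (Python because it has arbitrary-precision ints, so it can show where a fixed-size representation would differ; the specific numbers are just illustrations):

```python
# JS numbers are IEEE 754 doubles, so integers are only exact up to 2**53
assert 2**53 + 1 != 2**53              # fine with Python's arbitrary-precision ints
assert float(2**53) + 1 == float(2**53)  # but as a 64-bit double, the +1 is lost

# multiplying by 2**n is just a left shift
assert 37 * 2**4 == 37 << 4

# back-of-the-envelope DB key sizing: a signed 32-bit primary key
# overflows after about 2.1 billion rows
assert 2**31 - 1 == 2147483647
```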
@b0rk the thing about learning it is that it can explain some weirdness
like why -1 is True in Visual Basic: False is 0, Not False inverts all the bits, and all bits set is -1 in two's complement
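the same trick is visible in Python, whose ints behave as if they were two's complement with infinitely many bits (VB's Not is a bitwise NOT, like Python's ~):

```python
# flipping every bit of 0 gives all-ones, which is -1 in two's complement
assert ~0 == -1

# more generally, ~x == -x - 1 in two's complement
assert ~5 == -6

# the low 8 bits of -1 really are all set
assert -1 & 0xFF == 0b11111111
```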
@b0rk i think it's worth learning things like "two's complement is about avoiding two values for zero" and handwaving "and it's nicer to implement in hardware"
but it might come up if you end up asking "then why do floating point numbers use a sign bit?"
but i'm not sure how illuminating "floating point wants a positive and a negative zero, so that even if a tiny number rounds down to zero, its sign is preserved" is
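for what it's worth, the signed-zero behaviour is observable from Python, since its floats are IEEE 754 doubles (the tiny multiplication below is just one way to force an underflow):

```python
import math

# IEEE 754 has both +0.0 and -0.0, and they compare equal...
assert -0.0 == 0.0

# ...but the sign bit survives, which copysign can read back out
assert math.copysign(1.0, -0.0) == -1.0
assert math.copysign(1.0, 0.0) == 1.0

# so a tiny negative result that underflows to zero still remembers
# that it was negative
assert math.copysign(1.0, -1e-300 * 1e-300) == -1.0
```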
@radehi @b0rk i have the "Handbook of Floating-Point Arithmetic", yes