today I'm thinking about why it's useful for someone who mostly works in higher-level languages (Python, JS, Scala, etc.) to understand how computers represent things in bytes.

some ideas I have so far:

- reading the output of strace
- doing back-of-the-envelope storage calculations
- choosing the right size for a DB primary key (to prevent overflow)
- knowing the limits of JS numbers
- optimizing algorithms (like knowing that multiplying by 2^n is fast)
- debugging encoding issues

what else?
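(Two of the list items are easy to demo; a quick Python sketch for illustration, not from the thread itself:)

```python
# Multiplying by 2**n is the same as shifting left by n bits,
# which is why compilers can turn it into a cheap shift instruction.
x = 13
assert x * 8 == x << 3

# JS numbers are IEEE 754 doubles: integers above 2**53 silently lose
# precision. Python ints are arbitrary-precision, but Python floats
# (also doubles) show the same limit.
big = 2.0**53
assert big + 1 == big  # 2**53 + 1 rounds back down to 2**53
```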

I'm also curious whether there's ever a reason to understand how two's complement works (I'm sure it's cool, but so far I've only needed to know that signed and unsigned integers are different)

@b0rk the thing about learning it is that it can explain some weirdness

like why -1 is True in Visual Basic: False is 0, Not False inverts the bits, and all bits set is -1 in two's complement
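(a quick sketch of that VB behaviour, using Python's ints, which act like infinitely wide two's complement; illustration only:)

```python
FALSE = 0

# Bitwise NOT flips every bit: an all-ones pattern is -1 in two's complement.
assert ~FALSE == -1

# Viewed as an 8-bit pattern: ~0 is 0b11111111, i.e. 255 unsigned, -1 signed.
assert ~FALSE & 0xFF == 0b11111111
assert (~FALSE & 0xFF) - 256 == -1  # reinterpret the byte as signed
```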

@b0rk i think it's worth learning things like "two's complement is about avoiding two values for zero" and handwaving "and it's nicer to implement in hardware"

but it might come up if you end up asking "then why do floating point numbers use a sign bit?"

but i'm not sure how illuminating "floating point wants a positive or a negative zero, so that even if numbers round down, the sign is preserved" is

@b0rk nb in hindsight the better explanation also includes "unlike integers, floating point numbers represent a range of possible values, so '-0' also means 'very small negative numbers too small to represent' as well as meaning zero"
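(you can see that sign-preservation in Python, whose floats are IEEE 754 doubles; a small illustrative sketch:)

```python
import math

# -0.0 compares equal to 0.0, but the sign bit survives:
assert -0.0 == 0.0
assert math.copysign(1.0, -0.0) == -1.0

# Underflow preserves the sign: halving the smallest subnormal double
# rounds to zero, but a *negative* zero, recording that the true value
# was a tiny negative number too small to represent.
tiny = 5e-324  # smallest positive subnormal double
assert math.copysign(1.0, -tiny / 2) == -1.0
```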

but honestly? you could do an entire zine on just floating point arithmetic


@tef @b0rk Several people have written entire textbooks on just floating point arithmetic

@radehi @b0rk i have the "Handbook of Floating-Point Arithmetic", yes
