> - optimizing algorithms (like knowing that multiplying by 2^n is fast)
I see optimizing algorithms and doing things that are faster by a multiplicative constant due to the special shape of our hardware as two very different things.
I think it's very useful to acquire the concept of invariants, pre- and post-conditions, and contracts. A common way to do so is to learn about some nontrivial data structures. However, an alternative way to learn that is to play around with distributed systems.
A basic level of skill at optimizing algorithms (being able to estimate what's going to be faster than what, knowing what is practically fast and what isn't, etc.) is useful in pretty obvious ways.
Learning those quirks of hardware that make some things faster is IMO not directly useful (similar to learning the limitations of various kinds of SIMD). The only way in which I see it as useful is as part of learning how the layers between what you are writing and the logic gates work. Knowledge of some of those layers helps with debugging, and I struggle to articulate why I intuitively feel that having a passing knowledge of most of them is useful.
@b0rk Unless these were BigIntegers, I'd be very surprised, because n couldn't have been larger than 64.
I think it's marginally useful to know that multiplying by a power of 2 is special. I think it's more useful to know how numbers are represented, and thus which things are necessarily expensive and which are potentially cheap on such representations. (For BigInts, the existence of a cheap shift instruction is probably lost in the rounding error when you compare actually doing repeated big-with-small multiplications with shifting.)
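To make the BigInt point concrete, here's a minimal sketch (hypothetical class name, not from the thread) showing that `java.math.BigInteger` exposes both routes: a dedicated `shiftLeft` and a plain `multiply` by a power of 2. Both produce the same value; the representation is what makes the shift potentially cheaper.

```java
import java.math.BigInteger;

public class BigIntShiftDemo {
    public static void main(String[] args) {
        BigInteger x = new BigInteger("123456789012345678901234567890");
        int n = 20;

        // Multiply by 2^n via a shift of the underlying word array...
        BigInteger viaShift = x.shiftLeft(n);

        // ...or via a full big-number multiplication by 2^n.
        BigInteger viaMultiply = x.multiply(BigInteger.valueOf(2).pow(n));

        System.out.println(viaShift.equals(viaMultiply)); // true
    }
}
```

Whether the shift wins measurably depends on the number's size, which is the point above: for big numbers the shift's advantage can disappear into the cost of the surrounding arithmetic.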
@robryk i wrote a blog post about it here, it's because java.lang.Math.pow is slow https://jvns.ca/blog/2015/09/10/a-millisecond-isnt-fast-and-how-we-fixed-it/
Ah, I forgot FP exists and that pow() accepts non-integer exponents. I wonder how much of the gain would remain if (1 << mj) were instead a loop (I would guess most of it, on a log scale).
@robryk yea agreed
@robryk when I wrote that I was thinking of the time when I was using a library that was spending 95% of its time multiplying by 2^n (because it was using a very slow Java exponentiation method). We replaced it with `x << n` and the code got a lot faster with a 1-line change.
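The three alternatives discussed in this thread can be sketched like this (hypothetical class and method names, for illustration only): `Math.pow`, which goes through double-precision floating point; a loop of repeated doublings; and the single shift that the 1-line fix used. All three agree for small non-negative exponents.

```java
public class MulPow2Demo {
    // x * 2^n via a single shift (valid for 0 <= n < 63, no overflow).
    static long viaShift(long x, int n) {
        return x << n;
    }

    // x * 2^n via n repeated doublings -- n cheap integer multiplies.
    static long viaLoop(long x, int n) {
        long r = x;
        for (int i = 0; i < n; i++) {
            r *= 2;
        }
        return r;
    }

    // x * 2^n via Math.pow: routes through double, which is slow and
    // only exact while the result fits in 53 bits of mantissa.
    static long viaMathPow(long x, int n) {
        return (long) (x * Math.pow(2, n));
    }

    public static void main(String[] args) {
        System.out.println(viaShift(12345L, 10));   // 12641280
        System.out.println(viaLoop(12345L, 10));    // 12641280
        System.out.println(viaMathPow(12345L, 10)); // 12641280
    }
}
```

This is only a correctness sketch, not a benchmark; the original speedup came from replacing the `Math.pow` route with the shift.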