Another comment about concurrency.
What they drilled into us at CMU was that concurrency is not the same as parallelism — because parallelism is deterministic and concurrency is non-deterministic. This was almost right, but concurrency is not the same as non-determinism either! This is the real reason for the failure of naïve powerdomain semantics of concurrent processes.
@jonmsterling What do you mean by powerdomain semantics?
@robryk Nondeterminism is usually interpreted in the "powerdomain monad" -- where the effect allows you to flip a coin, basically.
People used to use this to interpret concurrency, by nondeterministically interleaving all the effects performed by different threads.
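A minimal Python sketch of that interleaving-style semantics (the names and the non-atomic-increment example are mine, not from the thread): the "meaning" of the concurrent program is the set of final states over all schedules, which is the powerdomain flavour of nondeterminism.

```python
def interleavings(a, b):
    # All merges of a and b that preserve each thread's internal order.
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def make_increment(thread_id):
    # A non-atomic increment split into two atomic steps:
    # read x into a per-thread register, then write register + 1 back.
    def read_step(state):
        state[thread_id] = state["x"]
    def write_step(state):
        state["x"] = state[thread_id] + 1
    return [read_step, write_step]

def possible_outcomes(thread_a, thread_b):
    # Powerdomain-style semantics: the SET of final values of x
    # over every interleaving of the two threads' atomic steps.
    results = set()
    for schedule in interleavings(thread_a, thread_b):
        state = {"x": 0}
        for op in schedule:
            op(state)
        results.add(state["x"])
    return results

print(possible_outcomes(make_increment("a"), make_increment("b")))
# The classic lost-update: the set contains both 1 and 2.
```

The point of jonmsterling's objection, as I read it, is that this set-of-outcomes picture is exactly what turns out to be too coarse a model of concurrency.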
Then I'm curious which departure from that you think makes that model an oversimplification: relaxed memory models, fairness constraints on the scheduler, something else I'm not thinking of, or some combination?
Do you mean explicit branches in code when you say branching? If so, I'm somewhat confused: I can get nondeterministic behaviour from a concurrent system with no explicit branches (example: thread A does write(false to x); write(true to x); thread B does return read(x);).
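Under plain interleaving semantics, that example already yields three possible returns for thread B. A tiny sketch (the `None` initial value of x is my assumption, not stated in the thread):

```python
# Thread A runs: write(false to x); write(true to x)
# Thread B runs: return read(x)
# Under naive interleaving, B's single read can be scheduled at any
# of the three positions relative to A's two writes.
a_writes = [False, True]             # values A writes to x, in program order
observed = set()
for pos in range(len(a_writes) + 1): # schedule position of B's read
    x = None                         # assumed initial value of x
    for value in a_writes[:pos]:
        x = value
    observed.add(x)
print(observed)  # three possible returns: the initial value, False, or True
```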
(I agree that interleaving of operations is not a correct way to describe an execution of a concurrent program, and that a partial order in which each thread is totally ordered, and which satisfies some further memory-model-specific requirements, is both a correct and natural representation of an execution — even though for many memory models one can use a linear order on something other than operations.)
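That partial-order view can be sketched concretely: an execution is a set of events, program order makes each thread a chain, and the memory model contributes cross-thread edges (e.g. reads-from); validity then includes acyclicity of the whole relation. A minimal sketch, with event names and the edge choices being my own illustration:

```python
def transitive_closure(events, edges):
    # Naive fixpoint computation of reachability over the edge relation.
    reach = {e: set() for e in events}
    for u, v in edges:
        reach[u].add(v)
    changed = True
    while changed:
        changed = False
        for u in events:
            for v in list(reach[u]):
                new = reach[v] - reach[u]
                if new:
                    reach[u] |= new
                    changed = True
    return reach

def is_valid_execution(threads, cross_edges):
    # threads: dict thread_id -> list of events in program order
    # cross_edges: memory-model edges between events of different threads
    events = [e for ops in threads.values() for e in ops]
    po = [(ops[i], ops[i + 1])
          for ops in threads.values() for i in range(len(ops) - 1)]
    reach = transitive_closure(events, po + list(cross_edges))
    # A partial order must be acyclic: no event happens before itself.
    return all(e not in reach[e] for e in events)

# Thread A: wF = write(false), wT = write(true); thread B: rB = read.
threads = {"A": ["wF", "wT"], "B": ["rB"]}
print(is_valid_execution(threads, [("wF", "rB")]))                 # consistent
print(is_valid_execution(threads, [("rB", "wF"), ("wT", "rB")]))   # cyclic, invalid
```

Real memory models impose much more than acyclicity (coherence, reads-from well-formedness, etc.); this only illustrates the "partial order with per-thread chains" shape.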
@robryk When I say “branching” I mean not only explicit program branching, but also the implicit kind — when you read(x), this introduces a branch for each possible value of x.
But yes, I agree with your example and other comments.