Does the concept of "probabilistic" lock-free parallel programming exist in theoretical CS? The idea is to intentionally allow race conditions, but to design things so that the probability of triggering one is extremely low, making it negligible in practice. Like how an ideal synchronizer circuit is impossible to build, yet practical ones have an MTBF of 1e10 years.
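To make the idea concrete, here's a minimal C++ sketch of mine (the single-writer seqlock pattern is standard; none of this is from anyone upthread). One race is deliberately left in: a reader stalled while the 64-bit counter wraps all the way around back to the same value could accept a torn read. At one write per nanosecond that takes roughly 584 years of continuous writing, so the race exists on paper but is negligible in practice, exactly the synchronizer-circuit trade-off:

```cpp
#include <atomic>
#include <cstdint>

// Single-writer seqlock. Payload accesses are relaxed atomics so the
// C++ memory model is respected; consistency is detected via `seq`.
struct SeqLockPair {
    std::atomic<uint64_t> seq{0};      // odd while a write is in flight
    std::atomic<uint64_t> a{0}, b{0};  // payload

    void write(uint64_t x, uint64_t y) {               // one writer thread
        uint64_t s = seq.load(std::memory_order_relaxed);
        seq.store(s + 1, std::memory_order_relaxed);   // mark dirty
        std::atomic_thread_fence(std::memory_order_release);
        a.store(x, std::memory_order_relaxed);
        b.store(y, std::memory_order_relaxed);
        seq.store(s + 2, std::memory_order_release);   // mark clean
    }

    void read(uint64_t& x, uint64_t& y) const {
        for (;;) {
            uint64_t s1 = seq.load(std::memory_order_acquire);
            x = a.load(std::memory_order_relaxed);
            y = b.load(std::memory_order_relaxed);
            std::atomic_thread_fence(std::memory_order_acquire);
            // Accepted race: if this reader slept while seq wrapped 2^64
            // steps back to exactly s1, a torn read slips through.
            if (seq.load(std::memory_order_relaxed) == s1 && (s1 & 1) == 0)
                return;
        }
    }
};
```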
One other area that is interesting and is (or at least was ~8 years ago) badly studied: measuring time in some way other than ~instructions. Back then I couldn't find a complexity measure that would tell me that, e.g., a structure where each thread increments its own memory location is faster than one where every thread does fetch_and_add on a single shared location.
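For concreteness, here are those two structures side by side (a C++ sketch of mine; the names and sizes are made up). Instruction-counting assigns both increments the same O(1) cost; a contention-aware measure would have to charge (a) for the cache-line ping-pong that (b) avoids:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

constexpr int kThreads = 8;

// (a) shared hot spot: O(1) instructions per increment, but the cache
//     line bounces between cores, so real cost grows with contention.
std::atomic<uint64_t> shared_counter{0};

// (b) striped: the same O(1) instruction count, but no coherence
//     traffic on the write path; a read sums the padded slots.
struct alignas(64) Slot { std::atomic<uint64_t> v{0}; };
Slot slots[kThreads];

int main() {
    std::vector<std::thread> ts;
    for (int t = 0; t < kThreads; ++t)
        ts.emplace_back([t] {
            for (int i = 0; i < 1'000'000; ++i) {
                shared_counter.fetch_add(1, std::memory_order_relaxed); // (a)
                slots[t].v.fetch_add(1, std::memory_order_relaxed);     // (b)
            }
        });
    for (auto& th : ts) th.join();

    uint64_t striped = 0;
    for (auto& s : slots) striped += s.v.load(std::memory_order_relaxed);
    return striped == shared_counter.load() ? 0 : 1;  // both total 8M
}
```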
@niconiconi Ah, and yet another thing that is poorly studied in theory IMO but does actually appear in practice: data structures that are lock-free with high probability (i.e. with low probability they might even block).
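One shape such a structure can take (a toy sketch of mine, not from any paper): an append-only log whose push is a single fetch_add, so the fast path is wait-free, but a push that lands on a full chunk blocks on a mutex while one thread installs a fresh chunk. Per operation that happens with probability roughly threads/kChunk, tunable as low as you like:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <mutex>

struct Log {
    static constexpr size_t kChunk = 1 << 16;
    struct Chunk {
        std::atomic<size_t> next{0};
        std::atomic<uint64_t> data[kChunk];
        Chunk* prev = nullptr;  // chunks form a backward-linked list
    };

    std::atomic<Chunk*> cur{new Chunk};
    std::mutex grow_mu;

    void push(uint64_t v) {
        for (;;) {
            Chunk* c = cur.load(std::memory_order_acquire);
            size_t i = c->next.fetch_add(1, std::memory_order_relaxed);
            if (i < kChunk) {                         // fast path: wait-free
                c->data[i].store(v, std::memory_order_release);
                return;
            }
            // Rare slow path: the chunk is full, so block on the lock
            // while one thread installs a fresh chunk, then retry.
            std::lock_guard<std::mutex> g(grow_mu);
            if (cur.load(std::memory_order_acquire) == c) {
                Chunk* n = new Chunk;
                n->prev = c;
                cur.store(n, std::memory_order_release);
            }
        }
    }
};
```

Of course, an adversarial scheduler can make it block arbitrarily often, which is exactly the formalization gap: "blocks with low probability" only means something if the scheduler isn't allowed to conspire against you.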
I'm somewhat sour that this is not a better-studied area, because most of my inventiveness while working in it was spent dealing with edge cases caused by patterns of extreme scheduler unfairness that I never once managed to trigger on actual hardware. (And I would guess that if we had a way to formalize this, I could have reduced those cases to "particular patterns of scheduler behaviour that are necessarily low-probability if the scheduler is not allowed to observe your internal state".)