The current default cryptographic hash functions are serial, in that you can’t efficiently use a cluster of computers to hash, say, a 10 TB file.
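For concreteness, here is a minimal sketch of why the usual Merkle–Damgård-style chaining is inherently sequential (Python with hashlib; the chunk size is an arbitrary illustrative choice): the internal state after each chunk is an input to the next chunk, so later chunks cannot be processed until earlier ones finish.

    import hashlib

    def chained_hash(path, chunk_size=1 << 20):
        # Serial hashing: the state after chunk i feeds into chunk i+1,
        # so the work cannot be split across machines.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()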

When do people think this will change? Concretely, when will the hash function people default to (e.g., the one used to hash Linux kernel releases) have logarithmic circuit depth as a function of file size?


@irving Whenever that's important, we already use such functions: you can construct something with circuit depth f(security parameter) * log(input size) by computing a Merkle tree root hash.
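A minimal sketch of that construction (Python with hashlib; SHA-256 as the leaf/node hash, the chunk size, and duplicating an odd node out are all illustrative choices, not a standardised format): the leaves are independent of one another, and the pairwise reduction takes O(log n) rounds in the number of chunks.

    import hashlib

    def merkle_root(data, chunk_size=1024):
        # Leaf layer: each chunk can be hashed independently (in parallel).
        level = [hashlib.sha256(data[i:i + chunk_size]).digest()
                 for i in range(0, len(data), chunk_size)]
        if not level:
            level = [hashlib.sha256(b"").digest()]  # empty input
        # Combine pairwise: O(log n) rounds, each round fully parallelisable.
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the odd node out
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()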

@robryk I know that, but your statement should be amended to "whenever it is required". Parallelism would already be quite useful even at ordinary O(MB) file sizes, which is why I am asking about the default hash.
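To make that concrete, a hedged sketch of parallelising just the leaf layer of the tree construction above (Python with concurrent.futures; the process pool and chunk size are illustrative assumptions, not anyone's actual deployment):

    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    def _leaf(chunk):
        return hashlib.sha256(chunk).digest()

    def parallel_merkle_root(data, chunk_size=1 << 20):
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        if not chunks:
            chunks = [b""]  # empty input
        with ProcessPoolExecutor() as pool:
            level = list(pool.map(_leaf, chunks))  # leaves hashed concurrently
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the odd node out
            level = [hashlib.sha256(a + b).digest()
                     for a, b in zip(level[::2], level[1::2])]
        return level[0].hex()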
