The Shannon entropy:

H = −Σ p_i log(p_i)

often decreases when you learn something. If you pick a card at random from a normal deck, H is log(52), but if you learn it’s diamonds, H becomes log(13).

But a decrease isn’t guaranteed. I read this example in Peres’s QM textbook (attributed to Uffink):

If your key has a 9/10 chance of being in your pocket and a 1/1000 chance of being in each of 100 other places, H = 0.7856 nats.

If you then check your pocket and find the key isn't there, H increases to ln(100) ≈ 4.60517 nats, since the key is now equally likely to be in any of the 100 remaining places.
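A quick sanity check of these numbers (a sketch; the `entropy` helper is mine, not from the post):

```python
import math

def entropy(probs):
    """Shannon entropy in nats: H = -sum p_i ln(p_i); terms with p = 0 contribute 0."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Prior: key in pocket with prob 9/10, or in each of 100 other places with prob 1/1000.
prior = [0.9] + [0.001] * 100
print(round(entropy(prior), 4))       # 0.7856

# After finding the pocket empty, renormalize over the 100 remaining places.
posterior = [0.001 / 0.1] * 100       # each place now has prob 1/100
print(round(entropy(posterior), 5))   # 4.60517, i.e. ln(100)
```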


@gregeganSF The thing that is smaller than the original entropy is the expected value of the new entropy after the "check in the pocket" experiment.
