The Shannon entropy:

H = −Σ p_i log(p_i)

often decreases when you learn something. If you pick a card at random from a normal deck, H is log(52), but if you learn it’s diamonds, H becomes log(13).
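
A quick numerical check of the card example (a minimal Python sketch, using natural logs so the values are in nats):

```python
import math

# Uniform draw from a 52-card deck, then condition on the suit being diamonds.
H_deck = math.log(52)   # ≈ 3.951 nats
H_suit = math.log(13)   # ≈ 2.565 nats
print(H_deck, H_suit)   # entropy drops once the suit is known
```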

But a decrease isn’t guaranteed. I read this example in Peres’s QM textbook (attributed to Uffink):

If your key has a 9/10 chance of being in your pocket and a 1/1000 chance of being in each of 100 other places, H = 0.7856 nats.

If you then check your pocket and find it’s not there, H increases to ln(100) ≈ 4.60517 nats.
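
Both numbers check out numerically (a minimal Python sketch; the `entropy` helper is just the formula above, with natural logs):

```python
import math

def entropy(p):
    """Shannon entropy in nats, skipping zero-probability outcomes."""
    return -sum(x * math.log(x) for x in p if x > 0)

prior = [0.9] + [0.001] * 100      # pocket, plus 100 other places
posterior = [0.0] + [0.01] * 100   # after learning "not in the pocket"

print(entropy(prior))      # ≈ 0.7856 nats
print(entropy(posterior))  # = ln(100) ≈ 4.60517 nats
```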

@gregeganSF This will happen for any definition of entropy that is continuous (with respect to the L1 distance between probability distributions): as you send the probability of the key not being in your pocket to 0, the prior tends to a point mass, so its entropy tends to the entropy of a constant, i.e. zero; yet for every nonzero value of that probability, the entropy conditional on “not in the pocket” is unchanged (here ln 100).
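
A small sketch of that limit, assuming the same 100-place setup as above (the `entropy` helper is illustrative, not anything from the thread):

```python
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# As the probability q of "not in the pocket" shrinks, the prior entropy
# tends to 0 (the entropy of a constant), while the entropy conditional on
# "not in the pocket" stays ln(100) ≈ 4.605 for every q > 0.
for q in (0.1, 0.01, 0.001):
    prior = [1 - q] + [q / 100] * 100
    conditional = [0.01] * 100
    print(q, entropy(prior), entropy(conditional))
```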
