@boilingsteam … and this is why you should add confidence when predicting events with low base rates.
That series of “fails” wouldn’t look so bad if each were clearly marked “10% likely” — ten such calls are supposed to fail about nine times.

@boilingsteam Yes, but the confidence is important too :blobnerd:

One should be right exactly 75% of the time when placing 75% confidence. No more (that would be underconfident), no less (overconfident).

There’s a practical difference between acting on 80% predictions (fairly sure), 55% predictions (barely any clue), and 99% predictions (verified insider info?).

I’d even argue predictions without confidence are worse than nothing (i.e. misleading), as they silently substitute the reader’s subjective sense of likelihood for the author’s actual information.
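
To make the scoring concrete, here’s a minimal sketch, assuming you log each prediction as a (stated confidence, outcome) pair — Python, with made-up data for illustration:

```python
from collections import defaultdict

def calibration_report(predictions):
    # Group predictions by stated confidence and compare the stated
    # number to the observed hit rate in each bucket.
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    for confidence in sorted(buckets):
        outcomes = buckets[confidence]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"stated {confidence:.0%}: right {hit_rate:.0%} of the time "
              f"({len(outcomes)} predictions)")

# Made-up data, not real predictions:
calibration_report([
    (0.75, True), (0.75, True), (0.75, False), (0.75, True),
    (0.90, True), (0.90, True), (0.90, False),
])
```

A well-calibrated 75% bucket should print a hit rate near 75%; drifting above means underconfidence, below means overconfidence.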

@dpwiz I agree with the principle (and I have read the book Superforecasting, which covers this subject at length), but there is a big flaw: nobody benchmarks a 75% confidence the same way. There is no clear tool to help you set a proper confidence value at the individual level apart from the obvious 0, 50, and 100. Anything in between is massively subjective and only makes sense if you do hundreds of predictions.


@boilingsteam Hm… I’ve heard most people can distinguish 2-3 steps between “don’t know anything about that” and “willing to bet my ass on it” without training, and with more practice can relatively easily add 2 more steps.
I reckon discerning between 93% and 96% would be tough even for a practitioner of the art, but most pragmatic questions don’t require more than the baseline skill level.

One such tool is indeed calibration practice. You git gud at what you do. Posting (and even reading!) confidence-annotated predictions will make your spidey senses tingle at misplaced confidence.

Another powerful tool is a hypothetical switch from the original prediction to an equivalent abstract lottery. E.g. “New Steam Deck will be released in the next year (66%)” → “spin the wheel to win with 66% chance (2:1 odds)”. One can feel the difference in their gut.
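
The probability-to-odds translation behind that lottery framing is just p / (1 − p); a tiny sketch, with hypothetical helper names:

```python
def prob_to_odds(p):
    """Probability -> 'for:against' odds, e.g. 0.66 -> ~1.94, roughly 2:1."""
    return p / (1 - p)

def odds_to_prob(for_, against):
    """'for:against' odds -> probability, e.g. 2:1 -> ~0.667."""
    return for_ / (for_ + against)

print(prob_to_odds(0.66))   # ~1.94, i.e. roughly 2:1 in favour
print(odds_to_prob(2, 1))   # ~0.667
```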

@boilingsteam > Since 2001, Hubbard Decision Research has trained over 1,000 people across a variety of industries.

Analyzing the data from these participants, Doug Hubbard reports that 80% of people achieve perfect calibration (on trivia questions) after just a few hours of training.

Wow. I wonder how long that lasts :blobcatcoffee:

@dpwiz perfect calibration based on what? predictions that are not real?

@boilingsteam on being right X% of the time when placing X% confidence
