
@boilingsteam > Since 2001, Hubbard Decision Research trained over 1,000 people across a variety of industries.
> Analyzing the data from these participants, Doug Hubbard reports that 80% of people achieve perfect calibration (on trivia questions) after just a few hours of training.

Wow. I wonder how long that lasts :blobcatcoffee:

@boilingsteam Hm... I've heard most people can distinguish 2-3 steps between "don't know anything about that" and "willing to bet my ass on it" without training, and with more practice can relatively easily add 2 more steps.
I reckon discerning between 93% and 96% would be tough even for a practitioner of the art, but most pragmatic questions don't require more than the baseline skill level.

One such tool is indeed calibration practice. You git gud at what you do. Posting (and even reading!) confidence-annotated predictions will make your spidey senses tingle at misplaced confidence.

Another powerful tool is a hypothetical switch from the original prediction to an equivalent abstract lottery. I.e. "A new Steam Deck will be released in the next year (66%)" → "spin the wheel to win with a 66% chance (roughly 2:1 odds)". One can feel the difference in their gut.
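The lottery reframing is really just a probability-to-odds conversion; a minimal Haskell sketch (the `odds` helper is my own illustration, not from the thread):

```haskell
module Main where

-- How many wins per loss you should expect when something has
-- probability p. 0.5 -> 1:1, 0.75 -> 3:1, 0.66 -> roughly 2:1.
odds :: Double -> Double
odds p = p / (1 - p)

main :: IO ()
main = do
  print (odds 0.5)   -- even odds
  print (odds 0.75)
  print (odds 0.66)  -- the "new Steam Deck" example, about 2:1
```

Stating the bet as "wins per loss" is what makes the wheel-spinning intuition kick in.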

@boilingsteam Yes, but the confidence is important too :blobnerd:

One should be right exactly 75% of the time when placing 75% confidence. No more (underconfident), no less (overconfident).

There's a practical difference between ~~betting~~ acting on 80% predictions (fairly sure), 55% predictions (barely any clue), and 99% predictions (verified insider info?).

I'd even argue predictions without confidence are worse than nothing (i.e. misleading) as they silently substitute readers' subjective verisimilitude for authors' information.

@boilingsteam ... and this is why you should add confidence when predicting events with low base rates.
That series of "fails" wouldn't look so bad when clearly marked with "10% likely".

@tao It's in their published threat model / security assumptions:

> Attackers do not have access to private keys referenced within the C2PA ecosystem (e.g., claim signing private keys, Time-stamping Authority private keys, etc.). They may, however, attempt to access these keys via exploitation techniques...

And later, in the spoofing section.

Proper key handling is notoriously difficult. And with incentives like these, attackers would be motivated to hit it even harder than some DRM system.

And anyway, no need for a breakthrough if you can walk in with a gag order and do what you need.

@tao The cryptography infrastructure would be broken* in no time and then the courts would have to face "cryptographically secure" fakes.

* Knowing, ahem, state actors, the thing would be backdoored through and through so the services could do their thing whenever they need.

@Sherifazuhur While we're at it, why did you explicitly link the older version of the tweet instead of just the recent one?

@DetersHenning I've been following Arabic media for a while and, unfortunately, this is par for the course. Every trick in the psyops book is deployed at scale.

@DetersHenning @Sherifazuhur @simon_brooke I have a feeling that the post is intended to specifically* support "the court ordered a ceasefire" narrative, with a follow up of "Israel violates the court decision".

*) Given how the first point is phrased and then amended.

@mjg59 @archiloque @aeva x86 is basically the same. Vendors add extras to prop up their benchmark numbers. Then the extras either get adopted or abandoned.

Ditto with software "distributions".

We can live with that, as we have alternatives and no blobs are required for mainstream ops.

@redmp

Evil Haskell tips:

```haskell
case
quotRem n d
of
(q, r) -> ...
```
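For reference, a complete runnable version of that layout (the `divMod'` wrapper and its body are my own filler around the elided `...`):

```haskell
module Main where

-- The "evil" part: case and of each on their own line, so the
-- scrutinee and the alternatives keep their columns when edited.
divMod' :: Int -> Int -> String
divMod' n d =
  case
    quotRem n d
  of
    (q, r) -> show q ++ " rem " ++ show r

main :: IO ()
main = putStrLn (divMod' 7 3)
```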

@redmp @BoydStephenSmithJr I started to use `where`-like `let`:

```haskell
let
z = bar x
w = zip y
in
whatevs z w
```

This way it better preserves layout when shuffling things around.
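A complete runnable version of that style (the bodies are my own filler standing in for `bar`, `zip` and `whatevs`):

```haskell
module Main where

whatevs :: Int -> Int -> Int
whatevs z w = z + w

-- Every binding starts in the same column, so lines can be
-- reordered, added, or deleted without re-indenting anything.
example :: Int
example =
  let
    z = 2 * 3
    w = 10
  in
    whatevs z w

main :: IO ()
main = print example
```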

@danilo > Give me a week and you can have plans for a scalable fusion reactor design.

Oh... Trading "decades" of building AI for finally getting fusion in a week, instead of the perennial "in 20 more years", is so, so, so worth the deal.

Even if it burns out after that single miracle, having cheap clean energy would solve any climate-related problems in no time. We have solutions already; the problem is they aren't particularly energy-efficient.
With a few more gains in efficiency, geoengineering could be pulled off to marvelous results.

@danilo Incremental improvement in carbon capture is all it has after all that time? Did it even have "Engines of Creation" in its corpus? We are living proof that programmable nanotechnology is possible. Even humans could solve it on their glacial bureaucratic timescales, given incentives. With AIs, this step is basically unavoidable due to its omni-useful nature. Whatever task you postulate, "brb, inventing nanotech" will be one of the first replies from a truly capable AI.

If the story is about a chatbot instantiated for hype, then yeah, everything checks out. But if the "1000 person-years compressed into 1 hour" is the real deal then the ending doesn't make any sense.
