
@JuergenStrobel @mathew @b0rk

Note that FP is a suboptimal choice for things where you have predominantly absolute inaccuracy (e.g. time-since-epoch). It shines when your inaccuracy is predominantly relative. That is usually a good assumption if your measurements are done using a measure appropriate for the scale you're measuring.
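To make the absolute-vs-relative point concrete, here's a small Python sketch using `math.ulp` (the 1.7e9 timestamp is just an illustrative late-2023 Unix time): the spacing between adjacent doubles grows with magnitude, so absolute resolution is far coarser near a modern epoch value than near 1, while relative resolution stays roughly constant.

```python
import math

# Spacing between adjacent representable doubles (one "ulp") grows with
# magnitude: absolute resolution is much coarser near a modern Unix
# timestamp than near 1, while relative resolution stays roughly constant.
near_one = math.ulp(1.0)       # ~2.2e-16
near_epoch = math.ulp(1.7e9)   # ~2.4e-7 s, i.e. ~240 ns near a 2023 timestamp

assert near_epoch / near_one > 1e8   # absolute resolution: ~9 orders worse
assert near_epoch / 1.7e9 < 1e-15    # relative resolution: still tiny
```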

@mathew @b0rk

Arbitrary precision decimal or binary floating point either:
- requires you to actually specify the precision (so is "normal" FP, just wider), or
- doesn't support division (because results of division cannot be made exact at any precision).
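The division point can be seen with Python's `decimal` module (a minimal sketch, with an arbitrarily chosen precision of 50): even at a generous precision, dividing 1 by 3 must round, so the quotient times 3 no longer gives back 1.

```python
from decimal import Decimal, localcontext

# Arbitrary-precision decimal still makes you pick a precision for division:
# 1/3 has no finite decimal expansion, so the quotient is rounded no matter
# how many digits you ask for.
with localcontext() as ctx:
    ctx.prec = 50
    third = Decimal(1) / Decimal(3)
    assert third * 3 != Decimal(1)   # the rounding error is already baked in

assert str(third) == "0." + "3" * 50
```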

@mathew @b0rk

Do you mean arbitrary precision rationals, or arbitrary precision values that are rationals with a power of 2 in the denominator? Note that the latter (the ones that are called `mpf` in gmp) can _not_ represent e.g. 1/3.
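The distinction can be demonstrated in Python (using `fractions.Fraction` as a stand-in for arbitrary precision rationals, and the built-in double as a finite-precision instance of the power-of-2-denominator kind):

```python
from fractions import Fraction

# Arbitrary-precision rationals represent 1/3 exactly:
third = Fraction(1, 3)
assert third * 3 == 1

# A binary float (a rational with a power-of-2 denominator, like gmp's mpf)
# can only approximate it, at any finite precision:
assert Fraction(1, 3) != 1 / 3                   # exact cross-type comparison
assert Fraction(1 / 3).denominator == 2 ** 54    # the double's true denominator
```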

@mathew @jannem @b0rk

I don't see any alternative. When I do computations on paper that involve values measured with some uncertainty, I essentially use base 10 floating point.

_If you're in the right situation_ those eccentricities do not matter. It doesn't matter to me that I can't represent exactly 2/3 in decimal floating point -- the value I'm going to e.g. multiply that 2/3 with in a few moments anyway comes from a measurement with some relative error, so I can just choose an appropriately accurate representation of 2/3.

@mathew @b0rk

What do you mean by variable size floats? Which size varies and what controls how it varies (e.g. what is the size of a result of an arithmetic operation)?

@mathew @b0rk

I disagree that this is the only case.

If (a) you actually care about the relative precision of results and (b) you can structure the whole computation in the middle so that it's well behaved (i.e. the derivative of the log of the result with respect to the log of any intermediate value is bounded by a reasonably small constant), then floating point is actually doing precisely what you want. Fixed point would _not_ be doing what you want then, because it would have fixed absolute precision (so would have worse relative precision when the output values are small).

This is not as contrived a setup as it sounds like. Many physical processes can be described as such computations (because all intermediate values are noisy, so the process becomes chaotic if it's not well-behaved as described above). This is also how people do computations by hand (e.g. compute everything to some number of significant digits), so it's a model that's very familiar to many.
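The fixed-vs-floating trade-off can be sketched in Python (the fixed-point format below, with 16 fractional bits, is a made-up example): floating point bounds the *relative* representation error at every magnitude, while fixed point bounds the *absolute* error, so its relative error blows up for small values.

```python
import math

def to_fixed(x, frac_bits=16):
    """Round x onto a fixed-point grid with 2**-frac_bits absolute resolution."""
    return round(x * 2**frac_bits) / 2**frac_bits

# Floating point: relative representation error is bounded (~2**-53)
# at every magnitude.
for x in (1.0, 1e-3, 1e-6):
    assert math.ulp(x) / x < 1e-15

# Fixed point: absolute error is bounded instead, so relative error grows
# as values shrink -- and values below the grid spacing vanish entirely.
assert abs(to_fixed(1e-3) - 1e-3) / 1e-3 > 1e-3   # ~0.7% relative error
assert to_fixed(1e-6) == 0.0
```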

@danluu What do you mean by disabilities that match the standard masculine ideal? I thought the standard masculine ideal explicitly included lack of disabilities.

@stilescrisis @danluu

I find it amusing that small hands are a thing that is considered negative, because it's very useful if you do any mechanical work at all (well, maybe having thin hands with long fingers is better, but still, thick hands are the least convenient when you need to fish out a nut or a bolt you've dropped out of the thing you were bolting together).

@b0rk Maybe it's useful to mention what that problem is? The way I phrase it is that FP is intended for computations where you care about relative accuracy.

@b0rk

Nit:
> if you add very big values to very small values, you can get inaccurate results (the small numbers get lost!)

This is either not true or misleading. The result is the closest value to the actual result of the computation that is representable. The exact result is not representable, but as soon as you consider multiplication that will be the case for fixed point values too.
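A quick Python illustration of "correctly rounded, not lost": an addend below the local spacing of doubles rounds away, while an addend of one full ulp registers.

```python
import math

# "big + small" isn't silently wrong: the sum is correctly rounded to the
# nearest representable double; the small addend is just below the spacing.
big = 1e16
assert math.ulp(big) == 2.0   # adjacent doubles near 1e16 are 2 apart
assert big + 1.0 == big       # 1.0 is below one ulp, rounds away
assert big + 2.0 != big       # one full ulp does register
```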

I'm not sure if you intentionally mentioned only some of the solutions for the odometer (an atypical one that's missing is to introduce randomness: if the addend is smaller than the minimum representable difference, increment by min_repr_diff with probability addend/min_repr_diff; apart from that, there are standard approaches for summing long sequences of FP values).
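That randomized odometer trick can be sketched in Python (MIN_STEP and the distances below are made-up numbers, and the bound in the final assert is a loose statistical one):

```python
import random

MIN_STEP = 0.1   # hypothetical smallest increment the odometer can register

def add_distance(total, delta, rng):
    """Stochastic rounding: an increment too small to register is applied
    in full (as MIN_STEP) with probability delta / MIN_STEP, so the
    expected value of the total stays correct."""
    if delta >= MIN_STEP:
        return total + delta
    if rng.random() < delta / MIN_STEP:
        return total + MIN_STEP
    return total

# 100,000 increments of 0.001 -- plain rounding-toward-current-value would
# never move the total; stochastic rounding lands near the true 100.0.
rng = random.Random(0)
total = 0.0
for _ in range(100_000):
    total = add_distance(total, 0.001, rng)
assert 85.0 < total < 115.0
```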

> example 4: different languages sometimes do the same floating point calculation differently

GPUs will often have some FP functions implemented so that they provide slightly different results than CPUs do. I vaguely recall that libm's trig functions can give different results depending on which exact CPU libm is compiled for (because it will or won't use particular intrinsics for trig functions depending on that).

> example 5: the deep space kraken

Similar cool looking issue from Outer Wilds: youtube.com/watch?v=ewSgPdBjNB (very minor spoilers for OW).

> I think the lesson here is to never calculate the same thing in 2 different ways with floats.

If you squint this looks like the lesson from, I think, most of the examples (all but the odometer ones?).

@filippo @internetarchive

I extrapolated from ChatGPT regularly hallucinating bibliographical references.

True, not much, unless that swamps real ones. Dunno if that would happen.

@filippo @internetarchive

I would expect that to result in some amount of totally fake URLs.

@eta But how does the train motion detection work? Usually you wouldn't care whether it overtriggers _out_ of unoccupied segments, because the designators there are empty.

@maddiefuzz Do I understand correctly that these were taken at different distances?

@eta I don't entirely know what logic is used to move train designators forward: don't they run a risk of that engaging for whatever odd reason and them moving someplace else?

@b0rk

The appropriate part of the spec for this particular thing is w3.org/TR/activitystreams-core, in case it's helpful (it describes all the weird quirks where a field can be a string with the URL of the object, a Link object that describes the same, or the object inlined, which then might or might not have a canonical URL of its own...).
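Handling those quirks tends to look something like the following sketch (the helper name and the exact fallback order are my own choices, not prescribed by the spec):

```python
def field_url(value):
    """Extract a URL from an ActivityStreams field that may be a bare
    string, a Link object, or an inlined object with its own id/url."""
    if isinstance(value, str):
        return value                       # bare URL string
    if isinstance(value, dict):
        if value.get("type") == "Link":
            return value.get("href")       # Link object
        return value.get("url") or value.get("id")   # inlined object
    return None

assert field_url("https://example.org/note/1") == "https://example.org/note/1"
assert field_url({"type": "Link", "href": "https://example.org/note/1"}) == "https://example.org/note/1"
assert field_url({"type": "Note", "id": "https://example.org/note/1"}) == "https://example.org/note/1"
```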

@b0rk It's not a Mastodon API, but an ActivityPub API.

@b0rk

There's no fixed URL. The URL can be read from the `replies` field of the post itself: paste.sr.ht/~robryk/40f933d400

@b0rk

I'm surprised by this. Each post has a replies collection that you can fetch. Fetching collections is slightly annoying (because they're usually paginated and paginated collections in ActivityPub work by each page providing the link to the next page), but is not too bad.

Is this more annoying than I realize or does that simply not work?
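For what it's worth, the pagination dance can be sketched like this (`fetch` is a stand-in for an HTTP GET returning parsed JSON; the page shapes follow ActivityStreams collection paging, where each page links to the next via `next`):

```python
def collection_items(collection, fetch):
    """Yield all items of a paginated ActivityPub collection (e.g. a post's
    `replies`), following "next" links from page to page. `first`/`next`
    may be either a URL string or an inlined page object."""
    page = collection.get("first")
    if isinstance(page, str):
        page = fetch(page)
    while page:
        # pages carry either "orderedItems" (ordered) or "items" (unordered)
        yield from page.get("orderedItems", page.get("items", []))
        nxt = page.get("next")
        page = fetch(nxt) if isinstance(nxt, str) else nxt

# Fake fetcher standing in for the network:
pages = {
    "p1": {"orderedItems": ["a", "b"], "next": "p2"},
    "p2": {"orderedItems": ["c"]},
}
replies = {"first": "p1"}
assert list(collection_items(replies, pages.get)) == ["a", "b", "c"]
```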

Qoto Mastodon