
@allenstenhaus @kiosk

> but I'm not confident in my Linux knowledge & abilities in their own right.

One thing that's great about inspectable systems is that you need to learn very little to start being able to discover how any part of the system works. On Linux, as soon as you learn about strace/attaching debuggers/places to get kernel stacks from/... you gain a universal way to find out how anything you can see working actually works.

> When I latch onto a problem, I never give up until I've found a solution, even if the problem takes years to resolve.

Inspectable computing systems are a route to that approach that I'm particularly partial to, because 9(?)-year-old me spent weeks or months slowly figuring out that gpm and X both trying to read from the same mouse was what made the mouse unusable in my first installation of Linux.

@royaards It's actually interesting to figure out what properties of snow matter for skiing.

@allenstenhaus @kiosk

But you _are_ doing it, instead of passively sitting by? That does require something that people often call confidence, and it's the kind of confidence that's usually beneficial.

@kornel You mean TRAPPIST-1e in particular or eyeball planets in general?

How sure are we that TRAPPIST-1e's predominant liquid (do we know there's one?) is water?

@kornel

With the winds going in both directions (unless there's an ocean there, in which case it might be the current that's going in the opposite direction).

@moonbolt
Aaaaaah, I now see where I read too much into it.

(The curse of trying to compensate for reading too little of what people wish to imply~~.)

@delroth That's surprising. TIL. Does the error explain anything past the inability?

robryk boosted

@robryk I thought so, that's the first thing I tried, but apparently you can't LIMIT a DELETE with postgresql!
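
(PostgreSQL's DELETE indeed has no LIMIT clause; the usual workaround is to LIMIT a subquery on the key instead. A rough sketch with psycopg2, where the table and column names are made up:)

    import psycopg2  # assumes psycopg2 is available

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        # DELETE itself takes no LIMIT, but a LIMITed subquery on the key does the same job:
        cur.execute(
            "DELETE FROM events"
            " WHERE id IN (SELECT id FROM events ORDER BY id LIMIT 1000)"
        )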

@moonbolt

I would suggest setting those settings anyway, given that (a) it's easy to do so, (b) it might make some of those people actually abstain, and (c) it makes the activities of others more obviously wrong.

@moonbolt In case you're not aware, Mastodon allows users to configure what the robots.txt-like mechanism[1] should report for their profile page (and presumably threads); it's in "Preferences > Other > Opt-out of search engine indexing".

It obviously affects only crawlers that will obey that, but that might already be helpful?

[1] I presume that's done via robots meta tag or via X-Robots-Tag header; I haven't checked though.
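
(If someone wants to check which of the two it is, a quick sketch; the profile URL is a placeholder:)

    import urllib.request

    with urllib.request.urlopen("https://example.social/@someuser") as resp:
        print(resp.headers.get("X-Robots-Tag"))              # header variant, if any
        body = resp.read().decode("utf-8", errors="replace")
        print('<meta name="robots"' in body)                 # meta-tag variant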

@b0rk

Hm~ another way of looking at the "adding small and large numbers" problem.

It's not a problem by itself that large + small = the_same_large. The problem is that large + small + small + ... + small = the_same_large != large + (small + small + ... + small).

So, the problem we have is that floating-point addition isn't associative.
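
(A quick illustration in Python, with 1e16 standing in for the large value:)

    large = 1e16                      # at this magnitude, adjacent doubles are 2.0 apart
    small = 1.0

    acc = large
    for _ in range(1_000_000):
        acc += small                  # each addition rounds straight back to `large`
    print(acc == large)               # True: a million small additions changed nothing

    print(large + small * 1_000_000)  # 1.0000000000001e+16: grouping the small terms first survives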

@b0rk

> problem: adding very small numbers to large numbers results in no change

Or, the ability to represent numbers that are too small to make a change when added to e.g. 1, yet are nonzero. (The reason I want to point out this viewpoint is that when we fix this by moving to fixed point, we do it in large part by _removing the ability to represent those values_.)
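
(Concretely, in doubles:)

    tiny = 1e-300                # far below the spacing of doubles near 1.0, yet representable
    print(tiny != 0.0)           # True: the value exists on its own
    print(1.0 + tiny == 1.0)     # True: but it's too small to change 1.0
    # A fixed-point format with, say, 1e-6 resolution simply has no encoding for `tiny` at all.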

> problem: +0 != -0

I don't think this is much of a problem; the problem usually is trying to use equality comparisons on FP values in the first place. It's sometimes possible to do that in a way that makes sense, but it's very fiddly (and if you use anything complicated enough, like trig functions, the guarantees library functions give you no longer uniquely identify the result).
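
(The classic way this bites people, and the tolerance-based comparison that's usually wanted instead; picking the tolerances is exactly the fiddly part:)

    import math

    print(0.1 + 0.2 == 0.3)               # False: the two sides round differently
    print(math.isclose(0.1 + 0.2, 0.3))   # True: compares within a relative tolerance
    # math.isclose(a, b, rel_tol=1e-09, abs_tol=0.0): choosing rel_tol/abs_tol sensibly
    # for values that went through e.g. trig functions is where it gets fiddly.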

> example: using NaN as a key in a hashmap (research.swtch.com/randhash)

The only case where I've seen an FP-keyed hash{table,map} that wasn't an obvious mistake was memoizing the results of a function. I don't recall seeing any such hash{table,map} that made sense and wasn't serving as a cache.

I would say that the lack of common data structures that represent mappings from FP values in a way that matches the actual use cases is the more general problem here.
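
(The NaN case from the linked post, as it plays out in e.g. Python's dict:)

    nan = float("nan")
    cache = {nan: "value"}

    print(nan == nan)             # False: NaN compares unequal to itself
    print(cache[nan])             # works, but only because CPython checks identity before ==
    print(float("nan") in cache)  # False: a *different* NaN object can never be looked up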

@m0bi13

It seems to me that it's worth splitting this problem in two: will the texts survive on the Internet, and will the ability to keep using the site (adding comments via ActivityPub, users adding new texts) survive.

As for the first: if all else fails, anything that disappears slowly can be archived fairly easily by asking web.archive.org to do it (and that archive is fairly easy to find if you know the original URL). I wouldn't worry about this too much.

As for the second: in a sense, ActivityPub's support for migration provides some substitute for it. Beyond that, it's a purely social problem, and my intuitions about anything social are usually wrong, so I probably don't have anything substantial to say here.

@b0rk

I would rather phrase the money thing as "the exact correct answer _according to rules incompatible with the ones for FP_".

I think there are very many problems that are instances of "implementing a numerical algorithm", even if people who don't think in these terms won't call them that. For example, computing the sum of a long sequence of numbers (where you can accumulate errors by adding many numbers that are each below the smallest representable difference at the magnitude of the sum accumulated so far).
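
(The textbook fix for exactly that failure mode is compensated, a.k.a. Kahan, summation; a minimal sketch:)

    def kahan_sum(xs):
        total = 0.0
        c = 0.0                      # running compensation for lost low-order bits
        for x in xs:
            y = x - c
            t = total + y
            c = (t - total) - y      # whatever got rounded away in this step
            total = t
        return total

    values = [1e16] + [1.0] * 1_000_000
    print(sum(values))               # 1e+16: all the small terms were rounded away
    print(kahan_sum(values))         # 1.0000000000001e+16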

@danluu Do you know of any country's equivalent of NTSB that would manage to learn about and investigate accidents where at first glance some safety feature failed to operate?

@b0rk

Ah, I forgot FP exists and that pow() accepts non-integer exponents. I wonder how much of the gain would remain if (1 << mj) were a loop instead (I would guess most of it, on a log scale).

@b0rk Unless these were BigIntegers, I'd be very surprised, because n couldn't have been larger than 64.

I think it's marginally useful to know that multiplying by a power of 2 is special. I think it's more useful to know how numbers are represented and thus which things are necessarily expensive and which are potentially cheap on such representations. (For BigInts, the existence of a cheap shift instruction is probably lost in the rounding error when you compare actually doing repeated big-by-small multiplications with shifting.)

@b0rk Maybe knowing that they are bit-identical on the intersection of their ranges is ~directly useful? Also, that addition is the same bitwise operation on both. Once you know these two things, you know everything about two's complement (you can infer how any value is represented).
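
(For example, pretending we have 8-bit integers:)

    MASK = 0xFF                      # pretend our registers are 8 bits wide

    def bits_of(x):                  # raw bit pattern of a signed 8-bit value
        return x & MASK

    print(bits_of(100))              # 100: nonnegative values look the same in both views
    print(bits_of(-1))               # 255: -1 reuses the top of the unsigned range
    # Addition is the same operation on the bits whether you read them as signed or unsigned:
    print(bits_of(-3 + 7) == (bits_of(-3) + bits_of(7)) & MASK)   # True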

@b0rk

> - optimizing algorithms (like knowing that multiplying by 2^n is fast)

I see optimizing algorithms and doing things that are faster by a multiplicative constant due to the particular shape of our hardware as two very different things.

I think it's very useful to acquire the concept of invariants, pre- and post-conditions, and contracts. A common way to do so is to learn about some nontrivial data structures. However, an alternative way to learn that is to play around with distributed systems.
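
(By "acquire the concept" I mean roughly being able to annotate code like this; a lower-bound binary search is the usual small example:)

    def lower_bound(xs, target):
        # Precondition (the caller's side of the contract): xs is sorted ascending.
        lo, hi = 0, len(xs)
        while lo < hi:
            # Invariant: every index below lo holds a value < target,
            # and every index at or above hi holds a value >= target.
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        # Postcondition: lo is the first index with xs[lo] >= target (len(xs) if there is none).
        return lo

    print(lower_bound([1, 3, 3, 7], 3))   # 1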

Some basic level of optimizing algorithms (being able to estimate what's going to be faster than what, knowing what is practically fast and what isn't, etc.) is useful in pretty obvious ways.

Learning those quirks of hardware that make some things faster is IMO not directly useful (similar to learning what the limitations of various kinds of SIMD are). The only way in which I see it as useful is as part of learning how the layers between what you are writing and logic gates work. Knowledge about some of those layers helps with debugging, and I struggle to describe why I intuitively feel that having a passing knowledge of most of them is useful.
