
@delroth
Have you seen the hydrogen bottle that the protons are coming from?

@dr2chase @crschmidt @attie

Or they were tired of opposing stupid ideas in general, so they wanted to focus on opposing the stupid ideas that have the worst consequences.

@cliffordheath @attie

Yeah, they could simply describe a relaxed comparison that would at least free implementations from having to reproduce quantization noise (e.g. the pointwise difference between the original and the decompressed signal is at most 1 everywhere). But if they wanted to ask people not to faithfully reproduce other kinds of noise, they would first need to mostly solve the problem by describing how to compare two signals for equality modulo all those kinds of noise.
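A relaxed criterion like that is at least easy to state precisely. A minimal sketch (my own illustration; the name `close_enough`, the tolerance of 1, and plain integer-list signals are all assumptions, not anything from the actual challenge):

```python
def close_enough(original, decompressed, tol=1):
    """Relaxed 'lossless' check: accept the decompressed signal if every
    sample is within `tol` quantization steps of the original.
    (Hypothetical criterion for illustration, not the challenge's rule.)"""
    if len(original) != len(decompressed):
        return False
    return all(abs(a - b) <= tol for a, b in zip(original, decompressed))

# An off-by-one quantization wobble passes; a larger error does not.
print(close_enough([8, 9, 10], [9, 9, 9]))   # True
print(close_enough([8, 9, 10], [8, 9, 12]))  # False
```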

Thesis, I'm sure this one is definitely NSFW 

@koakuma

Also see xkcd.com/1403/ and the Finnish tradition of a graduation sword for new PhDs.

@crschmidt @attie

This is data sampled from some sort of sensors. It will have some sort of white-ish noise added to it because, among other things, the sensors quantize, and if a value is close to the boundary between 8 and 9, the choice of what the sensor emits can become arbitrarily sensitive. They are asking for lossless compression, so they are asking for that noise to be reproduced. That sounds totally impossible at the data rate described.

@codefolio @attie

They explicitly ask for lossless though. (To be fair, asking for lossy requires having a loss metric, which requires a good understanding of what's signal and what's noise.)

@cliffordheath @attie

This is also a reason why their request is likely simply impossible. I expect the signal to have some amount of white noise, and they ask for lossless compression, so they are asking for the noise to be reproduced exactly.

They have 10-bit samples and want more than 200x compression (presumably relative to linear PCM, which is the format of the files they published). This can only work if any white noise present itself takes less than 1/200 of the data rate, i.e. if the white noise's amplitude is on the order of 2^-20 of their resolution (encodable in ~1/20 bit/sample).

(I speak of white noise only, because I'm assuming it's uncorrelated in different samples, both across electrodes and across time.)
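To put rough numbers on the argument above (my own back-of-the-envelope, not from the challenge; it models the noise as independent flips of the least significant bit, which is an assumption):

```python
import math

# 10-bit samples compressed more than 200x leave under 0.05 bits per
# sample for *everything*, incompressible noise included.
bits_per_sample = 10
compression_ratio = 200
budget = bits_per_sample / compression_ratio  # 0.05 bits/sample

def binary_entropy(p):
    """Shannon entropy in bits of a biased coin: a lower bound on the
    cost of encoding an LSB that flips independently with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Even an LSB that flips only 0.5% of the time nearly exhausts the
# budget, and a 1% flip rate already exceeds it:
print(round(budget, 3))                 # 0.05
print(round(binary_entropy(0.005), 3))  # 0.045
print(round(binary_entropy(0.01), 3))   # 0.081
```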

Ignoring any correlations between general health and weight, I would naively think that risk would be proportional to weight (or rather, to the weight of the tissue in question): the chance of creating a mutation that is effective at creating a malignancy should be roughly constant per cell per unit of time, so the total rate of that happening should scale with the number of cells.

Is there an obvious reason why this scaling is wrong, or is it not observable due to the health-weight correlation? (I've spent a few minutes trying to look it up, but found a mountain of experimental results correlating BMI with cancer risk only.)

@whitequark @mcc

Eh, it does as badly as everything else on grays and sieverts (it will happily convert between one and the other).

@whitequark @mcc

This made me start thinking about how to add handling of logarithmic units to it :)

@b0rk You might need to back the merge out though, if the merge was not a fast-forward but was trivial enough not to require your assistance.

@kuba

In my opinion, number 1 has two slightly different interpretations:

- when I choose courses, I'm guided by which ones I consider important,
- when I look at which courses I choose, it seems to me that I pick the most important ones.

@b0rk Out of curiosity, why does the "do a fast-forward merge" tip not use `--ff-only`? (I would expect it to, because that makes the situation less confusing if a fast-forward merge is impossible. But maybe brevity won out?)
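For anyone curious about the difference in behaviour, here is a throwaway-repo sketch (assumes `git` is on your PATH; everything happens in a temp directory, so it is safe to run anywhere):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)  # default branch name varies by git version
git commit -q --allow-empty -m base
git branch feature
git commit -q --allow-empty -m on-main
git checkout -q feature

# Case 1: feature is strictly behind, so a fast-forward is possible.
git merge --ff-only "$main" >/dev/null && echo "fast-forward: ok"

# Case 2: once the branches diverge, --ff-only refuses loudly instead
# of quietly creating a merge commit.
git reset -q --hard HEAD~1             # back to base
git commit -q --allow-empty -m on-feature
git merge --ff-only "$main" 2>/dev/null || echo "fast-forward: refused"
```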

BTW, I really like that you mention `--force-with-lease` over `--force` (especially since I keep forgetting the name of the former flag).

@Suiseiseki @sohkamyung

I would guess that the main risk comes from blows to the valve (which enjoys significant but incomplete protection from that collar).

@sophieschmieg

I'm somewhat amused that you complained about the queue and promptly contributed to its growth. :)

Thanks for the recommendation.

@sarahjamielewis

Do you want to see the whole graph in one view, with each node having some (editable) position, or do you envision some other visual representation?

I'm asking because you mention thousands of nodes, and that seems like something that can stay manageable in a single-sheet setup only if the graph is really sparse.

@_thegeoff @ottaross

Clumpiness needs some notion of scale: at some scale, a big ball of water is extremely clumpy, because most of it is ~empty space with some very heavy nuclei strewn around. At a coarser scale, it's very uniform.

(At sufficiently coarse scales ~everything becomes uniformish, but the coarseness of that scale depends on the thing and IMO there might be multiple changes back-and-forth below that.)

@charonpdx @tinker

Also, printers tend to print confidential information.
