
I love #GitHub, but is the fact that it is a proprietary, trade-secret system that goes against Free and Open Source Software (#FOSS) principles an issue? Even if you think not, then surely their unpermitted use of #copyleft code to train the for-profit #Copilot is an issue. Really interesting article from @conservancy diving into this issue here: sfconservancy.org/GiveUpGitHub

I've been saying this for decades. There has been some progress, and setbacks too.

"How bibliometrics and school rankings reward unreliable science" by
#IvanOransky et al.

bmj.com/content/382/bmj.p1887


If we want better science we should start by deflating the importance of citations in promoting, funding, and hiring scientists, say Ivan Oransky and colleagues. How much is a citation worth? $3? $6? $100 000? Any of those answers is correct, according to back-of-the-envelope calculations over the past few decades. The spread between these numbers suggests that none of them is accurate, but it's inarguable that citations are the coin of the realm in academia. Bibliometrics and school rankings are largely based on publications and citations. Take the Times Higher Education rankings, for example, in which citations and papers count for more than a third of the total score. Or the Shanghai Ranking, 60% of which is determined by publications and highly cited researchers. The QS Rankings count citations per faculty as a relatively low 20%. But the US News Best Global Universities ranking counts publication and citation related metrics as 60%. These rankings are not, to borrow a phrase, merely academic matters. Funding agencies, including many governments, use them to decide where to award grants. Citations are the currency of academic success, but their …


@tomek

Darn, and I could have sworn I paid that way once before. In theory, it could also have just broken down...

@tomek isn't the screen also the contactless "reader"?

I've always found the poor overall quality of research produced by honest actors to be a bigger problem than outright academic fraud. Somehow the latter never seems interesting or surprising to me, whereas the former points to serious systemic problems in scientific training. How do we reinstitute rigorous methodological training, genuine curiosity, deep theoretical thinking, programmatic and systematic effort, and careful execution in scientific practice? That seems to be the harder problem to solve.

Could this be the paradigm shift all of #OpenScience has been waiting for?

Council of the EU adopts new principles:
"interoperable, not-for-profit infrastructures for publishing
based on open source software and open standards"
data.consilium.europa.eu/doc/d

and now ten major research organizations support the proposal:
coalition-s.org/wp-content/upl

What they propose is nearly identical to our proposal:
doi.org/10.5281/zenodo.5526634

Does this now get the ball rolling, or is it just words on paper?

When you look out to cosmic distances, it's difficult to have any sense of 3D shapes. Take this bright galaxy, M87: Is it shaped like a ball, an egg, a pancake?
Turns out, there is now a way to tell! (1/2)
#perspective #space

@talyarkoni@sigmoid.social

Also, the first general AI program is 66 years old (the General Problem Solver) ;)

@freemo @louiscouture
I always thought the qoto logo was a green onion with eyes!

@lakens
Assuming a normal distribution under H0 (simply because of the CLT) can be perfectly valid, and so can a t-test of H0. But at the same time, an equivalence test might not be!
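
A minimal sketch of the kind of check this implies (my own illustration in Python with numpy/scipy, not from the thread): under H0 with skewed (lognormal) data and a moderate n, the type I error of Welch's t-test stays close to the nominal 5%, because the CLT makes the sample means approximately normal. The sample size, distribution, and number of simulations are arbitrary assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sim, alpha = 50, 10_000, 0.05

false_positives = 0
for _ in range(n_sim):
    # Both groups come from the same skewed distribution, so H0 is true.
    a = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    b = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    _, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    false_positives += p < alpha

print(f"empirical type I error: {false_positives / n_sim:.3f}")  # close to 0.05

With a much smaller n or a heavier-tailed distribution the empirical rate drifts further from 5%, which is where the worry about equivalence tests below comes in.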

@lakens
Yeah, I know your article about it. The Behrens-Fisher problem has been heavily discussed for years :)
But both tests assume a normal distribution of the means, and they share the same problem of ignoring heavy-tailed distributions / violations of normality / skewed distributions / heteroscedasticity / mediation / moderation. Whatever you call it, it is a situation where the sample is too small to be effectively helped by the CLT.

Let me repeat: equivalence testing cannot support the conclusion that an effect is small when the effect is relatively rare compared with the sample size, regardless of the significance of the results.
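
To make that concrete, here is a minimal simulation sketch (mine, not from the thread) of a two one-sided tests (TOST) equivalence procedure built on Welch's t-test: the effect is strong (+3 SD) but present in only 5% of the treated group, so the mean difference is tiny and the TOST almost always declares "equivalence". The bounds, mixture fraction, and sample sizes are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sim, alpha, d = 200, 2_000, 0.05, 0.5   # d = equivalence bound in SD units

def tost_p(a, b, d):
    # Two one-sided Welch tests against the bounds -d and +d;
    # the TOST p-value is the larger of the two one-sided p-values.
    p_lower = stats.ttest_ind(a + d, b, equal_var=False, alternative='greater').pvalue
    p_upper = stats.ttest_ind(a - d, b, equal_var=False, alternative='less').pvalue
    return max(p_lower, p_upper)

equivalent = 0
for _ in range(n_sim):
    control = rng.normal(0.0, 1.0, size=n)
    # Treatment: 95% show no effect, 5% respond strongly (+3 SD),
    # so the mean shift is only about 0.15 SD.
    responder = rng.random(n) < 0.05
    treated = rng.normal(0.0, 1.0, size=n) + 3.0 * responder
    equivalent += tost_p(treated, control, d) < alpha

print(f"'equivalence' declared in {equivalent / n_sim:.0%} of simulations")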

@lakens @JorisMeys

Estimating the effect size and CI via Welch's t-test assumes a normal distribution of the effect :) I mentioned that :)

@JorisMeys
But we never know whether the sample is big enough to detect a rare (but strong) effect. Equivalence testing is an easy way to underestimate the required sample size (which is probably why equivalence testing is so popular in pharmaceutical studies).

@lakens

1) Of course, you can assume any distribution. (And that procedure is called the "Neyman-Pearson theory of statistical testing".)
'Equivalence testing' is a procedure almost always tied to the t-test, as in your textbook (photo 1) or the TOST procedure (Schuirmann, 1987).

2) "Violations of normality mostly have very little impact on error rates", violation of normality have biggest impact on estimation of variance, so also on error rates and effect estimation. (It's why heteroscedasticity is so important.)

3) It will be easy to show how badly 'equivalence tests' can go wrong if the assumptions ignore the non-normality of the effect (by using a t-test).
I think I can run some simulations after 22:00 GMT. For now, I can show what happens to the p-value distribution when the effect is (very) non-normal (photo 2: no effect, non-normal distribution when H1 is true; photos 3 & 4: valid use of the t-test, effect big but moderated).
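
Along the same lines, here is a minimal sketch (my own, not the photos referenced above) of what happens to the p-value distribution when the data are strongly non-normal and n is small: under a true H0 the p-values of a one-sample t-test should be uniform, but with a shifted-exponential sample of n = 10 they are not, so error rates (and any equivalence test built on the same t statistic) are off. All numbers are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_sim = 10, 20_000

p_values = np.empty(n_sim)
for i in range(n_sim):
    # Shifted exponential: very skewed, but the true mean is exactly 0, so H0 is true.
    x = rng.exponential(1.0, size=n) - 1.0
    p_values[i] = stats.ttest_1samp(x, popmean=0.0).pvalue

# The fraction of p < .05 should be 0.05 if the test behaved; here it drifts away,
# and the histogram of p-values is visibly non-uniform.
print(f"P(p < .05) = {np.mean(p_values < 0.05):.3f}")
print(np.histogram(p_values, bins=10, range=(0, 1))[0])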

Schuirmann, D. J. (1987). A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability. Journal of Pharmacokinetics and Biopharmaceutics, 15, 657-680.
link.springer.com/article/10.1
