I've been saying this for decades. There has been a bit of progress, and setbacks too:

"How bibliometrics and school rankings reward unreliable science" by
#IvanOransky et al,

bmj.com/content/382/bmj.p1887

I've always found the poor overall quality of research produced by honest actors to be a bigger problem than outright academic fraud. Somehow the latter never seems interesting or surprising to me, whereas the former points to serious systemic problems in how scientists are trained. How do we reinstitute rigorous methodological training, genuine curiosity, deep theoretical thinking, programmatic and systematic effort, and careful execution in scientific practice? That seems to be the harder problem to solve.

Could this be the paradigm shift all of #OpenScience has been waiting for?

Council of the EU adopts new principles:
"interoperable, not-for-profit infrastructures for publishing
based on open source software and open standards"
data.consilium.europa.eu/doc/d

and now ten major research organizations support the proposal:
coalition-s.org/wp-content/upl

What they propose is nearly identical to our proposal:
doi.org/10.5281/zenodo.5526634

Does this now get the ball rolling, or is it just words on paper?

petersuber  
This is big. No #embargoes. No #APCs. "The #EU is ready to agree that immediate #OpenAccess to papers reporting publicly funded research should be...

When you look out to cosmic distances, it's difficult to have any sense of 3D shapes. Take this bright galaxy, M87: Is it shaped like a ball, an egg, a pancake?
Turns out, there is now a way to tell! (1/2)
#perspective #space

“Responsible research assessment should prioritize theory development and testing over ticking open science boxes”

A new preprint comments on proposals to change hiring and promotion in psychology to make them more oriented toward open science.

psyarxiv.com/ad74m/

A few quotes follow: 🧵👉

#Science
#Psychology
@psychology
#OpenScience
#MetaScience
#MetaResearch
#SociologyofScience
#ScienceofScience
#STS
@stsing
#PsychJobs
@academicchatter
@academicsunite

This slide is from a talk I gave at an OSF symposium a few years back. It's still relevant: I think we should have prioritized, and should still prioritize, the set of issues on the right over those on the left. But I would now want to add measurement to the right side as well.

And a clarification: my prioritization is not because I think the right-side issues are more important (which I do), but because fixing the left-side ones won't make much of a difference until the right side makes sense.

Our work on the theoretical foundations of results reproducibility is out at #RSOS and is open access. We dissect the relationship between open science, replication experiments, and reproducible results and challenge many deep-seated assumptions. We specifically show why meaningful replications need to be based on deeper theoretical understanding and stronger empirical foundations.
royalsocietypublishing.org/doi

Conclusion: We need to design better experiments. For that, we need a theoretical understanding of what an experiment is and does.

@danhon I see you're a man who also has good taste in mice 👀

I do not want Slack to provide a probabilistic summary of what I said. I don't want Notion to guess what I'm going to say. I want to choose my words with clarity and precision in mind, and if people want me to take the time to read what they've written, I would hope that they've taken the time to choose their words too.

And I really want to take my words out of training data sets

"The single most important problem with null hypothesis testing is that it provides researchers with no incentive to develop precise hypotheses. To perform a significance test, one need not specify the predictions of either one’s own research hypothesis or those of alternative hypotheses"
(Gerd Gigerenzer 1993)
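
To make the quoted point concrete, here's a minimal sketch in Python (all data and effect sizes are invented for illustration): a one-sample t-test commits the researcher only to the null, while a likelihood comparison forces each rival hypothesis to state a precise predicted effect.

```python
# Minimal sketch of Gigerenzer's point; data and effect sizes are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical sample

# Null hypothesis significance test: only H0 (mean = 0) is ever specified;
# neither the research hypothesis nor any alternative makes a prediction.
t, p = stats.ttest_1samp(data, popmean=0.0)
print(f"NHST: t = {t:.2f}, p = {p:.3f} (no alternative ever stated)")

# Contrast: a likelihood comparison between two precise hypotheses,
# each committed to a specific predicted effect size.
h1_mean, h2_mean = 0.5, 0.0
loglik_h1 = stats.norm.logpdf(data, loc=h1_mean, scale=1.0).sum()
loglik_h2 = stats.norm.logpdf(data, loc=h2_mean, scale=1.0).sum()
print(f"log-likelihood ratio (H1 vs H2): {loglik_h1 - loglik_h2:.2f}")
```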

🌲
not reproducible ≠ wrong/false/fluke
reproducible ≠ true

not reproducible ≠ poor science
reproducible ≠ good science

reproducibility of results is not a reliable indicator of truth or research quality or epistemic progress.
🌲
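
A toy simulation (all numbers invented) of why the ≠ lines above hold: a design with a shared systematic bias reproduces a wrong result reliably, while an unbiased but noisy design looks "non-reproducible" even though it tracks the true value.

```python
# Toy illustration, all numbers invented: a shared systematic bias makes a
# wrong result reproduce reliably; an unbiased but noisy design does not
# "replicate" cleanly even though it tracks the true value.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.0

# Ten labs repeat a design with the same hidden confound: +0.5 bias,
# tiny noise. Every replication "succeeds"; every estimate is wrong.
biased_estimates = true_effect + 0.5 + rng.normal(0.0, 0.05, size=10)

# Ten labs run an unbiased but noisy design: estimates scatter widely
# around the truth, so naive replication checks call it a failure.
noisy_estimates = true_effect + rng.normal(0.0, 0.8, size=10)

print("biased design :", np.round(biased_estimates, 2), "-> agrees, all wrong")
print("noisy design  :", np.round(noisy_estimates, 2), "-> scatters, unbiased")
```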

I think the calculus of #RegisteredReports might have flipped in a #SurveillancePublishing APC-driven #OpenAccess world.

Subscription models meant that a journal could command a high price by being in high demand, in a self-reinforcing cycle: since most libraries subscribed, the journal had high readership, and so on. Multiply that by the power of bundling. Libraries, as the conduit, could tell who read what and tailor subscriptions accordingly. An actual loss of readership could impact the subscription cost during negotiations, so null results were less attractive because they commanded fewer readers and citations. Publication bias ensues. The classic story.

In an APC world, where profit derives from authors willing to pay directly for the attendant view counts and citations, a registered report is instead more like a commitment to pay, at some future time, to publish. If prices keep going up, the journal effectively holds your need to publish as a security.

This is doubly perverse in a surveillance-publishing system, where publishers operate paper recommendation and rating systems linked to funding and employment decisions. In that case, they can simply manufacture the view counts and citations, and even a literal "scientific value score", as a function of the APC price, so null results aren't even a problem, since the exclusivity-prestige link is partially dissolved.

I wonder whether the causes of publication bias could have changed substantially enough that registered reports could backfire as a means of combating it. The primary filter is the perceived importance of a piece of work, assuming the authors can pass the competency and design checks normal to the field, and that importance is most likely evaluated at least partially by the same system of self-fulfilling metrics used in the recommendation and scoring systems for funders and employers, so registered reports might directly reinforce hype cycles. Couple that with the prestige-gradient model of APC pricing, where one publisher owns many journals at different prestige levels and can bounce you down the ladder to one with a lower, but still high, APC.

Journals would then effectively be sorting papers by APC according to their propensity for views and citations, regardless of outcome. It's sort of a combination of payola and security. Plz lmk where I'm missing something here, bc I'm not just trying to shit on the parade.
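
For what it's worth, here's the argument above as a toy model; every number and tier below is invented for illustration, not drawn from any real publisher's pricing.

```python
# Toy model; all parameters are invented. Two pieces of the argument:
# 1) a registered report as a forward contract on a rising APC;
# 2) a prestige-gradient cascade sorting papers into APC tiers.

apc_today = 3000.0       # hypothetical current APC
apc_growth = 0.08        # assumed annual APC inflation
years_to_stage2 = 2      # lag from in-principle acceptance to publication

# The journal collects the *future* (higher) price for a commitment made today.
apc_at_publication = apc_today * (1 + apc_growth) ** years_to_stage2
print(f"APC committed today, collected later: {apc_at_publication:.0f}")

# One publisher, several journals; APC scaled to expected views/citations.
# Papers get "bounced down the ladder" to the highest tier they qualify for.
tiers = [("flagship", 9000, 100), ("mid-tier", 5000, 40), ("entry", 2500, 0)]

def place(expected_citations: float) -> tuple[str, int]:
    """Sort a paper into the highest tier whose citation threshold it meets."""
    for name, apc, threshold in tiers:
        if expected_citations >= threshold:
            return name, apc
    return tiers[-1][:2]

for c in (120, 50, 5):
    print(f"{c:>3} expected citations -> {place(c)}")
```

The sketch's only point is that once APCs grow over time and tiers track predicted citations, the journal's incentive attaches to the payment stream rather than to the result.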

📢 Yes we can! Only 19 more researchers' signatures are needed for the #PCIManifesto to reach 🌟 the symbolic threshold of 1️⃣0️⃣0️⃣0️⃣! Spread the word! #openedu #openscience #openaccess peercommunityin.org/pci-manife

and less rigorous researchers. It's a failed system. I don't believe in peer review at all, because it's a relic of a corrupt system that glorifies individual research papers and oversimplifies science to a handful of results. There's so much to dismantle.

Peer review? Every time I read your research, I am the peer. I read, process, think, evaluate, and decide what to do with it. I can choose to ignore it, challenge it, use it, improve upon it, etc. That's the work, no? We are each peer reviewers when we engage with others' work. We don't need journal involvement for peer review to happen. We can't rely on a few random, anonymous reviewers to do our job either. Outsourcing this basic part of the research process has only turned us into lazier readers >

Elon Musk, Twitter & freedom of the press

Imagine paying a clown like #Musk, on top of everything, to censor free reporting 😅

ZDF Frontal? Weren't they the ones who also kept reporting critically on the #Tesla shitshow in Grünheide?

Free Speech my ass!

Sorry for the mini-rant. I'm just disgusted right now.
