Preprint updated because it's now actually submitted - we discuss why differences in reliability might not actually tell you about differences in measurement quality - and what to do instead!

Get the full story at osf.io/preprints/psyarxiv/ud9r
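
For intuition, a toy sketch of my own (not necessarily the preprint's argument): under classical test theory, reliability is true-score variance over total variance, so two samples measured with the exact same error can still differ in reliability simply because one sample is more heterogeneous.

```python
import numpy as np

rng = np.random.default_rng(1)

def reliability(sd_true, sd_error, n=100_000):
    """Simulate X = T + E and return the squared true-observed
    correlation, i.e. the reliability of the observed scores."""
    t = rng.normal(0, sd_true, n)        # true scores
    x = t + rng.normal(0, sd_error, n)   # observed = true + error
    return np.corrcoef(t, x)[0, 1] ** 2

# Identical measurement error in both samples, different true-score spread:
print(reliability(sd_true=15, sd_error=5))  # ~0.90
print(reliability(sd_true=5,  sd_error=5))  # ~0.50
```

Same instrument, same error variance, very different reliabilities - which is one way a reliability difference can fail to reflect a difference in measurement quality.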

Happy to announce I'm teaching a summer school on cutting-edge #metaanalysis using open-source software, from the basics up to machine learning methods for exploring heterogeneity, in collaboration with
MethodsNET at @Radboud_uni ru.nl/en/education/education-f

Similarly, if anyone is aware of a literature search tool that allows for selection based on data being openly available, that would be great as well! Something like Google Scholar, EBSCOhost, etc., where I can filter for the presence of open data

Anything that goes in a similar direction also helps! I'm aware of the typical ManyLabs and Registered Replication Report stuff, but my search for systematic variation in either the outcome measure (psychometric scale, etc.) or the treatment/manipulation/intervention has unfortunately come up short

I'm searching for open replication data! Conceptual replications employing either:
- same treatment/intervention but different outcome measure
or:
- different treatment (dose)/intervention but identical outcome measure
(or both)

Any pointers appreciated!

Sometimes, to distinguish a population parameter from an observed estimate, the term "true" is used. For example, researchers sometimes use "true mean" to distinguish the population mean mu (Greek \mu) from the sample mean (x-bar).

In classical test theory, we use "true score" to distinguish between an observed score, which contains error, and the score we would have gotten if no measurement error were involved.

To properly distinguish the two types of "true-ness", is there another adjective I could use for the first case, the population mu (Greek \mu)?
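
To make the two kinds of "true-ness" concrete, a minimal simulation of my own (numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sd_true, sd_error, n = 100.0, 15.0, 5.0, 500

t = rng.normal(mu, sd_true, n)       # each person's true score (CTT sense)
x = t + rng.normal(0, sd_error, n)   # observed scores X = T + E

print(x.mean())    # sample mean x-bar, estimating the population mu
print(x[0], t[0])  # one observed score vs. that person's true score
```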

The @DataColada team has been sued by Francesca Gino for exposing the fraudulent data underlying four of her papers (she claims defamation). The legal defense could get expensive. Please join us in providing financial support for their defense.

gofundme.com/f/uhbka-support-d

There is a GoFundMe to help the @datacolada team with the legal costs of being sued by Francesca Gino: gofund.me/58491686

If you can, chip in!

@alexh

The more I think about it, the more certain I am that this is actually happening (to some extent) - at the very least in judging the effects of novel treatments & interventions. It would be great to talk to some people dealing with such studies - to
a) get confirmation that this is indeed what's happening
b) get an idea of how such benchmarks or baselines are developed
c) understand how we can make such processes our own on the more experimental, ad hoc side of Psychology

@alexh

I remember a discussion on Twitter from a while ago - I think there's some movement in clinical Psychology as well to move away from standardized effect sizes and instead discuss change in measures like the BDI. I'm totally out on a limb here though :D
Additionally, discussing something like therapy success sounds promising to me as well - it may be hard to define, but once defined, interpretation and modelling should be more transparent
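
To make the contrast concrete, a toy sketch of the two reporting styles (all BDI numbers invented by me):

```python
import numpy as np

# Hypothetical pre/post BDI scores for a treated group (invented data).
pre  = np.array([28, 31, 25, 30, 27, 33, 29, 26])
post = np.array([19, 24, 18, 22, 20, 27, 21, 17])

diff = pre - post
raw_change = diff.mean()              # improvement in BDI points
d_z = raw_change / diff.std(ddof=1)   # standardized change (mean diff / SD of diffs)

print(f"improvement: {raw_change:.1f} BDI points")  # interpretable against clinical benchmarks
print(f"standardized: d_z = {d_z:.2f}")             # unit-free, needs extra context
```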

@alexh

Oh, I wasn't aware of this yet! That sounds like a great step in the right direction - I've just started looking into this, and it seems they specifically communicate "growth in SD" (standardised effect size) and "additional days of learning" (unstandardised/intuitive effect size). I haven't really grasped how these additional days are estimated, though. I'd assume with some kind of (linear) predictive model?
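
If it really is just a rescaling, it could look something like the sketch below - to be clear, both constants are entirely made up by me for illustration, not taken from any actual report:

```python
def sd_growth_to_days(d, sd_per_year=0.4, school_days_per_year=180):
    """Hypothetical conversion: if a typical school year yields
    `sd_per_year` SDs of growth over `school_days_per_year` days,
    an effect of d SDs maps to this many extra days of learning.
    Both constants are invented for illustration."""
    return d / sd_per_year * school_days_per_year

print(sd_growth_to_days(0.1))  # 45.0 "additional days of learning"
```

Presumably the growth-per-year baseline would come from empirical norming data rather than a flat constant, but the basic logic might be this simple.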

@alexh @improvingpsych @JFuenderich @epizyklen@nerdculture.de

Agreed! I think "if the measure means much" is the key point here - I don't really have the experience to judge this, but I get the feeling that quite often we don't know enough about our measures to interpret them directly in a meaningful way. Resorting to standardized effect sizes can lead to a (false) sense of security in interpretations - I'm excited to hear what people at SIPS think about this.

@alexh @improvingpsych @JFuenderich @epizyklen@nerdculture.de

Too bad you can't make it - I would have loved to hear more about your experience with this!

What's your reason for preferring absolute effect sizes?

@improvingpsych

@JFuenderich, @epizyklen@nerdculture.de and I would love to discuss standardisation practices with you in Padua (or remotely). The video is on OSF: osf.io/eab9u or, with subtitles, on YouTube: youtu.be/tRveDihxtfM

Going to SIPS, but unsure which unconference to visit? Go ahead and give our very brief (5 min!) ramp-up video on standardisation practices in Psychology a watch! @improvingpsych

We believe all research should be immediately available #OpenAccess, regardless of whether it was publicly or privately funded.

🚨Psych-DS is hiring! 🚨
We are looking for a software engineer to build validation tools (Python, R, JavaScript/client-side browser) for a technical specification/data standard for behavioral datasets.

Details in thread, application here: careers.peopleclick.com/career
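
For a rough flavour of what such a validation tool might do - this is my own guess at the shape of the task; the rules below are hypothetical and NOT the actual Psych-DS specification:

```python
from pathlib import Path

REQUIRED_FILES = ["dataset_description.json"]  # invented requirement list

def validate_dataset(root: str) -> list[str]:
    """Return a list of (hypothetical) specification violations."""
    errors = []
    root_path = Path(root)
    for name in REQUIRED_FILES:
        if not (root_path / name).exists():
            errors.append(f"missing required file: {name}")
    for csv in root_path.glob("data/*.csv"):
        if not csv.stem.endswith("_data"):  # invented naming rule
            errors.append(f"unexpected data file name: {csv.name}")
    return errors

print(validate_dataset("my_study") or "dataset looks valid")
```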

On March 30th, 1867, Alaska was purchased from Russia for $7.2 million. Today we present to the internet: our totally free framework to harmonize & analyze multi-lab data! Thanks for having us at #ESMARConf #eshackathon

See here on #youtube: youtube.com/watch?v=m-W8O2yhRe

Also, check out our preprint: psyarxiv.com/bcpkt/
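
Not the framework's actual code (see the video and preprint for that), but for anyone new to multi-lab data: pooling lab-level effect estimates with a random-effects model is the canonical starting point. A minimal sketch with invented numbers, using the DerSimonian-Laird heterogeneity estimator:

```python
import numpy as np

# Invented per-lab effect estimates and their sampling variances.
effects   = np.array([0.30, 0.10, 0.45, 0.22, 0.05])
variances = np.array([0.005, 0.004, 0.006, 0.005, 0.004])

# DerSimonian-Laird estimate of between-lab heterogeneity (tau^2).
w = 1 / variances
pooled_fe = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - pooled_fe) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects pooled estimate and standard error.
w_re = 1 / (variances + tau2)
pooled_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled effect = {pooled_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.3f}")
```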
