Hello #statstodon! A reviewer asked me to perform a post-hoc #PowerAnalysis. I know this is generally not advised, because if you replace the a priori effect size with the effect size measured in the experiment, this introduces a spurious relationship between the significance level of the test and the measured power.
... but does that mean there is no proper way of assessing power retrospectively? For example, if you refrain from using the measured effect size and instead simulate power over a range of "a priori" effect sizes unrelated to the results of the test, then the dependency of power on the significance level should not arise?
#stats #statschat @lakens
@lakens Thank you! And this is a very good reminder that your book is in my to-read list!
@leovarnet ok. Sorry to just point to it, but I get a lot of questions every day, and pointing to answers is most efficient on my end.
@leovarnet btw that chapter is just my Collabra paper, with a bit of reshuffling and additions.
@lakens Of course, that was just the pointer I needed! Thanks again
@leovarnet @lakens Our thoughts:
Heinsberg LW, Weeks DE. Post hoc power is not informative. Genet Epidemiol. 2022 Oct;46(7):390-394.
@StatGenDan This looks very relevant to my question indeed! Thank you!
@leovarnet there is sensitivity power analysis (see the section in the sample size justification chapter in my textbook). That can be done after a study. Otherwise, compute power for the smallest effect size of interest.
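The sensitivity power analysis suggested above can be sketched in a few lines: with the sample size and alpha fixed at the values actually used, compute power over a grid of hypothetical effect sizes, never touching the measured effect. This is an illustrative sketch, not code from the thread; the test type (two-sided, two-sample t-test with equal groups), n = 30 per group, and the Cohen's d grid are assumed example values.

```python
# Sketch of a sensitivity power analysis: power as a function of a
# *range* of hypothetical effect sizes, with sample size and alpha
# fixed at the study's values. The measured effect size is never used,
# so the result is not tied to the observed p-value.
# Assumed example setup: two-sided two-sample t-test, n = 30 per group.
import numpy as np
from scipy.stats import t, nct

def two_sample_power(d, n_per_group, alpha=0.05):
    """Analytic power of a two-sided two-sample t-test (equal n)."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)   # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)
    # P(reject) under the alternative = noncentral-t mass beyond the
    # two-sided critical values
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

# Power curve over a grid of "a priori" effect sizes (Cohen's d)
effect_sizes = np.arange(0.1, 1.01, 0.1)
powers = [two_sample_power(d, 30) for d in effect_sizes]
for d, p in zip(effect_sizes, powers):
    print(f"d = {d:.1f}  ->  power = {p:.2f}")
```

Reading the curve off tells you which effect sizes the study had a reasonable chance of detecting, which is the retrospective question the thread converges on.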