Taking a closer look at this paper, I really dislike it.

nature.com/articles/s41586-022

It's clear that newer papers *must* have a lower “disruption” score than older ones under a null model: the score rewards being cited without one's references being cited, and as reference lists grow, later work is ever more likely to cite some of a paper's references as well. They even confirm this in the supplemental material with a randomization test.

When comparing with the null model they compute only the z-score, getting values of at most 4 or so. This is also beside the point: as usual, small, meaningless deviations from a null model can still be statistically “significant.” Effect size ≠ statistical significance.
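To illustrate the effect-size point (all numbers here are invented): with a million samples, a shift of just 0.004 standard deviations from the null already yields z = 4, even though the effect itself is negligible.

```python
# Illustration only: a tiny effect plus a huge sample -> a "significant" z-score.
import math

n = 1_000_000                # hypothetical sample size
null_mean, sd = 0.0, 1.0     # null distribution
observed_mean = 0.004        # tiny deviation from the null

z = (observed_mean - null_mean) / (sd / math.sqrt(n))  # standard error shrinks with n
d = (observed_mean - null_mean) / sd                   # effect size (Cohen's d)

print(f"z = {z:.1f}")   # z = 4.0 -> highly "significant"
print(f"d = {d:.3f}")   # d = 0.004 -> practically no effect
```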

Finally, according to their definition, review papers would count as “disruptive”, because they funnel citations that would otherwise go to the works they review. And a paper that does not cite anyone but is universally cited would not be “disruptive”. 🤷
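For concreteness, here is a minimal sketch, assuming the common CD-index formulation D = (n_F - n_B) / (n_F + n_B + n_R); the tiny citation network below is invented, and shows how a review-style paper that funnels citations away from its references scores as “disruptive”:

```python
# Minimal sketch of the CD/"disruption" index, assuming the common formulation
#   D = (n_F - n_B) / (n_F + n_B + n_R), where among later papers:
#   n_F cite the focal paper but none of its references,
#   n_B cite both the focal paper and at least one of its references,
#   n_R cite at least one reference but not the focal paper.

def disruption_index(focal_refs, later_papers):
    """focal_refs: set of the focal paper's references.
    later_papers: reference sets of later papers; the string 'focal'
    marks a citation to the focal paper itself."""
    n_f = n_b = n_r = 0
    for refs in later_papers:
        cites_focal = "focal" in refs
        cites_refs = bool(refs & focal_refs)
        if cites_focal and cites_refs:
            n_b += 1
        elif cites_focal:
            n_f += 1
        elif cites_refs:
            n_r += 1
    total = n_f + n_b + n_r
    return (n_f - n_b) / total if total else 0.0

# Invented example: a review citing r1..r3 that later work cites *instead of*
# those references scores well above zero.
review_refs = {"r1", "r2", "r3"}
later = [{"focal"}, {"focal"}, {"focal"}, {"focal", "r1"}]
print(disruption_index(review_refs, later))  # 0.5
```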

@networkscience
@academicchatter
#networkscience #networks

@tiago @networkscience @academicchatter
Interesting points, thank you for sharing! I need to read it… I wonder what @LaurelineLogiaco thinks of it.

@elduvelle @tiago @networkscience @academicchatter
It's a good point that the disruption index (DI) chosen in Park et al. is not perfect, though it does correlate with human-labeled novelty; see direct.mit.edu/qss/article/1/3. A positive DI in Park et al. requires citations of the focal paper but none or few of its refs. This is indeed harder to achieve with larger citation lists, in particular if people cite older works not necessarily because they are still the canon but possibly for discussion.

That said, I think there are measures beyond the DI that also argue for an increased difficulty in producing, and getting recognition for, 'disruptive' work, both in the Park et al. paper (cf. their fig. 6 on the diversity of scientific knowledge used, or their fig. 3 on the vocabulary of papers) and elsewhere. For instance, Chu and Evans (pnas.org/doi/10.1073/pnas.2021) focused on the Gini coefficient of citations, the duration of dominance of papers, and the probability of a paper gradually becoming very cited. In the context of comparing the success of innovative papers among different types of researchers, Hofstra et al. used ML to quantify the presence of new conceptual linkages within papers (pnas.org/doi/10.1073/pnas.1915), and showed that the categories of people who recently started to join academia (under-represented minorities) produced more 'innovative' work that had a hard time getting cited.

I think this converging evidence from many different measures and analysis methods supports the underlying hypothesis that disruptive/novel contributions have a hard time being seen and valued, perhaps more today than in the past, for an ensemble of reasons. Even if the effect size is probably much smaller than suggested by the DI in Park et al. ;-)
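As an aside, the Gini coefficient mentioned above is straightforward to compute; a minimal sketch (the citation counts are made up), where values near 1 mean a few canonical papers absorb most citations:

```python
# Hedged sketch: Gini coefficient of a citation distribution, the kind of
# concentration measure Chu and Evans apply. The counts below are invented.
def gini(counts):
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    # Standard formula for sorted data: G = 2*sum(i*x_i)/(n*total) - (n+1)/n
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

citations = [0, 0, 1, 2, 3, 5, 8, 40, 200]  # a canon-dominated field
print(f"Gini = {gini(citations):.2f}")      # ~0.80: citations are concentrated
```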

@LaurelineLogiaco @tiago
Wow… I didn’t expect such a detailed response, thank you so much! It does make sense…
