vkehayas boosted

I've written before about why the current system of peer review is failing science. Now I'd like to set out a few ideas about what I think could replace it. This is an opinionated list. It's what I'd like to happen. However, different people and fields will want different things and it's vital to come up with a flexible system that allows for this diversity.

It must be ongoing, open, post-publication peer review, for the reasons set out in the previous article: pre-publication peer review systematically fails as a check on technical correctness and creates perverse incentives and problematic biases in the system as a whole. Making it an ongoing and open process allows us to discover problems that may be missed in a closed process with a small number of opaquely selected reviewers. It lets us concentrate more review effort on influential work that ought to be more closely scrutinised than it is at the moment. And of course, instant publication speeds up science, allowing people to start making use of new work immediately rather than waiting months or years for it to become available.

Reviews should be broken down into separate components: technical correctness, relevance for different audiences, opinions of likely importance, etc. Reviewers do not need to contribute all of these, and readers or other evaluators should feel free to weight these components in whatever way suits them.

We need a better user interface and experience for navigating and contributing reviews. Every time you look at a paper, whether just the abstract or the full text, you should be presented with an up-to-date set of indicators like 3✅ 2❌ 1⚠ for three positive reviews, two reviews with unresolved major issues and one with minor issues. Clicking on these would pop up the relevant reviews and allow the reader to quickly drill down to more detailed information. Similarly, contributing reviews or commentary should be frictionless. While reading a paper you should be able to highlight text and add a review or commentary with virtually no effort. A huge amount of evaluation of papers is already done by individual readers and journal clubs, but all that work is lost because there's no easy way to contribute it. Realising all this requires shifting away from the PDF to a more dynamic format, and abandoning the outmoded notion of the version of record.
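
To make the indicator idea concrete, here is a minimal sketch of how structured review components might be collapsed into those counts. The schema and thresholds are hypothetical illustrations, not an existing standard:

```python
from collections import Counter

# Purely illustrative review records; the field names are hypothetical,
# not an existing standard for structured reviews.
reviews = [
    {"technical": "pass", "issues": []},
    {"technical": "fail", "issues": ["statistics do not support the claim"]},
    {"technical": "pass", "issues": ["minor: figure 2 axes unlabelled"]},
]

def summarise(reviews):
    """Collapse structured reviews into the indicator counts shown next to a paper."""
    counts = Counter()
    for review in reviews:
        if review["technical"] == "fail":
            counts["❌"] += 1   # unresolved major issues
        elif review["issues"]:
            counts["⚠"] += 1    # technically sound, but minor issues remain
        else:
            counts["✅"] += 1    # positive review, no outstanding issues
    return " ".join(f"{n}{symbol}" for symbol, n in counts.items())

print(summarise(reviews))  # -> 1✅ 1❌ 1⚠
```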

There needs to be a way to integrate all the different sources of feedback so you don't have to visit a bunch of different websites to find out what people are thinking about a paper, but instead it just pops up automatically when you open it. That will require standardised ways of sharing information between different organisations doing this sort of feedback.
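
As a rough illustration of what that standardisation could enable, here is a sketch of an aggregator that pulls evaluations of one paper from several services and merges them. The endpoints and field names are invented for the example; no such shared format exists yet:

```python
import json
import urllib.request

# Hypothetical services exposing evaluations of a paper (keyed by DOI)
# in a common JSON format; the URLs and fields are placeholders.
SOURCES = [
    "https://reviews.service-a.example/api/evaluations?doi={doi}",
    "https://feedback.service-b.example/api/evaluations?doi={doi}",
]

def gather_feedback(doi):
    """Fetch and merge evaluations of one paper from every registered source."""
    merged = []
    for template in SOURCES:
        with urllib.request.urlopen(template.format(doi=doi)) as response:
            merged.extend(json.load(response))
    # Newest first, so a reader opening the paper sees the current state
    # of the discussion without visiting each site separately.
    return sorted(merged, key=lambda item: item.get("date", ""), reverse=True)
```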

None of these are particularly new ideas. The new model of eLife is about creating structured post-publication reviews with a standardised vocabulary, and their nascent Sciety platform is an attempt to integrate all the different sources of feedback. hypothes.is is a great first step towards improving the interface for review (although their recent move to for-profit status is worrying). The key will be to find a way to put them all together and make it frictionless and pleasant to engage with. It will require a revolutionary change, because to make all these things work together it all has to be open and free, and legacy publishers will fight that.

Finally, with Neuromatch Open Publishing, we are working towards building a non-profit infrastructure to enable us to collectively do all these experiments and more.

Ending support for legacy academic publishing

I profoundly disagree with the current system of academic publishing, and so I have decided that I will no longer give any voluntary labour to support it. I believe it no longer serves science well for us to maintain this unhealthy system. Instead, I will spend that time building alternatives.

Originally, the purpose of a journal was simple dissemination of knowledge. The internet has made this redundant. Only relatively recently did journals acquire the secondary purposes of organising peer review and editorial selection. Peer review was not common until the mid 20th century, with Nature only starting to use it systematically in 1973 and The Lancet in 1976. It has several functions: it can serve to evaluate the technical correctness of a study, give feedback to authors to help communicate their work better, and give an opinion on the significance of work to help editors make publishing decisions. Normally when we think about peer review we consider all these functions and their value to science bundled together, but they are separate and I believe each can be done better in a different way.

Evaluating the technical correctness of work has a clear value and an important role in science, but peer review as currently managed is not reliable enough. Firstly, having typically only two reviewers means that there are relatively few opportunities to catch errors, and it is therefore unsurprising that many errors are found in published papers. Errors found following publication are rarely corrected. Secondly, the quality of peer reviews is very uneven, with some reviewers giving very careful and detailed analysis that can transform a paper, and some giving a quick opinion based on a skim read. The latter is not a moral failing of reviewers, but an unavoidable consequence of the excessive demands made on scientists' time, and of the fact that time spent on peer review is rarely, if ever, counted or valued by people making decisions about funding, hiring and promotion. We are expected to do it, but not rewarded for doing it well. Thirdly, the process by which reviewers are selected is not transparent and cannot guarantee that appropriate reviewers are chosen. Indeed, it seems unlikely that we are finding the best reviewers, given how difficult it can be for editors even to find enough reviewers for a paper.

In practice, then, peer review as currently constituted fails in its role of giving confidence in the technical correctness of published work. We need to move to a system where reviews are given on a rolling basis to work that is published immediately on submission (post-publication peer review). This will increase the chance that errors are found, because there will be more eyes on the paper, including from people who are more invested in the results. Some papers cannot be adequately reviewed by just two reviewers because they use a broad range of techniques, and post-publication peer review addresses this by hugely widening the pool of potential reviewers. Papers that are very widely used and cited should be subject to much more stringent review because the consequences of an error are much graver, and post-publication peer review makes this happen organically.

The second function of peer review is giving feedback to authors to improve their work or how it's communicated. This is laudable, but I see no reason why it should be a required step for publication rather than an optional service available to authors.
Making a response to reviewers' comments non-optional (unavoidable when the feedback role is integrated with the selection role of peer review) sometimes improves a paper and sometimes makes it worse. It should be the authors' choice how to write their paper.

The third function is giving an opinion on significance. The potential value of this to science is to use the journal in which a paper is published as a signal to scientists about its likely importance. This comes with a risk of bias, because those decisions are taken by a small group of mostly senior scientists who cannot be representative of the community as a whole. The bias is then compounded by the fact that future career success depends on journal track record. Despite the issues of bias, curation of a selection of papers by a small group of field experts can provide some valuable information, but this information should be provided separately from publication and non-exclusively. We should have a variety of ways in which papers are recommended, including group curation, individual curation, social network driven ("likes"), and purely algorithmic (topic modelling). Scientists should use whatever works for them. Singling out one such mechanism as more important than the others hugely amplifies its significance and sends a distorted signal, both to the community and outside it, that a selected paper is objectively good and important.

Integrating all these functions into a single system of peer review and journal publishing, rather than keeping them separate, introduces additional problems. Since evaluation of technical correctness is considered together with opinions on significance that determine future career success, authors are highly incentivised to write their papers in a less transparent way that makes it harder to find errors, and to overstate the significance of their findings. This leads to a situation where the most prestigious journals with the highest competition also have the lowest reliability and the highest rates of retraction.

The current system is incredibly wasteful in terms of time, effort and money. Competition for inclusion in journals means that papers often go through multiple rounds of peer review, being rejected by a series of journals after many hours of work by authors, editors and reviewers. The huge effort involved contributes to a culture of overwork in science that excludes people with caring duties and is damaging to mental health. Many scientists do their reviewing and editorial work in the evenings and at weekends, for example. Inefficient publisher processes waste huge amounts of time in submission, formatting and reformatting of papers, and publisher monopolies mean they have little reason to improve these antiquated systems. Pre-publication peer review delays dissemination by months or even years, slowing down the rate of scientific progress. The financial costs can be eye-watering, thousands to tens of thousands of dollars per paper, much of it coming from tight science budgets and going straight to the huge profit margins of scientific publishers (some of the highest profit margins in the world: in 2010, for example, Elsevier posted a 36% profit margin, higher than Google, Apple or Amazon). Exclusive publishing and copyright mean that the results of (often publicly funded) work are not freely available to view or re-use, leading to slower progress and time wasted duplicating work.

Journals were historically important in disseminating work and in organising peer review, feedback and curation.
These are important functions and the hard work that we put into them is not wasted, but it is inefficient. We do not need journals as they exist now. With preprint servers, publication and dissemination is a solved problem. There are already multiple solutions for post-publication peer review and paper recommendation, and there are many active projects exploring alternatives. We need to find a way to maintain the good things about the current system while getting rid of the harmful aspects.

I am, therefore, resigning from all my editorial roles. I will no longer review for any profit-driven journal. I will no longer write pre-publication reviews for any journal, but I will happily provide feedback to authors or post-publication reviews of technical correctness where this is needed. I am particularly sad to leave eLife, a journal that is not only publishing some of the most interesting science, but is also doing a huge amount to move us forwards. However, this role still required me to make editorial judgements that I do not believe we should be making.

With regret, I will continue to submit some papers to legacy journals. For the moment, this is a necessity if I wish to continue in research, and for my trainees' careers. I hope to change that, but it won't happen overnight.

Some will say that it is hypocritical to refuse to review others' work but expect them to review mine. I respect and understand this point of view, but I do not agree. Firstly, I encourage others not to review my work for these journals, or indeed anyone else's. Please join me in refusing to do this! Forcing a crisis will be painful, but it's how we change this broken system. Secondly, I'm not doing this because I only want to take from the community and give nothing back. I've spent the majority of my career building freely accessible tools to help other scientists (Brian, KlustaKwik, Neuromatch), and I will continue to do so. I simply choose to give back in a different way. Reviewing and editorial work is sometimes considered part of academic "service" work, but I have come to believe that it does not serve the scientific community well to maintain institutions that hold us back from changing to a better system; rather, we should oppose them. I want to be clear that it is the institutions (and particularly the profit-driven ones) that I oppose, not the majority of people working hard within those institutions.

I did not come to this decision easily, and I make no judgement on anyone who chooses to continue working within the current system. For myself, I believe that I can be of greater service to the scientific community by building a viable alternative to the current system. I hope that you will join me.

Related Twitter thread
This article on my website

thesamovar.github.io
vkehayas boosted

For every #Research Article and #Review article that is published in one of our journals, @Dev_journal, @J_Cell_Sci, @J_Exp_Biol, @DMM_Journal and @BiologyOpen, a #nativetree is planted in a forest in the UK. We are also funding the restoration and preservation of #ancientwoodland and dedicating these trees to our peer reviewers. #forestofbiologists

forest.biologists.com

vkehayas boosted

Blown away today by: generalized/fluid intelligence

Circa 1904, Charles Spearman made an important observation about human intelligence: people who perform well on one type of task tend to also do well on ones that are seemingly distinct. It's the basis of the IQ test (for all its faults, of which there are many, but here I focus on the insight). It's called generalized or fluid intelligence: the ability to solve novel problems.

Jon Duncan (Cambridge/Oxford) has studied this throughout his career, and he interprets it as the ability to break down complex problems into simpler ones. He offers up the example of traveling to Japan. How do you move your body and interact with the world to do that? What do you do with your left hand in the process? That’s unclear. But it becomes clear if you break down the problem into simpler ones like: you need to buy a plane ticket, which requires that you log into the internet, which requires you to move your computer mouse ...

The idea is that a lot of problems in the world come down to breaking down complicated things in this way, and some folks are better at it than others (for complex and TBD nature/nurture reasons). Patients with damage to prefrontal cortex are characteristically bad at it. In human fMRI, it's linked to a network of brain areas called the multiple demand system.

What's left unsaid: we’ve made some progress in showing that a particular brain network is responsible, but very little in explaining how this network breaks down complex problems into simpler ones. But brain research is finally well poised to do so. I hope one of you reading this gets inspired to do just that. It's one of the most exciting open questions in brain research today, I think.

Duncan's work as a book:
yalebooks.yale.edu/book/978030

A talk:
youtube.com/watch?app=desktop&

A recent paper:
pubmed.ncbi.nlm.nih.gov/327713

vkehayas boosted

Looking for feedback on some new thoughts about Big Ideas in brain/mind research.

I've spent quite a long time researching and thinking about the history of brain/mind research in terms of the Big Ideas that have emerged. Pre-1960, it's pretty easy to list the big ideas that researchers had reached consensus around. Since 1960, that's harder to do. There's plenty of consensus around new facts (like umami is supported by receptor X on the tongue), but it's difficult to regard the things that brain researchers agree on as new, big ideas. At first, I (mis)interpreted this as a paucity of new ideas, but I no longer think that's correct - I've found a ton. Instead, I now believe that they are there but we haven't arrived at consensus around them.

I'm wondering: why might researchers have arrived at more consensus around Big Ideas introduced 1900-1960 vs. 1960-2020? Obviously there's the filter of history and the fact that it takes time to work things out. But is there more to it than that? For example, have the biggest principles already been discovered, and so we are left with more of a patchwork quilt?

A sample of big ideas pre-1960ish with general consensus:
*) Nerve cells exist (it's not a reticulum)
*) Neurons propagate info electrically within themselves and then chemically between them
*) DNA > RNA > Protein is a universal genetic code for all living things
*) Explaining behavior needs intermediaries between stimuli and responses (cognitive maps/minds)

A sample of big ideas with no general consensus introduced post-1960ish:
*) Cortical function emerges from repetitions of a canonical element
*) The brain is optimized for goal-directed interactions with the environment in a feedback loop (prediction/embodiment/free energy)
*) The brain is a complex system with emergent properties that cannot be understood via reductionist approaches
*) Fine structural detail in the brain (the connectome) matters for brain function

I'd love to hear your thoughts.

vkehayas boosted

neuroscientists: the brain is the most complex discrete nonlinear biological system we are aware of
also neuroscientists: by trial averaging this data we assume that the entirety of that complexity is statistically independent noise around some single true value in the perfectly Euclidean metric space we construct by taking the trial average.

vkehayas boosted

Your periodic reminder that just because a URL is saved at archive.org doesn't mean it's going to stay there.

Last year, I wrote a series about proxy services marketed to cybercriminals, which relied heavily on Archive.org links to document various connections. After my story ran, the person those links concerned asked Archive.org to remove them from its database, which it did. The person in question then came back and said: what you said in your story is wrong because there's no supporting evidence, so you must remove it. Archive.org confirmed they had removed all of the pages at the request of the domain holder, and that was that.

If you stumble upon a page that is in archive.org and you want to make sure there is a record that won't be deleted at some point, consider saving the page to archive.today/archive.ph

vkehayas boosted

Happy new year! Another year means another year-long keogram! Every 15 seconds throughout 2022, my trusty all-sky camera took a picture of the sky above the Netherlands. Combining these 2.1 million images into a year-long keogram reveals this picture, which shows how the length of the night changes throughout the year (the hourglass shape), when the Moon was visible at night (the diagonal bands), the Sun climbing higher in the sky during summer, and lots and lots of clouds passing overhead.
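
For anyone curious how an image like this is assembled, here is a minimal sketch of the usual keogram technique (my assumption about the general approach, not this author's actual pipeline): take the central one-pixel-wide column from every all-sky frame and stack the columns left to right, so the horizontal axis becomes time.

```python
import glob

import numpy as np
from PIL import Image

# Assumed layout: one JPEG per 15-second exposure, named so that
# lexicographic order equals chronological order.
columns = []
for path in sorted(glob.glob("allsky_2022/*.jpg")):
    frame = np.asarray(Image.open(path))
    middle = frame.shape[1] // 2
    columns.append(frame[:, middle, :])   # central 1-pixel-wide slice

# Stack the slices side by side: the height stays the sky's north-south
# line, and the width becomes time across the whole year.
keogram = np.stack(columns, axis=1)
Image.fromarray(keogram).save("keogram_2022.png")
```

In practice the 2.1 million frames would be binned or subsampled first so the resulting image has a manageable width.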

vkehayas boosted

RT @MAstronomers@twitter.com

This is how Jupiter has protected Earth for billions of years. The gravity of Jupiter keeps most asteroids and space rocks away from Earth. Without it, Earth would most likely be uninhabitable for humans. Mad respect for Jupiter.

🐦🔗: twitter.com/MAstronomers/statu

vkehayas boosted

I love closing out the year with this. 😊

On December 31, 1995, exactly 27 years ago today, legendary cartoonist Bill Watterson published his final 'Calvin and Hobbes' comic strip.

How beautiful and appropriate it was, and a timeless reminder of what we have before us in 2023. ❤️

Happy New Year, y'all!

vkehayas boosted

"So there is no way, really, to make code go faster, because there is no way to make instructions execute faster. There is only such a thing as making the machine do less."

He paused for emphasis.

"To go fast," he said slowly, "do less."

vkehayas boosted

I cannot keep this to myself. There is a website (radio.garden) where you can listen to radio stations all over the world for free. No log in. No email address. Nothing.

When the site loads, you are looking at the globe. Slide the little white circle over the green dots (each green dot is a radio station) until you find one you like.

I have been listening to this station in the Netherlands and it absolutely slaps. I have no idea what they're saying but the music is fantastic.

vkehayas boosted

RT @quorumetrix
I’ve made this video as an intuition pump for the density of #synapses in the #brain. This volume ~ grain of sand, has >3.2 million synapses (orange cubes). Peeling them away leaves only inputs on 2 #neurons. Zooming in, we see the synapses localized to the dendritic spines.
#b3d

vkehayas boosted

Thinking about making a little mastodon bot that summarizes and links the day's most popular posts across neuro and AI. A completely optional algorithmic feed, if you will. WDYT? CC @kordinglab
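
If it helps the discussion, here is a rough sketch of how such a bot might rank posts, using the public hashtag-timeline endpoint that Mastodon instances expose; the instance, hashtag and scoring rule are placeholder assumptions, not a worked-out design:

```python
import requests

INSTANCE = "https://example.social"   # placeholder instance

def top_posts(hashtag, limit=5):
    """Return the most-engaged recent public posts for one hashtag."""
    response = requests.get(
        f"{INSTANCE}/api/v1/timelines/tag/{hashtag}",
        params={"limit": 40},
        timeout=30,
    )
    response.raise_for_status()
    posts = response.json()
    # Crude popularity score: boosts plus favourites.
    posts.sort(
        key=lambda p: p["reblogs_count"] + p["favourites_count"],
        reverse=True,
    )
    return [(p["url"], p["reblogs_count"] + p["favourites_count"])
            for p in posts[:limit]]

for url, score in top_posts("neuroscience"):
    print(score, url)
```

Summarising the linked posts and posting the daily digest would then need an account token and a summarisation step on top of this.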

vkehayas boosted

The thing about Twitter is that it really lacks a lot of the features you'd expect from a true Mastodon replacement.

For example, there's no way to edit your toots (which they, confusingly, call "tweets"; let's face it, it's a bit of a silly name that's difficult to take seriously).

"Tweets" can't be covered by a content warning. There's no way to let the poster know you like their tweet without also sharing it, and no bookmark feature.

There's no way to set up your own instance, and you're basically stuck on a single instance of Twitter. That means there are no community moderators you can contact to quickly resolve issues. Also, you can't de-federate instances with a lot of problematic content.

It also doesn't integrate with other fediverse platforms, and I couldn't find the option to turn the ads off.

Really, Twitter has made a good start, but it will need to add a lot of additional features before it gets to the point where it becomes a true Mastodon replacement for most users.

#twitter #mastodon #twittermigration

vkehayas boosted

🔖 Gomes, Dylan G. E., Patrice Pottier, Robert Crystal-Ornelas, Emma J. Hudgins, Vivienne Foroughirad, Luna L. Sánchez-Reyes, Rachel Turba, et al. "Why don't we share data and code? Perceived barriers and benefits to public archiving practices". Proceedings of the Royal Society B: Biological Sciences 289, no. 1987 (30 November 2022): 20221113. doi.org/10.1098/rspb.2022.1113.

vkehayas boosted

#introduction I am a professor at Penn and also co-director of the CIFAR Learning in Machines and Brains program. I like to think about neuroscience, AI, and science in general. Neuromatch. Recently, much of my thinking has been about rigor in science, and I just started leading a large NIH-funded initiative, Community for Rigor (C4R), which aims to teach scientific rigor.

My interests are broad: Causality, ANNs, Logic of Neuroscience, Neurotech, Data analysis, AI, community, science of science
