Neuroimaging, machine learning, reward, journal club
What's the next step for #neuroimaging analyses of individual differences? This question has occupied my thoughts a lot over the past few months. A recent preprint from Luke Chang & co. offers an interesting perspective: https://www.biorxiv.org/content/10.1101/2022.08.23.504939v1
#fmri #reward #IndividualDifferences #ML #JournalClub
@David_Baranger
That's an interesting set of studies, thank you for sharing! The pain study in particular is quite promising.
I'm having a hard time reconciling the "abysmal reliability" you cite with the great out-of-sample predictions for pain and reward. Are pain and reward special in some way, such that their neural basis is more consistent across subjects and tasks? Or is it that these methods are trained across many subjects? Or could it be that the regularization provided by LASSO-PCR in Chang et al. gives better weights?
@David_Baranger Regardless, I do think these predictors would be so cool to apply in an fMRI neurofeedback setting! You could ask subjects with chronic pain to decrease the pain signature, or gamblers to decrease the reward response to gambling. I know some studies do this already, but if I recall correctly, they don't use these nicer classifiers.
@David_Baranger That's a compelling argument, but the issue feels more complex than that. Looking at the Chang et al. paper, they do restrict the analysis to a sparser set of regions and still get good results.
It does seem counterintuitive from a statistics perspective as well. Wouldn't finding an effect in a smaller region with a simpler model provide more robust results?
I suppose for the Chang et al. paper, LASSO-PCR could simply be a better regularizer, helping the model generalize? Maybe we should all switch to such regularized techniques when looking for patterns in very high-dimensional neural data (a quick sketch below)... But if it's just the regularization, then the LASSO model is actually finding a simpler pattern in the data than previous contrasts did.
(I hope this message doesn't come across as critical, I'm trying to understand the issues better and playing around with the ideas you have in this discussion helps.)
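Since we keep invoking LASSO-PCR, here's a minimal sketch of the idea in scikit-learn, using my own fake data and a logistic variant for classifying two conditions; Chang et al. may implement it differently:

```python
# Toy sketch of the LASSO-PCR idea (my example, not the authors' code):
# project voxelwise beta maps onto principal components, then fit an
# L1-penalized (LASSO-style) model on the component scores. Sparsity
# acts on components, so the voxel pattern is still distributed.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5000))   # hypothetical: 80 trial maps x 5000 voxels
y = rng.integers(0, 2, size=80)   # hypothetical condition labels

lasso_pcr = make_pipeline(
    PCA(n_components=40),         # compress voxels into components
    LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5),
)
# Out-of-sample accuracy (near chance here, since the data are pure noise):
print(cross_val_score(lasso_pcr, X, y, cv=5).mean())
```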
@neurolili Yes, the restricted set of regions (which covers meta-analytically implicated regions for the task, including a large chunk of the subcortex) performs quite well, though not as well as the whole-brain model.
Perhaps it's fair to say that it's a bit of an apples-to-oranges comparison, as we don't typically build single-region classifiers to distinguish task states, so I don't know how well single regions would perform.
@neurolili I think the main complaint about contrasts is that they're difference scores, so it's very hard to make one that is both specific to a cognitive state AND reliable. ML is an interesting alternative approach, I think!
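To spell out why difference scores hurt: under classical test theory (assuming equal variances across conditions), the reliability of a contrast X − Y collapses as the two conditions become more correlated. A quick illustration of the standard formula (not from the paper):

```python
# Classical test theory: reliability of a difference score X - Y,
# assuming equal variances across the two conditions.
def diff_score_reliability(rel_x, rel_y, r_xy):
    return ((rel_x + rel_y) / 2 - r_xy) / (1 - r_xy)

# A "specific" contrast uses two closely matched conditions, so they
# correlate highly -- and the contrast's reliability drops even when
# each condition is measured very reliably:
print(diff_score_reliability(0.9, 0.9, 0.5))  # -> 0.80
print(diff_score_reliability(0.9, 0.9, 0.8))  # -> 0.50
```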
@neurolili Yes, neurofeedback is an exciting area where this could have a big impact. Regarding reliability: the reliability estimates are for the contrast of two conditions in single areas, while the ML classifier is a whole-brain model that distinguishes between the two conditions. So even when the two conditions are very similar (leading to low reliability of single-region contrasts), there can be sufficient signal once you aggregate across lots of regions.
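Here's a quick toy simulation of that aggregation point (my own, fake data): each "region" carries only a tiny condition difference, yet a linear model pooling all regions separates the conditions well out of sample:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_regions = 400, 400
true_w = 0.1 * rng.normal(size=n_regions)   # tiny true effect per region
y = rng.integers(0, 2, size=n_trials)       # two task conditions
X = rng.normal(size=(n_trials, n_regions)) + np.outer(y, true_w)

# Per-region separation (in-sample Cohen's d) is small everywhere...
d = (X[y == 1].mean(0) - X[y == 0].mean(0)) / X.std(0)
print("best single region |d|:", np.abs(d).max())

# ...but pooling all regions gives good out-of-sample classification:
clf = LogisticRegression(max_iter=1000)
print("whole-'brain' accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```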