@seeingwithsound @StriemAmit This is quite interesting. Time to revise what V1 does. I'm surprised this wasn't seen in prior studies. Maybe because it's limited to V1 and not V2? Interesting... Thanks!
@hkl @seeingwithsound
It was actually mentioned in a few papers (we cite and discuss them), but never taken much further than "maybe visual imagery". I guess because it's so unexpected if you don't study blindness and see where it could lead?
@hkl @seeingwithsound But it isn't actually weak even in sighted people, just weaker than in blindness, of course.
@StriemAmit @hkl I would hypothesize that it stems from incomplete developmental pruning, such that sighted children would show still stronger activation of V1 by spoken language?
@seeingwithsound @hkl Unless it has a functional role? Personally, I'm not sure what that would be, simply because of the mismatch in representation.
@StriemAmit @hkl Not sure what you mean here. The brains of (young) children are supposedly more densely wired, and development prunes connections that are not functional. So one expects less crossmodal (functional) connectivity in adults, because it is dysfunctional except as a latent ability to adapt (plasticity).
@StriemAmit @hkl Another question, about "Activation in V1 for language was also significant and comparable for abstract and concrete words, suggesting it was not driven by visual imagery": this makes a good case, but subjects could still be visualizing the spoken words as printed text (like subtitles), in which case visual imagery would be equivalent for abstract and concrete words? Not criticizing, merely probing the limits.
@StriemAmit @hkl (I personally have little difficulty consciously visualizing spoken words like subtitles, which suggests that even subconsciously this may be an ongoing process - activating V1 for abstract and concrete words - that I normally pay little or no attention to.)
@seeingwithsound @StriemAmit I agree with @seeingwithsound that there is probably some residue left over from developmental pruning. But if those connections are not serving a function, I don't think they would remain strong enough to elicit V1 activation? This is quite interesting, since work from animal models suggests that V1 receives a lot of functional activation from various contextual/non-visual stimuli. It's nice to know that similar things are happening in humans too. I do wonder if V1 activation is larger for language compared to simple auditory stimulation. Has anyone compared them side by side?
@hkl @StriemAmit I don't know the answer to your question about side-by-side comparisons, but I'm reminded of the paper "Behavioral origin of sound-evoked activity in mouse visual cortex" https://www.nature.com/articles/s41593-022-01227-x which suggested that, for mice, there is no real (direct) activation of V1 by sound, as it may all run via low-dimensional behavioral responses. Yet as a human I can visualize pictorial soundscapes that I would not call low-dimensional, let alone something that would reach my V1 via, e.g., gesturing.
@seeingwithsound Thanks for the reference
Direct activation of V1 by A1 inputs is noted in several other studies, though. Even for this direct activity, the signal is probably not specific enough to support the idea that V1 could process auditory information the way A1 does. That's why I like the study by @StriemAmit, which supports the idea that it is more likely higher-order/highly processed auditory information being transmitted to V1.
@hkl @StriemAmit I like that idea. It could be autoencoder-like encoding and decoding.
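@hkl @StriemAmit To make the analogy concrete - purely an illustrative sketch with made-up dimensions, not anything from the paper - here is a toy linear autoencoder in Python (NumPy): a high-dimensional "feature" vector is compressed through a small bottleneck and decoded back into a reconstruction, the way highly processed auditory/language input might be re-expanded into a V1-like representation.

# Toy sketch only: an autoencoder-style bottleneck, not a model of V1.
# All dimensions and the random "data" below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_input, n_latent = 200, 64, 8   # hypothetical feature and bottleneck sizes
X = rng.normal(size=(n_samples, n_input))   # stand-in for high-dimensional input features

W_enc = rng.normal(scale=0.1, size=(n_input, n_latent))   # linear encoder
W_dec = rng.normal(scale=0.1, size=(n_latent, n_input))   # linear decoder

lr = 1e-2
for step in range(2000):
    Z = X @ W_enc          # encode: compress into a low-dimensional code
    X_hat = Z @ W_dec      # decode: re-expand into the original feature space
    err = X_hat - X
    loss = float(np.mean(err ** 2))

    # gradient descent on the reconstruction error (up to a constant factor)
    grad_dec = (Z.T @ err) / n_samples
    grad_enc = (X.T @ (err @ W_dec.T)) / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"final reconstruction MSE: {loss:.4f}")

In this analogy W_enc stands for the compression into higher-order (auditory/semantic) codes and W_dec for the hypothetical re-expansion toward V1; those names and numbers are mine, not the paper's.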
@seeingwithsound @StriemAmit That is an interesting idea!
@hkl @StriemAmit I think it is because the effect is weaker in the normally sighted, which is to be expected because unmasking there would mostly cause crossmodal interference, while the ability to unmask as such adds adaptability. The paper states "The fact that similar (albeit weaker) V1 language activation can be seen in the sighted brain suggests that such activation in the blind may not require massive changes in brain organization".