Philo Sophies

#News 📢 from #AI #research 📂: “How #ChatGPT 💻 can benefit from human #brains 🧠 - or how we teach #machines 🤖 to #think 🤓”

My colleague #Axel #Stöcker and I have already been able to conduct some very exciting interviews on this very topic in our #Zoomposium series “#Artificial #intelligence and its consequences” and “#Cognitive #neuroscience and #epistemology”.

One “highlight”, for example, was the very exciting interview “Zoomposium with Dr. #Patrick #Krauss: ‘Bauanleitung Künstliches Bewusstsein’”, in which we asked him how the #theories of #consciousness, especially those of #Antonio #Damasio, could be used to develop artificial systems based on “felt” #information. This foundation could be applied to #machine #learning and #deep #learning so that #AI #systems learn to respond to #emotions and #changes in their #environment in a similar way to #biological #organisms.

In this context, #Patrick #Krauss also recently attended the “#Embodied and #Situated #Language #Processing” (#ESLP2024) conference from October 3 to 5, 2024, which was organized by members of the “#Brain #Language #Lab” at the #FreienUniversitätBerlin. There he and his team from #FAU had the opportunity to present their latest research results.

One of #Patrick #Krauss' talks was about the study “Analyzing Narrative Processing in Large Language Models”, which he conducted in collaboration with his colleague #Achim #Schilling. The results of this study build in part on the article “Leaky-Integrate-and-Fire Neuron-Like Long-Short-Term-Memory Units as Model System in Computational Biology”, for which he and his team received the #BestPaperAward at the “International Joint Conference on Neural Networks” (#IJCNN2023), the world's largest interdisciplinary conference on #artificial and #biological #neural #networks.

These current results from #Patrick #Krauss' AI research are therefore a real “joint venture” between #AI and #neuroscience: the data and methods can contribute directly to improving #large #language #models (#LLM) such as #ChatGPT, and in return the #cognitive #neurosciences can learn something about the use and formation of #language in the #brain from this #implementation and #simulation of #cognitive #processes on #machines.

If you would like to learn more about #Patrick #Krauss' very interesting #research #results, you can find out more here:

ai.fau.digital/speakers/dr-pat

or at: philosophies.de/index.php/2023

The Big Data Cluster

#AGU24

A three-step framework for improving spatiotemporal estimates of actual evapotranspiration (ETa).

Using #neural networks and cloud-free imagery from SpaceEye, the study demonstrates accurate ETa predictions at a 30-m resolution, addressing challenges of coarse satellite data and low revisit frequency.

🗓️🔗: bit.ly/AGU24_H11Q

#ecology #research #ClimateChange

The vOICe vision BCI 🧠🇪🇺

Pyramidal cell types and 5-HT2A receptors are essential for #psilocybin's lasting drug action biorxiv.org/content/10.1101/20 on #structural #neural #plasticity #neuroscience

"We find that a single dose of psilocybin increased the density of dendritic spines in both the subcortical-projecting, pyramidal tract (PT) and intratelencephalic (IT) cell types."

Pyramidal cell types and 5-HT2A receptors are essential for psilocybin's lasting drug action

Psilocybin is a serotonergic psychedelic with therapeutic…

bioRxiv
Gabriel Weindel

Very happy about our post-review preprint on single-trial detection of cognitive events in #neural time-series

biorxiv.org/content/10.1101/20

TLDR: with the hidden multivariate pattern method (HMP), using a few assumptions and the E/#MEG signal during the reaction time, we can:
- recover how many task related events appear in the #EEG
- get their SINGLE-TRIAL time location and therefore also voltage activity 🤯

@cognition @eeg @cogneurophys

Trial-by-trial detection of cognitive events in neural time-series

Measuring the time-course of neural events that make…

www.biorxiv.org
katch wreck

"Neuronal state space analysis revealed that each repetition of a behavior was distinct, with more recent behaviors more similar than those further apart in time. ACC activity was dominated by a slow, gradual change in low-dimensional representations of #neural state space aligning with the pace of behavior. Temporal progression, or drift, was apparent on the top principal component for every session & was driven by the accumulation of experiences & not an internal clock" cell.com/current-biology/abstr

Roger Herikstad

Really interesting #neuroscience preprint from researchers at #Brandeis on characterising #neural activity in shared communication #subspaces between the #hippocampus and the #prefrontalcortex. I found it particularly intriguing that theta power appeared to decrease subspace dimensions and increase subspace predictability, while theta #coherence had much less of an effect. biorxiv.org/content/10.1101/20

Hippocampal-prefrontal communication subspaces align with behavioral and network patterns in a spatial memory task

Rhythmic network states have been theorized to facilitate…

www.biorxiv.org
StinkyCat

I recently subscribed to Gemini from Google and am very satisfied with the answers! Recommended! #gemini #neural #AI #chat

Martin Hamilton

TIL about #necomimi - #neural tech that's OK to like, and somewhat fedi-adjacent :ExcitedDance:

Also... "Speakers have been added to the new necomimi for realistic cat sound effects" :blobcatchristmasglowsticks:

#CatEars #Robot #BCI #Meow #Miaow

Habr

What lies behind the hidden (latent) space?

Working with latent spaces. A latent space is useful for studying the features of data and for finding simpler representations of data for analysis. How are latent spaces used in the eXplain-NNs library?

Latent space visualization: this method makes it possible to display the hidden features or patterns learned by a neural network in these latent spaces. It can be useful for understanding how the model organizes data and which internal representations it uses to make decisions.

Homology analysis of latent spaces: another method provided by the eXplain-NNs library is homology analysis of latent spaces. Homology analysis is used to study the structure of, and the relationships between, these latent representations. This helps to understand how information is organized inside the model and how it affects the model's ability to make decisions.
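The eXplain-NNs API itself is not shown in the excerpt, so here is a minimal, library-free sketch of the general idea of latent-space visualization: a linear PCA projection (computed via SVD) stands in for a trained encoder, mapping high-dimensional samples into a 2-D latent space whose coordinates can then be plotted or inspected. The data and dimensions are hypothetical.

```python
import numpy as np

# Toy stand-in for a dataset: 200 samples with 64 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))

# PCA via SVD acts as a minimal linear "encoder" into a 2-D latent space.
Xc = X - X.mean(axis=0)                       # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                             # latent coordinates, shape (200, 2)

# Inspecting Z (e.g. a scatter plot colored by class label) is the kind of
# latent-space visualization the post describes: it shows how the model
# organizes the data internally.
print(Z.shape)
```

In a real setting, `Z` would come from the trained network's encoder rather than PCA; the visualization step is the same either way.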

habr.com/ru/articles/807405/

#encoder #decoder #latent_diffusion #mathematics #neuralnetworks #neural_network #neuroscience #neural #ai #artificial_intelligence

What lies behind the hidden (latent) space?

Basic concepts: an encoder in machine learning is…

habr.com
katch wreck

"Visual attributes modulated #neural activity at one end of the gradient, while at the other end it reflected the upcoming response timing, with attentional effects occurring at the intersection of visual and response signals. These findings challenge multi-step models of attention, and suggest that frontoparietal networks, which process sequential stimuli as separate events sharing the same location, drive exogenous #attention phenomena such as inhibition of return."

nature.com/articles/s41467-024

Scientific Frontline

A team from the University of Geneva has succeeded in modeling an artificial #neural #network capable of this #cognitive prowess. After learning and performing a series of basic tasks, this #AI was able to provide a linguistic description of them to a ‘‘sister’’ AI, which in turn performed them.
#ArtificialIntelligence #Neuroscience #sflorg
sflorg.com/2024/03/ai03182401.

Two artificial intelligences talk to each other

The UNIGE team worked on artificial neural networks,…

www.sflorg.com