Nicola Fabiano :xmpp:

📘 My new book is out today — in Italian:
“Intelligenza Artificiale, Privacy e Reti Neurali: L’equilibrio tra innovazione, conoscenza ed etica nell’era digitale”

With forewords by Danilo Mandic and Carlo Morabito, and an introduction by Guido Scorza.

🗣️ The English edition will be available soon.

Now available from all major online bookstores.

#AI #privacy #NeuralNetworks #Ethics #DigitalRights #AIAct #NewBook

Gert :debian: :gnu: :linux:

An "old style" introduction to neural networks: simple, well made, and understandable for anyone who doesn't want to rely on pre-packaged software and applications and wants to try implementing, on their own, the system that best suits their needs and preferences.
(Free resource)
#ai #neuralnetworks #books
books.ugp.rug.nl/index.php/ugp

The Shallow and the Deep: A biased introduction to neural networks and old school machine learning | University of Groningen Press
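
In that build-it-yourself spirit, here is a minimal sketch of the kind of exercise such a from-scratch book encourages: a two-layer network trained on XOR with hand-written backpropagation, NumPy only. The layer sizes, learning rate, and epoch count are arbitrary choices for illustration, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: XOR, the classic "you need a hidden layer" problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer, sigmoid everywhere; small random initial weights.
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output
    # Backward pass: gradients of mean squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent updates
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```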

May 28, 2025, 19:48
Natural Gas Industry B updates

YOLOv8-SBI, a mineral identification model built on an improved YOLOv8 framework, was introduced; it raises the precision of mineral feature detection. #OpenAccess at sciencedirect.com/science/arti #ArtificialIntelligence #DeepLearning #NeuralNetworks #MineralIdentification

Jesus Castagnetto 🇵🇪

Do #NeuralNetworks dream of metal-organic structures?

"Inverse design of metal-organic frameworks using deep dreaming approaches"

#DeepLearning #AI #Chemistry #DeepDreaming

nature.com/articles/s41467-025

WetHat💦

AI Fundamentals:

An organized and detailed introduction to AI, complete with technical diagrams, a glossary, and actionable insights. Its treatment of opportunities and challenges resonates with beginners and experts alike.

Some sections could benefit from deeper case studies or illustrations of complex processes, such as the AI development lifecycle or the inner workings of neural networks.

dev.to/furqanahmadrao/ai-funda

#AI #MachineLearning #NeuralNetworks #DataScience #Tutorial

May 22, 2025, 09:07
Brian Greenberg :verified:

🧠 Neural networks can ace short-horizon predictions — but quietly fail at long-term stability.

A new paper dives deep into the hidden chaos lurking in multi-step forecasts:
⚠️ Tiny weight changes (as small as 0.001) can derail predictions
📉 Near-zero Lyapunov exponents don’t guarantee system stability
🔁 Short-horizon validation may miss critical vulnerabilities
🧪 Tools from chaos theory — like bifurcation diagrams and Lyapunov analysis — offer clearer diagnostics
🛠️ The authors propose a “pinning” technique to constrain output and control instability

Bottom line: local performance is no proxy for global reliability. If you care about long-horizon trust in AI predictions — especially in time-series, control, or scientific models — structural stability matters.
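
To make the first two bullets concrete, here is a toy sketch: nudge one weight of a small recurrent tanh map by 0.001 and watch multi-step rollouts diverge, then estimate the largest Lyapunov exponent by the standard renormalization trick. The random map and all sizes are stand-ins for a trained forecaster, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
# A random recurrent tanh map stands in for a trained forecaster
# (gain 1.5 puts it in the typically chaotic regime; illustrative only).
W = rng.normal(size=(D, D)) * 1.5 / np.sqrt(D)

def step(x):
    return np.tanh(W @ x)

def rollout(x0, horizon, Wmat):
    xs = [x0]
    for _ in range(horizon):
        xs.append(np.tanh(Wmat @ xs[-1]))
    return np.array(xs)

x0 = rng.normal(size=D)

# 1) A single weight nudged by 0.001 (the post's figure) can derail the
#    long-horizon trajectory even though early steps barely move.
W_pert = W.copy()
W_pert[0, 0] += 1e-3
gap = np.linalg.norm(rollout(x0, 200, W) - rollout(x0, 200, W_pert), axis=1)
print(f"gap at t=10: {gap[10]:.1e}   gap at t=200: {gap[200]:.1e}")

# 2) Largest Lyapunov exponent via repeated renormalization of a tiny
#    state perturbation; a value > 0 means exponential error growth.
eps, T = 1e-8, 1000
x = x0.copy()
d = rng.normal(size=D)
d *= eps / np.linalg.norm(d)
log_growth = 0.0
for _ in range(T):
    x_next = step(x)
    d_next = step(x + d) - x_next
    log_growth += np.log(np.linalg.norm(d_next) / eps)
    d = d_next * (eps / np.linalg.norm(d_next))
    x = x_next
print(f"largest Lyapunov exponent ~ {log_growth / T:.3f}")
```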

#AI #MachineLearning #NeuralNetworks #ChaosTheory #DeepLearning #ModelRobustness
sciencedirect.com/science/arti

Charlie McHenry

If you’re into #AI, then you understand the role #NeuralNetworks and #transformers play in ‘reasoning’ and predictive processing. The ‘hidden layers’ are where the AI magic happens. But are we getting the most out of current architectures? This new study offers insights into what may be the next step in #ArtificialIntelligence… the CONTINUOUS THOUGHT MACHINE.

tl;dr
Neurons in brains use timing and synchronization in the way that they compute. This property seems essential for the flexibility and adaptability of biological intelligence. Modern AI systems discard this fundamental property in favor of efficiency and simplicity. We found a way of bridging the gap between the existing powerful implementations and scalability of modern AI, and the biological plausibility paradigm where neuron timing matters. The results have been surprising and encouraging.

pub.sakana.ai/ctm/

Miguel Afonso Caetano

"Neurons in brains use timing and synchronization in the way that they compute. This property seems essential for the flexibility and adaptability of biological intelligence. Modern AI systems discard this fundamental property in favor of efficiency and simplicity. We found a way of bridging the gap between the existing powerful implementations and scalability of modern AI, and the biological plausibility paradigm where neuron timing matters. The results have been surprising and encouraging.
(...)
We introduce the Continuous Thought Machine (CTM), a novel neural network architecture designed to explicitly incorporate neural timing as a foundational element. Our contributions are as follows:

- We introduce a decoupled internal dimension, a novel approach to modeling the temporal evolution of neural activity. We view this dimension as that over which thought can unfold in an artificial neural system, hence the choice of nomenclature.

- We provide a mid-level abstraction for neurons, which we call neuron-level models (NLMs), where every neuron has its own internal weights that process a history of incoming signals (i.e., pre-activations) to activate (as opposed to a static ReLU, for example).

- We use neural synchronization directly as the latent representation with which the CTM observes (e.g., through an attention query) and predicts (e.g., via a projection to logits). This biologically-inspired design choice puts forward neural activity as the crucial element for any manifestation of intelligence the CTM might demonstrate."
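
A minimal sketch of the neuron-level-model idea from the second bullet, with synchronization read out as pairwise correlation of activation traces over internal ticks. All sizes, the tanh readout, the recurrent mixing, and the constant drive are my simplifications, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, T = 16, 8, 30   # neurons, pre-activation history length, internal ticks

# Each neuron owns private weights over its last M pre-activations,
# replacing a shared static nonlinearity such as ReLU.
W_nlm = rng.normal(scale=0.5, size=(D, M))
W_rec = rng.normal(scale=0.4, size=(D, D))   # assumed recurrent mixing
u = rng.normal(size=D)                        # constant drive to excite dynamics

hist = np.zeros((D, M))   # rolling per-neuron pre-activation history
a = np.zeros(D)
traces = []
for t in range(T):        # the decoupled "internal thought" dimension
    pre = W_rec @ a + u
    hist = np.concatenate([hist[:, 1:], pre[:, None]], axis=1)
    a = np.tanh(np.einsum('dm,dm->d', W_nlm, hist))   # neuron-level models
    traces.append(a)

# Synchronization as the latent representation: correlate activation
# traces across internal ticks, one entry per neuron pair.
S = np.corrcoef(np.array(traces).T)   # (D, D) synchronization matrix
print(S.shape, S[0, 1])
```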

pub.sakana.ai/ctm/

#AI #NeuralNetworks #CTM #ContinuousThoughtMachine

Metin Seven 🎨

Human consciousness is a ‘controlled hallucination,’ scientist says — and AI can never achieve it

popularmechanics.com/science/a

#brain #neuroscience #consciousness #AI #ArtificialIntelligence #NeuralNetworks #LLM #LLMs #MachineLearning #ML #tech #technology #biology #science #research

May 03, 2025, 15:08
Data Science @ Uni Vienna

At our #DataScience @univienna talk next Monday, 5 May, Thomas Rattei explores how #MachineLearning predicts microbial traits from genomic data: protein families as features, #NeuralNetworks for practical phenotype prediction. #Genomics #Microbiology #DSHQ
datascience.univie.ac.at/event
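
For readers curious what "protein families as features, neural networks for phenotype prediction" can look like in practice, here is a minimal sketch on synthetic data. The marker-family rule, matrix sizes, and classifier settings are illustrative assumptions, not the speaker's pipeline; real inputs would come from genome annotation (e.g., protein-family presence/absence profiles).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: rows = genomes, columns = protein families
# (presence/absence), target = a binary phenotype.
n_genomes, n_families = 300, 500
X = rng.integers(0, 2, size=(n_genomes, n_families)).astype(float)
# Hypothetical rule: the trait depends on a handful of marker families.
y = (X[:, :5].sum(axis=1) >= 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small feed-forward network as the phenotype predictor.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```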