With all the noise, preserve that sense of wonder. New microscope ready for adventures!

Why Bell Labs worked so well and could innovate so much, while today’s innovation, in spite of huge private funding, goes in hype-and-fizzle cycles that leave relatively little behind, is a question I’ve been asking myself a lot in recent years.

And I think that the author of this article has hit the nail on the head on most of the reasons - but he didn’t take the last step of identifying the root cause.

What Bell Labs achieved within a few decades is probably unprecedented in human history:

They employed folks like Nyquist and Shannon, who laid the foundations of modern information theory and electronic engineering while they were employees at Bell.
They detected the first radio emission from the center of our galaxy (the region that hosts its central black hole) in the 1930s, while analyzing static noise on shortwave transmissions.
They developed the first speech codec and the first speech synthesizer in 1937.
They developed the photovoltaic cell in the 1940s, and the first practical silicon solar cell in the 1950s.
They built the first transistor in 1947.
They built some of the first large-scale relay-based digital computers (from the Model I in 1939 to the Model VI in 1949).
They employed Maurice Karnaugh, who in the 1950s developed the Karnaugh maps that we still study in engineering, while he was an employee at Bell.
They contributed in 1956 (together with AT&T and the British and Canadian telephone companies) to TAT-1, the first transatlantic telephone cable.
They developed the first electronic music program in 1957.
They employed Kernighan, Thompson and Ritchie, who created UNIX and the C programming language while they were Bell employees.

And then their rate of innovation suddenly fizzled out after the 1980s.

I often hear that Bell could do what they did because they had plenty of funding. But I don’t think that’s the main reason. The author rightly points out that Google, Microsoft and Apple have already made far more profit than Bell ever saw in its entire history. Yet, despite being awash with money, none of them has been as impactful as Bell. Nowadays those companies don’t even innovate much, beyond shipping a new version of Android, Windows or the iPhone every now and then. And they jump on the next hype bandwagon (social media, AR/VR, Blockchain, AI…) just to deliver half-baked products that (especially in Google’s case) are abandoned as soon as the hype bubble bursts.

Let alone single-handedly spearhead innovation that can revolutionize an entire industry, or make groundbreaking discoveries that engineers will still study a century later.

So what was Bell’s recipe that Google and Apple, despite having much more money and talented people, can’t replicate? And what killed that magic?

Well, first of all, Bell and Kelly had an innate talent for spotting the “geekiest” among us. They would often recruit from pools of enthusiasts who had built their own home-made radio transmitters for fun, rather than recruiting from the top business schools, or from among those who can solve some very abstract and very standardized HackerRank problems.

And they knew how to manage those people. According to Kelly’s golden rule:

How do you manage genius? You don’t

Bell specifically recruited people who had that strange urge to tinker and to solve big problems; they were given their own lab and all the funding they needed, and they could work in peace. It often took years before Kelly asked them how their work was progressing.

Compare that to a Ph.D. student today, who has to struggle for funding, has to churn out papers that get accepted at conferences regardless of their quality, and must spend much more time on paperwork than on actual research.

Or to an engineer at a big tech company, who has to provide daily updates on their progress, has to survive the next round of layoffs, has to go through endless loops of compliance, permissions and corporate bureaucracy to get anything done, has their performance evaluated every three months, and doesn’t even have control over what gets shipped - that control has been taken away from engineers and handed to PMs and MBA folks.

Compare that way of working with today’s backlogs, metrics, micromanagement and the struggle for a dignified salary or a stable job.

We can’t have a new Nyquist, Shannon or Ritchie today simply because, in science and engineering, we’ve moved all the controls away from the passionate technical folks who care about the long-term impact of their work, and handed them to greedy business folks who only care about short-term returns for their investors.

So we ended up with a culture that assumes talent must be managed, even micromanaged, otherwise talented people will start slacking off and spending their days on TikTok.

But, as Kelly eloquently put it:

“What stops a gifted mind from just slacking off?” is the wrong question to ask. The right question is, “Why would you expect information theory from someone who needs a babysitter?”

Or, as Peter Higgs (the Higgs boson guy) put it:

It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964… Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.

Or, as Shannon himself put it:

I’ve always pursued my interests without much regard for final value or value to the world. I’ve spent lots of time on totally useless things.

So basically the most brilliant minds of the 20th century would be considered lazy slackers today and put on a PIP for not delivering enough code or writing enough papers.

So the article is spot on in identifying why Bell could invent, within a few decades, all that it did, while Apple, despite having much more money, hasn’t really done anything new in the past decade. MBAs, deadlines, pseudo-objective metrics and short-termism killed scientific inquiry and engineering ingenuity.

But the author doesn’t go one step further and identify the root cause.

The article correctly spots the business and organizational issues in how talent is managed today, but it doesn’t dig into their economic roots.

You see, MBA graduates and CEOs didn’t destroy the spirit of scientific and engineering ingenuity spurred by the Industrial Revolution just because they’re evil. Sure, someone who has climbed the whole corporate ladder is more likely to be a sociopath than someone picked at random off the street, but not to the point of willingly taming and screwing over the most talented minds of their generation, and squeezing them into a Jira board or a commit-count metric, out of pure sadism.

They did so because the financial incentives have drastically changed since the times of Bell Labs.

Bell Labs was basically publicly funded. AT&T operated the telephone lines in the US, paid for by everyone who used a telephone, and it reinvested about 1% of those revenues into R&D (Bell Labs). And nobody expected a single dime of profit to come out of Bell Labs.

And by the way, R&D was real R&D with no strings attached at the time. In theory my employer also does R&D today - but we’ve just ended up labelling whatever narrow, iterative feature some random PM requests as “research and development”. It’s not like scientists have much freedom in what to research, or engineers much freedom in what to develop. R&D programs have mostly become a way for large businesses to squeeze more money out of taxpayers, pocket it, and not feel any moral obligation to contribute to anything other than their shareholders’ accounts.

And at the time, the idea of people paying taxes so that talented people in their country could focus on inventing the computer, the Internet, or putting someone on the Moon - without the pressure of VCs asking for their dividends, or of PMs asking them to migrate everything to another cloud infrastructure by next week, or to a shiny new framework they just heard about at a conference - wasn’t seen as a socialist dystopia. That was before the neoliberal sociopaths of the Chicago school screwed everything up.

The America that invested in Bell Labs and in the Apollo program was very different from today’s America. It knew that it was the government’s job to foster innovation and to create an environment where genuinely smart people could do great things without external pressure. That America hadn’t yet been infected by the perverse idea that the government should always be small, that it’s not the government’s job to make people’s lives better, and that funding moonshots should be left to privately funded ventures chasing short-term returns.

And, since nobody expected a dime back from Bell, nobody put deadlines on talented people, nobody hired unqualified and arrogant business specialists to micromanage them, and nobody put them on a performance improvement plan for showing up late to daily standups or not committing enough lines of code in the previous quarter. So they had time to focus on solving some of the most complex problems humans have ever faced.

So they could invent the transistor, build the programming infrastructure still used to this day, and lay the foundations of what engineers study today.

The most brilliant minds of our age don’t have this luxury. So they can’t revolutionize our world the way those of the 20th century did.

Somebody else sets their priorities and their deadlines.

They can’t pursue moonshots because they’re forced to work on the next mobile app riding the latest hype wave, which their investors want rushed to market so they can get even richer.

They have to worry about companies trying to replace them with AI bots, and about business managers wanting to release products themselves by “vibe coding”, only to then ask those smart people to clean up the mess they’ve made, like babies incapable of cleaning up the food they’ve spilled on the floor.

They are seen as a cost, not as a resource. Kelly used to call himself a “patron” rather than a “manager”, and he trusted his employees, while today’s managers and investors mostly see their engineering resources as squishy blobs of flesh standing between their ambitious ideas and their money, and they can’t wait to replace them with robots that just fulfill all of their wishes.

Tech has become all about monetization nowadays and nothing about ingenuity.

As a result, there are way more brilliant minds (and way more money) in our age going towards solving the “convince people to click on this link” problem rather than solving the climate problem, for example.

Then of course they can’t invent the next transistor, or bring the next breakthrough in information theory.

Then of course all you get, after one year of the most brilliant minds of our generation working at the richest company that has ever existed, is just a new iPhone.

https://links.fabiomanganiello.com/share/683ee70d0409e6.66273547

Someone defending their PhD thesis next week slacked the group this morning with this link.

research.wmz.ninja/projects/ph

#Academia

The sound of cells in the brain just doesn’t get old! It’s how I fell in love with #neuroscience 🧠🧪👩🏻‍🔬

Celebrating small victories - first #neuropixels recordings in the lab. Congrats to the team that made it happen!

@neuralreckoning An extra layer of this question is "at which timescale"?

On average, a neuron with a high baseline firing rate or a highly active presynaptic network will consume more energy than a sparsely firing neuron (e.g., in layer 2/3 of cortex), independently of the task at hand. Intuitively, these parameters relate more to neuronal cell-type properties (transcription profile, connectivity, brain area) and are thus subject to evolutionary optimization.
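
To make that scaling concrete, here’s a minimal back-of-the-envelope sketch (mine, not from this thread) of how a per-neuron energy estimate could grow with firing rate and presynaptic drive; the cost constants and example numbers are purely illustrative placeholders, not measured values.

# Toy energy-budget sketch; all constants are made-up, arbitrary units.
def relative_energy_per_second(firing_rate_hz,
                               presyn_rate_hz,
                               n_synapses,
                               e_rest=1.0,       # housekeeping cost (assumed)
                               e_spike=0.5,      # cost per action potential (assumed)
                               e_synapse=0.01):  # cost per synaptic event (assumed)
    """Rough relative energy use: housekeeping + spiking + synaptic input."""
    spiking = e_spike * firing_rate_hz
    synaptic_input = e_synapse * presyn_rate_hz * n_synapses
    return e_rest + spiking + synaptic_input

# A sparsely firing layer-2/3-like neuron vs. a high-rate, densely driven one:
sparse = relative_energy_per_second(firing_rate_hz=0.5, presyn_rate_hz=1.0, n_synapses=5000)
busy = relative_energy_per_second(firing_rate_hz=20.0, presyn_rate_hz=5.0, n_synapses=10000)
print(f"sparse: {sparse:.1f}, busy: {busy:.1f}, ratio: {busy / sparse:.1f}x")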

This review seems like a good starting point cell.com/current-biology/fullt

It seems likely that the brain directs more or less energy to different sets of neurons based on their importance for carrying out the current task, but do we know the mechanism by which it does that? Does it make neurons spike more or less often, and if so, how? Or is it an open question? #neuroscience

As part of this week-long event, I will be hosting a 2-day “Animals in Motion” workshop in London, with the generous support of @SoftwareSaved.

It’s for anyone who wants to get hands-on experience with using open-source software to track animals from video footage and analyse their motion.

Attendance is free of charge, but spots are limited. A small number of travel stipends are available. More info at neuroinformatics.dev/open-soft

#neuroscience #behavior #ethology #neuroethology #python

From: @neuroinformatics
mastodon.online/@neuroinformat

1/ We are excited to share our new manuscript. Here, we provide a nanoscale connectome of the human foveal retina. Our dataset represents the first connectome of any complete neural structure in the human nervous system.

biorxiv.org/content/10.1101/20

"Infrequent strong connections constrain connectomic predictions of neuronal function", Currier and Clandinin
biorxiv.org/content/10.1101/20

Quite the reversal from studies showing that deriving connectomes from correlated neural activity is unreliable because the problem lacks a unique solution:

"we show that physiology is a stronger predictor of wiring than wiring is of physiology"

#neuroscience #Drosophila #connectomics
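
As a toy illustration of that non-uniqueness point (my own sketch, not from the paper): in a simple linear generative model where activity is x = A @ noise, the observed covariance is A @ A.T, so many different “connectivity” matrices yield exactly the same correlations.

# Toy demo: different mixing/"connectivity" matrices, identical activity covariance.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))                   # one candidate matrix

Q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # random orthogonal matrix
A2 = A @ Q                                    # a genuinely different matrix...

cov1 = A @ A.T
cov2 = A2 @ A2.T                              # ...producing exactly the same covariance

print(np.allclose(cov1, cov2))                # True
print(np.allclose(A, A2))                     # False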

Infrequent strong connections constrain connectomic predictions of neuronal function

How does circuit wiring constrain neural computation? Recent work has leveraged connectomic datasets to predict the function of cells and circuits in the brains of many species. However, many of these hypotheses have not been compared with physiological measurements, obscuring the limits of connectome-based functional predictions. To explore these limits, we characterized the visual responses of 91 cell types in the fruit fly and quantitatively compared them to connectomic predictions. We show that these predictions are accurate for some response properties, such as orientation tuning, but are surprisingly poor for other properties, such as receptive field size. Importantly, strong synaptic inputs are more functionally homogeneous than expected by chance, and exert an outsized influence on postsynaptic responses, providing a powerful modeling constraint. Finally, we show that physiology is a stronger predictor of wiring than wiring is of physiology, revising our understanding of the structure-function relationship in the brain.

bioRxiv

The New York Times just discovered parallel computing.

When I read research papers that are the result of very expensive work (experiments or simulations) I always want to know: how could this project have possibly ended with a null result? And is there an argument in this paper that compares the actual result to this null? If not, I'm very suspicious.

Actually this is a good question to ask about any paper, but the high stakes of super expensive research make it particularly important. In my experience it is surprisingly rarely answered in the paper, which makes it hard for me to believe the results.
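
A toy sketch of such a null comparison (entirely made-up numbers, not tied to any particular study): shuffle the labels to build a null distribution and ask whether the observed effect stands out from it.

# Minimal permutation-test sketch with invented data.
import numpy as np

rng = np.random.default_rng(42)
group_a = rng.normal(loc=1.0, scale=1.0, size=30)   # hypothetical measurements
group_b = rng.normal(loc=0.6, scale=1.0, size=30)

observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
null_diffs = np.empty(10_000)
for i in range(null_diffs.size):          # shuffle labels to build the null
    perm = rng.permutation(pooled)
    null_diffs[i] = perm[:n_a].mean() - perm[n_a:].mean()

p_value = np.mean(np.abs(null_diffs) >= abs(observed))
print(f"observed diff = {observed:.2f}, permutation p = {p_value:.3f}")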

#science #neuroscience

New preprint! We built a 3D brain atlas for the migratory and magnetoreceptive Eurasian blackcap.

The atlas is available in @brainglobe and is hopefully the first of many!

Full details:
brainglobe.info/blackcap

Preprint:
biorxiv.org/content/10.1101/20

Short thread:

Dendritic Architecture Enables de Novo Computation of Salient Motion in the Superior Colliculus biorxiv.org/content/10.1101/20

Dendritic Architecture Enables de Novo Computation of Salient Motion in the Superior Colliculus

Dendritic architecture plays a crucial role in shaping how neurons extract behaviorally relevant information from sensory inputs. Wide-field neurons in the superior colliculus integrate visual information from the retina to encode cues critical for visually guided orienting behaviors. However, the principles governing how these neurons filter their inputs to generate appropriate responses remain unclear. Using viral tracing, two-photon calcium imaging, and computational modeling, we show that wide-field neurons receive functionally diverse inputs from twelve retinal ganglion cell types, forming a layered, type-specific organization along their dendrites. This structured arrangement allows wide-field neurons to multiplex salient motion cues, selectively amplifying movement and suppressing static features. Computational models reveal that the spatial organization of dendrites and inputs enables the selective extraction of behaviorally relevant stimuli, including de novo computations. Our findings underscore the critical role of dendritic architecture in shaping sensory processing and neural circuit function.

bioRxiv

Book: Mathematics in Biology, by Markus Meister, Kyu Hyun Lee, and Ruben Portugues.
mathinbio.com/

@mameister4 has a blog entry on why they decided to write the book:
markusmeister.com/2025/02/20/w

Interestingly:

• The web site offers value-added materials, for example sample curricula, and the code for generating every figure in the book.
• The book contains many exercises, but no solutions. We invite student readers to produce such solutions and we will publish the best ones on this site with author credit.

Looks like a long-running project to support and engage the broader community – and it's been 12 years in the making!

#mathematics #mathbio #math #biology

What drives decision-making in competitive environments?

SWC research teams led by @jerlich and Ann Duan explore multi-agent strategies and dynamical models in a new study.

Read more: sainsburywellcome.org/web/blog

Mice dynamically adapt to opponents in competitive multi-player games biorxiv.org/content/10.1101/20

Mice dynamically adapt to opponents in competitive multi-player games

Competing for resources in dynamic social environments is fundamental for survival, and requires continuous monitoring of both 'self' and 'others' to guide effective choices. Yet our understanding of value-based decision-making comes primarily from studying individuals in isolation, leaving open fundamental questions about how animals adapt their strategies during social competition. Here, we developed an ethologically relevant multi-player game, in which freely-moving mice make value-based decisions in a competitive spatial foraging task. We found that mice integrate real-time spatial information about 'self' and the opponent to flexibly shift their preference towards safer, low-payout options when appropriate. Analyses of mice and reinforcement learning agents reveal that these behavioural adaptations cannot be explained by simple reward learning, but are instead consistent with optimal decision strategies guided by opponent features. Using a dynamical model of neural activity, we found that in addition to opponent effects, decisions under competition were also noisier and more sensitive to initial conditions, generating testable predictions for neural recordings and perturbations. Together, this work reveals a fundamental mechanism for competitive foraging, and proposes novel quantitative frameworks towards understanding value-based decision-making in a fast-changing social environment.

bioRxiv