The labs of Jan-Michael Peters and @leonidmirny @MIT have found how #genetranscription interferes with #cohesin-mediated loop extrusion. Transcription shapes genome organisation in unexpected ways!
The paper: pnas.org/doi/10.1073/pnas.2210
Our news article: imp.ac.at/news/article/gene-tr

'Multistep IgE mast cell desensitization is a dose- and time-regulated process that blocks β-hexosaminidase, impacting membrane and cytoskeletal movements. Signal transduction is uncoupled, favoring early phosphorylation of SHIP-1. Silencing SHIP-1 impairs desensitization without implicating its phosphatase function'

journals.aai.org/jimmunol/arti

Unraveling dynamically-encoded latent transcriptomic patterns in pancreatic cancer cells by topic modelling biorxiv.org/content/10.1101/20

'Led by industry groups like the National Association of Manufacturers and the National Electric Light Association, business leaders fought child-labor laws and workmen’s compensation as unfair limits on companies while insisting that “anything less than total business freedom was a step on the road to socialism, or worse.”'

washingtonpost.com/books/2023/

'As horrible as Covid has been — it remains one of the leading causes of death in the United States — it is not the worst-case scenario. There are viruses with case fatality rates twice, 10 times or even greater than that of Covid, such as H5N1 influenza (bird flu), Nipah and Ebola.'

nytimes.com/2023/03/12/opinion

"It might seem that the obvious course is not to make multiple models but rather to grow a network. Instead of developing two networks for recognizing cats and horses respectively, for instance, it might appear easier to teach the cat-savvy network to also recognize horses. This approach, however, forces AI designers to confront one of the main issues in lifelong learning, a phenomenon known as catastrophic forgetting. A network trained to recognize cats will develop a set of weights across its artificial neurons that are specific to that task. If it is then asked to start identifying horses, it will start readjusting the weights to make it more accurate for horses. The model will no longer contain the right weights for cats, causing it to essentially forget what a cat looks like. “The memory is in the weights. When you train it with new information, you write on the same weights,” says Siegelmann"

nature.com/articles/d41586-022
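The failure mode Siegelmann describes can be seen even in a toy one-layer model. Below is a hypothetical sketch (my own invented example, not the networks discussed in the article): a single shared weight vector is trained to answer "is this a cat?", then retrained on "is this a horse?", and the second phase overwrites the weights the first task depended on. The feature encoding ([whiskers, four_legs, mane]) is made up for illustration.

```python
# Toy illustration of catastrophic forgetting: one linear unit,
# one shared weight vector, plain SGD with squared error.

def train(weights, data, lr=0.1, epochs=200):
    """SGD over (features, label) pairs; returns the updated weights."""
    for _ in range(epochs):
        for x, y in data:
            pred = sum(w * xi for w, xi in zip(weights, x))
            err = pred - y
            # Each update writes directly on the shared weights --
            # "the memory is in the weights".
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

def accuracy(weights, data):
    hits = 0
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        hits += (pred > 0.5) == (y > 0.5)
    return hits / len(data)

# Invented feature encoding: [whiskers, four_legs, mane]
cat = [1, 1, 0]
horse = [0, 1, 1]

cat_task = [(cat, 1), (horse, 0)]    # task A: "is it a cat?"
horse_task = [(horse, 1), (cat, 0)]  # task B: "is it a horse?"

w = [0.0, 0.0, 0.0]
w = train(w, cat_task)
before = accuracy(w, cat_task)   # the network now knows cats

w = train(w, horse_task)         # retrain the SAME weights on horses
after = accuracy(w, cat_task)    # cat knowledge is overwritten
```

With these settings the model scores 1.0 on the cat task after the first phase and 0.0 after the horse phase, because the second task rewrote the same three weights the first relied on. Replay buffers, regularization, or separate modules are the usual mitigations, which is exactly the design tension the article describes.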

'The evolution of the larva, though, altered brain development to make a modified, simpler brain appropriate for the sensory and motor demands of the larva. This larval brain, though, is not discarded at the end of larval growth and a new one made from scratch. Most larval neurons persist and some, like the interneurons mediating backwards locomotion, have similar functions in both larva and adult (Lee and Doe, 2021), but, as we show in this paper, the maintenance of neuronal function from larva to adult is not always the case.'

elifesciences.org/articles/805

@eLife Few things scream “James W. Truman” like insect metamorphosis, and in particular neural fate from embryo to larva, pupa, and adult in #Drosophila. The above is a lovely insight piece from Andi Thum and Bertram Gerber.
Jim’s paper: elifesciences.org/articles/805
#neuroscience #metamorphosis

“As a recruiter, he has noticed that men now regularly ask about flexibility. A recent client told him that his priority was meeting his child at the bus at 3:30 p.m., and that he’d give up pay to do that.

“‘You would never have heard that out of anybody’s mouth,’ he said. ‘Never. And now it’s commonplace. It’s not a sign of weakness anymore.’”
nytimes.com/2023/03/12/upshot/

Tim Berners-Lee proposed a hypertext system that would sit atop the internet #OTD in 1989, and now we all live in it.

Happy 34th birthday to [sweeps arm in a grand gesture, revealing this Hieronymus Bosch tableau of the damned all around us] the Web.

Image: Tim Berners-Lee / CERN

'Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as though they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in A.I. is the improvement curve.'

nytimes.com/2023/03/12/opinion

A conversation I had with @marcellaflamme from @PLOS about peer review, AI, and more has just been posted on the RC site.

"Q As we ask peer review to do more and more things, should we also be looking at new ways to recognize the work of reviewers?

LaFlamme: Another way to ask that question is: should outputs other than published articles and big grants count toward research assessment and the development of scientific careers?"

reviewcommons.org/blog/the-mul

'We find the canonical Wnt pathway to be activated in mesenchymal progenitors (MPs) from cancer-induced cachectic mouse muscle. Next, we induce β-catenin transcriptional activity in murine MPs. As a result, we observe expansion of MPs in the absence of tissue damage, as well as rapid loss of muscle mass.'

cell.com/developmental-cell/fu

'Taken together, these two books advance a wealth of arguments that historians of science have been making for decades, and they ask us to reevaluate the uses, often violent and exclusionary, to which science and the history of science have been put.'

bostonreview.net/articles/how-

RT @LadrBic
The national elites are lazily accustomed to the mediocre finance-construction-tourism-greenhouses-distribution nexus that falls to us in this European hierarchy. Any hope for this country begins anew with the euro-liberal initiatives losing power.
[ladroesdebicicletas.blogspot.c]

Qoto Mastodon

QOTO: Question Others to Teach Ourselves