I basically don't see any legitimate use for ChatGPT in science, and this likely applies to its future successors as well.
Don't use it for writing, and definitely don't use it for research.
It is the exact opposite of what we want in scientific info sources: it is centralized, black-box, citation-less, for-profit, proprietary, and methodologically unlinked to empirical thinking.
That feeling when you put your best smile on for a photo and still come away looking like an idiot.
#DogsOfMastodon
Please boost even if this isn't for you. On Mastodon, favoriting is not like 'liking' on the other site. All it does is tell the writer you liked it; there is no algorithm.
Trying to reach #KidneyVerse people on Mastodon
#KidneyDocs and #KidneyResearchers
Consider submitting your #KidneyResearch to #CJKHD (Canadian Journal of Kidney Health & Disease)
We have a policy of supportive review. Here's the piece we wrote about the central role of kindness in peer-review
journals.sagepub.com/doi/full/…
We aren't here on Mastodon yet but working on it. @CanJKHD at the other place.
Looking for feedback on some new thoughts about Big Ideas in brain/mind research.
I've spent quite a long time researching and thinking about the history of brain/mind research in terms of the Big Ideas that have emerged. Pre-1960, it's pretty easy to list the big ideas that researchers had reached consensus around. Since 1960, that's harder to do. There's plenty of consensus around new facts (like umami is supported by receptor X on the tongue), but it's difficult to regard the things that brain researchers agree on as new, big ideas. At first, I (mis)interpreted this as a paucity of new ideas, but I no longer think that's correct - I've found a ton. Instead, I now believe that they are there but we haven't arrived at consensus around them.
I'm wondering: why might researchers have arrived at more consensus around Big Ideas introduced 1900-1960 than around those introduced 1960-2020? Obviously there's the filter of history and the fact that it takes time to work things out. But is there more to it than that? For example, have the biggest principles already been discovered? And so we are left with more of a patchwork quilt?
A sample of big ideas pre-1960ish with general consensus:
*) Nerve cells exist (it's not a reticulum)
*) Neurons propagate info electrically and then chemically between them
*) DNA > RNA > Protein is a universal genetic code for all living things
*) Explaining behavior needs intermediaries between stimuli and responses (cognitive maps/minds)
A sample of big ideas with no general consensus introduced post-1960ish:
*) Cortical function emerges from repetitions of a canonical element
*) The brain is optimized for goal-directed interactions with the environment in a feedback loop (prediction/embodiment/free energy)
*) The brain is a complex system with emergent properties that cannot be understood via reductionist approaches
*) Fine structural detail in the brain (the connectome) matters for brain function
I'd love to hear your thoughts.
Discovered during the week that I was unable to access a Kindle book purchased in 2013. Reason? The order was “too old”, and a refund was issued so I could buy it again. Which was pointless, as the book is now more expensive than when I bought it.
Subsequently discovered 66(!!) other ebooks no longer available for download.
Currently 40 minutes into a support chat with Amazon.
About to learn, I think, whether we purchase ebooks, or rent them…
RT @samgandy@twitter.com
Multiscale Analysis of Independent Alzheimer’s Cohorts Finds Disruption of Molecular, Genetic, and Clinical Networks by Human Herpesvirus https://www.cell.com/neuron/fulltext/S0896-6273(18)30421-5
So, Boston Dynamics has put out another video showcasing the Atlas robots, this time having one manipulate a plank of wood to bridge a gap, go retrieve a bag of tools, carry them up some stairs and across the gap, tossing them up onto a platform to a human "construction worker," turning around, knocking down a large box, jumping down onto that box, doing a "sick flip" from that box to the ground, and then turning around and giving a thumbs-up.
Everyone should take a look at the BtS video for this one (https://youtu.be/XPVC4IyRTG8); not because it's particularly bias-bracketed or anything (it's still Boston Dynamics trying to sell you their "awesome tech"), but rather because their VERY CAREFUL word choices are quite revealing. …These Boston Dynamics engineers and programmers are all talking AROUND the idea of whether this system is truly autonomous by using words like "we" did such and such, and "wanted to show," and "future research," and terms of art like "predictive programming." All of this provides a kind of obfuscatory cover so people are wowed by Boston Dynamics' capabilities while still letting BD say that they never really "misled" people as to what they're doing in the original video.
So to be clear: This video is NOT the Atlas system autonomously responding to a completely novel situation with no prompting. It IS the result of a lot of hard work, and that work involves a lot of pre-programming and modelling of EXTREMELY similar "likely" situations, with a lot of fuck-ups in the interim, until they get a whole run right enough, and then that's the video they use.
Boston Dynamics is doing a LOT of research into these areas, but the things they've achieved and are planning to work on are not what most people in the public think they are.
Now, that being said, lots of people are thinking about this in terms of what it's going to do to the value of human labour, and I think that overarching question is a very important one. In a better world, what would come to pass is that the jobs dismantled by automation wouldn't matter, because we'd all be getting UBI from the taxes levied against the companies whose revenues and profits were increased by, again, dismantling humans' jobs. However, as has been noted, the forces of automation are currently controlled by those who want to both a) not have to pay people to work, let alone to just live, AND b) have those same people somehow still continue to pay into consumer capitalism.
We are, as I've said, looking at a post-WORKER economy, not a post-work one.
And this is without getting into the fact that we're not even ACTUALLY looking at a "post-worker" economy! Most "automated" algorithmic tools are still maintained and supported by humans— just humans paid pennies and exploited for their crucial labor; cf., most recently, ChatGPT and Kenyan workers: https://time.com/6247678/openai-chatgpt-kenya-workers/.
At the end of the day, there are still lots of humans involved in the programming, maintenance, and support of Atlas and other Boston Dynamics stuff, but their labour is often intentionally occulted for a bunch of reasons— chief among them, the prospect of selling more units while paying those humans less.
One of the things I'm finding so interesting about large language models like GPT-3 and ChatGPT is that they're pretty much the world's most impressive party trick
All they do is predict the next word based on previous context. It turns out when you scale the model above a certain size it can give the false impression of "intelligence", but that's a total fraud
It's all smoke and mirrors! The intriguing challenge is finding useful tasks you can apply them to in spite of the many, many footguns
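If the "predict the next word" framing sounds too simple to be true, here's a deliberately tiny sketch of the same surface mechanic (my own toy illustration, not anything from OpenAI's code): a bigram table built from a miniature corpus, with each prediction fed back in as the next bit of context. Real models like GPT-3 use learned transformer weights over subword tokens and far longer contexts, but mechanically the loop is the same: given what came before, emit a likely next word.

```python
# Toy "predict the next word from context" loop - a hypothetical bigram model,
# NOT how GPT-3 works internally, just the same surface behaviour at tiny scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (context here is just the single previous word).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by repeatedly feeding the prediction back in as context.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # something like "the cat sat on the ..." - locally plausible, nothing more
```

Scale that table up to billions of learned parameters trained on a scrape of the public internet and the continuations start to look like reasoning, but the underlying move is still next-word prediction. That's the party trick, and the smoke and mirrors.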
Borbness is more than a matter of shape. It's a matter of attitude.
Latest from me: some advice on writing
How to weaponise your lack of expertise when communicating science
(or: what I learnt from writing a book about neurons)
One of my professors during my PhD used to say, “you can drive a truck through the holes in any given paper, so you look for what you *can* learn instead.” And, being the smartass grad students we were, we used to think driving that truck was fun. After so many years, I now appreciate her wisdom more than ever. All scholarly work has limitations, but it's refreshing when people critically evaluate the actual value of the research. It's about humility, honesty, and rigorous intellectual work.
Talk about anxiety
The other day I happened to mention to someone that January has been a particularly anxiety-ridden month for me, as I had to give several presentations and submit some work for publication, so now I already need a break. Emboldened by the puzzled look on their face and the absolute silence, I proceeded to explain that, while I know no real harm would come from these situations, my body refuses to listen to reason and is actively behaving like we're about to die. My chest hurts and doesn't fully expand on the inhale, my stomach is turned upside down, I become light-headed, my muscles become quite weak etc. Their answer? "Wow, that sounds exhausting! I absolutely can't relate."
As silly as it sounds, it was only then that I realized there are people out there who simply don't have to push through anxiety to get stuff done. Of course, on a rational level, I knew they must exist, but somehow I've managed to surround myself only with other anxious friends. Now I'm wondering how many of the people I'm usually comparing myself to get an extra jump in their step simply because they don't have to fight against the feeling of having their windpipe crushed when trying to hit "send" on some paper submission.
So I'd like to poll the community on here. In which camp are you?
#anxiety #struggle #MomentOfClarity #neuroscience
❓ Have you ever wondered what makes you you? This is a complicated question to answer in humans, where we can't control every single thing that a person experiences over their whole life.
🐟 This week, writer Kara McGaughey talks about how a group of neuroscientists used a genetically identical group of fish to answer this question.
Read her post to find out whether the fish were born with their individuality or if they developed their individuality over time.
https://pennneuroknow.com/2023/01/24/when-did-you-become-you/
#scienceCommunication #sciComm #neuroscience #brain #psychology #individuality #fish
It's wild how often I run across super interesting and esoteric blog posts here on Mastodon that turn out to do a deep dive on some specific topic or whatever. I can't seem to find pages like that with DuckDuckGo or Google and I certainly wasn't seeing them linked on Twitter much anymore.
It's nice to know this side of the web still exists! I had slowly forgotten how much I missed it, but dang, where is it hiding? How can we surface this stuff again?
Today's #megafauna is the Siberian #tiger. They can grow nearly 11 feet long and weigh 660 pounds. Native to the birch #forests of Russia, China, and North Korea, they may be the largest living #cat species in the world.
Tigers are formidable nighttime hunters that pursue their prey for miles. They will creep closer and closer until they are able to fatally attack in a final pounce. Tigers usually hunt elk and boar. These species are red-green colorblind, so the tiger's orange striped fur acts as effective camouflage in its forest habitat. After hunting, tigers can eat up to 60 pounds of food in a single sitting.
The wild population of tigers is only 4000, and they are considered #endangered, although their population is steady. Poaching for trophies and traditional medicine remains a threat to Siberian tigers, as does diminishing habitat.
We have moved to neuromatch.social: https://neuromatch.social/@neurofrontiers
We're a neuroscience blog trying to make neuroscience accessible for everyone! Check it out here: https://neurofrontiers.blog