@spoltier @KevinCarson1 Fair. Could be. Yet I pay money to a zillion podcasts etc. so as not to be barraged with ads. Somehow similar choices aren't available with TVs. There seems to be a different relationship between producer and consumer, one where the producer has much more freedom to manage consumer choice.
We're viewed as produce to be harvested rather than peers making a mutually-beneficial trade.
Sometimes, when you're a psychologist saying psychology kind of things about workplaces, people say, "Well, you don't know about the real world. You don't understand!" But how I think about the workplace is not based on a textbook. It's based on the way I've lived and what I've seen and what people have done for me and what I've done for them, failures and successes and small and large braveries and so many small and large indignities.
Here's the table of contents for my lengthy new piece on how I use LLMs to help me write code https://simonwillison.net/2025/Mar/11/using-llms-for-code/
🌱✨ This spring, EPFL chaplain Alexandre Mayor is urging members of the School community to go on a consumer detox and experience the joy that comes from embracing degrowth as a path to personal discovery.
#Degrowth #ConsumerDetox #MindfulLiving
Read more 👉 : https://go.epfl.ch/2k0-en
Thanks to the Danes for this nice gem of a website. Also: WTF?! I thought this was European.
Interested in gpt-4.5, don’t want the hefty price and yet wish to test theory of mind reasoning? We got you covered, robustly I might add https://bit.ly/43iwmxo
I received a small grant from the Cosmos Institute to work on an interactive platform for interpretability research, focusing on mechanisms that have relevance to both deep learning and cognitive science. As an initial step, variablescope.org will showcase the results of an experiment on variable binding in Transformers led with Yiwei Wu and Atticus Geiger. I'll have more to share soon!
Why does psychology matter for technological innovation? What perspective can we have on science in this moment? What's a research architect?!
I got to explore these questions along with chat about my future research agenda with Redmonk's Kelly Fitzpatrick!
Listen to the full thing here: https://redmonk.com/blog/2025/03/03/psychology-technical-innovation-and-why-we-need-science-with-dr-cat-hicks/
I like using guide dogs as an analogy for how assistive technology users are used to working with unreliable tools!
Has anyone done a thing where like, someone challenges a musician youtuber to write music for the lyrics to a famous song that they've never heard
like you give someone the lyrics to Stand by R.E.M., make sure they've never actually heard it before, then get them to compose and record a demo based on the vibes that the lyrics give them, like they're Elton John getting lyrics from Bernie Taupin. then only once they're done, you let them hear what the original band did with it
in that book will be a chapter about how pure "cognition" approaches to education, human learning and human achievement have ALL PRETTY MUCH FAILED TO BE PREDICTIVELY AND INTERVENTION TARGET VALUABLE for the OUTCOMES WE CARE ABOUT and SHOULDN'T SET THE STANDARD FOR WORKPLACES even when we have a population of people who will only accept "human stuff" being said about them if you make it COMPUTATIONAL SOUNDING
"This project demonstrates how the frequency data from the Great Britain electrical grid contains embedded information about carbon intensity. By using Fast Fourier Transform (FFT) to decompose 1D time series frequency data into its constituent frequencies, we can extract features that allow us to predict carbon intensity with high accuracy."
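The FFT step the quote describes can be sketched in a few lines. This is a minimal illustration on synthetic data, not the project's actual pipeline: the signal, sampling rate, and feature choice (top-k spectral magnitudes) are all assumptions for demonstration.

```python
import numpy as np

# Synthetic stand-in for a grid-frequency time series (NOT real GB grid data):
# a slow 30-minute oscillation around 50 Hz plus measurement noise.
rng = np.random.default_rng(0)
t = np.arange(0, 3600, 1.0)  # one hour of samples at 1 Hz
signal = 50 + 0.05 * np.sin(2 * np.pi * t / 1800) + 0.01 * rng.standard_normal(t.size)

# FFT decomposes the 1D series into its constituent frequencies.
# Subtracting the mean removes the large DC component first.
spectrum = np.fft.rfft(signal - signal.mean())
freqs = np.fft.rfftfreq(t.size, d=1.0)  # cycles per second for each bin
magnitudes = np.abs(spectrum)

# Example feature vector: magnitudes of the k strongest spectral components,
# which a downstream model could use to predict a target like carbon intensity.
k = 5
top = np.argsort(magnitudes)[-k:][::-1]
features = magnitudes[top]
print(freqs[top])  # the dominant component sits near 1/1800 Hz
```

The dominant bin recovers the injected 30-minute cycle; in the real project, the informative components would instead reflect demand and generation patterns embedded in grid frequency.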
You know a really handy cognitive trick for being less wrong? Training yourself to ask not just "can x lead to y" but "how much of the time does x fail to lead to y"
Seems simple but will save you a world of grief I truly believe
Easy example is with the whole lone wolf programmer thing. "Ok, maybe SOMETIMES lone-wolf-behavior leads to [brilliant output], but how much of the time does it lead to [crap] instead?"
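The asymmetry in that question is easy to make concrete with a toy base-rate calculation (all numbers invented for illustration only):

```python
# Purely illustrative counts: how often each working style produced
# a brilliant outcome in some imagined sample of projects.
lone_wolf_trials, lone_wolf_hits = 1000, 50   # assumed 5% hit rate
collab_trials, collab_hits = 1000, 300        # assumed 30% hit rate

p_success_lone = lone_wolf_hits / lone_wolf_trials
p_fail_lone = 1 - p_success_lone

# "Can lone-wolf behavior lead to brilliance?" -> yes, 50 times here.
# "How often does it FAIL to?" -> the 95% that the first question hides.
print(f"lone wolf: succeeds {p_success_lone:.0%}, fails {p_fail_lone:.0%}")
```

Same data, two questions; only the second one surfaces the failure rate you actually pay for.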
We spoke with Prof. Cindy Harnett about new and different sensors and actuators, primarily designed for soft robotics and fabricated with relatively low cost materials.
Join us here: https://embedded.fm/episodes/495
#softrobotics #robotics #sensors #prof #engineering #embedded #electronics #electrical
Something that infuriates me with LLMs is that they're probably the worst possible thing we could've invented in an age where everyone simultaneously forgot how to cite sources, give credit for information, or explain how one arrived at an answer.
The truth is: I'm fine with *likely* wrong answers
"Hey I synthesized this massive 200 page pdf with an LLM and pulled these parts out." Ok, cool, that gives me a starting point and I know how to double check and verify from there.
But this nonsense I see of people just *answering* things without saying the output is from an LLM drives me wild
It happens at work, in open source, in my personal life, ... It's everywhere.
- Person: "Hey run this command to fix $problem"
- Me: uhh those CLI args don't even exist?
- Person: "Weird... idk why"
- Me: (looks at their screen) dude, seriously? Just tell me if you got that answer from ChatGPT next time
Another *real example* that happened to me:
- Person: Hey I found this issue, also here are my notes <<insert giant pile of notes>>
- Me: Hmm, did you write this with AI?
- Person: No, it's entirely by hand
- Me: But... all of the links don't exist?
- Person: Oh I synthesized the notes with AI
- Me: ...
I am begging you, if you can't even *read* the output of an LLM, try it out, or otherwise judge its quality at ALL before handing the information out... Just tell me it's from an LLM.
I promise I won't judge, just don't waste my time. Please
code / data wrangler in Switzerland.
Recovering reply guy. Posts random photos once in a while.