If you haven't seen it yet, go check out @ernie's new search engine ...
... which automatically appends "&udm=14" to the end of your search query ...
... thereby triggering Google to give you your results with no AI answer at the top, and (for me, anyway) no ads
It's like Google from 2004 or something
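(If you want the trick without the middleman: it's just a query parameter on the ordinary Google search URL. A minimal sketch in Python; the helper name is my own.)

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL with udm=14, which selects the
    plain 'Web' results view (no AI answer box)."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("mastodon api docs"))
# -> https://www.google.com/search?q=mastodon+api+docs&udm=14
```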
* Progressive DAs have been recalled, and NYC has thousands more cops than it had before.
* Entire DEI programs have been shut down at major universities.
* The FBI basically rebooted COINTELPRO, labelling and surveilling young Black people who participated in the BLM protests as "Black identity extremists."
* There has been no meaningful legislation passed to identify cops like Chauvin before they commit murder and remove them from service. There's been no meaningful change in accountability.
Like, do improvements need to be made in this general field's understanding? Yes. But is it SO useful to say "if only THESE people took MY freshman seminar in [humanities] and then perfectly executed on [my idealistic memory or fantasy of a disciplinary field that isn't even really real, and which has not thus far actually succeeded in proposing total solutions to world crises], all would be well"? Eh.
Extremely typical of @analog_ashley that she can give a talk in a bar that's both a full stand-up comedy routine, with people shouting with laughter, and a lesson in how to engage with open datasets in neuroscience
Lord do I wish tech conferences had more talks like this
I posted about brain cancer and hope to find people to pick up the torch on my accessibility projects that I think could help make the web better. I have a major surgery May 29th and time is short. I will only become less able.
Summaries and links can be found here: https://github.com/alt-text-org/in-need-of-adoption
I'm happy to answer any questions, or just discuss things, but I do ask that kind words on cancer not be posted here, despite my appreciation for them.
Boosts and sharing elsewhere appreciated.
💜 Hannah
Thought about hypothesis testing as an approach to doing science. Not sure if this is new; I'd be interested to hear if it's already been discussed. Basically, hypothesis testing is inefficient because each experiment gives you at most 1 bit of information.
In practice, much less on average: if the hypothesis is not rejected you get close to 0 bits, and if it is rejected you get less than a full bit, because there's a chance the experiment is wrong.
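To put rough numbers on that, here's a toy calculation (my own framing, nothing standard): model the experiment as a noisy binary channel and compute how many bits the outcome actually carries about the hypothesis.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bits_per_experiment(prior_true, error_rate):
    """Mutual information between hypothesis and observed outcome,
    when the experiment reports the truth with prob 1 - error_rate."""
    p_observe_true = prior_true * (1 - error_rate) + (1 - prior_true) * error_rate
    return h(p_observe_true) - h(error_rate)

print(bits_per_experiment(0.5, 0.0))  # 1.0   -- the best case
print(bits_per_experiment(0.9, 0.0))  # ~0.47 -- confident prior, little left to learn
print(bits_per_experiment(0.5, 0.1))  # ~0.53 -- a noisy experiment costs you bits
```

Even with a maximally uncertain hypothesis, a 10% chance that the experiment is wrong already cuts you to about half a bit.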
One way to think about this is in terms of error signals. In machine learning we do much better with a gradient than with a bare correct/incorrect signal. How do you design science to maximise the information content of the error signal?
In modelling I think you can partly do that by conducting detailed parameter sweeps and model comparisons (a toy sketch below). More generally, I think you want to maximise the gain in "understanding" of the model's behaviour, in some sense.
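Something like this, where the whole mismatch curve across the sweep, rather than a single accept/reject bit, is what you learn from (the toy model and numbers are invented for illustration):

```python
import numpy as np

def model_rate(current, gain, threshold=0.3):
    """Hypothetical toy model: rectified-linear firing rate."""
    return np.maximum(0.0, gain * (current - threshold))

# pretend these are measured firing rates at five input currents
currents = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
observed = np.array([0.00, 0.18, 0.45, 0.80, 1.05])

# sweep the gain and record a continuous mismatch, not a binary verdict
for gain in (0.5, 1.0, 1.5, 2.0):
    mse = np.mean((model_rate(currents, gain) - observed) ** 2)
    print(f"gain={gain:.1f}  mse={mse:.3f}")
```

The shape of the mse-vs-gain curve tells you where the model fails and by how much: a graded error signal rather than a single bit.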
This is very different to using a model to fit existing data (0 bits per study) or make a prediction (at most 1 bit per model+experiment). I think it might be more compatible with thinking of modelling as conceptual play.
I feel like both experimentalists and modellers do this when given the freedom to, but when they impose a particular philosophy of hypothesis testing on each other (in grant and publication review), this gets lost.
Incidentally, this is also exactly the problem with our traditional publication system, which gives you only 1 bit of information about a paper (that it was accepted) rather than the richer signal of an open system of peer feedback.
I think I've mentioned this before, but I will again because I keep seeing it. Why is "plan a vacation" on every list of cool things #AI could do?
Not even necessarily from people selling AI vacation planning. Just in general, when people list neat stuff you could do with AI agents, it seems to come up.
Who are you people who take so many vacations that planning them needs automation??? Planning an itinerary is as close as I usually get to a travel vacation. Don't take it away from me 😂
New paper (in open access): "Philosophy of cognitive science in the age of deep learning" – in which I argue that although progress in DL is largely driven by engineering goals, it is far from irrelevant to (the philosophy of) cog sci, and vice versa.
I'm not sure how to turn these into coherent thoughts right now. I can't help but think about it in capitalist terms, and I know a lot of people hate that. We have an opportunity to create our own structures to do the kind of business we find less objectionable: things we're happy to pay for, exchanges that feel valuable to all parties involved. But it really does require a change in our current expectations, and it's tough to find people who are ready to think differently.