Brits can sleep safely tonight, after the heroic work of the Metropolitan Police. The Reverend Sue Parfitt, 83, a retired priest, was arrested in Parliament Square for holding a sign which read:

“I oppose genocide. I support Palestine Action.”

Under British law, this now puts her in the same category as an ISIS or Al-Qaeda terrorist. The courts may impose a prison sentence of up to 14 years, to protect the public against the existential threat posed by this very dangerous woman.

@elduvelle See my other reply, but basically it's almost impossible to enforce so...

I believe having better assessments that cannot be solved solely by using LLMs is a much better solution. It takes resources and effort, however!

@antoinechambertloir @johannes_lehmann @neuralreckoning It's naïve because it's almost impossible to enforce (and no matter what you tell students, they continue to use them*). Let's say you suspect a student didn't write their essay: in 99% of cases you have no way of proving it.
AI detection tools are unreliable and biased against non-native English speakers (cell.com/patterns/fulltext/S26).
The only way you can tell for sure is if the student admits to it or if they have fake references, but that's likely a very small minority of cases.

*obviously I am generalising. Some students do listen!

@johannes_lehmann @neuralreckoning I agree. We have a variety of courses in our programmes of study, and some are more affected than others, depending on the course objectives and the skill of the students who take that course.

One thing is becoming very clear: completely banning the use of LLMs is naïve and ineffective. I have heard from colleagues at various universities that decided on that tactic, and it's definitely not working for them.

@neuralreckoning We've started to see the effects of LLM use in programming assignments. A lot of students prepare using LLMs and don't really learn to engage with the code. They falsely believe that they are learning to program, but fail miserably when the exam is taken without internet access.

Solution for next year: tutorials will be run under exam conditions, and we will show students very clear evidence that they *will* fail if they rely solely on LLMs.

I'm not against using LLMs in certain situations (e.g. boilerplate code), but I think that when you're learning they can actually be an obstacle.

Americans regularly make fun of the European data-protection regulation (GDPR) ("the cookie banner law").
And then a data leak happens, and they're like: why do we even have this data around? Who decided to keep admission forms for ten years?
GDPR is based on sound principles. The US should adopt it.

@elduvelle @neuralreckoning If you're a bit more adventurous, there's also Overleaf.

A recent comic on the US giving international tourists the Big Brother treatment

#comic #cartoon #uspol #travel #firstamendment #privacy

Over my years in academia, I helped create a variety of free online mathematical materials. Pirouette is a Spirograph clone that runs in a web browser. I hope you and your students enjoy the software! Read more:

diffgeom.com/blogs/free-online
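
For background (this is the standard hypotrochoid formula, not a claim about Pirouette's internals): with a fixed ring of radius R, a rolling disc of radius r, and a pen at distance d from the disc's centre, a Spirograph traces

x(θ) = (R − r) cos θ + d cos(((R − r)/r) θ)
y(θ) = (R − r) sin θ − d sin(((R − r)/r) θ)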

#Math #MathArt #ITeachmath

Dear statistics practitioners and data scientists:

When writing a data file, are there any standard ways, symbols, or notations to indicate that some ordinal or continuous values of some data points are right- or left-censored?

Very grateful to anyone who shares their uses and experience – as well as references!

[Edit: adding tag for R]
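
For concreteness, the closest thing to a convention I know is the encoding R's survival package uses (a value column paired with a status indicator), though that's an in-memory representation rather than a file-format standard; the numbers below are hypothetical:

library(survival)

d <- data.frame(
  time   = c(5.1, 7.3, 2.8, 9.0),  # hypothetical measurements
  status = c(1, 0, 1, 0)           # 1 = observed, 0 = right-censored
)

s <- Surv(d$time, d$status)  # right-censoring is Surv's default
print(s)                     # censored values print with a trailing '+'
# Surv(d$time, d$status, type = "left") would encode left-censoring

In flat CSV files I've mostly seen ad-hoc variants of this (one censoring-flag column per measured variable), which is exactly why I'm asking whether anything more standard exists.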

#statistics #datascience #rstats

Please share widely: I'm still looking for a postdoc in computational genomics to join my team in Oxford. If you want to help develop better ways to detect AML from epigenetic profiles in blood, then get in touch:

cutt.cx/analytics/masto1

Great team and environment, 3 years of secured funding, ideal for a transition to independence!

#jobs #academia #science #job #FediHire

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

#ArtificialIntelligence #AI #ML #MachineLearning #Learning #ChatGPT #OpenAI #Llama #Ollama #LMStudio

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

Our paper on chronic variable stress (CVS) procedures in rodents is now out in the Journal of Neuroendocrinology!

onlinelibrary.wiley.com/doi/10

A sister study, focussing on the reporting and justification of sample sizes in these studies (currently in review elsewhere), can be found on bioRxiv: biorxiv.org/content/10.1101/20

We systematically explored the literature in which behavioural tests were performed following CVS. We found extreme variability in the protocols used, with hardly any two studies using the same CVS protocol. We then asked whether the specific protocol influenced the outcomes of a study; for instance, we would expect a longer stress to cause a larger effect than a shorter one. We found only very small correlations between the strength of the stress protocol and the effect size measured in the study. In one case (the forced swim test) the correlation was even negative. Overall, our analysis reveals a complex relationship between stress protocols and behavioural outcomes and raises important ethical questions about the design of CVS studies.
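
To give a flavour of the kind of analysis described (a toy sketch with made-up numbers, not our actual data or code):

# hypothetical protocol strengths (e.g. days of CVS) and study effect sizes
strength <- c(7, 10, 14, 21, 28, 42)
effect   <- c(0.45, 0.30, 0.55, 0.40, 0.50, 0.35)

# a rank correlation close to zero indicates that longer/stronger
# protocols do not systematically produce larger behavioural effects
cor.test(strength, effect, method = "spearman")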

Any comments very welcome; we're working on a couple of follow-up manuscripts if anyone's interested!

@metin Actually, given that they asked a language model to code an image in SVG, it's quite surprising they got that... I'd consider it a decent, if not useful, result.
Of course, the issue is that you shouldn't use an LLM to draw SVG in the first place.

@glyph @hynek @mitsuhiko I don't know about high school, but at university level the current situation is that, given it's pretty much impossible (and naïve) to ban the use of LLMs, the idea is to teach students how to use them critically. Honestly, I'm more concerned about the effects on students' coding abilities than on, say, their writing. Having marked a lot of undergraduate work this year, I don't think it was much different from what we had in the past, even pre-ChatGPT.

I recently saw a highly accomplished woman I know demonstrating a new AI product that, of course, has a female name.

When the product launch was shared on LinkedIn, the C-level men didn't share the name of the woman doing the demo, despite her being in all the photos; they shared only the name of the AI.

I'm still mad.
