Allowing police officers to submit LLM-written reports reveals a remarkable misunderstanding of what LLMs do, a profound indifference to the notion of integrity in the communications of law enforcement with the justice system, or both.

Given how readily subject to suggestion human witnesses—including police officers—are known to be, this is a disaster.

Yes, police reports aren't always the most accurate to begin with, but introducing an additional layer of non-accountability only makes that worse.

apnews.com/article/ai-writes-p

@ct_bergstrom The problem of computer generated evidence has been terrible, and AI just makes a bad thing worse. But the fundamental problem is that police and prosecutors are not accountable for accuracy or honesty.


@quinn @ct_bergstrom

They are still subject to laws on telling the truth in court, or at least they should be.

@zleap @quinn @ct_bergstrom

I mean, LMAO. Cops already have very close to zero accountability, so there are diminishing returns in diffusing their responsibility any further.

I suspect one successful defence based on "all this supposed testimony was generated by a machine which was not present at the arrest and the prosecution are unable to explain how even one word of it was selected by the algorithm that produced it, with any reference to the facts" would put an end to the fad.

@petealexharris @quinn @ct_bergstrom

This reminds me of the S1 episode of ST Voyager, "Ex Post Facto", in which they extract the memory engrams from a deceased person and have an AI give the testimony, which is then used in a trial.

ST Voyager was late '90s to early 2000s; it seems Star Trek predicted the future again.

@zleap @quinn @ct_bergstrom

Although to be fair, Star Trek tech doesn't have to meet any realistic standards of evidence, because it's fiction and only has to be a plot point for whatever story.

Also their AI is actual AI (because, again, fiction) not just stochastic text generation.

@petealexharris @quinn @ct_bergstrom

Yeah, good point. It seems we are deploying this AI stuff way before it is ready; what you are describing should still be at the lab stage, being researched.

@zleap @quinn @ct_bergstrom

It's not about research tho. Those who understand LLMs already know that describing them as AI is bullshit.

A whole field of knowledge-based systems appeared decades ago, was useful but disappointing, and so stopped being called AI.

Now we have enough CPU and memory to throw at not-even-knowledge-based systems and make them do enough tricks to be called AI. It's a marketing term, not an engineering one.

@petealexharris @quinn @ct_bergstrom

Yeah, good point. Also it's only as good as what we feed into it; if we could just train it on proper fact-based information it might help, rather than scraping every website regardless of content.

@zleap @quinn @ct_bergstrom

Nah, facts are only facts as propositions within a structured model of the world for them to refer to. Otherwise they're just words. An LLM has no such model; all it does is predict sequences of words. Even if you trained it only on sentences that refer to indisputable facts, it'd still recombine common sequences of words from its input into grammatically and statistically similar sentences that were purest bullshit.

It's an inherent structural problem.
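To make that concrete, here's a toy sketch in Python: a bigram chain, which is nothing like a real transformer but shows the same recombination failure. The three "training" sentences are invented for illustration and stipulated to be true; the generator still strings together fluent statements that no source ever asserted.

```python
import random

# Toy corpus, invented for illustration: every sentence is stipulated true.
corpus = [
    "the officer arrived at the scene at noon",
    "the suspect left the scene before noon",
    "the officer questioned the suspect at the station",
]

# Count word -> next-word transitions across the corpus.
bigrams = {}
for sentence in corpus:
    words = sentence.split()
    for cur, nxt in zip(words, words[1:]):
        bigrams.setdefault(cur, []).append(nxt)

# Generate a new "statement" by chaining statistically plausible continuations.
random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    nexts = bigrams.get(word)
    if not nexts:
        break
    word = random.choice(nexts)
    out.append(word)

print(" ".join(out))
# The chain can emit e.g. "the suspect left the scene at noon": every bigram
# in it occurred inside a true sentence, yet no source said it, and it even
# contradicts the source that said "before noon".
```

Every individual transition is "grounded" in true training data, and the output is still capable of being false, because there is no model of the world anywhere for the sentence to be true or false about. Scale that up a few billion parameters and you have the same problem with better grammar.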

@petealexharris @zleap @quinn @ct_bergstrom A violation of the defendant's Sixth Amendment right to confront the witnesses against them.

@WhiteCatTamer @zleap @quinn @ct_bergstrom
An inadmissible kind of hearsay too, maybe, if the LLM is paraphrasing second-hand information.

@zleap @ct_bergstrom Yes, that would be good, but in many cases they don't even know enough about their tools to be honest or accurate.
