Allowing police officers to submit LLM-written reports reveals a remarkable misunderstanding of what LLMs do, a profound indifference to the notion of integrity in the communications of law enforcement with the justice system, or both.
Given how readily subject to suggestion human witnesses—including police officers—are known to be, this is a disaster.
Yes, police reports aren't always the most accurate, but introducing an additional layer of non-accountability is bad.
@ct_bergstrom The problem of computer-generated evidence has been terrible, and AI just makes a bad thing worse. But the fundamental problem is that police and prosecutors are not accountable for accuracy or honesty.
They are still subject to laws about telling the truth in court, or at least they should be.
I mean, LMAO. Cops already have very close to zero accountability, so there are diminishing returns in diffusing their responsibility even further.
I suspect one successful defence based on "all this supposed testimony was generated by a machine which was not present at the arrest and the prosecution are unable to explain how even one word of it was selected by the algorithm that produced it, with any reference to the facts" would put an end to the fad.
@petealexharris @quinn @ct_bergstrom
This reminds me of the S1 episode of ST Voyager, "Ex Post Facto", in which they extract the memory engrams from a deceased person and have an AI give the testimony, which is then used in a trial.
ST Voyager ran from the mid-'90s to the early 2000s; seems Star Trek predicted the future again.
Although to be fair, Star Trek tech doesn't have to meet any realistic standards of evidence, because it's fiction and only has to be a plot point for whatever story.
Also their AI is actual AI (because, again, fiction) not just stochastic text generation.
@petealexharris @quinn @ct_bergstrom
Yeah, good point. It seems we are doing this AI stuff way before it is even ready; what you are describing should still be at the lab stage, being researched.
@petealexharris @quinn @ct_bergstrom
Yeah, good point. It's also only as good as what we feed into it; if we could just train it on proper fact-based information it might help, rather than scraping every website regardless of content.
@petealexharris @zleap @quinn @ct_bergstrom
Also, the LLM is trained on the output of an inherently racist and sexist society. So that's another structural issue.
https://mastodon.social/@Okanogen/112094262909556380
@zleap @quinn @ct_bergstrom
Nah, facts are only facts as propositions when there's a structured model of the world for them to refer to. Otherwise they're just words. An LLM has no such model; all it does is predict sequences of words. Even if you only trained it on sentences that refer to indisputable facts, it'd still recombine common sequences of words from its input into grammatically and statistically similar sentences that were purest bullshit.
It's an inherent structural problem.
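To make that concrete, here's a toy sketch of my own (nothing from the thread, and obviously far cruder than a real LLM): a tiny bigram word-predictor trained only on true sentences. The training sentences, the `generate` function, and the sampling loop are all invented for illustration, but the failure mode is the same in kind: the model only tracks which words tend to follow which, so it can fluently emit recombinations that no true sentence ever contained.

```python
import random
from collections import defaultdict

# Training data: ONLY true statements.
facts = [
    "the moon orbits the earth .",
    "the earth orbits the sun .",
]

# Count word-to-next-word transitions (a bigram "model").
transitions = defaultdict(list)
for sentence in facts:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(start="the", max_len=12):
    """Sample a sentence by repeatedly picking a statistically
    plausible next word. No model of the world, just word adjacency."""
    words = [start]
    while words[-1] != "." and len(words) < max_len:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

# Sample repeatedly and flag anything that wasn't in the training data.
samples = {generate() for _ in range(200)}
for s in sorted(samples):
    label = "TRUE " if s in facts else "NOVEL"
    print(label, s)
# Alongside the two true sentences, novel recombinations such as
# "the moon orbits the sun ." typically show up: grammatical,
# statistically plausible, and false.
```

Scaling the model up makes the output far more fluent, but it doesn't add the missing world model; it just makes the confabulations harder to spot.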