Something that infuriates me about LLMs is that they're probably the worst possible thing we could've invented in an age where everyone simultaneously forgot how to cite sources, give credit for information, or explain how they arrived at an answer.
The truth is: I'm fine with *likely* wrong answers
"Hey I synthesized this massive 200 page pdf with an LLM and pulled these parts out." Ok, cool, that gives me a starting point and I know how to double check and verify from there.
But this nonsense I see of people just *answering* things without saying the output is from an LLM drives me wild
It happens at work, in open source, in my personal life... It's everywhere.
- Person: "Hey run this command to fix $problem"
- Me: uhh those CLI args don't even exist?
- Person: "Weird... idk why"
- Me: (looks at their screen) dude, seriously? Just tell me if you got that answer from ChatGPT next time
Another *real example* that happened to me:
- Person: Hey I found this issue, also here are my notes <<insert giant pile of notes>>
- Me: Hmm, did you write this with AI?
- Person: No, it's entirely by hand
- Me: But... all of the links don't exist?
- Person: Oh I synthesized the notes with AI
- Me: ...
I am begging you: if you can't even *read* the output of an LLM, try it out, or otherwise judge its quality at ALL before handing the information to someone else... just tell me it's from an LLM.
I promise I won't judge, just don't waste my time. Please
@hazelweakly it's particularly egregious with code or CLI options, you could just run it and see...