"Right now, today's AI tools probably can be used to deanonymize any writer who has a large public corpus of writing under their real name and also writes anonymously... I think the amount of public text that is needed for this kind of deanonymization to work is likely to eventually decrease."
I've thought about how my writing style could be used to identify me, and it seems inevitable that someone could eventually use it to dox me if they wanted to.
"I can never talk to an AI anonymously again" by Kelsey Piper in The Argument: https://www.theargumentmag.com/p/i-can-never-talk-to-an-ai-anonymously
A useful reframing of what is commonly referred to as "hallucination" in #LLM #LargeLanguageModels: "Shameless Guesses, Not Hallucinations" from Astral Codex Ten https://www.astralcodexten.com/p/shameless-guesses-not-hallucinations
I think the "shamelessness" is part of the issue I have with current systems, and maybe they should be tuned to have more "shame" (in practice, to be more willing to say "I don't know"), similar to how most are tuned to refuse to say offensive things.
(Mostly unqualified) thoughts on technology and social dynamics from a software developer. Longer thoughts on https://collectedoverspread.tumblr.com/ Formerly @collectedoverspread@mastodon.host