In a lot of conversations about genAI generating text in tooling so you don't start with a blank box (e.g. for paperwork, messaging), there's a general assumption that people read text accurately and swiftly during task completion. I just feel like... that's not true? Many people are terrible skimmers?

Separately from whether you agree with this type of usage for LLMs, I really do wonder how much the received usefulness of these features is mediated by different types of engagement.

Sort of interesting that so many people observing "how useful are these tools" are treating reading generated text like it's "free," cognitively speaking. So many unexamined, passive models of cognition out there in org psych research, AFAICT.

This also bothers me: it's just not groundbreaking that interacting with a prompt can be useful to people. Interacting with A JOURNAL is also useful to people in early idea generation. We marvel so much at the interface and so little at our own minds. I'd like more complex and robust theories about how people problem-solve with their tools. You don't have to agree with the use cases or the development of the tool to still think this matters to understand.

I actually do think that reducing waste in how we spend human cognition time is a noble goal. A foundational goal of so much technology, tbh. But then let's actually have a robust theory of how to do that!


@grimalkina Yes! This argument for LLMs doesn't entice me. I generally don't struggle to write the text. But I know that I'm pretty poor at proofreading, so something that spits out text I have to proofread isn't much of an improvement. At least if I wrote it in the first place, I know it will be in the ballpark of what I want to say (in tone and content).

Qoto Mastodon