In a lot of conversations about genAI creating text in tooling so you don't start with a blank box (e.g. for paperwork, messaging), there is a general assumption that people accurately and swiftly read text during task completion. I just feel like... that's not true? Many people are terrible skimmers?
Separate entirely from whether you agree with this type of usage for LLMs, I really do wonder how much the perceived usefulness of these features is mediated by different types of engagement.
This also bothers me: it's just not groundbreaking that interacting with a prompt can be useful to people. Interacting with A JOURNAL is also useful to people in early idea generation. We marvel so much at the interface and so little at our own minds. I'd like more complex and robust theories about how people problem-solve with their tools. You don't have to agree with the use cases or development of the tool to still think this matters to understand.
@grimalkina Yes! This argument for LLMs doesn't entice me. I generally don't struggle to write the text. But I know that I'm pretty poor at proofreading, so to me a tool that spits out text I then have to proofread is not much of an improvement. At least if I wrote it in the first place, I know it will be in the ballpark of what I want to say (in tone and content).