Do different people really want different responses from LLMs?
During ICLR we discussed that this is often not trivial, except for clear clashes of morality and narratives. It is not obvious whether we are projecting our wishful thinking about different cultures onto those needs. Discussed in this paper: https://alphaxiv.org/abs/2504.17083
@LChoshen While I like LLM assistants, I really don't like them acting like human assistants.
I want it to spit out an answer without any fluff. Like an easier Google, rather than pretending it is a person.
@AuntyRed Thanks, this is something the authors of this paper (https://arxiv.org/pdf/2503.06358) shared at some point: the preferred level of anthropomorphism, i.e. how human-like and chatty the model is, really varies between people.
I am on your side. Stop the blah blah and give me what I asked for.
What else? In Hebrew the responses are too awful to even discuss, but cultural preferences are still something I wonder about, and I'm sure they exist.
@AuntyRed Also, for anyone reading this: are you willing to share some of your chats with the world?
You share them with the companies anyway... But we researchers would love to see where models fail you and what you care about, and this is the only (current?) way to find out.