IMHO, it is not completely unrelated to how we interact with inanimate objects either. We try to be "empathetic" with a tool, in the sense that we try to handle it the way it should be handled. We know when we are being "gentle" with it, or when we are pushing it past its limits and stressing it.
When I talk to ChatGPT, I picture the tensors underneath, and I imagine that if I'm kind and give positive feedback when it is on the right path, there is less doubt in it, and it can explore further in that direction because it knows it is the correct one. If it is not understanding what I mean, the discussion can become more "rude" and stressful, but in that case it can be a positive kind of "stress", because it helps us get on the same page. Just like with real people.
So there is a very high correlation between how we should talk to a real person and how we should talk to ChatGPT to be effective. We are anthropomorphizing LLMs, but often because that is, for now, the correct way to use the tool.
That said, there are certainly people who interact with ChatGPT believing it is a genuinely self-conscious entity, rather than treating that framing as just the best way to use the tool.