I keep seeing people argue that prompt engineering is a bug, not a feature, and that it will soon be made obsolete by future AI advances.

I very much disagree:
simonwillison.net/2023/Feb/21/


@simon In the limit of “the LLM is as smart as the smartest human”, you would still expect to need to give it the right kind of context and information to do what you want. Communication skills are super useful on both sides of the table.

Though I suppose somewhere along the way smarter LLMs will proactively recognize ambiguities and ask for clarification, which will lower the degree to which you need to be good at asking them to do something.
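
To make that last point concrete, here is a rough Python sketch of what such a clarification loop could look like. Everything in it is an assumption for illustration: call_model() stands in for whatever chat-completion API you use, and the QUESTION: convention is just one way of letting the model flag ambiguity before attempting the task.

def call_model(messages: list[dict]) -> str:
    # Placeholder for a real chat-completion call (assumed, not a real SDK).
    raise NotImplementedError

def run_with_clarification(task: str, answer_question) -> str:
    # Let the model surface ambiguities as questions before doing the task.
    # answer_question is a callback (e.g. input) returning the human's reply.
    messages = [
        {"role": "system",
         "content": ("If the task is ambiguous, reply with exactly one "
                     "clarifying question prefixed with 'QUESTION:'. "
                     "Otherwise, complete the task.")},
        {"role": "user", "content": task},
    ]
    for _ in range(3):  # cap the number of clarification rounds
        reply = call_model(messages)
        if not reply.startswith("QUESTION:"):
            return reply  # the model decided it had enough context
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": answer_question(reply)})
    return call_model(messages)  # best effort once the cap is reached

# Example: run_with_clarification("Summarize the report", input)
# prints each clarifying question and waits for a typed answer.

Whether the model asks or you pre-empt the question by supplying better context up front, the communication step doesn't disappear; it just moves.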
