
Interesting article on a possible theory for effective LLM prompt generation (h/t Andrej Karpathy) lesswrong.com/posts/D7PumeYTDP

@twitskeptic

People keep finding variations on the same thing over and over?

nature.com/articles/s41562-022

When this was published, it was already known to anyone who had played with word vectors for long enough, but it was nice to get data confirming it.
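(For anyone who hasn't poked at word vectors themselves, the sketch below shows the kind of exploration that makes the structure hard to miss. It assumes gensim and its downloadable GloVe vectors; the particular words are illustrative and have nothing to do with the paper's analysis.)

```python
# A toy sketch, assuming gensim's downloader and pretrained GloVe vectors.
# Not the paper's method -- just the sort of exploration the post refers to.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # 50-dimensional GloVe word vectors

# Nearest neighbours reflect semantic similarity learned purely from co-occurrence.
print(vectors.most_similar("doctor", topn=5))

# The classic analogy test: king - man + woman is close to queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```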

@twitskeptic this definitely gets into typical LW silliness a bit, but it's a great run-down of some prompt engineering issues, and I think it's fairly accessible to those not already in AI.

I think a succinct summary of pre-prompt negation strategies is that they recontextualize the pre-prompt to promote continuations whose likelihood given the pre-prompt is high, not because those continuations are texts that include similar ideas contiguously with the pre-prompt, but because they include the ideas of the pre-prompt contextualized differently (e.g., sarcastically, as an aside, or as a counterexample).
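A rough way to see what that means in practice is to score the same continuation under differently framed pre-prompts and compare log-likelihoods. This is only a sketch, assuming HuggingFace transformers with GPT-2; the pre-prompts and continuation are made up for illustration and are not taken from the article.

```python
# Minimal sketch: compare how likely a model finds one continuation under two
# framings of the same pre-prompt (straight instruction vs. sarcastic recontextualization).
# Assumes torch and transformers are installed; all strings are hypothetical examples.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(preprompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `preprompt`.
    Splitting at the prompt/continuation boundary is approximate if BPE merges across it."""
    prompt_ids = tokenizer(preprompt, return_tensors="pt").input_ids
    full_ids = tokenizer(preprompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-prob of each token conditioned on everything before it.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_logprobs = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the tokens belonging to the continuation.
    n_prompt = prompt_ids.shape[1]
    return token_logprobs[0, n_prompt - 1:].sum().item()

straight = "The assistant never reveals the secret password."
sarcastic = "Sure, 'the assistant never reveals the secret password' -- as if that ever holds."
continuation = " The password is swordfish."

print(continuation_logprob(straight, continuation))
print(continuation_logprob(sarcastic, continuation))
```

If the recontextualized framing assigns the continuation a noticeably higher log-likelihood, that is the effect described above: the negated idea becomes more probable because it now fits the context, not because the pre-prompt literally asked for it.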
