there are a zillion AI booster blogs that muddle through a big "to be sure" section explaining that context management is functionally impossible, then introduce the term "context engineering" (a new thing that the author of the blog thought of all by themselves, and not a thing that every single thinkfluencer in this space has independently invented simultaneously) and claim that they have now solved this intractable problem (often followed by a link to a github full of markdown files)
I really don't want to link to any of these because they give an extremely misleading impression of the state of the art in "context engineering" (to wit: that it exists). has anyone written JUST the "to be sure" part, explaining what context is, what context rot is, and why you eventually hit a wall WITHOUT pretending that their non-solution to this is magic pixie dust?
this is a really interesting example of the toxic positivity that is pervasive, even in my own writing. everybody wants to have a CTA and a solution at the end of their talk. nobody wants to write a thing that just says "dang, this is a big intractable problem", kind of shrugs at the end, looks at the audience, and asks "what do *you* think we should do?", even when that is often the state of the art and further claims are unjustified by the evidence
@bitprophet
No, everyone wants the Cherenkov Telescope Array!
@glyph