@gregeganSF I've really been puzzled by how many (fairly technologically sophisticated) people talk about getting generative LLMs to make summaries for them. While you might be able to train a model specifically for summarization, my impression is that if you ask a generative AI model to summarize a document, that's not really what it does: it generates something that looks like a summary of something statistically similar to the source you wanted summarized. If the source in question is fairly similar to other sources on related topics, this might produce something like a reasonable summary, but it is in no way guaranteed to. And I would guess the errors would be systematic as well as random, e.g. when the document being summarized has an unusual structure or a heterodox conclusion. I had wondered whether I was wrong and these models have special features for summarization (as were added for, say, doing math).