We KNOW generative AI trained on an unconstrained corpus of data has NO sense of meaning or fact & should not be used by news sites or search engines to write news or answer factual queries. I'm not sure what the point is of "testing" them.
More interesting q: how accurate are LLMs when restricted to a limited corpus (e.g., an interview transcript, meeting transcript, court filing)? We KNOW raw models cannot answer general questions (it ain't AGI, folks!). Can they summarize well?
proofnews.org/seeking-election
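
A minimal sketch of what a restricted-corpus test might look like: the transcript is the only context the model sees, and the prompt asks for a summary drawn from that text alone. `call_llm`, the filename, and the prompt wording are all illustrative stand-ins, not any particular vendor's API.

```python
# Hypothetical sketch: can an LLM summarize accurately when restricted to a
# limited corpus (e.g., an interview or meeting transcript)?
# `call_llm` is a placeholder for whichever model API is under test.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to the model being evaluated."""
    raise NotImplementedError("wire this to the model you want to test")

def summarize_restricted(transcript: str) -> str:
    """Ask for a summary grounded ONLY in the supplied transcript."""
    prompt = (
        "Summarize the following transcript. Use only information that "
        "appears in the transcript itself; if something is not stated "
        "there, leave it out.\n\n"
        f"TRANSCRIPT:\n{transcript}\n\nSUMMARY:"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    with open("interview_transcript.txt", encoding="utf-8") as f:
        print(summarize_restricted(f.read()))
```

The accuracy check is then a matter of comparing each claim in the summary back against the transcript.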

@jeffjarvis they have no sense of meaning or fact, but they DO have a sense of summarizing average perceptions, which is still useful.

It's still useful to consider what the general public (or whatever source the corpus reflects) expresses, even if that's not identical to fact.
