@aarontay I asked some of the Scopus AI folks about the purpose of the tool: is it supposed to answer very specific, detailed questions, or to provide high-level overviews that could help someone enter a new field? I consider these the two most promising use cases for such a tool, but I would guess you would need to train for each of them in a wildly different manner. Anyway, my question did not receive an answer, but at least it earned me some confused looks.
@aarontay @betschart
Completely agree that you should go beyond the surface, but I'm not sure how often that happens...
I would love to know how many of the papers cited in a manuscript have actually been read in full by the citing authors.
@aarontay @betschart
I wonder if Elsevier and Clarivate and other publishers/data brokers have used their users' "data exhaust" to figure that out?
@kdnyhan @betschart I mean, at the stage where you're looking for research ideas you definitely have to read the full text? But I can see people skimming, or now using these tools, to characterize papers they don't think affect their main idea. It's no different from people taking what other authors say about a paper and pretending it's their own take.
@betschart @aarontay
good question - but surely, the former can be accomplished (if it can be accomplished at all) by ingesting the full text of the articles?
@betschart I also didn't like the fact that they wanted to patent their use of RAG fusion... Anyway, I don't think this whole style of generating answers with citations over multiple documents is going to be that disruptive. It will be useful, sure, but not that much faster than skimming the first few results of a semantic search, since they use abstracts only. At the end of the day, if you are an academic you need to go beyond the surface anyway, so that means reading the full text.
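For readers unfamiliar with the term: "RAG fusion" is commonly described as generating several rephrasings of the user's question, searching with each, and merging the ranked result lists with reciprocal rank fusion (RRF) before the generation step. A minimal sketch of that merging step, with hypothetical document IDs and rankings (not anything from Scopus AI's actual system):

```python
# Minimal sketch of reciprocal rank fusion (RRF), the merging step commonly
# described as the core of "RAG fusion". Each input list is the ranked output
# of one query variant; documents ranked high across many lists win.

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one; k=60 is the usual constant."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Each list contributes 1/(k + rank) for every document it contains.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical result lists from three rephrasings of one question:
fused = reciprocal_rank_fusion([
    ["A", "B", "C"],
    ["B", "C", "D"],
    ["B", "A", "E"],
])
print(fused[0])  # "B" — it appears near the top of all three lists
```

The fused list is then what gets passed (as abstracts, in Scopus AI's case) to the language model for answer generation with citations.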