@KellyKellyKelly@masto.ai Yes, you can go on and ask more specific questions even about very recent topics. The number of mistakes grows with the specificity and novelty of the topic you're investigating.
Of course, before doing this I used it on some topics I'm more familiar with. Yesterday I asked it a series of increasingly specific questions about a novel topic: machine learning-based de novo molecular structure generation for drug design. This is a fairly new research field, with the first article published around 2017. It did give a broad overview of the topic, listing some of the methodologies that have been used and developed. Several errors were present, but with a bit of critical reading and validation you could quickly draft a list of the most significant innovations in the field.
The last and most specific question I was able to ask was for a list of the research groups working in the field and what they focus on. The list was very incomplete and contained research groups working in other, related fields, but some of the groups listed were actually relevant; it was able to give me the names of some of the people working in each group, their location, and a brief overview of what they do.
I don't know if other groups are here on Mastodon, but I'll mention @aspuru since his group was correctly listed by ChatGPT, along with its involvement in SELFIES and PASITHEA.
@rastinza @KellyKellyKelly Wow, that is incredible! Do you have a screenshot? I didn't know that #ChatGPT knew my group and its work :)
@rastinza @aspuru @KellyKellyKelly@masto.ai
#ChatGPT is even more useful when you ask about things you know very well, so it can't fool you. Since you know the subject, you can quickly read through a lot of answers, and most of the time there are relevant details you never heard of, or connections you didn't make.
I'm currently experimenting with a #GPT3 plugin for #Logseq that generates more content alongside my notes, so it stays in context and can later be reworked manually; here is an example (in Italian) with the machine-generated sentences highlighted:
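For anyone curious, this is roughly the kind of GPT-3 completion call such a plugin makes. It's only a minimal sketch of the idea, not the plugin's actual code; the model name, prompt wording, and note text are placeholders I chose:

import openai  # openai-python < 1.0, GPT-3-era Completion API

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical: the text of the current note block, sent as context.
note_text = "(text of the current Logseq block)"

# Ask GPT-3 to continue the notes; the generated text is kept next to
# the note and reworked manually afterwards.
response = openai.Completion.create(
    model="text-davinci-003",   # assumed GPT-3 model
    prompt=note_text + "\n\nContinue these notes:\n",
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text)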
Ooh, interesting! How can you tell? Are the hallucinations significantly different from the real info? I probably haven't used it as much as you have, but I've found that I can't tell whether it's correct in general unless I already know, or check it for myself elsewhere.
@ceoln @rastinza @aspuru @KellyKellyKelly@masto.ai
What happens is that factually wrong sentences are often surrounded by a drop in accuracy and style, or even by grammatical errors. It is as if the model were under strain.
Incidentally, the screenshot I shared is an example of this: in the last sentence it mistakenly switched from discussing scientific models to PowerPoint templates, and that was immediately preceded by a grammatical error.
So my point is that if you know the subject well, you can use it fluently without getting stuck. It's better not to expect to learn a topic from it, but only to make it spit out keywords for doing your own research.
Copywriters/journalists who use it to write more articles faster are doing a very dangerous thing.
@post @rastinza @aspuru @KellyKellyKelly@masto.ai
"relevant details you never heard of", but that might be completely made up, yes? Do you go and check them all?