Since generative #AI is incapable of producing new knowledge, it is not very helpful for ground-breaking research. It's most useful for producing boilerplate text where accuracy and originality are not crucial.

Might #AI allow us to automate the jobs of some university upper administrators?

@MedievalMideast I think it is quite possible to give current language models the ability to produce syllogisms, since software both old and new can be used to keep track of complex logical sequences. There is some chance the models will fall into fallacies because of the inherent ambiguity of language, but humans commit fallacies frequently themselves... so it is hard to say whether a computer could outperform an expert right now.
Computers would have the advantage of massive, readily available databases, as well as the capacity to perform fast statistical evaluations on data, for example doing rough meta-analyses across multiple sources.
I wonder if such technology already exists secretly...
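For what it's worth, the bookkeeping part of "keeping track of logical sequences" is mechanically simple. Below is a minimal, hypothetical sketch in Python (a toy three-element universe, not any particular existing prover) that brute-force checks whether a categorical syllogism form is valid by testing it against every interpretation of its terms:

```python
from itertools import product

# Hypothetical toy sketch: check a categorical syllogism form for validity by
# enumerating every interpretation of its three terms over a tiny universe.
UNIVERSE = [0, 1, 2]  # three individuals suffice to expose the invalid form below

def powerset(items):
    """Yield every subset of items as a set."""
    for mask in range(2 ** len(items)):
        yield {x for i, x in enumerate(items) if mask & (1 << i)}

def all_are(xs, ys):   # "All X are Y"
    return xs <= ys

def some_are(xs, ys):  # "Some X are Y"
    return bool(xs & ys)

def valid(premise1, premise2, conclusion):
    """True if the conclusion holds in every model where both premises hold."""
    subsets = list(powerset(UNIVERSE))
    for A, B, C in product(subsets, repeat=3):
        if premise1(A, B) and premise2(B, C) and not conclusion(A, C):
            return False  # countermodel found
    return True

# Barbara: All A are B, all B are C, therefore all A are C  -> valid
print(valid(all_are, all_are, all_are))    # True
# All A are B, some B are C, therefore some A are C         -> invalid
print(valid(all_are, some_are, some_are))  # False
```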

@lmedinar
Sure, if you choose to, you can generate endless lists of syllogisms by meaninglessly moving symbols around. The overwhelming majority of such syllogisms will be meaningless. The next largest batch will be false. But some will be true, just as a stopped clock is right twice a day. The trick is discerning which syllogisms are in which category, which the computer cannot do for you. This is not how knowledge is generated, in any field except perhaps mathematics.
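To illustrate that point, here is a toy sketch (hypothetical vocabulary, plain Python) that fills the formally valid Barbara pattern with random terms. Every output has valid form, yet almost none are meaningful or true, and the program has no way to tell which are which:

```python
import random

# Toy illustration of "moving symbols around": instantiate the Barbara form
# (All A are B; all B are C; therefore all A are C) with arbitrary terms.
TERMS = ["mammals", "prime numbers", "medieval charters", "clouds",
         "vertebrates", "librarians", "ideas", "Tuesdays"]

def random_barbara():
    a, b, c = random.sample(TERMS, 3)
    return f"All {a} are {b}; all {b} are {c}; therefore all {a} are {c}."

for _ in range(5):
    print(random_barbara())  # formally valid, but meaning and truth are not checked
```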
