What does everyone think of this idea for grading in the age of LLMs? Give coursework questions in advance, allow the students to prepare as much as they like in any way they like, but then for the final work they hand in they're given a controlled environment with no LLM access.
The nice thing is that it doesn't deny the potential value of LLMs as a research tool but requires you to be able to actually do the work on your own too.
@neuralreckoning We've started to see the effects of LLM use in programming assignments. A lot of students prepare using LLMs and don't really learn to engage with the code. They falsely believe that they are learning to program, but fail miserably when the exam is done without internet access.
Solution for next year: tutorials will be run under exam conditions, and we will show students very clear evidence that they *will* fail if they rely solely on LLMs.
I'm not against using LLMs in certain situations (e.g. boilerplate code), but I think when you're learning they can actually be an obstacle.
@johannes_lehmann @neuralreckoning I agree. We have a variety of courses in our programmes of study, and some are more affected than others, depending on the course objectives and how skilled the students taking them are.
One thing is becoming very clear: completely banning the use of LLMs is naïve and ineffective. I have heard from various colleagues at universities that decided on that tactic and it's definitely not working for them.
@elduvelle See my other reply, but basically it's almost impossible to enforce so...
I believe having better assessments that cannot be solved solely by using LLMs is a much better solution. It takes resources and effort, however!
@nicolaromano @elduvelle Just a side note here: we used a traffic light system for coursework and exams in terms of LLM use. Those of us who set red last term have now all revised it to amber, precisely because we found it impossible to prove whether a given case used an LLM or not. And our exams next term will all be in person, with pen and paper and no internet! Welcome to the 1980s.
@nicolaromano @johannes_lehmann @neuralreckoning
I probably agree with ineffective, but why is it naïve? Students come to us to learn some field, to master some techniques. If they can't trust us on such basic premises, why don't they go elsewhere? (I was about to use a more vulgar expression.) Seriously, I am considering banning everybody from my classes if they use LLMs, whatever the reason. (It's more like a fantasy, because I am probably not allowed to do that.)
@antoinechambertloir @nicolaromano @johannes_lehmann @neuralreckoning I will tell my students that, since I am an LLM-vegetarian, they must absolutely never feed me any LLM-generated writing (note that our exams are always in a controlled environment with no internet access). If I suspect they did, I will stop giving feedback to the culprit unless they can explain their writing to me in person.
@antoinechambertloir @johannes_lehmann @neuralreckoning It's naïve because it's almost impossible to enforce (and no matter what you tell students, they continue to use them*). Let's say you suspect a student didn't write their essay: in 99% of cases you have no way of proving it.
AI detection tools are unreliable and are biased against non-native English speakers ( https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7 ).
The only way you can tell for sure is if the student admits to it or if they have fake references, but that's likely a very small minority of cases.
*obviously I am generalising. Some students do listen!
@nicolaromano Could you expand a bit on why banning the use of LLMs is not the way to go?
@johannes_lehmann @neuralreckoning