What does everyone think of this idea for grading in the age of LLMs? Give coursework questions in advance, allow the students to prepare as much as they like in any way they like, but then for the final work they hand in they're given a controlled environment with no LLM access.
The nice thing is that it doesn't deny the potential value of LLMs as a research tool but requires you to be able to actually do the work on your own too.
@neuralreckoning We've started to see the effects of LLM use in programming assignments. A lot of students prepare using LLMs and don't really engage with the code. They falsely believe they are learning to program, but fail miserably when the exam is taken without internet access.
Solution for next year: tutorials will be run under exam conditions, and we will show students very clear evidence that they *will* fail if they rely solely on LLMs.
I'm not against using LLMs in certain situations (eg boilerplate code), but I think when you're learning they can actually be an obstacle.
@nicolaromano @neuralreckoning
I think this will often be a better solution than creating different conditions for practice and examination. If your expectation is true and LLM use during practice does not lead to mastery, then you set students who rely on LLMs during practice up for failure in the exam. I don’t think failing an exam is a great way to learn - negative emotions get in the way: some might realise that they did not actually understand the content, but many might either be discouraged (“I’m too stupid”), blame other factors (“I didn’t practise enough”), or think that the exam was unfair (“in a real-world scenario I could do this”).
A lot comes down to what skill you are trying to teach and why though.
@johannes_lehmann @neuralreckoning I agree. We have a variety of courses in our programmes of study, and some are more affected than others, depending on the course objectives and the skill level of the students taking them.
One thing is becoming very clear: completely banning the use of LLMs is naïve and ineffective. I have heard from various colleagues at universities that decided on that tactic and it's definitely not working for them.
@antoinechambertloir @johannes_lehmann @neuralreckoning It's naïve because it's almost impossible to enforce (and no matter what you tell students, they continue to use them*). Say you suspect a student didn't write their essay: in 99% of cases you have no way of proving it.
AI detection tools are unreliable and are biased against non-native English speakers (https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7).
The only way you can tell for sure is if the student admits to it or if they have fake references, but that likely covers only a small minority of cases.
*obviously I am generalising. Some students do listen!