Tried a few local AI models in #nextcloud using a #raspberryPi 5 and @mozilla Llamafile
Seems mistral-7b-instruct-v0.2.Q4_0.llamafile is the fastest, with responses in roughly 10–20 seconds.
Llama 3 at the same quantization kept timing out.
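For anyone wanting to poke at the model outside Nextcloud, here's a minimal Python sketch that queries the llamafile's local OpenAI-compatible endpoint, assuming it's running in server mode on the default port 8080 (the Nextcloud integration itself is configured separately):

import json
import urllib.request

# Assumes the llamafile was started locally, e.g. something like:
#   ./mistral-7b-instruct-v0.2.Q4_0.llamafile --server --nobrowser
# and is serving its OpenAI-compatible API on the default port 8080.
URL = "http://127.0.0.1:8080/v1/chat/completions"

payload = {
    "model": "mistral-7b-instruct-v0.2",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Generous timeout, since a Pi 5 can take tens of seconds per reply.
with urllib.request.urlopen(req, timeout=120) as resp:
    body = json.loads(resp.read())

print(body["choices"][0]["message"]["content"])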
#ai #geek #linux