Integrating local LLMs with #emacs.
@fidel I've been trying to do the same, but honestly I'm just not finding it useful. For example: with code review, a good linter is so much better. It works with a larger context, doesn't hallucinate, and its suggestions aren't as superficial.
Do you have any examples where it's actually helping you? Either my expectations are way off, or something isn't set up right.
@fidel I've mainly focused on #codellama, since it seems to be targeted at coding. I've tried both 7b and 34b; 70b is beyond my hardware's capacity.
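For reference, a minimal sketch of that setup, assuming ollama is serving the models locally and they've already been pulled (ollama pull codellama:7b). ellama-provider and make-llm-ollama are the documented entry points of the ellama and llm packages, but treat the details as approximate:

(require 'llm-ollama)  ; provider backend used by ellama

;; Point ellama at a local codellama model; swap the tag to
;; "codellama:34b" to compare the larger model.
(setq ellama-provider
      (make-llm-ollama :chat-model "codellama:7b"))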
I find it difficult to see what's happening behind the scenes. It would be quite nice if #ellama placed the request in a buffer so you could follow the conversation.
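Until something like that exists, a rough workaround is to bypass the package and talk to ollama's HTTP API directly, logging both sides of the exchange in a buffer. This is just a sketch, not ellama's internals: my/ollama-ask, the *ollama-log* buffer name, and the codellama:7b tag are made up for illustration, and it assumes ollama's default endpoint on localhost:11434.

(require 'url)
(require 'url-http)
(require 'json)

(defun my/ollama-ask (prompt)
  "Send PROMPT to a local ollama model; log and return the reply."
  (interactive "sPrompt: ")
  (let* ((url-request-method "POST")
         (url-request-extra-headers '(("Content-Type" . "application/json")))
         (url-request-data
          (encode-coding-string
           (json-encode `((model . "codellama:7b")
                          (prompt . ,prompt)
                          (stream . :json-false)))
           'utf-8))
         (reply (with-current-buffer
                    (url-retrieve-synchronously
                     "http://localhost:11434/api/generate")
                  (goto-char url-http-end-of-headers)
                  (alist-get 'response (json-read)))))
    ;; Keep the whole conversation visible in one buffer.
    (with-current-buffer (get-buffer-create "*ollama-log*")
      (goto-char (point-max))
      (insert ">>> " prompt "\n<<< " reply "\n\n"))
    (display-buffer "*ollama-log*")
    reply))

M-x my/ollama-ask with a prompt should then pop up the log buffer with both the request and the reply.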
@weebull Oh, I would expect a 34b model to do better; that's disappointing.
I'm also surprised to hear that ellama doesn't expose some buffer with the LLM interactions. That would be a good feature request.
@weebull I'm still not using them actively on a day-to-day basis; I hit a bug where ollama isn't using my GPU for inference.
I still want to experiment with different models, but my hope is to use them for grammar fixes and rewrites when writing, and to generate boilerplate, or code using libraries/APIs I'm not familiar with, as a starting point (see the sketch below). I'm not sure about code review.
I wonder what models you have used?
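Something like this is what I have in mind for the rewrite case, building on the my/ollama-ask sketch above; the instruction wording and the *ollama-rewrite* buffer name are just placeholders:

(defun my/ollama-rewrite (beg end)
  "Ask the local model to proofread the region between BEG and END."
  (interactive "r")
  (let* ((text (buffer-substring-no-properties beg end))
         (reply (my/ollama-ask
                 (concat "Fix grammar and improve clarity. "
                         "Reply with only the corrected text:\n\n"
                         text))))
    ;; Show the suggestion instead of replacing the original text.
    (with-current-buffer (get-buffer-create "*ollama-rewrite*")
      (erase-buffer)
      (insert reply)
      (display-buffer (current-buffer)))))

Select a paragraph, M-x my/ollama-rewrite, and compare the suggestion side by side with the original.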