
@radareorg @pancake I tried using llama-2-7b-chat-codeCherryPop.Q5_K_M.gguf instead of llama-2-7b-chat-codeCherryPop.ggmlv3.q4_K_M.gguf; it's roughly 700 MB bigger and gives similar results, but it's supposedly more accurate ("large, very low quality loss - recommended"). Got it from here:
huggingface.co/TheBloke/llama2
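The tradeoff between the two files above is the quantization level baked into the filename. As a toy illustration, here is a small helper that extracts the quant tag and ranks it; the ranking table is an assumption based on llama.cpp's published quant descriptions (higher rank = less quality loss), not anything from r2ai itself:

```python
import re

# Coarse ordering of common llama.cpp quant levels, least to most accurate.
# This table is an assumption for illustration; it is not exhaustive.
QUANT_RANK = {"q2_k": 0, "q3_k_m": 1, "q4_k_m": 2, "q5_k_m": 3, "q6_k": 4, "q8_0": 5}

def quant_of(filename: str) -> str:
    """Extract the quantization tag (e.g. 'q5_k_m') from a model filename."""
    m = re.search(r"(q\d(?:_k(?:_[sml])?|_0|_1)?)", filename.lower())
    return m.group(1) if m else ""

def best_model(filenames):
    """Return the filename whose quant level has the least quality loss."""
    return max(filenames, key=lambda f: QUANT_RANK.get(quant_of(f), -1))

files = [
    "llama-2-7b-chat-codeCherryPop.ggmlv3.q4_K_M.gguf",
    "llama-2-7b-chat-codeCherryPop.Q5_K_M.gguf",
]
print(best_model(files))  # picks the Q5_K_M build
```

The extra ~700 MB buys the higher-rank quant, which is why the Q5_K_M file wins here.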

@modrobert cool! i have 5 different models (2 of them uncensored) and i'm experimenting with converting conversational templates to make them talk to each other and expand the logic of one question into two or more. learning how playing with the request tokens changes things produces some interesting output.

@modrobert ideally models from hugging face should be automatically downloadable from r2ai, like open-interpreter does, but i somewhat broke that part while messing with the code. now that i have a better idea of what to do and how, i'll bring that back and shrink the current spaghetti code
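The auto-download idea above could be sketched like this: check a local cache first and only hit the Hub on a miss. `hf_hub_download` is the real huggingface_hub API, but the cache layout and helper names here are hypothetical assumptions, not r2ai's or open-interpreter's actual code:

```python
from pathlib import Path
from typing import Optional

def local_model_path(cache_dir: str, filename: str) -> Optional[Path]:
    """Return the cached model path if the file already exists, else None."""
    p = Path(cache_dir) / filename
    return p if p.is_file() else None

def resolve_model(repo_id: str, filename: str,
                  cache_dir: str = "~/.r2ai/models") -> Path:
    """Use the local copy when present; otherwise download from the Hub.

    The ~/.r2ai/models cache location is a made-up example.
    """
    cache = Path(cache_dir).expanduser()
    cached = local_model_path(str(cache), filename)
    if cached:
        return cached
    # Real huggingface_hub API; needs network, so imported lazily here.
    from huggingface_hub import hf_hub_download
    cache.mkdir(parents=True, exist_ok=True)
    return Path(hf_hub_download(repo_id=repo_id, filename=filename,
                                local_dir=str(cache)))
```

This would also remove the need to hand-edit the model path in main.py, since the resolver returns an absolute path either way.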

@pancake OK, yes, there were some connection errors until I manually entered the full path to the model in main.py. Perhaps a bit off-topic, but do you think it's possible to get that IRC (Libera.Chat) <-> Telegram bot going again?

@modrobert seems like the link got broken because the bot is not able to join the telegram channel for some reason :? but then i found that libera.chat banned the matrix bridge too matrix.org/blog/2023/08/libera so i guess i'll need to join the irc again and stay around.. or maybe i can bridge the irc with the fediverse

@modrobert fixed the issue with the telegram bot :) so the bridge between irc and telegram is working again, only for the main channel (not the side one). still thinking about whether i should write the mastodon bridge :D as i prefer open platforms über alles

@pancake Thanks! Yes, the mastodon bridge sounds interesting.
