@feynman Yeah I'm wary of OpenAI too 🤔
@lupyuen I've already had some success running those models locally on M1 and M2 CPUs. I just need to build my own API, and then I have my own personal generative AI to replace me for the mundane things, so I have more time to chill out. What could go wrong… 😂 Worth it either way.
@lupyuen It would be interesting to do it locally with an #llm you have control over, without being dependent on #OpenAI. I'm thinking about migrating to #Vicuna or #Falcon.
https://lmsys.org/
https://falconllm.tii.ae