Is there a good library out there in Python that provides a simple abstraction over the major LLM providers, such that it's easy for me to swap out which one I'm using for a given project?
I find myself writing my own version of this because each of the LLMs has strengths and weaknesses, and sometimes for a project I want to test them all out before committing to a specific one.
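The kind of abstraction I keep rewriting is roughly this: one small interface that project code depends on, with one implementation per provider behind it (the class and function names here are hypothetical, and the echo provider is just a stand-in so the plumbing can be exercised without an API key):

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Common interface so providers can be swapped per project."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt, return the model's text reply."""


class EchoProvider(ChatProvider):
    """Stand-in provider: no network, no API key, just echoes."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def run(provider: ChatProvider, prompt: str) -> str:
    # Project code depends only on the interface, not on a vendor SDK;
    # swapping providers means swapping one constructor call.
    return provider.complete(prompt)
```

A real setup would add an `OpenAIProvider`, `AnthropicProvider`, etc., each wrapping that vendor's SDK behind the same `complete` method.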
A few months back I looked at @simon's `llm`, but from what I could tell it was mostly wrapping ChatGPT or compatible APIs, and I wasn't sure how to use it for everything.
I also added asyncio support to that at the weekend: https://llm.datasette.io/en/stable/changelog.html#v0-18
@pganssle @simon I use LLM for this, but I fall back to native libraries more often than not, because I want tool-calling or some new feature that requires the provider's own API.
If you are working with local LLMs, the Ollama project and its Python API are really nice to work with. I also use llm-ollama for some quick CLI tools.
@pganssle LLM does that these days - it has plugins for a whole bunch of different model providers, both hosted (Anthropic, Gemini, Mistral, etc.) and local (GGUF, MLX, gpt4all), and provides a Python library API for calling them.
Here's the docs on the Python library: https://llm.datasette.io/en/stable/python-api.html#models-from-plugins
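Calling a plugin-provided model from Python looks roughly like this (a sketch based on the docs linked above; the specific model ID is just an example, and which IDs work depends on the plugins and API keys you have installed):

```python
import llm

# Look up a model by ID - plugins such as llm-claude-3 or llm-gemini
# register additional model IDs alongside the built-in OpenAI ones.
model = llm.get_model("gpt-4o-mini")

# Run a prompt against it; the same code works for any registered model.
response = model.prompt("Summarise the plot of Hamlet in one sentence.")
print(response.text())
```

Swapping providers is then just a matter of changing the string passed to `llm.get_model()`.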