
Is there a good library out there in Python that provides a simple abstraction over the major LLM providers, such that it's easy for me to swap out which one I'm using for a given project?

I find myself writing my own version of this because each of the LLMs has strengths and weaknesses, and sometimes for a project I want to test them all out before committing to a specific one.
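A minimal sketch of that kind of hand-rolled abstraction might look like the following. Everything here is hypothetical and illustrative — the class and method names are not from any real library; real subclasses would wrap each provider's own SDK behind the same interface:

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Common interface so the backing LLM can be swapped per project."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoProvider(ChatProvider):
    """Stand-in provider for local testing; a real subclass would call
    the OpenAI, Anthropic, Gemini, etc. SDK inside complete()."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def run(provider: ChatProvider, prompt: str) -> str:
    # Caller code depends only on the interface, so switching
    # providers is a one-line change at construction time.
    return provider.complete(prompt)
```

The payoff is that trying out a different model means constructing a different `ChatProvider`, with no changes to the calling code.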

A few months back I looked at @simon's `llm`, but from what I could tell it was mostly wrapping ChatGPT or compatible APIs, and I wasn't sure how to use it for everything.

@pganssle LLM does that these days - it has plugins for a whole bunch of different model providers, both hosted (Anthropic, Gemini, Mistral etc) and local (GGUF, MLX, gpt4all) and provides a Python library API for calling them

Here's the docs on the Python library: llm.datasette.io/en/stable/pyt

@pganssle @simon I use LLM for this, but I fall back to native libraries more often than not because I want tool-calling or a new feature that might require the API.

If you are working with local LLMs, the Ollama project and their Python API are really nice to work with. I also use llm-ollama for some quick CLI tools.

@pganssle @simon My experience with langchain is 90% frustration and 10% it just not working, so it's my only non-starter these days.

@webology @pganssle yeah, tool support is the single biggest missing feature from LLM these days, I'd love to solve that one - and I feel like the various tool calling models are stable enough now that it might be possible to invent a good abstraction

@simon @pganssle It would help if they would all adopt the same format for tool calling. It's kind of all over the place atm.
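As a rough illustration of how scattered the formats are: OpenAI-style responses carry a tool call as a function name plus a JSON-encoded string of arguments, while Anthropic-style responses use `tool_use` blocks with an already-parsed `input` dict. The payload shapes below are sketched from memory of those two APIs and may drift between versions; a cross-provider abstraction has to normalize something like this:

```python
import json


def normalize_tool_call(raw: dict) -> dict:
    """Map provider-specific tool-call payloads onto one common shape:
    {"name": str, "arguments": dict}."""
    if raw.get("type") == "function":
        # OpenAI-style: arguments arrive as a JSON-encoded string.
        fn = raw["function"]
        return {"name": fn["name"], "arguments": json.loads(fn["arguments"])}
    if raw.get("type") == "tool_use":
        # Anthropic-style: input is already a parsed dict.
        return {"name": raw["name"], "arguments": raw["input"]}
    raise ValueError(f"unknown tool-call format: {raw.get('type')!r}")
```

Two payloads describing the same call then collapse to the same normalized dict, which is the kind of glue a shared abstraction would have to ship and keep in sync with every provider.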
