Lumen with Ollama as LLM provider

Dear Lumenists,

I am new to Lumen and I like what I see. However, I am struggling to use Ollama (which exposes an OpenAI-compatible interface) as the LLM provider. lumen-ai serve --provider openai always seems to contact OpenAI, even though I point it at a different endpoint via OPENAI_API_BASE_URL='http://localhost:11434/v1'.
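
For reference, this is roughly what I tried; the dummy API key line is just my guess, since Ollama's OpenAI-compatible endpoint supposedly ignores the key:

```sh
# Point the OpenAI provider at the local Ollama container (not working for me so far)
export OPENAI_API_BASE_URL='http://localhost:11434/v1'
export OPENAI_API_KEY='ollama'   # dummy value; Ollama is said to ignore it
lumen-ai serve --provider openai
```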

Of course, there is --provider llama-cpp, but before I dig into that I would like to try the aforementioned approach first, in the hope that it works. It would let me reuse my already existing and working Docker image of Ollama and avoid the small details that come with setting up Llama.cpp.

Why not use OpenAI? Oh, that’s simple: I would like to analyze data that should not go out to the internet.

Thank you very much in advance.

RoKor


Thanks! I agree it should be supported, and I’ll add it to our next release milestone.

Added in this PR: Add Ollama support by ahuang11 · Pull Request #1337 · holoviz/lumen · GitHub
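
Once the release is out, something along these lines should work (the provider name here is assumed from the PR title and may differ in the final release):

```sh
# Hypothetical invocation of the new Ollama provider
lumen-ai serve --provider ollama
```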

Wow, that was really fast. Thank you. I am already waiting eagerly for the upcoming release :smile:

RoKor