Dear Lumenists,
I am new to Lumen and I like what I see. However, I am struggling to use Ollama (which exposes an OpenAI-compatible interface) as the LLM provider. Running `lumen-ai serve --provider openai` always seems to contact OpenAI, even with a different base URL set via `OPENAI_API_BASE_URL='http://localhost:11434/v1'`.
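For what it's worth, the Ollama endpoint itself answers fine when I talk to it with the plain `openai` Python client. Here is a minimal sketch of that check; the model name (`llama3`) is just an assumption for whatever model is pulled locally:

```python
# Quick sanity check that Ollama's OpenAI-compatible endpoint responds.
# Assumes Ollama is running on localhost:11434 and a model named
# "llama3" has been pulled (adjust to whatever model you actually use).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```

So the server side seems fine; it is only the Lumen side that keeps going to api.openai.com.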
Of course, there is `--provider llama-cpp`, but before I dig into that I would prefer to try the above-mentioned approach, which I hope can be made to work. It would let me reuse my already existing and working Ollama Docker image and avoid all the small details that come with llama.cpp. A programmatic setup instead of the CLI would also be fine; a rough guess at what that might look like is below.
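This is only a sketch of what I imagine such a setup could be; I am guessing at the parameter name (`endpoint`) and the entry point (`ExplorerUI`), so please correct me if the actual API differs:

```python
# Rough guess at a programmatic setup -- the parameter and class names
# below are my assumptions, not verified against the Lumen docs.
import lumen.ai as lmai

llm = lmai.llm.OpenAI(
    api_key="ollama",                      # dummy key; Ollama ignores it
    endpoint="http://localhost:11434/v1",  # hypothetical parameter for the base URL
)

# Served e.g. with `panel serve app.py`; "my_data.csv" is a placeholder.
lmai.ExplorerUI("my_data.csv", llm=llm).servable()
```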
Why not use OpenAI? Oh, that's simple: because I would like to analyze data that is not meant for the internet.
Thank you very much in advance.
RoKor