
Discussion: What small multi-modal local models are working with Open-Interface #45

Open
st22nestrel opened this issue Feb 10, 2025 · 1 comment

Comments

@st22nestrel

First thing I want to say: I had a hard time connecting to Ollama, because it expects chat-completion requests on its OpenAI-compatible endpoint at http://localhost:11434/v1/. I had Open-Interface configured with http://localhost:11434 and had to debug and google a lot before figuring out that the base URL needs the /v1/ suffix (so maybe adding a note about this to the README would be beneficial?). After that, I could see in the Ollama server output that Open-Interface did send the correct request. However, I only tried the moondream model, and it failed at the step of converting the LLM response into JSON instructions. I guess the model I use is just too "dumb".
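For anyone hitting the same wall, here is a minimal sketch of both pain points. The helper names are mine, not from Open-Interface's source: one function builds a correctly-addressed OpenAI-style request (showing why the base URL must end in /v1), and the other does a tolerant parse of the model's reply, stripping the markdown code fences that small models often wrap their JSON in.

```python
import json


def build_chat_request(base_url: str) -> tuple[str, bytes]:
    """Build an OpenAI-style chat-completion request for Ollama.

    Ollama's OpenAI-compatible API lives under /v1, so the configured
    base URL must include that suffix, e.g. http://localhost:11434/v1
    """
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": "moondream",  # any model you have pulled locally
        "messages": [{"role": "user", "content": "Open Chrome"}],
    }
    return url, json.dumps(payload).encode()


def parse_json_reply(text: str):
    """Extract JSON instructions from a model reply, tolerating a
    surrounding ```json ... ``` markdown fence."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # drop the opening ```json line and the trailing fence
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)


url, _body = build_chat_request("http://localhost:11434/v1")
# url is http://localhost:11434/v1/chat/completions — with the /v1
# missing from the base URL, the path Ollama expects is never hit.
```

This only constructs the request; actually sending it of course still requires a running Ollama server.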

So my question is: has anyone managed to make Open-Interface work with a small local multimodal model? The model does not need to be Ollama-compatible; I am open to installing other inference software, but I would like it to fit on my Nvidia RTX A500 with 4 GB of VRAM. I'm also okay with larger models that also use my CPU, but fitting entirely on the GPU would be best.
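As a back-of-envelope check of what fits in 4 GB: weight memory is roughly parameter count times bytes per weight, ignoring the KV cache, activations, and the vision encoder (which all add overhead on top). The parameter counts below are my approximations, not measured figures.

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Weight-only memory estimate in GB: params * bits / 8 bytes.

    Ignores KV cache, activations, and vision-tower overhead, so treat
    the result as a lower bound on real VRAM usage.
    """
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9


# moondream is roughly a ~1.9B-parameter model
print(weight_memory_gb(1.9, 4))   # 4-bit quantized -> 0.95 GB of weights
print(weight_memory_gb(7.0, 4))   # a 7B model at 4-bit -> 3.5 GB, tight on 4 GB
```

By this estimate, a 7B model at 4-bit quantization barely leaves room for the KV cache on a 4 GB card, which is why partial CPU offload comes up at all.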

My OS is Ubuntu 24.04. I wonder whether Open-Interface can open Chrome/Firefox if it does not see it on the screen, since my dock is only visible on hover. I guess it could press the "Win" key and then type "chrome", but I don't know; I haven't dug deep enough into the source code yet.

So let the discussion begin. I also noticed the Discussions tab here on GitHub, but it feels like a dead space. I have never used it, so I am posting my topic here as an issue instead; I hope you don't mind.

@rissato

rissato commented Feb 13, 2025

+1 on this, I am also interested. I will try to spend some time testing this on macOS in the coming weeks. I have 64 GB of RAM, so I can try bigger models and see which works best. Will report back in a couple of weeks.
