generated from langchain-ai/langchain-nextjs-template
Closed
Description
What about using Web-LLM instead of running an Ollama server?
https://github.com/mlc-ai/web-llm
Runs models in the browser via WASM/WebGPU
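For reference, a minimal sketch of what an in-browser setup might look like with web-llm's JS API (`CreateMLCEngine` and an OpenAI-style `chat.completions.create`); the model ID shown is an assumption and would need to be one of the prebuilt MLC model IDs, and this requires a WebGPU-capable browser:

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Downloads model weights into the browser cache and compiles
  // them for WebGPU; no server process is involved.
  // NOTE: the model ID below is illustrative — pick a real one
  // from web-llm's prebuilt model list.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (p) => console.log(p.text),
  });

  // OpenAI-compatible chat completion, executed fully client-side.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0].message?.content);
}

main();
```

This would replace the Ollama HTTP calls in the template with a client-side engine, trading server setup for an initial in-browser model download.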