With Ollama running and the required models (mxbai-embed-large and llama3.1) pulled, build the image with →
docker build -t testrag .

Then start a Qdrant container with →
docker run -p 6333:6333 -v qdrantdata:/qdrant/storage qdrant/qdrant

Lastly, run the image with →
docker run --rm -v /path/to/your/knowledgebase:/app/docs -i testrag knowledgebasedb

The code is explained in the companion blog post.
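The two `docker run` commands above can also be expressed as a single Compose file, which may be more convenient for repeated runs. The sketch below is a hypothetical equivalent, not part of the original setup: it assumes the testrag image has already been built, that the Dockerfile uses an ENTRYPOINT so the collection name can be passed as a command argument, and that the container can reach Ollama on the host.

```yaml
# Hypothetical docker-compose.yml equivalent of the manual docker run
# commands; assumes testrag is built and Ollama is reachable from the
# container (e.g. via host.docker.internal on Docker Desktop).
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - qdrantdata:/qdrant/storage

  testrag:
    image: testrag
    command: ["knowledgebasedb"]   # works only if the image defines an ENTRYPOINT
    stdin_open: true               # matches the -i flag above
    volumes:
      - /path/to/your/knowledgebase:/app/docs
    depends_on:
      - qdrant

volumes:
  qdrantdata:
```

With this in place, `docker compose up` starts both containers, and `docker compose down` tears them down again.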