Welcome to Simple Stable Diffusion Server, your go-to solution for AI-powered image generation and manipulation!
- Local Deployment: Run locally for style transfer, art generation and inpainting.
- Production Mode: Save images to cloud storage. By default files are uploaded to an R2 bucket via the S3 API, but Google Cloud Storage remains supported.
- Versatile Applications: Perfect for AI art generation, style transfer, and image inpainting. Bring any SDXL/diffusers model.
- Easy to Use: A simple Gradio interface for generating images locally, plus an easy-to-use FastAPI server with interactive docs for advanced users.
- Prompt Utilities: Helper functions for trimming and cleaning prompts live in stable_diffusion_server/prompt_utils.py.
For a hosted AI Art Generation experience, check out our AI Art Generator and Search Engine, which offers advanced features like video creation and 2K upscaled images.
- Create a virtual environment (optional):
pip install uv
uv venv
source .venv/bin/activate
- Install dependencies:
uv pip install -r requirements.txt
uv pip install -r dev-requirements.txt
- Clone the necessary models (or point main.py at your own SDXL models; see the loading sketch after these setup steps):
cd models
git clone https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
git clone https://huggingface.co/dataautogpt3/ProteusV0.2
# Optional, for line-based (Canny) style transfer
git clone https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0
- Install NLTK stopwords:
python -c "import nltk; nltk.download('stopwords')"
Launch the user-friendly Gradio interface:
python gradio_ui.py
Go to http://127.0.0.1:7860
The server now uses the lightweight Flux Schnell model by default. You can quickly test the model with the helper script:
python flux_schnell.py
This will generate flux-schnell.png using bf16 precision.
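flux_schnell.py is essentially a thin wrapper around the diffusers FluxPipeline. Roughly, the equivalent call looks like the sketch below, using the standard FLUX.1-schnell settings; the script itself may differ in details:

import torch
from diffusers import FluxPipeline

# Load FLUX.1-schnell in bf16 and move it to the GPU.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

# Schnell is distilled for very few steps and ignores classifier-free guidance.
image = pipe(
    "good Looking elf fantasy character",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("flux-schnell.png")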
By default the server uploads to an R2 bucket using S3 compatible credentials. Set the following environment variables if you need to customise the backend:
STORAGE_PROVIDER=r2 # or 'gcs'
BUCKET_NAME=netwrckstatic.netwrck.com
BUCKET_PATH=static/uploads
R2_ENDPOINT_URL=https://<account>.r2.cloudflarestorage.com
PUBLIC_BASE_URL=netwrckstatic.netwrck.com
When using Google Cloud Storage you must also provide the service account credentials as shown below.
To upload a file manually you can use the helper script:
python scripts/upload_file.py local.png uploads/example.png
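Under the hood, an S3-compatible upload to R2 amounts to roughly the boto3 sketch below. The access key variable names are assumptions, and the bundled helper script may differ in details:

import os
import boto3

# R2 speaks the S3 API, so a standard boto3 client works once the endpoint is overridden.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["R2_ENDPOINT_URL"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],          # assumed variable name
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],  # assumed variable name
)

bucket = os.environ.get("BUCKET_NAME", "netwrckstatic.netwrck.com")
key = os.environ.get("BUCKET_PATH", "static/uploads") + "/example.png"
s3.upload_file("local.png", bucket, key)
print(f"https://{os.environ.get('PUBLIC_BASE_URL', bucket)}/{key}")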
Run the production server with gunicorn:
GOOGLE_APPLICATION_CREDENTIALS=secrets/google-credentials.json \
gunicorn -k uvicorn.workers.UvicornWorker -b :8000 main:app --timeout 600 -w 1
Alternatively, run uvicorn limited to 4 concurrent requests. Under load this drops excess requests instead of taking on too much work and causing OOM errors:
GOOGLE_APPLICATION_CREDENTIALS=secrets/google-credentials.json \
PYTHONPATH=. uvicorn --port 8000 --timeout-keep-alive 600 --workers 1 --backlog 1 --limit-concurrency 4 main:app
Response:
{"path":"https://netwrckstatic.netwrck.com/static/uploads/created/elf.png"}
Interactive API docs are available at http://localhost:8000/swagger-docs. Check that the image for "good Looking elf fantasy character" was created.
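For example, calling the API from Python looks roughly like this; the route and parameter names below are placeholders, so check the Swagger docs for the ones main.py actually exposes:

import requests

# Hypothetical endpoint and parameters; see http://localhost:8000/swagger-docs for the real ones.
resp = requests.get(
    "http://localhost:8000/create_and_upload_image",
    params={"prompt": "good Looking elf fantasy character", "save_path": "created/elf.png"},
    timeout=600,
)
print(resp.json())  # e.g. {"path": "https://netwrckstatic.netwrck.com/static/uploads/created/elf.png"}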
Run the unit tests with:
GOOGLE_APPLICATION_CREDENTIALS=secrets/google-credentials.json pytest tests/unit
Edit ops/supervisor.conf, then install supervisor:
apt-get install -y supervisor
sudo cat >/etc/supervisor/conf.d/python-app.conf << EOF
[program:sdif_http_server]
directory=/home/lee/code/sdif
command=/home/lee/code/sdif/.env/bin/uvicorn --port 8000 --timeout-keep-alive 12 --workers 1 --backlog 1 --limit-concurrency 2 main:app
autostart=true
autorestart=true
environment=VIRTUAL_ENV="/home/lee/code/sdif/.env/",PATH="/opt/app/sdif/.env/bin",HOME="/home/lee",GOOGLE_APPLICATION_CREDENTIALS="secrets/google-credentials.json",PYTHONPATH="/home/lee/code/sdif"
stdout_logfile=syslog
stderr_logfile=syslog
user=lee
EOF
sudo supervisorctl reread
sudo supervisorctl update
Sometimes the server just stops working and needs a hard restart. This command will kill the server if it is hanging and restart it (the server must be running under supervisorctl):
python3 manager.py
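manager.py itself is not shown here, but the idea is a simple watchdog: probe the server, and if it hangs, restart the supervisord program. A rough sketch of that approach (the health-check URL is an assumption; the program name matches the supervisor config above):

import subprocess
import requests

# Illustrative watchdog only, not the actual contents of manager.py:
# probe the server and restart the supervisord program if it does not answer in time.
try:
    requests.get("http://localhost:8000/swagger-docs", timeout=30)  # assumed health-check URL
except requests.RequestException:
    subprocess.run(["supervisorctl", "restart", "sdif_http_server"], check=False)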
Run the server in an infinite loop:
while true; do GOOGLE_APPLICATION_CREDENTIALS=secrets/google-credentials.json PYTHONPATH=. uvicorn --port 8000 --timeout-keep-alive 600 --workers 1 --backlog 1 --limit-concurrency 4 main:app; done
py -3.11 -m venv .wvenv
. .wvenv/Scripts/activate
python -m pip install uv
python -m uv pip install -r requirements.txt
This repository includes a workflow that builds Docker images for RunPod and Google Cloud Run. Dependencies are installed with uv pip, and the workflow caches build layers for faster rebuilds. The workflow lives in .github/workflows/docker-build.yml. See docs/deployment.md for details on how to use the images.
Contributions are welcome. Please help in any way you can.
Check out Voiced AI Characters to chat with at netwrck.com. Characters are narrated and written by many GPT models trained on thousands of fantasy novels and chats.
For vision LLMs that generate text, check out Text-Generator.io, an open-source text generator that uses many AI models, along with image understanding and OCR networks, to produce the best output.