This project demonstrates a scalable architectural pattern for efficiently managing data ingestion via HTTP API requests in a Django (ASGI) environment, while enabling real-time monitoring through WebSockets.
The core objective is a clean separation of concerns to maximize performance, scalability, and future extensibility:
- Fast acceptance of API requests handled by the ASGI server (Daphne)
- Asynchronous processing and optimized storage using Celery and a Lazy Insertion strategy
- Real-time monitoring of incoming data events via WebSockets
Data is ingested through standard HTTP API endpoints exposed by Django.
The project is configured to use PostgreSQL as the primary database for reliable, production-grade data persistence.
All heavy processing and storage operations are immediately delegated to Celery workers, keeping API response times minimal.
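As an illustration, a minimal ingestion view can hand the payload straight to a Celery task and return immediately. This is a sketch only; the view name, task name, and payload shape below are assumptions, not the repository's actual code.

```python
# Sketch of an ingestion endpoint that delegates heavy work to Celery.
# All names here (ingest, process_payload) are illustrative assumptions.
import json

from celery import shared_task
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST


@shared_task
def process_payload(payload):
    """Heavy validation/storage work runs here, off the request path."""
    ...


@csrf_exempt
@require_POST
def ingest(request):
    payload = json.loads(request.body)
    process_payload.delay(payload)  # enqueue and return immediately
    return JsonResponse({"status": "accepted"}, status=202)
```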
- Celery dispatches notifications upon data arrival
- Notifications are sent through WebSockets
- Designed strictly for real-time monitoring
This approach ensures a clean architectural boundary and leaves room for future authentication and authorization mechanisms in the notification layer.
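For illustration, a Celery task can push a notification to all connected monitors through the Django Channels channel layer. The group name "monitor" and the event type below are assumptions, not the repository's actual identifiers.

```python
# Sketch of a Celery task broadcasting a data-arrival event over Django
# Channels. The group name and event type are illustrative assumptions.
from asgiref.sync import async_to_sync
from celery import shared_task
from channels.layers import get_channel_layer


@shared_task
def notify_data_arrival(record_id):
    channel_layer = get_channel_layer()
    # Fan the event out to every consumer subscribed to the "monitor" group;
    # Channels routes it to each consumer's data_arrived() handler.
    async_to_sync(channel_layer.group_send)(
        "monitor",
        {"type": "data.arrived", "record_id": record_id},
    )
```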
The project implements a Lazy Insertion mechanism that queues incoming data and commits it in batches, enabling efficient asynchronous inserts and reducing database transaction overhead.
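One plausible shape for such a mechanism, assuming a Redis-backed buffer and a periodic flush task (the model, queue key, and batch size are placeholders):

```python
# Sketch of Lazy Insertion: buffer records in a Redis list, then let a
# periodic Celery task (scheduled by Celery Beat) flush them in one query.
# DataRecord, QUEUE_KEY, and BATCH_SIZE are illustrative assumptions.
import json

import redis
from celery import shared_task

from myapp.models import DataRecord  # hypothetical model

QUEUE_KEY = "lazy_insert_queue"
BATCH_SIZE = 500

r = redis.Redis()


def enqueue_record(payload):
    # O(1) push on the hot path; no database transaction per request.
    r.rpush(QUEUE_KEY, json.dumps(payload))


@shared_task
def flush_queue():
    # Read and trim in one pipeline, then bulk-insert the drained batch.
    pipe = r.pipeline()
    pipe.lrange(QUEUE_KEY, 0, BATCH_SIZE - 1)
    pipe.ltrim(QUEUE_KEY, BATCH_SIZE, -1)
    raw_items, _ = pipe.execute()
    if raw_items:
        DataRecord.objects.bulk_create(
            DataRecord(**json.loads(item)) for item in raw_items
        )
```

Celery Beat would then run flush_queue on a fixed interval, so writes reach PostgreSQL in large batches rather than one transaction per request.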
Daphne manages HTTP requests and WebSocket connections under a unified ASGI-based architecture.
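A typical asgi.py for this arrangement routes HTTP traffic to Django and WebSocket traffic to a Channels consumer. The consumer module and URL path below are assumptions:

```python
# Sketch of an asgi.py that lets Daphne serve HTTP and WebSocket traffic
# side by side. The consumer module and URL path are illustrative.
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "data_handler.settings")
django_asgi_app = get_asgi_application()

from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from myapp.consumers import MonitorConsumer  # hypothetical consumer

application = ProtocolTypeRouter({
    "http": django_asgi_app,  # regular Django views
    "websocket": URLRouter([
        path("ws/monitor/", MonitorConsumer.as_asgi()),
    ]),
})
```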
Note: In both Docker and manual setups, the repository must be cloned first.
The easiest way to run the entire stack is using Docker and Docker Compose.
Note: Create a .env file from the provided env.sample file.
- Docker
- Docker Compose
```bash
git clone https://github.com/parsaferdosi/data_handler.git
cd data_handler
```

On Windows:

```bash
docker-compose up --build
```

On Linux / macOS:

```bash
docker compose up --build
```

Docker Compose will automatically start:
- PostgreSQL
- Redis
- Django (ASGI + Daphne)
- Celery Worker
- Celery Beat
```bash
git clone https://github.com/parsaferdosi/data_handler.git
cd data_handler
pip install -r requirements.txt
```

This project relies on the following services:
- Database (PostgreSQL): Ensure a PostgreSQL instance is running and the Django database settings are configured correctly.
- Message Broker (Redis): Celery workers are fully dependent on Redis as the message broker; the Redis service must be running before starting Celery. A configuration sketch for both services follows below.
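For orientation, the relevant Django settings might look roughly like the sketch below; every hostname, credential, and database name is a placeholder, not the repository's actual configuration.

```python
# Placeholder settings.py excerpt for a manual setup. Adjust hosts,
# credentials, and database names to match your environment.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "data_handler",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "localhost",
        "PORT": "5432",
    }
}

# Celery uses Redis as its message broker.
CELERY_BROKER_URL = "redis://localhost:6379/0"

# Channels needs a Redis-backed channel layer for cross-process messaging.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("localhost", 6379)]},
    }
}
```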
To run the complete system without Docker, start each component in a separate terminal.
Celery Worker: responsible for background processing, Lazy Insertion, and WebSocket notifications.

```bash
celery -A data_handler worker -l info
```

Celery Beat: required for executing periodic tasks, such as committing queued data to the database.

```bash
celery -A data_handler beat -l info
```

Daphne: runs the Django application in ASGI mode to handle HTTP requests and WebSocket connections.

```bash
daphne data_handler.asgi:application
```

When running the project via Docker Compose, the Django superuser is created automatically during container startup using environment variables.
No manual action is required.
When running the project manually, a Django superuser must be created before running the demo client.
Create a superuser with the following credentials:
- Username: parsa
- Password: admin

```bash
python manage.py createsuperuser
```

After the system is fully running and the superuser is created, execute the demo client:
```bash
python demo.py
```

Note: Before running demo.py, make sure you have created and activated a Python virtual environment and installed all required dependencies from requirements.txt.
Once all components are running, data sent by demo.py will trigger:
- Background processing via Celery
- Lazy insertion into the database
- Real-time monitoring notifications delivered through WebSockets
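To watch these notifications yourself, a small WebSocket client is enough. The sketch below assumes the server listens on localhost:8000 and exposes a /ws/monitor/ endpoint; check the project's routing for the real path.

```python
# Minimal monitoring client: connect to the notification WebSocket and print
# events as they arrive. The URL is an assumption, not a documented endpoint.
import asyncio

import websockets  # pip install websockets


async def monitor(url="ws://localhost:8000/ws/monitor/"):
    async with websockets.connect(url) as ws:
        async for message in ws:
            # Each message corresponds to one data-arrival notification.
            print("event:", message)


if __name__ == "__main__":
    asyncio.run(monitor())
```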
Contributions are welcome. To contribute:
- Fork the repository
- Create a new feature branch
- Commit your changes
- Submit a Pull Request
This project is released under the MIT License.
Author
Parsa Ferdosi Zade