EventPulse is a Kafka-based notification pipeline for ingesting events, rendering push notifications, and delivering them through Firebase Cloud Messaging (FCM).
The repo is split into three Spring Boot services:
- `event-service`: public ingest API for events, templates, and device tokens
- `notification-engine`: consumes raw events, applies business rules, renders notifications, and stores delivery history
- `push-worker`: consumes push jobs, sends to FCM, and publishes delivery status back into Kafka
The project also includes local infrastructure for Kafka, Schema Registry, Redis, Postgres, Prometheus, and Grafana.
```
Client
  -> event-service
    -> Kafka topic: raw-events
      -> notification-engine
        -> Postgres notification_history
        -> Kafka topic: push-notifications
          -> push-worker
            -> Firebase Cloud Messaging
            -> Kafka topic: notification-status
              -> notification-engine
```
Core data flow:
- A client submits an event to `event-service`
- `event-service` validates it and publishes an Avro event to Kafka
- `notification-engine` consumes the event, checks idempotency and rate limits, loads templates/preferences, and creates a push notification
- `push-worker` resolves device tokens from Redis and sends the push through FCM
- Delivery status is published back to Kafka and persisted in Postgres
```
eventpulse/
├── event-service/
├── notification-engine/
├── push-worker/
├── infra/
│   ├── docker-compose.yml
│   ├── prometheus.yml
│   └── grafana-dashboard-eventpulse.json
├── schemas/
├── .env.example
└── Makefile
```
`event-service` runs on http://localhost:8080
Responsibilities:
- accepts events over REST
- stores templates in Redis
- stores device tokens in Redis
- performs request-level idempotency checks
- publishes `raw-events` to Kafka
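The request-level idempotency check above can be sketched in shell. This is an illustration only: the real service tracks seen IDs in Redis, not a file, and the exact responses are assumptions.

```shell
# Sketch of request-level idempotency: reject an eventId that has been seen before.
# A temp file stands in for the Redis-backed seen-ID set used by event-service.
SEEN_IDS=$(mktemp)

ingest_event() {
  local event_id="$1"
  if grep -qx "$event_id" "$SEEN_IDS"; then
    echo "duplicate: $event_id"    # the service would skip publishing
  else
    echo "$event_id" >> "$SEEN_IDS"
    echo "accepted: $event_id"     # the service would publish to raw-events
  fi
}

ingest_event evt-1001
ingest_event evt-1001
ingest_event evt-1002
```

Retrying the same `eventId` is therefore safe: only the first submission reaches Kafka.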
Important endpoints:
- `POST /events`
- `POST /templates`
- `POST /devices`
- `POST /dlq/replay`
`notification-engine` runs on http://localhost:8082
Responsibilities:
- consumes `raw-events`
- applies duplicate detection and rate limiting
- reads templates from Redis
- reads user preferences from Redis
- writes notification history to Postgres
- publishes `push-notifications`
- consumes `notification-status`
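Rendering substitutes `{{placeholder}}` values from the event payload into the stored template. A minimal shell sketch of those substitution semantics (the real renderer lives inside the engine; only the two placeholders from the example template are handled here):

```shell
# Sketch of {{placeholder}} substitution for push templates.
# The engine substitutes any payload field; this handles only orderId and name.
render() {
  local template="$1" order_id="$2" name="$3"
  printf '%s\n' "$template" \
    | sed -e "s/{{orderId}}/$order_id/g" -e "s/{{name}}/$name/g"
}

render "Order {{orderId}} confirmed" "ORD-9001" ""
render "Hi {{name}}, your order is confirmed." "" "Shaik"
```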
`push-worker` runs on http://localhost:8083
Responsibilities:
- consumes `push-notifications`
- resolves device tokens from Redis
- sends notifications through FCM
- removes invalid FCM tokens
- publishes `notification-status`
- exposes delivery and latency metrics
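Invalid-token cleanup can be pictured like this. The `devices:<userId>` set layout matches the Redis checks later in this README; the `UNREGISTERED` error name is the FCM error for stale tokens, but the worker's actual handling goes through the FCM SDK rather than string matching.

```shell
# Sketch of invalid FCM token removal: when FCM reports a token as UNREGISTERED,
# the worker drops it from the user's device set so it is never targeted again.
handle_send_result() {
  local user_id="$1" token="$2" error_code="$3"
  if [ "$error_code" = "UNREGISTERED" ]; then
    # The real worker issues this against Redis via its client library:
    echo "SREM devices:$user_id $token"
  else
    echo "keep devices:$user_id $token"
  fi
}

handle_send_result user-123 tok-abc UNREGISTERED
handle_send_result user-123 tok-def ""
```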
Start infrastructure with Docker:
```shell
cd /Users/Shaik/intellij/eventpulse
make infra
```

Included services:
- Kafka: `localhost:9092`
- Schema Registry: http://localhost:8081
- Redis: `localhost:6379`
- Postgres: `localhost:5433`
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000
Stop infrastructure:
```shell
make down
```

Prerequisites:

- Java 21
- Docker Desktop
- A valid Firebase service-account JSON for `push-worker`
Optional but useful:
- `curl`
- `psql`
- `redis-cli`
Copy the example env file:
```shell
cd /Users/Shaik/intellij/eventpulse
cp .env.example .env.local
```

Update `.env.local` with your real Firebase JSON path:
```
FIREBASE_CREDENTIALS_PATH=/absolute/path/to/firebase-service-account.json
KAFKA_BOOTSTRAP_SERVERS=localhost:9092
SCHEMA_REGISTRY_URL=http://localhost:8081
REDIS_HOST=localhost
REDIS_PORT=6379
POSTGRES_URL=jdbc:postgresql://localhost:5433/eventpulse
POSTGRES_USERNAME=postgres
POSTGRES_PASSWORD=postgres
```

Notes:
- `.env.local` is intentionally gitignored
- keep the Firebase JSON outside the repo
- do not commit real service-account files
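A misconfigured credential path is the most common reason `push-worker` fails at startup, so it can help to fail fast. A small sanity-check function (a convenience sketch, not part of the repo's Makefile):

```shell
# Fail fast if FIREBASE_CREDENTIALS_PATH does not resolve to a readable file.
check_firebase_credentials() {
  if [ -f "${FIREBASE_CREDENTIALS_PATH:-}" ]; then
    echo "ok: $FIREBASE_CREDENTIALS_PATH"
  else
    echo "missing: ${FIREBASE_CREDENTIALS_PATH:-unset}"
    return 1
  fi
}

# Typical use from the repo root after filling in .env.local:
#   set -a; . ./.env.local; set +a
#   check_firebase_credentials
check_firebase_credentials || true
```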
```shell
cd /Users/Shaik/intellij/eventpulse
make up-logs
```

This:
- starts infra
- opens `event-service` in a new Terminal window
- opens `notification-engine` in a new Terminal window
- opens `push-worker` in a new Terminal window
Infra:
```shell
make infra
```

Event service:

```shell
cd /Users/Shaik/intellij/eventpulse/event-service
./gradlew bootRun
```

Notification engine:

```shell
cd /Users/Shaik/intellij/eventpulse/notification-engine
./gradlew bootRun
```

Push worker:

```shell
cd /Users/Shaik/intellij/eventpulse
make worker
```

Register a push template:

```shell
curl -X POST http://localhost:8080/templates \
  -H "Content-Type: application/json" \
  -d '{
    "eventType": "ORDER_CONFIRMED",
    "channels": ["PUSH"],
    "push": {
      "title": "Order {{orderId}} confirmed",
      "body": "Hi {{name}}, your order is confirmed."
    }
  }'
```

Use a real FCM token if you want to test delivery to a device:
```shell
curl -X POST http://localhost:8080/devices \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "user-123",
    "deviceToken": "YOUR_REAL_FCM_DEVICE_TOKEN"
  }'
```

Send a test event:

```shell
curl -X POST http://localhost:8080/events \
  -H "Content-Type: application/json" \
  -H "X-Correlation-Id: test-flow-1" \
  -d '{
    "eventId": "evt-1001",
    "eventType": "ORDER_CONFIRMED",
    "userId": "user-123",
    "payload": {
      "orderId": "ORD-9001",
      "name": "Shaik"
    }
  }'
```

Expected behavior:
- `event-service` publishes to Kafka
- `notification-engine` persists history and creates a push job
- `push-worker` consumes the push job and sends to FCM
- `notification-engine` updates delivery status
List Kafka topics:

```shell
make kafka-topics
```

Open Redis CLI:

```shell
make redis-cli
```

Check notification status in Redis:

```shell
docker exec -it redis redis-cli HGETALL notification:evt-1001
```

Check device tokens:

```shell
docker exec -it redis redis-cli SMEMBERS devices:user-123
```

Check Postgres notification history:

```shell
docker exec -it postgres psql -U postgres -d eventpulse \
  -c "select notification_id, event_id, user_id, status, sent_count, failed_count from notification_history order by id desc limit 20;"
```

Check push-worker consumer lag:
```shell
docker exec -it kafka kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --describe \
  --group push-worker
```

Prometheus is configured in `infra/prometheus.yml`.
Useful endpoints:
- http://localhost:8082/actuator/prometheus
- http://localhost:8083/actuator/prometheus
Grafana runs on http://localhost:3000.
A ready-to-import dashboard is provided:
infra/grafana-dashboard-eventpulse.json
Useful metrics:
- `events_processed_total`
- `events_dlq_total`
- `events_rate_limited_total`
- `events_duplicates_total`
- `notifications_sent_total`
- `notifications_failed_total`
- `notifications_pending_total`
- `notification_processing_latency_ms_seconds`
- `notification_e2e_latency_ms_seconds`
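To spot-check these counters without Grafana, you can filter a scrape of the actuator endpoint. The payload below is inlined sample text for illustration; in practice pipe `curl -s http://localhost:8082/actuator/prometheus` into the same grep.

```shell
# Filter EventPulse counters out of a Prometheus scrape payload.
# Replace the printf with a curl of the actuator endpoint for live values.
printf '%s\n' \
  'events_processed_total{app="notification-engine"} 42.0' \
  'jvm_threads_live_threads 31.0' \
  'notifications_sent_total{app="push-worker"} 40.0' \
  | grep -E '^(events_|notification)'
```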
For safe load testing:
- first test internal flow without a real device
- then test with a real FCM device token gradually
- use different `userId` values so the per-user rate limiter does not dominate the results
Example burst test:
```shell
for i in {1..20}; do
  curl -s -X POST http://localhost:8080/events \
    -H "Content-Type: application/json" \
    -d "{
      \"eventId\": \"evt-load-$i\",
      \"eventType\": \"ORDER_CONFIRMED\",
      \"userId\": \"load-user-$i\",
      \"payload\": {
        \"orderId\": \"ORD-$i\",
        \"name\": \"Shaik\"
      }
    }" > /dev/null &
done
wait
```

This repo already includes several production-hardening improvements:
- environment-based configuration instead of hardcoded infra values
- Flyway migrations for notification history
- stricter request validation
- push-worker logging improvements
- safer Firebase credential handling via `FIREBASE_CREDENTIALS_PATH`
- Grafana dashboard for throughput, failures, rate limits, duplicates, and latency
This project is not yet fully SaaS-ready. The main next step is multi-tenancy:
- tenant-aware APIs
- tenant/app-scoped Redis keys and Postgres rows
- per-app Firebase credential resolution from a secret manager
- tenant isolation in Kafka contracts, metrics, and history
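On the Redis side, tenant scoping is largely a key-naming discipline. A sketch of what scoped keys could look like; the `tenant:<id>:` prefix is a proposal, not current code:

```shell
# Proposed tenant-scoped key naming; compare with today's global keys
# (devices:user-123, notification:evt-1001).
tenant_device_key()       { echo "tenant:$1:devices:$2"; }
tenant_notification_key() { echo "tenant:$1:notification:$2"; }

tenant_device_key acme user-123          # tenant:acme:devices:user-123
tenant_notification_key acme evt-1001    # tenant:acme:notification:evt-1001
```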
- The project currently uses shared Kafka topics: `raw-events`, `push-notifications`, `notification-status`
- Redis stores fast-changing operational state
- Postgres stores notification delivery history
- Avro schemas live under `schemas/` and service-local `src/main/avro/`
Internal project / no license specified.