Scalable Microservice for Bangladesh Railway Ticketing System
In response to the overwhelming traffic during peak ticket sales periods, I developed a robust and scalable microservice architecture for the Bangladesh Railway ticketing system. This solution leverages cloud-based technologies and containerization to ensure high availability and performance, even under extreme load conditions.
This project uses Jest, Supertest, GitHub Actions, Docker, and Kubernetes for test automation, continuous integration, and deployment.
- Prerequisites
- Installation
- Usage
- Testing
- Continuous Integration and Deployment
- Frontend
- Database
- Continuous Monitoring
- Autoscaling with Kubernetes on Azure
- Contributing
- License
Before getting started, ensure you have the following installed:
- Node.js
- Docker
- Kubernetes (for deployment)
- Prisma (ORM for database management)
- RabbitMQ (for asynchronous communication)
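Since the services exchange messages asynchronously over RabbitMQ, the sketch below shows one way a service could publish and consume events. It assumes the amqplib client; the queue name, connection URL, and helper names are illustrative rather than taken from this codebase.

```javascript
// messaging.js - hypothetical helper; queue name and connection URL are illustrative
const amqp = require('amqplib');

const QUEUE = 'booking-events';
const RABBITMQ_URL = process.env.RABBITMQ_URL || 'amqp://localhost';

// Publish an event (e.g. a new booking) to the queue.
async function publishBooking(booking) {
  const connection = await amqp.connect(RABBITMQ_URL);
  const channel = await connection.createChannel();
  await channel.assertQueue(QUEUE, { durable: true });
  channel.sendToQueue(QUEUE, Buffer.from(JSON.stringify(booking)), { persistent: true });
  await channel.close();
  await connection.close();
}

// Consume events in another microservice and acknowledge after handling.
async function consumeBookings(handler) {
  const connection = await amqp.connect(RABBITMQ_URL);
  const channel = await connection.createChannel();
  await channel.assertQueue(QUEUE, { durable: true });
  channel.consume(QUEUE, async (msg) => {
    if (msg !== null) {
      await handler(JSON.parse(msg.content.toString()));
      channel.ack(msg);
    }
  });
}

module.exports = { publishBooking, consumeBookings };
```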
To install the project locally, follow these steps:
- Clone the repository:
git clone https://github.com/rwd51/bcf2024-microservice-devops-team-95152-buet21
cd bcf2024-microservice-devops-team-95152-buet21
- Install the dependencies:
npm install
To run the project locally, execute the following command for each microservice:
npm start
This will start each service and make it accessible at http://localhost:3001 through http://localhost:3005 (one port per microservice).
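The service code itself is not shown in this README; as a rough sketch, each microservice could be an Express application that reads its port from the environment, which is consistent with the ports listed above. The file name and route below are assumptions.

```javascript
// src/index.js - illustrative entry point for one microservice (Express is assumed)
const express = require('express');

const app = express();
app.use(express.json());

// Example health-check route; the real ticketing routes are project-specific.
app.get('/health', (req, res) => res.json({ status: 'ok' }));

const PORT = process.env.PORT || 3001; // 3001-3005 across the five services

// Only bind a port when run directly, so tests can import the app without starting a server.
if (require.main === module) {
  app.listen(PORT, () => console.log(`Service listening on http://localhost:${PORT}`));
}

module.exports = app;
```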
This project uses Jest as the testing framework and Supertest for making HTTP requests to test the API endpoints. The test code is located in the tests directory.
To run the tests, use the following command (for each microservice):
npm test
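As an illustration of the Jest and Supertest setup described above, a test in the tests directory might look like the following; the /health route and the path to the exported app are assumptions rather than code from this repository.

```javascript
// tests/health.test.js - illustrative test; route and import path are assumed
const request = require('supertest');
const app = require('../src/index'); // hypothetical path to the exported Express app

describe('GET /health', () => {
  it('responds with status ok', async () => {
    const response = await request(app).get('/health');
    expect(response.statusCode).toBe(200);
    expect(response.body).toEqual({ status: 'ok' });
  });
});
```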
This project utilizes GitHub Actions for automating the CI/CD pipeline. The workflow files are located in the .github/workflows directory.
The CI/CD pipeline includes the following steps:
- Unit and integration testing: Jest runs the unit and integration tests to verify the correctness of the code.
- Build and package: The application is built and packaged into a Docker image.
- Containerization: The Docker image is pushed to the container registry and deployed on the Kubernetes cluster (Azure Cloud).
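A workflow implementing these stages could look roughly like the sketch below. The file name, registry, image name, and secret names are placeholders, not the project's actual configuration.

```yaml
# .github/workflows/ci.yml - illustrative pipeline sketch
name: CI/CD
on:
  push:
    branches: [main]

jobs:
  test-build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm test                      # Jest unit and integration tests
      - uses: docker/login-action@v3       # registry credentials stored as repository secrets
        with:
          registry: your-registry
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - run: docker build -t your-registry/your-image-name:${{ github.sha }} .
      - run: docker push your-registry/your-image-name:${{ github.sha }}
```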
To deploy the application using Docker and Kubernetes, follow these steps:
- Build the Docker image:
docker build -t your-image-name .
- Push the Docker image to a container registry of your choice:
docker push your-registry/your-image-name:tag
- Deploy the application on Kubernetes:
kubectl apply -f deployment.yaml
Ensure that you have a valid deployment.yaml file with the necessary Kubernetes deployment configuration. Rough sketches of a Dockerfile and a deployment.yaml are shown below for reference.
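The sketches below cover a minimal Dockerfile for the build step and a deployment.yaml for the kubectl apply step; image names, ports, and resource names are illustrative and should be replaced with your own.

```dockerfile
# Dockerfile - illustrative; paths and port are assumptions
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3001
CMD ["npm", "start"]
```

```yaml
# deployment.yaml - illustrative Deployment and Service for one microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ticketing-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ticketing-service
  template:
    metadata:
      labels:
        app: ticketing-service
    spec:
      containers:
        - name: ticketing-service
          image: your-registry/your-image-name:tag
          ports:
            - containerPort: 3001
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials   # hypothetical Secret holding the PostgreSQL URL
                  key: database-url
---
apiVersion: v1
kind: Service
metadata:
  name: ticketing-service
spec:
  selector:
    app: ticketing-service
  ports:
    - port: 80
      targetPort: 3001
```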
The frontend of this project is built using MUI.
Here is the frontend repository.
This project uses PostgreSQL as the database. To set up the database, follow these steps:
- Install PostgreSQL on your local machine or use a hosted PostgreSQL service.
- Create a new database.
- Configure the database connection in your project's configuration file.
Then install and initialize Prisma:
npm init -y
npm install prisma --save-dev
npm install @prisma/client
npx prisma init
The command creates the following:
- prisma/schema.prisma: the Prisma schema file where you define your database models.
- .env: a file where you store your database connection URL.
- In your .env file, define your database connection string. For example, if you’re using PostgreSQL, it might look like this:
DATABASE_URL="postgresql://username:password@localhost:5432/dbname"
- Define Models in schema.prisma
In the schema.prisma file, define the models for your database. Here’s an example of how you could define a User and Post model:
// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql" // or "mysql" or "sqlite"
  url      = env("DATABASE_URL")
}

model User {
  id    Int    @id @default(autoincrement())
  name  String
  email String @unique
  posts Post[]
}

model Post {
  id        Int     @id @default(autoincrement())
  title     String
  content   String?
  published Boolean @default(false)
  authorId  Int
  author    User    @relation(fields: [authorId], references: [id])
}
The User model has fields for id, name, email, and a one-to-many relationship with Post. The Post model has fields for id, title, content, published, and a foreign key authorId that links to the User.
This is a sample schema; adapt the models to fit your own requirements.
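Once the schema is migrated (for example with npx prisma migrate dev), the generated client can be used from any service. The snippet below is a generic illustration of that pattern against the sample schema above, not code from this repository.

```javascript
// db-example.js - illustrative Prisma Client usage against the sample schema
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

async function main() {
  // Create a user together with a related post.
  const user = await prisma.user.create({
    data: {
      name: 'Alice',
      email: 'alice@example.com',
      posts: { create: [{ title: 'Hello', content: 'First post' }] },
    },
  });

  // Query all users with their posts included.
  const users = await prisma.user.findMany({ include: { posts: true } });
  console.log(user.id, users.length);
}

main()
  .catch((e) => console.error(e))
  .finally(() => prisma.$disconnect());
```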
This project utilizes Prometheus and Grafana for continuous monitoring. Prometheus is a monitoring and alerting toolkit, while Grafana is a visualization tool for analyzing and monitoring metrics.
To set up continuous monitoring, follow these steps:
- Install and configure Prometheus to scrape and store metrics.
- Set up Grafana and configure it to connect to Prometheus as a data source.
- Import predefined dashboards or create custom dashboards in Grafana to visualize the metrics collected by Prometheus.
By setting up continuous monitoring with Prometheus and Grafana, you can monitor the performance, health, and other metrics of your application in real-time.
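For Prometheus to scrape the Node.js services, each service needs to expose a metrics endpoint. One common approach, assumed here rather than prescribed by this project, is the prom-client package:

```javascript
// metrics-example.js - illustrative /metrics endpoint using prom-client (an assumed choice)
const express = require('express');
const client = require('prom-client');

const app = express();
const register = new client.Registry();
client.collectDefaultMetrics({ register }); // CPU, memory, event-loop lag, etc.

// Example custom counter; the metric name is illustrative.
const ticketsBooked = new client.Counter({
  name: 'tickets_booked_total',
  help: 'Total number of tickets booked',
  registers: [register],
});

app.post('/book', (req, res) => {
  ticketsBooked.inc();
  res.json({ booked: true });
});

// Endpoint scraped by Prometheus.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});

app.listen(3001);
```

Point Prometheus at this endpoint by adding the service to the scrape_configs targets in prometheus.yml.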
In this project, we leverage Kubernetes for orchestration, Docker for containerization, and deploy our solutions on Azure Cloud. Kubernetes provides a powerful platform for managing containerized applications, while Docker simplifies the packaging and deployment of these applications.
To implement autoscaling in our Azure-based environment, follow these steps:
- Create an Azure Kubernetes Service (AKS) cluster: Set up an AKS cluster to facilitate easy deployment and management of your containerized applications.
- Configure autoscaling parameters: Define the minimum and maximum number of replicas, set target CPU or memory utilization thresholds, and establish scaling policies tailored to your application's needs (see the manifest sketch below).
- Deploy your application: Use Docker to containerize your application, deploy it onto the AKS cluster, and apply the autoscaling rules based on performance metrics relevant to your application.
With these configurations in place, Kubernetes will dynamically adjust the number of replicas based on the defined autoscaling rules, enabling your application to efficiently scale up or down in response to varying demand. This approach ensures optimal resource utilization and maintains application performance while minimizing costs.
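As a concrete example of the autoscaling configuration described above, a HorizontalPodAutoscaler manifest for one of the services might look like the sketch below; the deployment name, replica bounds, and CPU threshold are placeholders.

```yaml
# hpa.yaml - illustrative HorizontalPodAutoscaler for one microservice
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ticketing-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ticketing-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Apply it with kubectl apply -f hpa.yaml; Kubernetes then adjusts the replica count between the configured bounds as average CPU utilization crosses the target.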
| Name | Description |
| --- | --- |
|      | Frontend, Load Testing, Monitoring, Logging |
|      | System Design, DevOps Pipeline |
|      | Microservice Development |