A high-performance blockchain indexing service built in Rust for the Arch Network, featuring both monolithic and microservices architectures. This service provides real-time blockchain data processing, RESTful APIs, and a modern web dashboard.
This project supports two deployment approaches:

Monolith:
- Single Rust binary with integrated API server and indexer
- Simpler deployment and configuration
- All-in-one solution for smaller deployments
- Port: 8081 (configurable)

Microservices:
- Separated services for independent scaling
- Frontend: React/Next.js dashboard (Port 3000)
- API Server: Rust/Axum REST API (Port 3001)
- Indexer: Background blockchain processor
- Better for production and high-traffic scenarios
- Real-time blockchain indexing with WebSocket support
- RESTful API powered by Axum
- PostgreSQL database integration with SQLx
- Redis caching layer for performance
- Prometheus metrics export
- Async runtime with Tokio
- Configuration via YAML
- Comprehensive error handling
- Thread-safe concurrent operations with DashMap
- Modern web dashboard with real-time updates
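The service's concurrent state is handled with DashMap. As a stdlib-only illustration of the same thread-safe shared-map pattern, the sketch below uses `RwLock<HashMap>` instead; the `BlockCache` type and its fields are hypothetical and not taken from the codebase.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Hypothetical in-memory cache of block heights by hash, mirroring the
// DashMap-based pattern the service uses (sketched here with std only).
#[derive(Clone, Default)]
struct BlockCache {
    inner: Arc<RwLock<HashMap<String, u64>>>,
}

impl BlockCache {
    fn insert(&self, hash: &str, height: u64) {
        self.inner.write().unwrap().insert(hash.to_string(), height);
    }
    fn get(&self, hash: &str) -> Option<u64> {
        self.inner.read().unwrap().get(hash).copied()
    }
}

fn main() {
    let cache = BlockCache::default();
    // Several writers insert concurrently without data races.
    let handles: Vec<_> = (0u64..4)
        .map(|i| {
            let c = cache.clone();
            thread::spawn(move || c.insert(&format!("hash-{i}"), i))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(cache.get("hash-2"), Some(2));
}
```
DashMap avoids the single global lock this sketch takes, which is why the service prefers it under concurrent indexing load.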
- Rust (latest stable version)
- PostgreSQL (13 or higher)
- Redis server
- Docker (optional, for containerized deployment)
- Clone the repository:
  git clone https://github.com/yourusername/arch-indexer.git
  cd arch-indexer
- Set up environment:
  # Copy example config
  cp config.example.yml config.yml
  # Set environment variables
  export DB_PASSWORD=your_secure_password
  export ARCH_NODE_URL=http://your-arch-node:8081
  export ARCH_NODE_WEBSOCKET_URL=ws://your-arch-node:10081
- Start with Docker Compose:
  docker-compose up -d
- Access the service:
  - API: http://localhost:9090
  - Database: localhost:5432
  - Redis: localhost:6379
- Navigate to the microservices directory:
  cd arch-indexer-microservices
- Start all services:
  docker-compose up -d
- Access services:
  - Frontend: http://localhost:3000
  - API: http://localhost:3001
  - Database: localhost:5432
  - Redis: localhost:6379
# Install dependencies
cargo build
# Run the service
cargo run
# Run tests
cargo test
# Format code
cargo fmt
# Lint code
cargo clippy
# Frontend
cd arch-indexer-microservices/frontend
npm install
npm run dev
# API Server
cd arch-indexer-microservices/api-server
cargo run
# Indexer
cd arch-indexer-microservices/indexer
cargo run
- GET / - Health check
- GET /api/blocks - List blocks with pagination
- GET /api/blocks/{blockhash} - Get block by hash
- GET /api/blocks/height/{height} - Get block by height
- GET /api/transactions - List transactions
- GET /api/transactions/{txid} - Get transaction by ID
- GET /api/network-stats - Network statistics
- GET /api/sync-status - Sync status
- GET /metrics - Prometheus metrics

WebSocket:
- ws://localhost:8081/ws - Real-time blockchain updates
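The blocks endpoint is paginated, but this README does not spell out the query parameters, so the `page`/`limit` names below are assumptions for illustration. The sketch shows the offset arithmetic such a handler typically performs.

```rust
// Hypothetical pagination helper for GET /api/blocks. The parameter names
// (`page`, 1-based, and `limit`) are assumptions, not confirmed API fields.
fn page_to_offset(page: u64, limit: u64) -> u64 {
    page.saturating_sub(1) * limit
}

fn main() {
    // Page 3 with 50 blocks per page starts at offset 100.
    assert_eq!(page_to_offset(3, 50), 100);
    // Page 1 (and the degenerate page 0) start at offset 0.
    assert_eq!(page_to_offset(1, 50), 0);
    assert_eq!(page_to_offset(0, 50), 0);

    // A query built from these values might look like:
    let (page, limit) = (2, 25);
    let sql = format!(
        "SELECT * FROM blocks ORDER BY height DESC LIMIT {limit} OFFSET {}",
        page_to_offset(page, limit)
    );
    assert!(sql.contains("OFFSET 25"));
}
```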
# Option 1: Using the binary
cargo run --bin init_schema
# Option 2: Using SQLx migrations
sqlx migrate run
database:
url: "postgresql://username:password@localhost:5432/archindexer"
max_connections: 20
min_connections: 5
timeout_seconds: 30
# Database
DATABASE_URL=postgresql://username:password@localhost:5432/archindexer
DB_PASSWORD=your_secure_password
# Arch Network
ARCH_NODE_URL=http://your-arch-node:8081
ARCH_NODE_WEBSOCKET_URL=ws://your-arch-node:10081
# Redis
REDIS_URL=redis://localhost:6379
# Application
RUST_LOG=info
APPLICATION__PORT=8081
APPLICATION__HOST=0.0.0.0
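The double-underscore names (APPLICATION__PORT, APPLICATION__HOST) follow the common Rust `config` crate convention of nesting environment variables into configuration sections. Assuming that convention applies here, a variable name maps onto a (section, key) pair like this:

```rust
// Split a config-crate-style env var name such as "APPLICATION__PORT"
// into a lowercase (section, key) pair; names without "__" are not
// treated as nested configuration.
fn split_env_key(name: &str) -> Option<(String, String)> {
    let mut parts = name.splitn(2, "__");
    let section = parts.next()?.to_lowercase();
    let key = parts.next()?.to_lowercase();
    Some((section, key))
}

fn main() {
    let (section, key) = split_env_key("APPLICATION__PORT").unwrap();
    assert_eq!((section.as_str(), key.as_str()), ("application", "port"));
    // Plain variables like RUST_LOG carry no section.
    assert_eq!(split_env_key("RUST_LOG"), None);
}
```
This is why APPLICATION__PORT in the environment overrides `application.port` in config.yml.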
database:
username: "postgres"
password: "your_password"
host: "localhost"
port: 5432
database_name: "archindexer"
max_connections: 20
min_connections: 5
application:
host: "0.0.0.0"
port: 8081
arch_node:
url: "http://your-arch-node:8081"
websocket_url: "ws://your-arch-node:10081"
indexer:
batch_size: 100
concurrent_batches: 5
websocket:
enabled: true
reconnect_interval_seconds: 5
max_reconnect_attempts: 10
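The indexer settings above imply that each indexing round covers batch_size × concurrent_batches blocks. The sketch below shows one plausible way to turn those two numbers into work units; the function name and exact batching scheme are assumptions for illustration, not the service's actual code.

```rust
// With batch_size 100 and concurrent_batches 5, one round covers 500
// blocks split into 5 inclusive height ranges processed in parallel.
fn plan_batches(start_height: u64, batch_size: u64, concurrent: u64) -> Vec<(u64, u64)> {
    (0..concurrent)
        .map(|i| {
            let lo = start_height + i * batch_size;
            (lo, lo + batch_size - 1)
        })
        .collect()
}

fn main() {
    let batches = plan_batches(1_000, 100, 5);
    assert_eq!(batches.len(), 5);
    assert_eq!(batches[0], (1_000, 1_099));
    assert_eq!(batches[4], (1_400, 1_499));
}
```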
# Build and run
docker-compose up -d
# View logs
docker-compose logs -f indexer
# Scale (if needed)
docker-compose up -d --scale indexer=2
# Start all services
cd arch-indexer-microservices
docker-compose up -d
# Scale individual services
docker-compose up -d --scale api-server=3 --scale frontend=2
- Service health: GET / endpoint
- Database connectivity: Built-in health checks
- Redis connectivity: Health check endpoints
- Prometheus metrics: GET /metrics
- System metrics: CPU, memory, disk usage
- Application metrics: Request latencies, sync status
- Database metrics: Connection pool stats
- Structured logging with tracing
- Configurable log levels via RUST_LOG
- Docker log aggregation support
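Prometheus scrapes GET /metrics in its plain-text exposition format. The sketch below renders one counter in that format; the metric name is a hypothetical example, not necessarily one the service actually exports.

```rust
// Render a single counter in the Prometheus text exposition format:
// a HELP line, a TYPE line, then the sample itself.
fn render_counter(name: &str, help: &str, value: u64) -> String {
    format!("# HELP {name} {help}\n# TYPE {name} counter\n{name} {value}\n")
}

fn main() {
    let out = render_counter("indexer_blocks_total", "Total blocks indexed", 42);
    assert!(out.contains("# TYPE indexer_blocks_total counter"));
    assert!(out.ends_with("indexer_blocks_total 42\n"));
}
```
In practice the service would use a metrics library rather than hand-formatting, but the scraped output looks like this.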
- Database Connection Failed
  # Check PostgreSQL status
  docker-compose logs postgres
  # Verify connection string
  echo $DATABASE_URL
- Indexer Not Syncing
  # Check Arch Network connectivity
  curl $ARCH_NODE_URL/health
  # View indexer logs
  docker-compose logs -f indexer
- WebSocket Connection Issues
  # Test WebSocket endpoint
  wscat -c ws://localhost:8081/ws
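When the WebSocket drops, the indexer retries according to the configuration reference above (a fixed interval of 5 seconds, up to 10 attempts by default). That policy implies a reconnect schedule like this sketch; the real retry logic may differ.

```rust
use std::time::Duration;

// Expand a fixed-interval reconnect policy into the list of delays the
// indexer would wait between attempts before giving up.
fn reconnect_delays(interval_seconds: u64, max_attempts: u32) -> Vec<Duration> {
    (0..max_attempts)
        .map(|_| Duration::from_secs(interval_seconds))
        .collect()
}

fn main() {
    // Defaults from the configuration reference: 5s interval, 10 attempts.
    let delays = reconnect_delays(5, 10);
    assert_eq!(delays.len(), 10);
    assert!(delays.iter().all(|d| *d == Duration::from_secs(5)));
}
```
If all attempts are exhausted, check node connectivity with wscat as shown above before restarting the indexer.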
# Enable debug logging
RUST_LOG=debug docker-compose up indexer
# Check specific service logs
docker-compose logs -f api-server
Migrating from monolith to microservices:
- Stop the monolith: docker-compose down
- Start the microservices: cd arch-indexer-microservices && docker-compose up -d
- Update the frontend config to point to the new API server
- Verify data consistency

Rolling back to the monolith:
- Stop the microservices: docker-compose down
- Start the monolith: cd .. && docker-compose up -d
- Update the frontend config to point to the monolith API
- Verify functionality
Choose the monolith for:
- Development/testing environments
- Small to medium deployments
- Simple infrastructure requirements
- Quick setup

Choose microservices for:
- Production deployments
- High-traffic scenarios
- Independent scaling needs
- Team development with different technologies
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Follow Rust conventions and best practices
- Write tests for new functionality
- Update documentation for API changes
- Use conventional commits for commit messages
- Microservices README - Detailed microservices documentation
- Real-time Indexing Guide - WebSocket and real-time sync details
- Deployment Guide - Cloud deployment instructions
- API Documentation - Complete API reference
[Your License Here]
Happy Indexing! 🚀
Built with ❤️ using Rust, Axum, and modern web technologies