A generic, automatic log ingestion tool for JFrog support bundles using Elasticsearch and Kibana. This tool automatically discovers services, parses various log formats, and indexes data for further analysis inside your local ELK stack.
This README also includes an MCP server configuration that lets you query and analyze the data stored in the stack for debugging and analysis.
- Automatic Service Discovery: Discovers all microservices without manual configuration
- Generic Log Parsing: Handles multiple log formats automatically
- Snake Case Conversion: Automatically normalizes field names
- Dynamic Elasticsearch Mappings: No predefined schemas needed
- Docker ELK Stack: Easy setup with Docker Compose
- Real-time Analysis: Data available immediately in Kibana
- Node.js 16+
- Docker and Docker Compose
- 4GB+ RAM (for Elasticsearch)
- Clone or download the project

```bash
git clone <repository-url>
cd support-bundle-analyzer
```

- Install dependencies

```bash
npm install
```

- Start Elasticsearch and Kibana

```bash
npm run docker:up
```

- Wait for services to be ready (about 30-60 seconds)
```bash
# Check Elasticsearch
curl http://localhost:9200

# Check Kibana
curl http://localhost:5601
```

```bash
# Analyze support bundle
npm start -- --bundle-path ./newlogs3

# Or with custom Elasticsearch URL
npm start -- --bundle-path ./newlogs3 --elasticsearch-url http://localhost:9200

# Verbose logging
npm start -- --bundle-path ./newlogs3 --verbose

# Dry run (no indexing)
npm start -- --bundle-path ./newlogs3 --dry-run

# Help
npm start -- --help
```

- `--bundle-path, -p`: Path to support bundle directory (required)
- `--elasticsearch-url, -e`: Elasticsearch URL (default: `http://localhost:9200`)
- `--verbose, -v`: Enable verbose logging
- `--dry-run, -d`: Perform dry run without indexing
- `--help, -h`: Show help
After running the analyzer:

1. Open Kibana: http://localhost:5601

2. Create Index Patterns:
   - Go to Stack Management → Index Patterns
   - Create patterns for:
     - `support-bundle-*-requests`
     - `support-bundle-*-service`
     - `support-bundle-*-audit`
     - `support-bundle-*-access`
     - `support-bundle-*-system`
     - `support-bundle-*-thread_dumps`
     - `support-bundle-*-manifests`

3. Explore Data:
   - Go to Discover to explore your data
   - Use filters to narrow down by service, log type, etc.
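At any point you can list the indices the analyzer created (assuming the default `support-bundle-` naming shown above):

```bash
# Show the analyzer's indices with document counts and sizes
curl "http://localhost:9200/_cat/indices/support-bundle-*?v"
```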
Support Bundle Files → Node.js Parser → Elasticsearch → Kibana
- Analyzer: Auto-discovers services and processes logs
- Log Parser: Generic parser for multiple log formats
- Elasticsearch Client: Handles indexing with dynamic mappings
- Utils: Field normalization and data sanitization
- Request Logs: API requests with timestamps, methods, paths, status codes
- Service Logs: Application logs with levels, classes, threads
- Audit Logs: Security events, token management
- Access Logs: Authentication and authorization events
- System Info: JVM metrics, host information
- Thread Dumps: Performance analysis data
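For instance, a single request-log line could be indexed as a document shaped roughly like this (illustrative values only, shown after snake_case normalization):

```json
{
  "timestamp": "2025-08-14T12:47:01.123Z",
  "method": "GET",
  "path": "/api/v1/repositories",
  "status_code": 200,
  "client_ip": "10.0.0.12",
  "user_agent": "JFrog-CLI/2.x",
  "service": "artifactory",
  "log_type": "requests"
}
```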
The src directory contains the core logic of the application. These scripts are internal modules used by the main entry point (index.js). You typically won't run these directly, but understanding them is useful for debugging or extending functionality.
- What it does: The main engine of the application (`src/analyzer.js`). It orchestrates the entire analysis process:
- Discovers services within the support bundle.
- Iterates through logs, system info, and thread dumps.
- Uses `LogParser` to parse data and `ElasticsearchClient` to index it.
- When to use:
- Debugging: If services are not being discovered correctly or if the overall flow fails.
- Extending: If you need to add support for a new type of data folder (e.g., besides `logs` or `system`).
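In outline, the orchestration looks roughly like this (a simplified sketch; the helper names are illustrative, not the actual API of `src/analyzer.js`):

```javascript
// Simplified control flow: discover services, walk their log files,
// parse each file into documents, and hand the documents to Elasticsearch.
async function analyzeBundle(bundlePath) {
  const services = await discoverServices(bundlePath); // scan service folders
  for (const service of services) {
    for (const file of await findLogFiles(service)) {
      const docs = logParser.parseFile(file); // raw lines -> JSON documents
      await esClient.bulkIndex(docs);         // batched indexing
    }
  }
}
```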
- What it does: Contains the logic (`src/log-parser.js`) for parsing raw log lines into structured JSON objects. It supports multiple formats (requests, service, audit, etc.) and attempts to auto-detect the format.
- When to use:
- Debugging: If specific log lines are failing to parse or are being parsed incorrectly.
- Extending: If you encounter a new log format that isn't currently supported. You would add a new parser method here.
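One common way to implement such auto-detection, shown here purely as a hypothetical sketch (the real parser's structure may differ):

```javascript
// Try each registered parser's line pattern in turn and fall back to
// treating the line as unstructured text if nothing matches.
function detectFormat(line, parsers) {
  for (const [name, parser] of Object.entries(parsers)) {
    if (parser.pattern.test(line)) return name;
  }
  return 'raw';
}
```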
- What it does: Manages the connection to Elasticsearch (the `ElasticsearchClient` module). It handles:
- Creating index templates and lifecycle policies.
- Bulk indexing documents for performance.
- Managing index creation and deletion.
- When to use:
- Debugging: If there are connection issues or indexing errors.
- Extending: If you need to change index settings, mappings, or retention policies.
- What it does: Provides helper functions (`src/utils.js`) used across the application, such as:
- String manipulation (snake_case conversion).
- Timestamp parsing and normalization.
- Value sanitization.
- When to use:
- Debugging: If field names are being malformed or timestamps are incorrect.
- Extending: If you need new common utility functions for data processing.
The tool creates dynamic index templates with:
- Automatic field mapping
- Data retention policies (30 days)
- Optimized settings for log data
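For reference, creating a comparable lifecycle policy and dynamic template by hand might look like this with the v8 `@elastic/elasticsearch` client (the policy and template names here are illustrative, not necessarily the ones the tool uses):

```javascript
const { Client } = require('@elastic/elasticsearch');

const client = new Client({ node: 'http://localhost:9200' });

async function setupIndexing() {
  // Retention: delete indices 30 days after creation.
  await client.ilm.putLifecycle({
    name: 'support-bundle-retention',
    policy: {
      phases: {
        delete: { min_age: '30d', actions: { delete: {} } },
      },
    },
  });

  // Dynamic template: let Elasticsearch infer field types as documents arrive.
  await client.indices.putIndexTemplate({
    name: 'support-bundle-template',
    index_patterns: ['support-bundle-*'],
    template: {
      settings: { 'index.lifecycle.name': 'support-bundle-retention' },
      mappings: {
        dynamic: true,
        properties: { timestamp: { type: 'date' } },
      },
    },
  });
}
```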
All field names are automatically converted to snake_case:
- `requestId` → `request_id`
- `clientIP` → `client_ip`
- `userAgent` → `user_agent`
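A minimal sketch of the conversion (the real implementation in `src/utils.js` may handle more edge cases):

```javascript
// Insert an underscore at each lower-to-upper case boundary, then lowercase.
function toSnakeCase(name) {
  return name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();
}

toSnakeCase('requestId'); // 'request_id'
toSnakeCase('clientIP');  // 'client_ip'
```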
- Bulk Indexing: Documents are indexed in batches for efficiency
- Dynamic Mappings: No predefined schemas required
- Memory Efficient: Processes files incrementally
- Error Handling: Continues processing even if individual files fail
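A minimal sketch of this approach using the official client's bulk helper, with a placeholder `parseLine()` standing in for the project's parser:

```javascript
const fs = require('node:fs');
const readline = require('node:readline');
const { Client } = require('@elastic/elasticsearch');

// Placeholder for the project's real log parser.
const parseLine = (line) => (line.trim() ? { message: line } : null);

// Stream the log file line by line instead of loading it whole.
async function* parsedDocs(path) {
  const lines = readline.createInterface({ input: fs.createReadStream(path) });
  for await (const line of lines) {
    const doc = parseLine(line);
    if (doc) yield doc; // skip unparseable lines instead of aborting
  }
}

// The bulk helper batches documents and retries transparently.
async function indexLogFile(client, path, index) {
  return client.helpers.bulk({
    datasource: parsedDocs(path),
    onDocument: () => ({ index: { _index: index } }),
    onDrop: (d) => console.warn('dropped document:', d.error), // keep going on failures
  });
}
```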
```bash
# Start services
npm run docker:up

# Stop services
npm run docker:down

# View logs
npm run docker:logs

# Check service status
docker-compose ps
```

Add the MCP server to your favourite IDE:
```json
{
"mcpServers": {
"elasticsearch-mcp-server": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"ES_URL",
"-e",
"ES_API_KEY",
"docker.elastic.co/mcp/elasticsearch",
"stdio"
],
"env": {
"ES_URL": "http://localhost:9200",
"ES_API_KEY": ""
}
}
}
}
```
1. Elasticsearch Connection Failed

```bash
# Check if Elasticsearch is running
curl http://localhost:9200

# Restart services
npm run docker:down
npm run docker:up
```

2. No Data in Kibana
   - Wait for indexing to complete
   - Refresh indices: `curl -X POST http://localhost:9200/_refresh`
   - Check index patterns in Kibana

3. Memory Issues
   - Increase Docker memory limit
   - Reduce Elasticsearch heap size in docker-compose.yml (see the sketch below)
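For the heap size, a fragment along these lines works in docker-compose.yml (the service name and heap value are assumptions; adjust them to match your file):

```yaml
# Cap the Elasticsearch JVM heap via the standard ES_JAVA_OPTS variable.
services:
  elasticsearch:
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"  # lower these if RAM is tight
```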
```bash
# Enable verbose logging
npm start -- --bundle-path ./newlogs3 --verbose

# Check Elasticsearch logs
docker-compose logs elasticsearch

# Check Kibana logs
docker-compose logs kibana
```

Example output:

```text
JFrog Support Bundle Analyzer
=====================================
Bundle path: ./newlogs3
Elasticsearch: http://localhost:9200
Verbose mode: OFF
Dry run: OFF
Bundle path validation passed
Connected to Elasticsearch
Dynamic index template created
Index lifecycle policy created
Processing bundle: gateway-1755178820823
Discovered 12 services: artifactory, access, event, evidence, frontend, jfconfig, jfconnect, metadata, observability, onemodelregistry, router, topology
Processing service: artifactory
Processing log: artifactory-request.log
Indexed 23262 documents to requests
Processing log: artifactory-service.log
Indexed 8235 documents to service
...

Analysis Summary
==================
Duration: 45.32 seconds
Bundle ID: gateway-1755178820823
Services: 12
Log files: 47
Documents: 125,847
Errors: 0
Files processed: 47

Data is now available in Kibana at: http://localhost:5601
```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details
For issues and questions:
- Check the troubleshooting section
- Enable verbose logging for debugging
- Create an issue with logs and error details
The tool currently supports the following log types:
- Request Logs: API requests with timestamps, methods, paths, status codes
- Service Logs: Application logs with levels, classes, threads
- Audit Logs: Security events, token management
- Access Logs: Authentication and authorization events
- System Info: JVM metrics, host information
- Thread Dumps: Performance analysis data
- Decide on a name for the new type (e.g., `metrics`).
- Add a parser method in `src/log-parser.js` and register it in the `parsers` map (see the sketch below).
- Extend `detectLogType()` in `src/analyzer.js` to return the new type for matching files/lines.
- (Optional) Add helper functions to `src/utils.js` if needed.
- Update the README: add a bullet describing the new log type.
- Write tests for the new parser and run the tool to verify the new documents appear in Elasticsearch.

After these steps the analyzer will automatically discover, parse, and index the new log format alongside the existing ones.
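As an illustration of the second step, a hypothetical parser for the new `metrics` type might look like this (the actual `parsers` map and method signatures in `src/log-parser.js` may differ):

```javascript
// Parse a hypothetical metrics line such as:
//   "2025-08-14T12:47:01Z heap_used=512MB gc_pause=12ms"
// into { timestamp, heap_used, gc_pause }.
function parseMetricsLine(line) {
  const [timestamp, ...pairs] = line.trim().split(/\s+/);
  const doc = { timestamp };
  for (const pair of pairs) {
    const [key, value] = pair.split('=');
    if (key && value !== undefined) doc[key] = value;
  }
  return doc;
}

// Register the new type alongside the existing parsers.
const parsers = {
  // ...existing parsers (requests, service, audit, ...)
  metrics: parseMetricsLine,
};
```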