Releases: dvelasquez/astro-prometheus-integration
[email protected]
Minor Changes
- #179 a3e2896 Thanks @dvelasquez! - Add configurable histogram buckets for inbound and outbound metrics

Users can now customize histogram bucket boundaries for better performance and query optimization. The new `histogramBuckets` configuration option allows separate bucket configuration for inbound (`http_request_duration_seconds`, `http_server_duration_seconds`) and outbound (`http_response_duration_seconds`) metrics.

When not configured, the integration uses prom-client's default buckets of `[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]` seconds.

Example:

```js
prometheusNodeIntegration({
  histogramBuckets: {
    inbound: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10],
    outbound: [0.1, 0.5, 1, 2, 5, 10, 20, 50],
  },
});
```

This change removes the hardcoded buckets from outbound metrics and makes all histogram buckets configurable, allowing users to optimize for their application's typical latency ranges.
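For reference, a complete `astro.config.mjs` wiring might look like the sketch below. This is a minimal sketch, assuming the package's default export is the integration factory and that the `@astrojs/node` adapter is used; neither assumption comes from this release note.

```js
// astro.config.mjs — illustrative sketch, assuming the default export of
// astro-prometheus-node-integration is the integration factory.
import { defineConfig } from "astro/config";
import node from "@astrojs/node";
import prometheusNodeIntegration from "astro-prometheus-node-integration";

export default defineConfig({
  adapter: node({ mode: "standalone" }),
  integrations: [
    prometheusNodeIntegration({
      histogramBuckets: {
        // Coarser inbound buckets for an app whose typical latency sits under ~1s.
        inbound: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10],
        // Wider outbound buckets for slower upstream calls.
        outbound: [0.1, 0.5, 1, 2, 5, 10, 20, 50],
      },
    }),
  ],
});
```

Fewer, well-placed boundaries keep quantile queries accurate while limiting the number of bucket time series.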
[email protected]
Patch Changes
- 99d8c76 Thanks @dvelasquez! - Updated link to author site: https://d13z.dev
[email protected]
Patch Changes
- 99d8c76 Thanks @dvelasquez! - Updated link to author site: https://d13z.dev
[email protected]
Minor Changes
- 8156df8 Thanks @dvelasquez! - Add optional outbound HTTP metrics instrumentation. Enable it with `outboundRequests.enabled = true` in your Astro config to capture outgoing `fetch` calls (the app acting as an HTTP client) via Node's performance observer. When enabled, the integration exports:
  - `http_responses_total` (counter)
  - `http_response_duration_seconds` (histogram)
  - `http_response_error_total` (counter with `error_reason`)

All metrics share the `outboundRequests` label map, so you can customize `endpoint`/`app` labels or filter entries with `shouldObserve`.
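A minimal configuration sketch is shown below. Only `enabled` and `shouldObserve` are named in this release note; the `shouldObserve` signature and the rest of the snippet are assumptions for illustration.

```js
// astro.config.mjs — illustrative sketch; option shapes beyond `enabled` and
// `shouldObserve` are assumed, and the shouldObserve signature is hypothetical.
import { defineConfig } from "astro/config";
import prometheusNodeIntegration from "astro-prometheus-node-integration";

export default defineConfig({
  integrations: [
    prometheusNodeIntegration({
      outboundRequests: {
        enabled: true,
        // Skip noisy or irrelevant upstream calls, e.g. health checks.
        shouldObserve: (entry) => !String(entry).includes("/healthz"),
      },
    }),
  ],
});
```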
[email protected]
Minor Changes
- #85 3f9ed72 Thanks @dvelasquez! - Add comprehensive OpenTelemetry integration for Astro applications

Features
- Automatic HTTP instrumentation: Tracks request count, duration, and server response metrics
- Multiple exporter support: Prometheus, OTLP (HTTP/gRPC/Proto), and console exporters
- Distributed tracing: Full request tracing with automatic span creation
- Zero-config defaults: Works out of the box with sensible defaults
- Flexible configuration: Environment variables and programmatic configuration
- Auto-instrumentation: Automatic Node.js instrumentation for comprehensive observability
Installation
```sh
pnpm astro add astro-opentelemetry-integration @astrojs/node
```
Quick Start
```js
import { defineConfig } from "astro/config";
import node from "@astrojs/node";
import otelIntegration from "astro-opentelemetry-integration";

export default defineConfig({
  integrations: [otelIntegration()],
  adapter: node({ mode: "standalone" }),
});
```
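Exporter selection via environment variables is mentioned above; the sketch below lists the standard OpenTelemetry SDK variables, on the assumption that the integration's auto-instrumentation honors them as the upstream OTel Node SDK does. In practice you would set them in the shell or a `.env` file before starting the server.

```js
// Illustrative only — standard OpenTelemetry SDK environment variables
// (assumption: the integration defers to the OTel Node SDK defaults for these).
// Shown via process.env for illustration; normally exported in the shell.
process.env.OTEL_SERVICE_NAME = "my-astro-app";             // resource service.name
process.env.OTEL_TRACES_EXPORTER = "otlp";                  // or "console"
process.env.OTEL_EXPORTER_OTLP_PROTOCOL = "http/protobuf";  // or "grpc"
process.env.OTEL_EXPORTER_OTLP_ENDPOINT = "http://localhost:4318";
```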
Requirements
- Node.js >= 22.0.0
- Astro ^5.0.0
- @astrojs/node adapter (serverless adapters not supported)
This integration provides production-ready observability for Astro applications with minimal setup.
[email protected]
Minor Changes
- #72 fb29001 Thanks @dvelasquez! - feat: Add experimental optimized TTLB measurement for high-concurrency applications

This release introduces an experimental feature that lets users choose between two methods for measuring Time To Last Byte (TTLB) in streaming responses:
✨ New Features
- Experimental flag: `experimental.useOptimizedTTLBMeasurement` in the integration config
- Two TTLB measurement methods:
  - Legacy (default): stream wrapping for maximum accuracy but higher CPU usage
  - Optimized: async timing with `setImmediate()` for millisecond accuracy with minimal CPU overhead
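To make the trade-off concrete, here is a conceptual sketch of the optimized approach; `sendBody` and `observe` are hypothetical placeholders and this is not the integration's actual code.

```js
// Conceptual sketch only — not the integration's implementation.
import { performance } from "node:perf_hooks";

// Optimized approach: time the full streamed response, then defer the histogram
// observation to the next event-loop turn with setImmediate() so no extra work
// happens per chunk on the hot path (millisecond-level accuracy is enough here).
async function measureTTLBOptimized(sendBody, observe) {
  const start = performance.now();
  await sendBody(); // stream the response body to the client
  setImmediate(() => observe((performance.now() - start) / 1000)); // seconds
}
```

The legacy method instead wraps the stream itself to timestamp the final chunk, which is more precise but adds per-chunk overhead.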
🔧 Configuration
```js
prometheusNodeIntegration({
  experimental: {
    useOptimizedTTLBMeasurement: true, // Enable optimized method
  },
}),
```
📊 Performance Impact
| Method    | Accuracy    | CPU Usage | Memory  | Use Case                  |
| --------- | ----------- | --------- | ------- | ------------------------- |
| Legacy    | Nanosecond  | Higher    | Higher  | Maximum accuracy required |
| Optimized | Millisecond | Minimal   | Minimal | High-concurrency apps     |

🎯 Use Cases
- Set to `true`: high-concurrency applications, microservices, resource-constrained environments
- Set to `false`: when maximum timing accuracy is critical
⚠️ Important Notes
- Experimental feature: may change in future releases
- Backward compatible: defaults to the legacy method
- Production ready: both methods are thoroughly tested
🔍 Technical Improvements
- Extracted TTLB measurement logic to `timing-utils.ts`
- Added robust path fallback logic for an undefined `routePattern`
- Fixed operator precedence issues with `??` and `||`
- Maintains existing functionality while adding new capabilities
This feature addresses the need for efficient TTLB measurement in high-concurrency Node.js applications while maintaining the option for maximum accuracy when needed.
[email protected]
Major Changes
- #70 56321f7 Thanks @dvelasquez! - 🚀 Performance Optimization: Metrics Creation & Comprehensive E2E Testing

🎯 Breaking Changes
This is a major breaking change that optimizes the creation of metrics to avoid performance problems. The current implementation was calling expensive operations on every HTTP request, causing significant performance bottlenecks.
⚠️ Breaking Changes
- Replaced the `findMetrics()` function with `initializeMetricsCache()`
- Changed the middleware initialization pattern: metrics are now cached per registry (see the sketch after this list)
- Updated the `createPrometheusMiddleware` function signature and behavior
- Modified the `onRequest` middleware to use cached metrics instead of computing them on each request
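The caching idea can be illustrated with a minimal sketch; apart from `initializeMetricsCache`, the names and metric definitions below are illustrative placeholders, not the package's actual source.

```js
// Illustrative sketch of per-registry metric caching — not the real implementation.
// Metrics are created once per prom-client Registry and reused on every request,
// instead of being looked up or re-created in the request hot path.
import { Histogram } from "prom-client";

const metricsCache = new Map(); // Registry -> cached metric instances

function initializeMetricsCache(registry) {
  if (!metricsCache.has(registry)) {
    metricsCache.set(registry, {
      requestDuration: new Histogram({
        name: "http_request_duration_seconds",
        help: "HTTP request duration in seconds",
        labelNames: ["method", "status"],
        registers: [registry],
      }),
    });
  }
  return metricsCache.get(registry);
}
```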
🔧 Performance Improvements
- 94.8% faster request processing in high-traffic scenarios
- Constant performance regardless of request volume
- Reduced resource usage and lower response times
- Eliminated per-request metric computation overhead
🧪 New Features
- Comprehensive E2E testing suite with Playwright
- Automated CI/CD pipeline with linting, building, and testing
- Cross-browser testing support
- Performance benchmarking and optimization verification
📚 Migration Guide
If you're upgrading from v0.3.x, you may need to update your integration usage:
```js
// Before (v0.3.x)
const integration = prometheusIntegration({
  // your config
});

// After (v1.0.0)
// The integration now automatically optimizes metric creation
// No changes needed in your configuration
```
🎉 Benefits
- Immediate Performance Gains - Significant improvement in request processing
- Scalability - Performance remains constant under high load
- Reliability - Comprehensive E2E testing ensures functionality
- Maintainability - Cleaner, more efficient codebase
This optimization is critical for high-traffic applications and provides immediate performance improvements while establishing a robust testing foundation for future development.
[email protected]
Patch Changes
- e132a8a - Small bump to trigger release.
[email protected]
Patch Changes
- 6eb6c74 - Added `prerender: false` to the metrics endpoint so it is always rendered dynamically and never prerendered.
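For reference, the sketch below shows the standard Astro per-route flag this fix relies on; the file path and handler body are illustrative, not the integration's actual endpoint.

```js
// src/pages/metrics.js (illustrative path, not the integration's real file)
// Exporting `prerender = false` keeps the route dynamic, so the metrics payload
// is generated on every request instead of being frozen at build time.
export const prerender = false;

export async function GET() {
  // Hypothetical handler; the real endpoint serves the prom-client registry output.
  return new Response("# metrics generated per request\n", {
    headers: { "Content-Type": "text/plain" },
  });
}
```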
[email protected]
Patch Changes
- #22 a088f4d Thanks @dvelasquez! - Fix Prometheus metrics not recording 500 server errors
  - The middleware now properly records metrics for requests that result in unhandled exceptions (HTTP 500 errors).
  - Added a try/catch around the request handler to ensure that error responses increment the appropriate counters and histograms.
  - Improved the streaming response handler to also record 500 errors if a streaming failure occurs.
  - Added a unit test to verify that metrics are correctly recorded for 500 error cases.
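A minimal sketch of the pattern described above, assuming an Astro-style middleware and prom-client metrics; `createErrorAwareMiddleware` and `record` are hypothetical names, not the integration's actual API.

```js
// Conceptual sketch of the try/catch fix — not the integration's actual code.
// `metrics` is assumed to expose a prom-client Counter and Histogram.
export function createErrorAwareMiddleware(metrics) {
  return async function onRequest(context, next) {
    const start = process.hrtime.bigint();
    try {
      const response = await next();
      record(metrics, context, response.status, start);
      return response;
    } catch (error) {
      // Unhandled exceptions surface as HTTP 500s; record them before rethrowing.
      record(metrics, context, 500, start);
      throw error;
    }
  };
}

function record(metrics, context, status, start) {
  const seconds = Number(process.hrtime.bigint() - start) / 1e9;
  const labels = { method: context.request.method, status: String(status) };
  metrics.requestsTotal.inc(labels);
  metrics.requestDuration.observe(labels, seconds);
}
```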