
Releases: dvelasquez/astro-prometheus-integration

[email protected]

02 Dec 17:18
2a170b9


Minor Changes

  • #179 a3e2896 Thanks @dvelasquez! - Add configurable histogram buckets for inbound and outbound metrics

    Users can now customize histogram bucket boundaries for better performance and query optimization. The new histogramBuckets configuration option allows separate bucket configuration for inbound (http_request_duration_seconds, http_server_duration_seconds) and outbound (http_response_duration_seconds) metrics.

    When not configured, the integration uses prom-client's default buckets [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10] seconds.

    Example:

    prometheusNodeIntegration({
      histogramBuckets: {
        inbound: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10],
        outbound: [0.1, 0.5, 1, 2, 5, 10, 20, 50],
      },
    });

    This change removes the hardcoded buckets from outbound metrics and makes all histogram buckets configurable, allowing users to optimize for their application's typical latency ranges.

[email protected]

18 Nov 06:58
375d5f1


Patch Changes

[email protected]

18 Nov 06:58
375d5f1


Patch Changes

[email protected]

13 Nov 10:39
4f38fe4


Minor Changes

  • 8156df8 Thanks @dvelasquez! - Add optional outbound HTTP metrics instrumentation. Enable it with outboundRequests.enabled = true in your Astro config to capture outbound fetch calls made by the server, observed via Node's PerformanceObserver. When enabled, the integration exports:

    • http_responses_total (counter)
    • http_response_duration_seconds (histogram)
    • http_response_error_total (counter with an error_reason label)

    All metrics share the outboundRequests label map, so you can customize endpoint/app labels or filter entries with shouldObserve.
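
    For example, a minimal configuration sketch (only enabled is documented above; the shouldObserve callback in the comment is illustrative and its exact signature may differ):

    prometheusNodeIntegration({
      outboundRequests: {
        enabled: true,
        // Hypothetical filter: only observe calls to a specific host.
        // shouldObserve: (entry) => entry.name.includes("api.example.com"),
      },
    });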

[email protected]

13 Sep 10:21
58d5699


Minor Changes

  • #85 3f9ed72 Thanks @dvelasquez! - Add comprehensive OpenTelemetry integration for Astro applications

    Features

    • Automatic HTTP instrumentation: Tracks request count, duration, and server response metrics
    • Multiple exporter support: Prometheus, OTLP (HTTP/gRPC/Proto), and console exporters
    • Distributed tracing: Full request tracing with automatic span creation
    • Zero-config defaults: Works out of the box with sensible defaults
    • Flexible configuration: Environment variables and programmatic configuration
    • Auto-instrumentation: Automatic Node.js instrumentation for comprehensive observability

    Installation

    pnpm astro add astro-opentelemetry-integration @astrojs/node

    Quick Start

    import { defineConfig } from "astro/config";
    import node from "@astrojs/node";
    import otelIntegration from "astro-opentelemetry-integration";
    
    export default defineConfig({
      integrations: [otelIntegration()],
      adapter: node({ mode: "standalone" }),
    });

    Requirements

    • Node.js >= 22.0.0
    • Astro ^5.0.0
    • @astrojs/node adapter (serverless adapters not supported)

    This integration provides production-ready observability for Astro applications with minimal setup.

[email protected]

26 Aug 13:08
6bd961e


Minor Changes

  • #72 fb29001 Thanks @dvelasquez! - feat: Add experimental optimized TTLB measurement for high-concurrency applications

    This release introduces an experimental feature that allows users to choose between two different methods for measuring Time To Last Byte (TTLB) in streaming responses:

    ✨ New Features

    • Experimental flag: experimental.useOptimizedTTLBMeasurement in integration config
    • Two TTLB measurement methods:
      • Legacy (default): Stream wrapping for maximum accuracy but higher CPU usage
      • Optimized: Async timing with setImmediate() for millisecond accuracy and minimal CPU overhead (see the sketch at the end of this entry)

    🔧 Configuration

    prometheusNodeIntegration({
      experimental: {
        useOptimizedTTLBMeasurement: true, // Enable optimized method
      },
    });

    📊 Performance Impact

    Method     Accuracy     CPU Usage  Memory   Use Case
    Legacy     Nanosecond   Higher     Higher   Maximum accuracy required
    Optimized  Millisecond  Minimal    Minimal  High-concurrency apps

    🎯 Use Cases

    • Set to true: High-concurrency applications, microservices, resource-constrained environments
    • Set to false: When maximum timing accuracy is critical

    ⚠️ Important Notes

    • Experimental feature: May change in future releases
    • Backward compatible: Defaults to legacy method
    • Production ready: Both methods thoroughly tested

    🔍 Technical Improvements

    • Extracted TTLB measurement logic to timing-utils.ts
    • Added robust path fallback logic for undefined routePattern
    • Fixed operator precedence issues with ?? and ||
    • Maintains existing functionality while adding new capabilities

    This feature addresses the need for efficient TTLB measurement in high-concurrency Node.js applications while maintaining the option for maximum accuracy when needed.
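
    For illustration, a rough sketch of the optimized approach (a sketch under stated assumptions, not the integration's actual timing-utils.ts code): rather than wrapping the response stream, the end timestamp is captured in a setImmediate() callback scheduled after the handler returns, trading nanosecond precision for minimal per-request overhead.

    import { performance } from "node:perf_hooks";

    // Hypothetical helper: approximates TTLB by deferring the end timestamp
    // to a setImmediate() callback instead of wrapping the response stream.
    function observeTTLB(startMs, record) {
      setImmediate(() => {
        // Runs once the current I/O callbacks have completed.
        record((performance.now() - startMs) / 1000); // seconds, for Prometheus
      });
    }

    // Usage in a middleware, after next() resolves:
    //   const startMs = performance.now();
    //   const response = await next();
    //   observeTTLB(startMs, (s) => ttlbHistogram.observe(s));
    //   return response;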

[email protected]

26 Aug 07:22
6d1a57f


Major Changes

  • #70 56321f7 Thanks @dvelasquez! - 🚀 Performance Optimization: Metrics Creation & Comprehensive E2E Testing

    🎯 Overview

    This is a major breaking change that optimizes metric creation to avoid performance problems. The previous implementation performed expensive operations on every HTTP request, causing a significant performance bottleneck.

    ⚠️ Breaking Changes

    • Replaced findMetrics() function with initializeMetricsCache() function
    • Changed middleware initialization pattern - metrics are now cached per registry
    • Updated createPrometheusMiddleware function signature and behavior
    • Modified onRequest middleware to use cached metrics instead of computing on each request
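
    As a rough illustration of the caching pattern (a sketch only, using prom-client directly; the integration's actual initializeMetricsCache() will differ), metrics are created once per registry and reused on every request:

    import { Histogram } from "prom-client";

    // Sketch: create metrics once per registry and reuse them on every
    // request, instead of recomputing or looking them up per request.
    const cache = new WeakMap();

    function initializeMetricsCache(registry) {
      let metrics = cache.get(registry);
      if (!metrics) {
        metrics = {
          duration: new Histogram({
            name: "http_request_duration_seconds",
            help: "HTTP request duration in seconds",
            labelNames: ["method", "status"],
            registers: [registry],
          }),
        };
        cache.set(registry, metrics);
      }
      return metrics;
    }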

    🔧 Performance Improvements

    • 94.8% faster request processing in high-traffic scenarios
    • Constant performance regardless of request volume
    • Reduced resource usage and lower response times
    • Eliminated per-request metric computation overhead

    🧪 New Features

    • Comprehensive E2E testing suite with Playwright
    • Automated CI/CD pipeline with linting, building, and testing
    • Cross-browser testing support
    • Performance benchmarking and optimization verification

    📚 Migration Guide

    If you're upgrading from v0.3.x, no configuration changes are required; the optimization is internal:

    // Before (v0.3.x)
    const integration = prometheusIntegration({
      // your config
    });
    
    // After (v1.0.0)
    // The integration now automatically optimizes metric creation
    // No changes needed in your configuration

    🎉 Benefits

    1. Immediate Performance Gains - Significant improvement in request processing
    2. Scalability - Performance remains constant under high load
    3. Reliability - Comprehensive E2E testing ensures functionality
    4. Maintainability - Cleaner, more efficient codebase

    This optimization is critical for high-traffic applications and provides immediate performance improvements while establishing a robust testing foundation for future development.

[email protected]

10 Jul 14:41
a80fee1


Patch Changes

  • e132a8a - Small bump to trigger release.

[email protected]

10 Jul 13:45
42cd94f


Patch Changes

  • 6eb6c74 - Added prerender: false to the metrics endpoint so it is always rendered dynamically and never prerendered.
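
    In Astro, an endpoint opts out of prerendering with an exported flag; a minimal sketch (the route path and response body are illustrative):

    // src/pages/metrics.ts (illustrative path)
    export const prerender = false;

    export async function GET() {
      // Serve the current metrics snapshot dynamically on every request.
      return new Response("…metrics payload…", {
        headers: { "Content-Type": "text/plain" },
      });
    }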

[email protected]

06 Jun 08:47
0ffe644


Patch Changes

  • #22 a088f4d Thanks @dvelasquez! - Fix Prometheus metrics not recording 500 server errors

    • The middleware now properly records metrics for requests that result in unhandled exceptions (HTTP 500 errors).
    • Added a try/catch around the request handler to ensure that error responses increment the appropriate counters and histograms.
    • Improved the streaming response handler to also record 500 errors if a streaming failure occurs.
    • Added a unit test to verify that metrics are correctly recorded for 500 error cases.
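
    A condensed sketch of the pattern (illustrative only; the metric names and middleware body are assumptions, not the integration's exact code):

    import { performance } from "node:perf_hooks";
    import { Counter, Histogram } from "prom-client";

    // Hypothetical metrics for the sketch.
    const requests = new Counter({
      name: "http_requests_total",
      help: "Total HTTP requests",
      labelNames: ["method", "status"],
    });
    const duration = new Histogram({
      name: "http_request_duration_seconds",
      help: "Request duration in seconds",
      labelNames: ["method", "status"],
    });

    export async function onRequest(context, next) {
      const start = performance.now();
      try {
        return await next();
      } catch (error) {
        // An unhandled exception becomes a 500: record it before rethrowing
        // so error responses still increment the counters and histograms.
        const labels = { method: context.request.method, status: "500" };
        requests.inc(labels);
        duration.observe(labels, (performance.now() - start) / 1000);
        throw error;
      }
    }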