docs/faq.md
@@ -22,8 +22,8 @@ However, Osiris and KEDA-HTTP differ in several ways:
 Knative Serving and KEDA-HTTP both have core support for autoscaling, including scale-to-zero of compute workloads. KEDA-HTTP is focused solely on deploying production-grade autoscaling HTTP applications, while Knative builds in additional functionality:
 
 - Pure [event-based workloads](https://knative.dev/docs/eventing/). [KEDA core](https://github.com/kedacore/keda), without KEDA-HTTP, can support such workloads natively.
-- Complex deployment strategies like [blue-green](https://knative.dev/docs/serving/samples/blue-green-deployment/).
-- Supporting other autoscaling mechanisms beyond the built-in [HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), such as the [Knative Pod Autoscaler (KPA)](https://knative.dev/docs/serving/autoscaling/autoscaling-concepts/#knative-pod-autoscaler-kpa).
+- Complex deployment strategies like [blue-green](https://knative.dev/docs/serving/traffic-management/#routing-and-managing-traffic-with-bluegreen-deployment).
+- Supporting other autoscaling mechanisms beyond the built-in [HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), such as the [Knative Pod Autoscaler (KPA)](https://knative.dev/docs/serving/autoscaling/autoscaler-types/#knative-pod-autoscaler-kpa).
 
 Additionally, Knative supports a service mesh, while KEDA-HTTP does not out of the box (support for that is forthcoming).

-For an exhaustive list of configuration options, see the official HTTP Add-on chart [values.yaml file](https://github.com/kedacore/charts/blob/master/http-add-on/values.yaml).
+For an exhaustive list of configuration options, see the official HTTP Add-on chart [values.yaml file](https://github.com/kedacore/charts/blob/main/http-add-on/values.yaml).
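
To make the override workflow concrete, here is a minimal sketch of a Helm values file for the chart. The keys below are illustrative assumptions only; the authoritative option names are in the values.yaml file linked above.

```yaml
# values-override.yaml -- illustrative sketch; the keys below are
# assumptions, so check the chart's values.yaml for the real names.
interceptor:
  replicas:
    min: 1   # e.g. keep one warm replica instead of scaling to zero
```

A file like this would typically be passed with something like `helm install http-add-on kedacore/keda-add-ons-http -f values-override.yaml` (the chart name may differ across versions).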
docs/operate.md

@@ -9,7 +9,7 @@ There are currently 2 supported methods for exposing metrics from the interceptor
 ### Configuring the Prometheus compatible metrics endpoint
 When configured, the interceptor proxy can expose metrics on a Prometheus compatible endpoint.
 
-This endpoint can be enabled by setting the `OTEL_PROM_EXPORTER_ENABLED` environment variable to `true` on the interceptor deployment (`true` by default) and by setting `OTEL_PROM_EXPORTER_PORT` to an unused port for the endpoint to be made avaialble on (`2223` by default).
+This endpoint can be enabled by setting the `OTEL_PROM_EXPORTER_ENABLED` environment variable to `true` on the interceptor deployment (`true` by default) and by setting `OTEL_PROM_EXPORTER_PORT` to an unused port for the endpoint to be made available on (`2223` by default).
 
 ### Configuring the OTEL HTTP exporter
 When configured, the interceptor proxy can export metrics to an OTEL HTTP collector.
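
The hunk above fixes a typo in the interceptor's Prometheus-endpoint instructions. As a concrete illustration of those instructions, here is a sketch of setting the two variables with a strategic merge patch. The Deployment, namespace, and container names are assumptions (they depend on how the add-on was installed); the variable names and defaults come from the text above.

```yaml
# interceptor-metrics-patch.yaml -- sketch only; apply with:
#   kubectl patch deployment keda-add-ons-http-interceptor -n keda \
#     --patch-file interceptor-metrics-patch.yaml
# Deployment, namespace, and container names are assumed here.
spec:
  template:
    spec:
      containers:
        - name: interceptor
          env:
            - name: OTEL_PROM_EXPORTER_ENABLED
              value: "true"   # documented default is already true
            - name: OTEL_PROM_EXPORTER_PORT
              value: "2223"   # any unused port; 2223 is the documented default
```

Prometheus can then scrape the interceptor pods on port 2223 (conventionally at a /metrics path, though the exact path should be confirmed against the add-on's docs).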
@@ -71,3 +71,29 @@ Optional variables
 `OTEL_EXPORTER_OTLP_TRACES_TIMEOUT` - The batcher timeout in seconds to send a batch of data points (`5` by default)
 
 ### Configuring Service Failover
+
+# Configuring metrics for the KEDA HTTP Add-on Operator
+
+### Exportable metrics:
+* **keda_http_scaled_object_total** - the number of http_scaled_objects
+
+There are currently 2 supported methods for exposing metrics from the operator - via a Prometheus compatible metrics endpoint or by pushing metrics to an OTEL HTTP collector.
+
+### Configuring the Prometheus compatible metrics endpoint
+When configured, the operator can expose metrics on a Prometheus compatible endpoint.
+
+This endpoint can be enabled by setting the `OTEL_PROM_EXPORTER_ENABLED` environment variable to `true` on the operator deployment (`true` by default) and by setting `OTEL_PROM_EXPORTER_PORT` to an unused port for the endpoint to be made available on (`8080` by default).
+
+### Configuring the OTEL HTTP exporter
+
+When configured, the operator can export metrics to an OTEL HTTP collector.
+
+The OTEL exporter can be enabled by setting the `OTEL_EXPORTER_OTLP_METRICS_ENABLED` environment variable to `true` on the operator deployment (`false` by default). When enabled, the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable must also be configured so the exporter knows which collector to send the metrics to (e.g. http://opentelemetry-collector.open-telemetry-system:4318).
+
+If you need to provide any headers, such as authentication details, in order to use your OTEL collector, you can add them to the `OTEL_EXPORTER_OTLP_HEADERS` environment variable. The frequency at which metrics are exported can be configured by setting `OTEL_METRIC_EXPORT_INTERVAL` to the number of seconds you require between exports (`30` by default).
+
+`OTEL_EXPORTER_OTLP_PROTOCOL` defaults to `http`.
+
+### Configuring the OTEL gRPC exporter
+
+Note that `OTEL_EXPORTER_OTLP_PROTOCOL` can be set to `grpc` to connect to the OTEL collector over gRPC. In that case, `OTEL_EXPORTER_OTLP_ENDPOINT` should be set to the collector's gRPC endpoint (e.g. http://opentelemetry-collector.open-telemetry-system:4317).
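
The operator section added above is driven by environment variables in the same way as the interceptor. As a sketch under the same assumptions (Deployment, namespace, and container names are guesses; the variable names, defaults, and example endpoints come from the added text), pushing operator metrics to an OTLP collector might look like:

```yaml
# operator-otlp-patch.yaml -- sketch only; apply with:
#   kubectl patch deployment keda-add-ons-http-operator -n keda \
#     --patch-file operator-otlp-patch.yaml
# Deployment, namespace, and container names are assumed here.
spec:
  template:
    spec:
      containers:
        - name: operator
          env:
            - name: OTEL_EXPORTER_OTLP_METRICS_ENABLED
              value: "true"   # documented default is false
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              # 4318 is the usual OTLP/HTTP port; use 4317 with the grpc protocol
              value: "http://opentelemetry-collector.open-telemetry-system:4318"
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: "http"   # documented default; set to "grpc" for the gRPC exporter
            - name: OTEL_METRIC_EXPORT_INTERVAL
              value: "30"     # seconds between exports; 30 is the documented default
            # Only needed if the collector requires auth; the value follows the
            # OTLP convention of comma-separated key=value pairs.
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "Authorization=Bearer <token>"
```

Switching to the gRPC exporter described in the last paragraph is then just a matter of setting `OTEL_EXPORTER_OTLP_PROTOCOL` to `grpc` and pointing `OTEL_EXPORTER_OTLP_ENDPOINT` at the collector's gRPC port (4317 in the example above).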