45 changes: 43 additions & 2 deletions README.md
@@ -1,3 +1,44 @@
<!--
Collaboration notes & todos


Documentation:
[ ] - Simplify documentation. Maybe add MkDocs for a better user experience.
[ ] - Explain what this repo is about; document new use cases and problem patterns. Point to the source code that the helm charts reference.
[ ] - Add a step-by-step doc and simplify options; use as default CNFS with only 3 vars: environment URL, ingest and operator tokens.


Repository changes:
[x] - Renamed dt-secrets keys to DT_OTEL_ENDPOINT and DT_INGEST_TOKEN to keep consistency and avoid misinterpretations.
[ ] - Enable frontend-proxy by default; fix missing images, envoy config for the different services, flagd, etc.
[ ] - Enhance deploy script to run automatically in one run and fail if vars are not defined.
[x] - Deploy script (in framework code) - clone to this repo later
[x] - helm repo add, update, builds
[x] - check/install Kustomize
[x] - helm upgrade with env vars so they are not stored in YAML

[ ] - flagd - expose flagd-ui so users can change the problem patterns easily.
[ ] - UI exposed as a sidecar in the helm values.
[ ] - SECRET_KEY_BASE added in base64.
[ ] - flagd-ui service added so it can be properly exposed in envoy.
      Issue was that the envoy.yaml in the frontend-proxy has a flagd-ui cluster defined but no endpoint for it; flagd-ui is exposed as a sidecar in the flagd service.
[X] - Load test scaled down to 1 replica with 2 users; no problems detected on Kind.
[X] - Fraud detection memory limits raised to 512Mi.
[X] - ActiveGate limits raised to 1Gi (maybe use 2?).


## Envoy troubleshooting

# Check clusters
kubectl port-forward -n astroshop frontend-proxy-594968df69-8jmpk 10000:10000
curl http://localhost:10000/clusters | grep flagd
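# The pod hash above changes on every rollout; targeting the deployment avoids
# hard-coding the pod name (assuming the deployment is named frontend-proxy):
kubectl port-forward -n astroshop deploy/frontend-proxy 10000:10000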



-->



# OpenTelemetry demo GitOps

This repository contains the Helm chart for [Astroshop](https://github.com/Dynatrace/opentelemetry-demo), an adaptation of the [OpenTelemetry Demo](https://github.com/open-telemetry/opentelemetry-demo) app, along with:
@@ -34,8 +75,8 @@ If you want to deploy the ingress resources (by setting `components.ingress.enab

To deploy the Helm chart you will first need to set the required values [here](./config/helm-values/values.yaml):

- _components.dt-credentials.tenantEndpoint_ - tenant url including the `/api/v2/otlp`, e.g. **https://wkf10640.live.dynatrace.com/api/v2/otlp**
- _components.dt-credentials.tenantToken_ - access token using the `Kubernetes: Data Ingest` template
- _components.dt-credentials.collector_tenant_endpoint_ - tenant URL including the `/api/v2/otlp` path, e.g. **https://wkf10640.live.dynatrace.com/api/v2/otlp**
- _components.dt-credentials.collector_tenant_token_ - access token created using the `Kubernetes: Data Ingest` template

then run

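The command itself is collapsed in this diff view; a plausible invocation, assuming the release name `astroshop` and the chart path used in this repo, would be:

```sh
helm upgrade --install astroshop ./charts/astroshop \
  --namespace astroshop --create-namespace \
  -f ./config/helm-values/values.yaml
```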
4 changes: 2 additions & 2 deletions charts/astroshop/templates/dt-credentials.yaml.tmpl
@@ -7,7 +7,7 @@ metadata:
type: Opaque
{{- with index .Values "components" "dt-credentials" }}
stringData:
DT_ENDPOINT: {{ required "[tenantEndpoint] is required when [dt-credentials] is enabled" .tenantEndpoint }}
DT_API_TOKEN: {{ required "[tenantToken] is required when [dt-credentials] is enabled" .tenantToken }}
DT_OTEL_ENDPOINT: {{ .collector_tenant_endpoint | required "collector_tenant_endpoint is required when dt-credentials is enabled. Set it via --set-string components.dt-credentials.collector_tenant_endpoint=value or DT_OTEL_ENDPOINT environment variable" }}
DT_INGEST_TOKEN: {{ .collector_tenant_token | required "collector_tenant_token is required when dt-credentials is enabled. Set it via --set-string components.dt-credentials.collector_tenant_token=value or DT_TOKEN environment variable" }}
{{- end }}
{{- end }}
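When `components.dt-credentials.enabled` is false, the chart expects this secret to exist already. A minimal sketch of creating it by hand, assuming the `astroshop` namespace and the key names from the template above:

```sh
kubectl create secret generic dt-credentials \
  --namespace astroshop \
  --from-literal=DT_OTEL_ENDPOINT="https://wkf10640.live.dynatrace.com/api/v2/otlp" \
  --from-literal=DT_INGEST_TOKEN="dt0c01.abc.xxx"
```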
20 changes: 20 additions & 0 deletions charts/astroshop/templates/flagd-ui-service.yaml.issue_endpoint
@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
  name: flagd-ui
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/name: flagd-ui
    app.kubernetes.io/component: flagd
    app.kubernetes.io/part-of: opentelemetry-demo
spec:
  type: ClusterIP
  ports:
    - name: ui
      port: 4000
      targetPort: 4000
      protocol: TCP
  selector:
    app.kubernetes.io/name: flagd
    app.kubernetes.io/component: flagd
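A quick way to verify that the new Service reaches the flagd-ui sidecar (assuming the `astroshop` namespace used elsewhere in this repo):

```sh
kubectl -n astroshop port-forward svc/flagd-ui 4000:4000
curl -s http://localhost:4000/ | head
```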

83 changes: 67 additions & 16 deletions charts/astroshop/values.yaml
@@ -1,11 +1,11 @@
components:
dt-credentials:
# when enabled it requires the `tenantEndpoint` and `tenantToken` values to be set
# when enabled it requires the `collector_tenant_endpoint` and `collector_tenant_token` values to be set
# when disabled make sure to create a `dt-credentials` secret with
# DT_ENDPOINT and DT_API_TOKEN values
# DT_OTEL_ENDPOINT and DT_INGEST_TOKEN values
enabled: false
# tenantEndpoint: https://wkf10640.live.dynatrace.com/api/v2/otlp # example endpoint
# tenantToken: dt0c01.abc.xxx
# collector_tenant_endpoint: https://wkf10640.live.dynatrace.com/api/v2/otlp # example endpoint
# collector_tenant_token: dt0c01.abc.xxx
ingress:
enabled: true
# used for setting the host in the ingress; the URL set in the load generator should match this
@@ -134,16 +134,67 @@ opentelemetry-demo:
memory: 512Mi
# ---------------------------------------------------
flagd:
enabled: true
imageOverride:
repository: "ghcr.io/open-feature/flagd"
tag: "v0.11.1"
useDefault:
env: true
podAnnotations:
dynatrace.com/inject: "false"
metadata.dynatrace.com/process.technology: "flagd"
oneagent.dynatrace.com/inject: "false"
metadata.dynatrace.com/process.technology: "flagd"
replicas: 1
#service:
# port: 8013
envOverrides:
- name: FLAGD_METRICS_EXPORTER
value: otel
- name: FLAGD_OTEL_COLLECTOR_URI
value: $(OTEL_COLLECTOR_NAME):4317
resources:
requests:
cpu: 25m
memory: 300Mi
limits:
cpu: 250m
memory: 300Mi
command:
- "/flagd-build"
- "start"
- "--uri"
- "file:./etc/flagd/demo.flagd.json"
mountedEmptyDirs:
- name: config-rw
mountPath: /etc/flagd
# flagd-ui as a sidecar container in the same pod so the flag JSON file can be shared
sidecarContainers:
- name: flagd-ui
useDefault:
env: true
service:
port: 4000
envOverrides:
- name: FLAGD_METRICS_EXPORTER
value: otel
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4318
- name: SECRET_KEY_BASE
value: "fHaF1lg9V+fJh1C6OhYFqfs/9PpQxpr8dwRYZ86P9ZlEGiTAP33lhu1Ya6iow9v0"
resources:
limits:
memory: 300Mi
volumeMounts:
- name: config-rw
mountPath: /app/data
initContainers:
- name: init-config
image: busybox
command: ['sh', '-c', 'cp /config-ro/demo.flagd.json /config-rw/demo.flagd.json && cat /config-rw/demo.flagd.json']
volumeMounts:
- mountPath: /config-ro
name: config-ro
- mountPath: /config-rw
name: config-rw
additionalVolumes:
- name: config-ro
configMap:
name: 'flagd-config'
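# Note on the shared-volume pattern above (a sketch of the intent, inferred from
# this values file): the init container copies demo.flagd.json from the read-only
# configMap volume (config-ro) into the shared emptyDir (config-rw); flagd then
# reads the file under /etc/flagd while the flagd-ui sidecar edits the same copy
# under /app/data, so flag changes made in the UI apply without a redeploy.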
# ---------------------------------------------------
fraud-detection:
podAnnotations:
@@ -152,10 +203,10 @@
resources:
requests:
cpu: 25m #100
memory: 300Mi
memory: 256Mi
limits:
cpu: 250m
memory: 300Mi
memory: 512Mi
# ---------------------------------------------------
frontend:
podAnnotations:
@@ -185,7 +236,7 @@
# ---------------------------------------------------
frontend-proxy:
# frontend proxy enabled so flagd-ui can be exposed through envoy
enabled: false
enabled: true
# ---------------------------------------------------
image-provider:
podAnnotations:
@@ -217,7 +268,7 @@
memory: 64Mi
# ---------------------------------------------------
load-generator:
replicas: 2
replicas: 1
podAnnotations:
dynatrace.com/inject: "false"
metadata.dynatrace.com/process.technology: "python"
@@ -387,9 +438,9 @@
sampling_initial: 5
sampling_thereafter: 2000
otlphttp:
endpoint: "${env:DT_ENDPOINT}"
endpoint: "${env:DT_OTEL_ENDPOINT}"
headers:
Authorization: "Api-Token ${env:DT_API_TOKEN}"
Authorization: "Api-Token ${env:DT_INGEST_TOKEN}"
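# DT_OTEL_ENDPOINT and DT_INGEST_TOKEN are assumed to be injected from the
# dt-credentials secret (charts/astroshop/templates/dt-credentials.yaml.tmpl);
# the env wiring itself is outside this diff.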
extensions:
health_check:
endpoint: "0.0.0.0:13133"