
## What is a Dead Letter Sink?

A [Dead Letter Sink](https://knative.dev/docs/eventing/event-delivery/) is a Knative construct that allows the user to configure a destination for events that would otherwise be dropped due to some delivery failure. This is useful for scenarios where you want to ensure that events are not lost due to a failure in the underlying system.
## Scenario Debriefing
In this example we are going to create a [Bridge](https://docs.triggermesh.io/concepts/) that contains a [PingSource](https://knative.dev/docs/eventing/sources/ping-source/) object that will emit an event on a regular basis to a [Broker](https://knative.dev/docs/eventing/broker/) named `demo`. A [Service](https://knative.dev/docs/serving/services/) named `event-success-capture` will subscribe to PingSource events flowing through the Broker using a [Trigger](https://knative.dev/docs/eventing/broker/triggers/).
The Broker delivery options will be set to use a [Dead Letter Sink](https://knative.dev/docs/eventing/event-delivery/) so that in the case of a delivery error the event will be forwarded to another Service named `event-failure-capture` instead of being lost into the void.
We will test the Bridge to make sure events are delivered to `event-success-capture`, then we will break the Bridge by removing the `event-success-capture` Service, in which case we expect the Dead Letter Sink to receive all events that were not delivered.
## Creating a Bridge with a Dead Letter Sink
!!! Info "Creating objects"
    All objects mentioned in this guide are intended to be created in Kubernetes.
    When using `kubectl`, write the provided YAML manifests to a file and apply it from a console:
    ```console
    $ kubectl apply -f my-file.yaml
    ```
    Alternatively, if you don't want to write the manifests to a file, you can use this command:
    ```console
    $ kubectl apply -f - <<EOF
    apiVersion: some.api/v1
    kind: SomeObject
    metadata:
      name: some-name
    spec:
      some: property
    EOF
    ```

!!! Info "Bridge manifest"
    The next steps configure and explain the Bridge we will build to demonstrate the usage of the Dead Letter Sink.
    A single manifest containing all the objects in the bridge can be downloaded [here](../assets/yamlexamples/dls-example.yaml).

### Step 1: Create the Broker

Create a new Broker with the following configuration:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: demo
spec:
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: event-failure-capture
    backoffDelay: "PT0.5S" # ISO 8601 duration
    backoffPolicy: exponential # exponential or linear
    retry: 2
```
Here a Broker named `demo` is configured with the following delivery options:

- 2 retries on failure, backing off exponentially with a 0.5 second factor (see the sketch after this list). Retries are not the focus of this article, but it is recommended to set up retries before giving up on delivery and sending events to the DLS.
- A Dead Letter Sink pointing to a Service named `event-failure-capture`. Kubernetes will accept the creation of this object even if the DLS Service does not exist yet.

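To make the backoff concrete: assuming the `backoffDelay * 2^N` formula that Knative Eventing uses for exponential backoff, this configuration waits roughly 1 second before the first retry and 2 seconds before the second, after which the event is handed to the Dead Letter Sink. If a constant wait between retries fits your case better, a hypothetical `linear` variant of the same Broker would look like this:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: demo-linear # illustrative name, not used elsewhere in this guide
spec:
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: event-failure-capture
    backoffDelay: "PT0.5S" # constant wait before every retry
    backoffPolicy: linear  # retry waits: 0.5s, 0.5s (vs ~1s, 2s for exponential)
    retry: 2
```
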
### Step 2: Create the PingSource
Create a [PingSource](https://knative.dev/docs/eventing/sources/ping-source/) object with the following configuration:

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: say-hi
spec:
  data: '{"hello": "triggermesh"}'
  schedule: "*/1 * * * *"
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: demo
```
This object will emit an event every minute to the Broker created in the previous step.

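To verify that the source was created and became ready, you can list it with `kubectl` (a quick check; the exact output columns vary across Knative versions):

```console
$ kubectl get pingsource say-hi
```
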
### Step 3: Create the `event-success-capture` Service

Create a Service named `event-success-capture` with the following configuration:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-success-capture
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
```
This Service will write any CloudEvent it receives to its standard output. We will use a Trigger to subscribe it to all events flowing through the Broker.

### Step 4: Create the `demo-to-display` Trigger

Create a Trigger to route events to the `event-success-capture` Service with the following configuration:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: demo-to-display
spec:
  broker: demo
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-success-capture
```

This Trigger configures the Broker to send all events flowing through it to the `event-success-capture` Service.

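A Trigger can also filter on CloudEvent context attributes instead of forwarding everything. As a sketch (the Trigger name here is illustrative and not part of this guide's Bridge), this variant would deliver only PingSource events, whose type is `dev.knative.sources.ping` as seen in the logs later in this guide:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: demo-to-display-ping-only # illustrative name
spec:
  broker: demo
  filter:
    attributes:
      # Only events whose "type" context attribute matches are delivered.
      type: dev.knative.sources.ping
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-success-capture
```

Events that do not match the filter simply never reach this subscriber.
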
### Step 5: Create the `event-failure-capture` Service

Create the Service named `event-failure-capture` that was configured as the Dead Letter Sink in the Broker:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-failure-capture
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
```

This Service should only receive events that could not be delivered to their destination.

## Test the Bridge

Make sure that all created objects are ready by inspecting the `READY` column in the output of this command:

```console
$ kubectl get ksvc,broker,trigger

NAME                                                URL                                                           LATESTCREATED                 LATESTREADY                   READY   REASON
service.serving.knative.dev/event-failure-capture   http://event-failure-capture.default.192.168.49.2.sslip.io   event-failure-capture-00001   event-failure-capture-00001   True
service.serving.knative.dev/event-success-capture   http://event-success-capture.default.192.168.49.2.sslip.io   event-success-capture-00001   event-success-capture-00001   True

NAME                               URL                                                                      AGE     READY   REASON
broker.eventing.knative.dev/demo   http://broker-ingress.knative-eventing.svc.cluster.local/default/demo   3m20s   True

NAME                                           BROKER   SUBSCRIBER_URI                                            AGE     READY   REASON
trigger.eventing.knative.dev/demo-to-display   demo     http://event-success-capture.default.svc.cluster.local   3m20s   True
```

Each minute a CloudEvent should be produced by the PingSource and sent to the Broker, which in turn delivers it to `event-success-capture`, while `event-failure-capture` should not receive any events. We can confirm this by reading the output of each of those Services:

!!! Info "Retrieving logs command"
    Kubernetes generates dynamic Pod names, but we can use `kubectl` with the `-l` flag to filter by a label that identifies the Service.

    We also add the `-f` flag to keep receiving logs as they are produced, so we can see the live feed of events arriving at the Service.

```console
$ kubectl logs -l serving.knative.dev/service=event-success-capture -c user-container -f

☁️  cloudevents.Event
Context Attributes,
  specversion: 1.0
  type: dev.knative.sources.ping
  source: /apis/v1/namespaces/default/pingsources/say-hi
  id: efcaa3b7-bcdc-4fa9-a0b3-05d9a3c4a9f9
  time: 2022-06-01T19:54:00.339597948Z
Extensions,
  knativearrivaltime: 2022-06-01T19:54:00.340295729Z
Data,
  {"hello": "triggermesh"}
```

As expected, `event-success-capture` is receiving the events produced by the PingSource.

```console
$ kubectl logs -l serving.knative.dev/service=event-failure-capture -c user-container -f
2022/06/01 19:36:45 Failed to read tracing config, using the no-op default: empty json tracing config
```

Meanwhile, `event-failure-capture` is not showing any events.

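Optionally, you can inject a test event by hand instead of waiting for the next PingSource tick. Here is a minimal sketch using a throwaway curl Pod; the CloudEvent attributes are made up for this test, and the Broker ingress URL is the one reported by `kubectl get broker` above:

```console
$ # Send a binary-mode CloudEvent to the Broker ingress (illustrative attributes).
$ kubectl run curl-event --image=curlimages/curl --rm -i --restart=Never -- \
    -X POST "http://broker-ingress.knative-eventing.svc.cluster.local/default/demo" \
    -H "Ce-Id: manual-test-1" \
    -H "Ce-Specversion: 1.0" \
    -H "Ce-Type: com.example.manual" \
    -H "Ce-Source: manual-test" \
    -H "Content-Type: application/json" \
    -d '{"hello": "manual"}'
```

Any event sent this way should show up in the `event-success-capture` logs alongside the PingSource events.
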
## Test Failing Bridge

To make the Bridge fail, we will remove the `event-success-capture` Service. That will make delivery fail and, after 2 retries, events will be sent to the Dead Letter Sink.

```console
$ kubectl delete ksvc event-success-capture
service.serving.knative.dev "event-success-capture" deleted
```

After doing so, all events not delivered by the Broker through the configured Trigger will show up at the `event-failure-capture` Service:

```console
$ kubectl logs -l serving.knative.dev/service=event-failure-capture -c user-container -f

☁️  cloudevents.Event
Context Attributes,
  specversion: 1.0
  type: dev.knative.sources.ping
  source: /apis/v1/namespaces/default/pingsources/say-hi
  id: 7e11c3ac-2b00-49af-9602-59575f410b9f
  time: 2022-06-01T20:14:00.054244562Z
Extensions,
  knativearrivaltime: 2022-06-01T20:14:00.055027909Z
  knativebrokerttl: 255
  knativeerrorcode: 500
  knativeerrordata:
  knativeerrordest: http://broker-filter.knative-eventing.svc.cluster.local/triggers/default/demo-to-display/bd303253-c341-4d43-b5e2-bc3adf70122a
Data,
  {"hello": "triggermesh"}
```

## Clean up

Clean up the remaining resources by issuing these commands:

```console
$ kubectl delete ksvc event-failure-capture
$ kubectl delete trigger demo-to-display
$ kubectl delete pingsource say-hi
$ kubectl delete broker demo
```