It would be great to have, in both the streaming API and the subscription API, a new parameter max_uncommited_events_per_partition in addition to the existing query parameters.
As an example, assume the following scenario:
the event type has 8 partitions
only one client is connected
batch_limit=4 and max_uncommited_events=32
Currently, the client will be able to receive 8 batches before max_uncommited_events is reached. However, these batches can come from any partitions: all or some of them could be from the same partition. Thus, according to the cursor commit behaviour described here, it is possible when processing events concurrently that some events will be committed even though they have not been processed: committing the cursor of a later batch implicitly commits all earlier offsets of that partition, including offsets covered by batches still in flight. The sketch below illustrates the race.
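A minimal sketch of the hazard, assuming Nakadi-style cursor semantics where committing an offset implicitly commits all earlier offsets of the same partition. The types and the commitCursor helper are hypothetical, not Nakadi client code; two batches from one partition are processed concurrently, and the fast batch's commit covers the slow batch's unprocessed offsets:

```java
import java.util.concurrent.*;

public class UnsafeConcurrentCommit {

    // Highest committed offset for the single partition in this example.
    static volatile long committedOffset = -1;

    // Committing a cursor moves the partition's committed offset forward,
    // implicitly committing every earlier offset as well (hypothetical
    // stand-in for a real cursor commit call).
    static synchronized void commitCursor(long offset) {
        if (offset > committedOffset) {
            committedOffset = offset;
            System.out.println("committed partition up to offset " + offset);
        }
    }

    static Runnable processBatch(long firstOffset, long lastOffset, long workMillis) {
        return () -> {
            try {
                Thread.sleep(workMillis); // simulate per-batch processing time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println("processed offsets " + firstOffset + ".." + lastOffset);
            commitCursor(lastOffset);
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Two batches from the SAME partition processed concurrently.
        pool.submit(processBatch(0, 3, 500)); // slow batch, offsets 0..3
        pool.submit(processBatch(4, 7, 50));  // fast batch, offsets 4..7
        Thread.sleep(100);
        // The fast batch has already committed offset 7, implicitly
        // committing offsets 0..3 while that batch is still being processed.
        System.out.println("offsets 0..3 still in flight, but committedOffset = "
                + committedOffset);
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```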
With max_uncommited_events_per_partition=4, the client would receive batches of 4 events for every partition for concurrent processing, making it safe for the client to commit every batch.
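A sketch of how the proposed parameter could be passed alongside the existing ones on the subscription streaming endpoint. The host, subscription id, and token are placeholders; max_uncommited_events_per_partition is the parameter proposed in this issue and does not exist in the API today:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.stream.Stream;

public class StreamWithPerPartitionLimit {
    public static void main(String[] args) throws Exception {
        String subscriptionId = "my-subscription-id"; // hypothetical
        URI uri = URI.create("https://nakadi.example.org/subscriptions/"
                + subscriptionId + "/events"
                + "?batch_limit=4"
                + "&max_uncommited_events=32"
                + "&max_uncommited_events_per_partition=4"); // proposed parameter

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Bearer <token>") // placeholder auth
                .GET()
                .build();

        // Each line of the streaming response is one JSON batch. With the
        // proposed per-partition limit, at most 4 uncommitted events per
        // partition are in flight, so every batch can be committed
        // independently once its events are processed.
        HttpResponse<Stream<String>> response =
                client.send(request, HttpResponse.BodyHandlers.ofLines());
        response.body().limit(8).forEach(System.out::println);
    }
}
```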