Question: Review of using kafka consumer for multiple topics in a single service #2395
Comments
Watch out for the following potential issues with this implementation:
@brnls thanks for your reply.
For 3), revoking partitions is part of normal Kafka operation and is not dangerous. This is a good description of what it is and what can happen: https://docs.confluent.io/kafka-clients/dotnet/current/overview.html#committing-during-a-rebalance For 4), you are right that there is back pressure, which can be controlled with various settings (like
@brnls Thanks a lot for your review!
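As a sketch of the rebalance point above: confluent-kafka-dotnet lets you hook partition revocation via `SetPartitionsRevokedHandler`, so manually stored offsets can be committed before the partitions are reassigned. This is only an illustration under the configuration discussed in this issue (auto-commit on, auto-store-offset off); broker address and group id are placeholders:

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",  // placeholder broker address
    GroupId = "my-service",               // placeholder group id
    EnableAutoCommit = true,
    EnableAutoOffsetStore = false         // offsets stored manually after processing
};

// Sketch: flush manually stored offsets when partitions are revoked during a
// rebalance, so the next owner of each partition resumes from the right place.
using var consumer = new ConsumerBuilder<string, string>(config)
    .SetPartitionsRevokedHandler((c, partitions) =>
    {
        try { c.Commit(); }               // commit whatever has been stored so far
        catch (KafkaException) { /* nothing stored yet, or a broker error */ }
    })
    .Build();
```

The handler runs on the consumer's thread during the rebalance, so the commit completes before ownership of the partitions is given up.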
Description
Hi there, first of all, thanks for your support and for what you are doing.
I'd like to request a code review to double-check that I have no misunderstanding of the Kafka client for .NET and the Kafka consuming API.
Here is the code. The main idea is to have a single consumer that consumes everything from Kafka, for every topic the service is in charge of processing.
Consuming is placed in a long-running task, since the Consume method is synchronous, so that we don't use thread pool threads for it.
When a message is consumed, we run a task on the thread pool to process the message and use some internal orchestration to create a handler for it. Then we store the offset.
We also use the default strategy for consuming messages: at least once. We enabled auto-commit and disabled auto-store-offset.
If anything there seems odd to you, please point it out. Thanks in advance.
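A minimal sketch of the setup described above, assuming confluent-kafka-dotnet; the broker address, group id, topic names, and the `ProcessAsync` handler are placeholders for this service's own values:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",  // placeholder broker address
    GroupId = "my-service",               // placeholder group id
    EnableAutoCommit = true,              // background commit of *stored* offsets
    EnableAutoOffsetStore = false         // we store offsets ourselves, after processing
};

var cts = new CancellationTokenSource();

// The blocking Consume loop lives on a dedicated long-running task so it
// does not tie up a thread-pool thread.
var consumeLoop = Task.Factory.StartNew(() =>
{
    using var consumer = new ConsumerBuilder<string, string>(config).Build();
    consumer.Subscribe(new[] { "topic-a", "topic-b" }); // placeholder topics

    while (!cts.IsCancellationRequested)
    {
        var result = consumer.Consume(cts.Token); // blocks until a message arrives

        // Process on the thread pool; store the offset only after the handler
        // has finished, which is what gives at-least-once semantics here.
        _ = Task.Run(async () =>
        {
            await ProcessAsync(result);           // placeholder handler dispatch
            consumer.StoreOffset(result);
        });
    }
}, TaskCreationOptions.LongRunning);
```

One caveat with this shape: when several messages from the same partition are processed concurrently, `StoreOffset` can run for a later message before an earlier one has finished, which weakens the at-least-once guarantee; processing each partition's messages sequentially avoids that.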
The idea is based on this sample and discussion:
https://github.com/confluentinc/confluent-kafka-dotnet/blob/master/examples/Consumer/Program.cs
#197