Handle cases where there are no partitions to fetch from #439
Conversation
Rather than going into a busy-loop when e.g. all partitions are paused, sleep for `max_wait_time` seconds before retrying.
lib/kafka/consumer.rb (Outdated)

```ruby
sleep max_wait_time
backoff = max_wait_time > 0 ? max_wait_time : 1

@logger.warn "There are no partitions to fetch from, sleeping for #{backoff}s"
```
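The guard in the diff can be illustrated in isolation. This is a minimal sketch in plain Ruby, no Kafka required; `backoff_duration` is a hypothetical helper name for illustration, not ruby-kafka's actual internals:

```ruby
# Sketch of the backoff guard from the diff above: if max_wait_time is
# zero, sleeping for it would still busy-loop, so fall back to 1 second.
def backoff_duration(max_wait_time)
  max_wait_time > 0 ? max_wait_time : 1
end

puts backoff_duration(5)  # => 5
puts backoff_duration(0)  # => 1
```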
Wouldn't it be better if this was `info`? Having redundant processes isn't something bad, and it can be by design, not by accident.
Hmm. I guess it sort of depends on your use case – I imagine most people will use more than 1 partition.
@dasch I know that, you know that, but you would be surprised :D a 1-to-1 setup is pretty common, especially at the beginning.
Sure, I can make it `info`.
...
Thank you ❤️ :)
This could be expected behavior in cases where there are more consumers than partitions on purpose.
Hi @dasch, I'm using karafka to run a consumer and hit this error:
Will this PR fix this issue?
@victorphamdeveloper if you have more processes than partitions, you will see that until the release of both the new ruby-kafka and the new karafka.
@mensfeld My topic has 10 partitions and I don't think we have more than 10 consumer processes :(
@victorphamdeveloper please move your issue to the Karafka repository: https://github.com/karafka/karafka as it may not be ruby-kafka related. We will get back to it after the release of ruby-kafka 0.5 and Karafka 1.1.
Sure, will do.
This change is listed as new in 0.5.0, but seems to already be included in 0.4.4. We had to revert to 0.4.3 for now because we experienced performance problems and have tentatively traced them back to this upgrade from an old 0.3.x version of kafka-shopify to 0.4.4. It might be the way we're using the Kafka consumer in a background job, though. In our case we have a fetch loop in the background job that is manually exited after a certain amount of time has passed, and it seems like the change in this PR caused our loop to get stuck. We haven't fully diagnosed this yet, but I thought it was worth mentioning. Cheers!
I'd like to understand the actual problem first... @Soleone I assume you're stopping the consumer using
@mensfeld it could be that we need to do
Sorry for the late reply! Yes, we're using At this time I don't have any good example code I can provide. But I'm investigating this further soon and am enabling some additional logging to find out more.
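The stuck-loop report above comes down to a deadline check that only runs between fetches: if the fetch itself now sleeps, the loop can overshoot the deadline by up to one fetch's `max_wait_time`. A self-contained sketch of that loop shape (plain Ruby, no Kafka; the block passed to `consume_until` stands in for a real fetch call, and all names here are illustrative):

```ruby
# Run a consume loop until a wall-clock deadline. Each iteration does one
# fetch; if the fetch can sleep internally (as in this PR), the loop can
# only overshoot the deadline by at most one fetch's wait time, which is
# why keeping max_wait_time small matters for time-bounded jobs.
def consume_until(deadline)
  processed = 0
  while Time.now < deadline
    batch = yield            # stand-in for a fetch call
    processed += batch.size
  end
  processed
end

deadline = Time.now + 0.05
count = consume_until(deadline) do
  sleep 0.01                 # simulate a fetch that blocks briefly
  ["message"]                # pretend each fetch returns one message
end
puts count
```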
Fixes #416.