Disable background GC by default; bump credit flow defaults #1098
Quite a bit of evidence from different kinds of workloads suggests that background GC does not contribute meaningfully to reducing node RAM usage on most of them. It does, however, produce a massive spike in VM and CPU context switches on each run, causing latency spikes. Some users report that running without background GC cuts their latency in half, or even down to a third. Multi-hour stress tests that max out all node CPU cores suggest that our current credit_flow default can still be bumped safely, reducing the amount of time publishers are throttled and making publisher latency and the node's observed message throughput more predictable.
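For reference, a minimal sketch of how the two behaviours described above can be controlled via `advanced.config`; the setting names (`background_gc_enabled`, `background_gc_target_interval`) are existing `rabbit` application settings, and the values shown are illustrative:

```erlang
%% advanced.config (Erlang terms) -- a sketch, not the shipped defaults
[
  {rabbit, [
    %% disable the periodic background GC run entirely
    {background_gc_enabled, false},

    %% interval between runs (milliseconds); only relevant
    %% when background GC is enabled
    {background_gc_target_interval, 60000}
  ]}
].
```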
Does credit_flow affect consume speed, if there are a few consumers and a small prefetch_count?
Yes, but I wouldn't expect any significant throughput changes. Since credit flow settings can already be configured, you can give it a try with your workloads and see for yourself.
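Trying different credit flow settings can look like the following sketch. `credit_flow_default_credit` is the `rabbit` app setting in question; the `{InitialCredit, MoreCreditAfter}` pair shown is an example value, not a recommendation:

```erlang
%% advanced.config -- experiment with credit flow defaults
[
  {rabbit, [
    %% {InitialCredit, MoreCreditAfter}: a process may send
    %% InitialCredit messages before being blocked, and is granted
    %% MoreCreditAfter more once the peer has consumed that many
    {credit_flow_default_credit, {400, 200}}
  ]}
].
```

A node restart is needed for the new value to take effect.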
If we increase the generic value for all processes, we should also increase it for the message store. The ratio isn't particularly scientific.
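The message store has its own credit setting alongside the generic one; a sketch of bumping it in proportion, using the existing `msg_store_credit_disc_bound` and `msg_store_io_batch_size` settings (the exact values below are illustrative):

```erlang
%% advanced.config -- message store credit, kept roughly in
%% proportion to the generic credit_flow_default_credit bump
[
  {rabbit, [
    %% {InitialCredit, MoreCreditAfter} for disk-bound message
    %% store clients
    {msg_store_credit_disc_bound, {4000, 800}},

    %% keeping initial credit close to the IO batch size avoids
    %% stalls where credit runs out mid-batch
    {msg_store_io_batch_size, 4096}
  ]}
].
```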
This looks great, I'll re-create https://www.pivotaltracker.com/story/show/137952203 with these changes.
I will do some testing, but it looks good to me. In previous testing of explicit GC for queues, we never found the GC to contribute much to memory reduction.
Tested latency spikes and they appear to be a fair bit less severe with background GC turned off.
4000 is not meaningfully different from 3000 but is closer to the new IO_BATCH_SIZE value. References rabbitmq/rabbitmq-server#1098.
Avoids a validation warning; and if we increase the initial message store disk bound credit to 4000, all values are increased proportionally (compared to the pre-3.6.7 defaults).