Remove consumer bias & allow queues under max load to drain quickly
Given a queue process under max load, with both publishers & consumers,
if consumers are not **always** prioritised over publishers, a queue
can take up to 1 day to fully drain.
Even without consumer bias, queues can drain quickly (e.g. 10
minutes in our case) or slowly (e.g. 1 hour or more). To illustrate,
this is what a slow drain looks like:
```
___ <- 2,000,000 messages
/ \__
/ \___ _ _
/ \___/ \_____/ \___
/ \
|-------------- 1h --------------|
```
And this is what a fast drain looks like:
```
_ <- 1,500,000 messages
/ \_
/ \___
/ \
|- 10 min -|
```
We are still trying to understand the reason behind this, but without
removing consumer bias, this would **always** happen:
```
______________ <- 2,000,000 messages
/ \_______________
/ \______________ ________
/ \__/ \______
/ \
|----------------------------- 1 day ---------------------------------|
```
Other observations worth capturing:
```
| PUBLISHERS | CONSUMERS | READY MESSAGES | PUBLISH MSG/S   | CONSUME ACK MSG/S |
| ---------- | --------- | -------------- | --------------- | ----------------- |
| 3          | 3         | 0              | 22,000 - 23,000 | 22,000 - 23,000   |
| 3          | 3         | 1 - 2,000,000  | 5,000 - 8,000   | 7,000 - 11,000    |
| 3          | 0         | 1 - 2,000,000  | 21,000 - 25,000 | 0                 |
| 3          | 0         | 2,000,000      | 5,000 - 15,000  | 0                 |
```
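A rough sanity check on these rates (using mid-range values from the
backlogged row with 3 consumers; the exact figures are our
approximation) ties them back to the fast drain sketched above:

```
# Rough drain-time estimate from the rates in the table above.
backlog = 1500000        # ready messages at the start of the fast drain
publish_rate = 6500      # msg/s, midpoint of 5,000 - 8,000
consume_rate = 9000      # msg/s, midpoint of 7,000 - 11,000

net_drain = consume_rate - publish_rate    # ~2,500 msg/s
print(backlog / net_drain / 60)            # ~10 minutes, matching the fast drain
```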
* Empty queues are the fastest since messages are delivered straight to
consuming channels
* With 3 publishing channels, a single queue process gets saturated at
22,000 msg/s. The client that we used for this benchmark would max out
at 10,000 msg/s, meaning that we needed 3 clients, each with 1
connection & 1 channel, to saturate the queue process. It is possible
that a single fast client using 1 connection & 1 channel would achieve
a slightly higher throughput, but we didn't measure it on this
occasion. It's highly unrealistic for a production, high-throughput
RabbitMQ deployment to use 1 publisher running 1 connection & 1
channel. If anything, there would be many publishers with many
connections & channels.
* When a queue process gets saturated, publishing channels & their
connections will enter the flow state, meaning that publishing rates
will be throttled. This allows the consuming channels to keep up with
the publishing ones (see the commands after this list for one way to
observe this).
* Adding more publishers or consumers slows down publishing &
consuming. The queue process, and ultimately the Erlang VM's
schedulers (typically 1 per CPU core), have more work to do, so it's
expected for throughput to suffer.
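Flow control can be observed while the benchmark is running; these
`rabbitmqctl` invocations are one way to do it (the choice of columns
here is ours):

```
rabbitmqctl list_connections name state
rabbitmqctl list_channels connection name state
```

Connections & channels that are being throttled report a `flow` state.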
The most relevant properties used for this benchmark:
```
| erlang               | 19.3.6.2      |
| rabbitmq             | 3.6.12        |
| gcp instance type    | n1-standard-4 |
| -------------------- | ------------- |
| queue                | non-durable   |
| max-length           | 2,000,000     |
| -------------------- | ------------- |
| publishers           | 3             |
| publisher rate msg/s | 10,000        |
| msg size             | 1KB           |
| -------------------- | ------------- |
| consumers            | 3             |
| prefetch             | 100           |
| multi-ack            | every 10 msg  |
```
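To make the publisher & consumer settings concrete, here is a minimal
single-process sketch of one publisher & one consumer with the same
parameters (it assumes the Python `pika` client, a local node & a
queue named `bench`; the actual benchmark ran 3 separate publisher &
3 consumer clients, each with its own connection & channel):

```
import pika

PARAMS = pika.ConnectionParameters(host="localhost")
QUEUE = "bench"


def declare(channel):
    # Non-durable queue capped at 2,000,000 messages, as in the table above.
    channel.queue_declare(queue=QUEUE, durable=False,
                          arguments={"x-max-length": 2000000})


def publish(message_count=100000):
    connection = pika.BlockingConnection(PARAMS)
    channel = connection.channel()
    declare(channel)
    body = b"x" * 1024  # ~1KB message body
    for _ in range(message_count):
        channel.basic_publish(exchange="", routing_key=QUEUE, body=body)
    connection.close()


def consume():
    connection = pika.BlockingConnection(PARAMS)
    channel = connection.channel()
    declare(channel)
    channel.basic_qos(prefetch_count=100)  # prefetch 100

    unacked = {"count": 0}

    def on_message(ch, method, properties, body):
        unacked["count"] += 1
        if unacked["count"] >= 10:  # multi-ack every 10 messages
            ch.basic_ack(delivery_tag=method.delivery_tag, multiple=True)
            unacked["count"] = 0

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
    channel.start_consuming()
```

Acking with `multiple=True` every 10 deliveries matches the multi-ack
setting above while staying well within the prefetch window of 100.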
Worth mentioning that vm_memory_high_watermark_paging_ratio was set to
a very high value so that messages would not be paged to disk. When
messages are paged out, all other queue operations are blocked,
including all publishes and consumes.
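For reference, this is what that setting looks like in the classic
rabbitmq.config format used by RabbitMQ 3.6.x (the 0.99 value is only
illustrative; the exact ratio used is not recorded here):

```
[
  {rabbit, [
    %% Keep messages in memory instead of paging them to disk
    %% until memory use approaches the high watermark.
    {vm_memory_high_watermark_paging_ratio, 0.99}
  ]}
].
```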
More screenshots, RabbitMQ definitions, BOSH & CF manifests can be found
on the PR itself.
[#151499632]