Don't use process dictionary for bump_reduce_memory_use message #1393

Merged
merged 1 commit into from
Nov 2, 2017

Conversation

lukebakken
Collaborator

@lukebakken lukebakken commented Oct 12, 2017

Alternative for #1388 that does not use process dictionary.

@lukebakken lukebakken changed the title WIP: Use rabbit_queue_behaviour and callback Use rabbit_queue_behaviour and callback Oct 12, 2017
@lukebakken lukebakken requested a review from hairyhum October 12, 2017 13:32
@hairyhum
Contributor

This will change the rabbit_backing_queue behaviour and cannot go to a patch release. That's why I used the process dictionary.

@lukebakken
Collaborator Author

OK I'll re-target it for master once rabbitmq-server-batch-betas is merged 😄

@gerhard gerhard force-pushed the rabbitmq-server-batch-betas branch from 786df6a to c574fc5 Compare October 18, 2017 16:35
@lukebakken lukebakken force-pushed the rabbitmq-server-batch-betas-lrb branch 2 times, most recently from d675126 to 60d65c0 Compare November 1, 2017 16:27
@lukebakken lukebakken changed the base branch from rabbitmq-server-batch-betas to master November 1, 2017 16:27
handle_info(bump_reduce_memory_use, State = #q{backing_queue       = BQ,
                                               backing_queue_state = BQS0}) ->
    BQS1 = BQ:handled_bump_reduce_memory_use(BQS0),
    noreply(State#q{backing_queue_state = BQ:resume(BQS1)});
Contributor

This code calls two callbacks, one of which just updates the state. Should it be a single callback instead?

@lukebakken lukebakken force-pushed the rabbitmq-server-batch-betas-lrb branch 3 times, most recently from b84f09c to 97352a7 Compare November 1, 2017 18:03
@lukebakken
Collaborator Author

@hairyhum thanks for noticing that the new callback is not necessary.

@lukebakken lukebakken changed the title Use rabbit_queue_behaviour and callback Don't use process dictionary for bump_reduce_memory_use message Nov 1, 2017
@hairyhum
Contributor

hairyhum commented Nov 2, 2017

The callback is actually necessary, because not every resume means a bump was received; so there can be another callback, which sets the flag and does the resume.
The flag here ensures that there is only one bump message in the mailbox at a time, so the mailbox is not polluted. Every time the queue process handles the bump message it should resume, but not every time it resumes should it clear the flag.
If there is another way to guarantee a single bump message in the mailbox at a time, that could be used instead.
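The flag-based deduplication described in the comment above could be sketched roughly as follows. This is an illustrative sketch only, not the actual rabbitmq-server code: the module, record, and function names (`#state{}`, `waiting_bump`, `maybe_send_bump/1`, `resume/1`) are hypothetical.

```erlang
%% Hypothetical sketch: a boolean flag in the process state ensures at most
%% one bump_reduce_memory_use message sits in the mailbox at a time.

%% Called wherever a bump should be scheduled:
maybe_send_bump(State = #state{waiting_bump = true}) ->
    %% A bump message is already pending; don't add another to the mailbox.
    State;
maybe_send_bump(State = #state{waiting_bump = false}) ->
    self() ! bump_reduce_memory_use,
    State#state{waiting_bump = true}.

%% Handling the message clears the flag and resumes; a resume triggered by
%% anything other than this message would leave the flag untouched.
handle_info(bump_reduce_memory_use, State) ->
    {noreply, resume(State#state{waiting_bump = false})}.
```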

lukebakken added a commit to rabbitmq/rabbitmq-common that referenced this pull request Nov 2, 2017
This can be used to handle generic messages that the parent gen_server2 wishes to pass to backing queues. Initially used for the bump_reduce_memory_use message.

Part of rabbitmq/rabbitmq-server#1393
Alternative for #1388 that does not use process dictionary.

Requires rabbitmq/rabbitmq-common#228

Fix waiting_bump values
@lukebakken lukebakken force-pushed the rabbitmq-server-batch-betas-lrb branch from dd20ef7 to dff1d20 Compare November 2, 2017 14:49
@lukebakken
Copy link
Collaborator Author

Every time the queue process handles the bump message it should resume, but not every time it resumes should it clear the flag.

@hairyhum got it. Thanks again for the careful reviews.

@lukebakken lukebakken merged commit e811e30 into master Nov 2, 2017
@lukebakken lukebakken deleted the rabbitmq-server-batch-betas-lrb branch November 2, 2017 18:22