
Fix recovery channel metrics that are sent with realTag without offset #339


Conversation


@Peshka1502 commented Jan 5, 2018

Proposed Changes

Scenario: consuming messages from RabbitMQ with the automatic recovery feature enabled and autoAck=false, using com.rabbitmq.client.impl.StandardMetricsCollector as the metrics collector.

After a channel recovery I observed a memory leak. Analyzing the heap dump, I found a huge number of longs stored in the following field:
com.rabbitmq.client.impl.AbstractMetricsCollector$connectionState$channelState$unackedMessageDeliveryTags

After a channel recovery, com.rabbitmq.client.impl.recovery.RecoveryAwareChannelN resets the deliveryTag and stores an activeDeliveryTagOffset. Processing is then done with deliveryTag + activeDeliveryTagOffset, and this tag is stored in unackedMessageDeliveryTags by the consumedMessage method of AbstractMetricsCollector. But when the basicAck is sent, the metrics collector is called with the realTag (without the offset), so it tries to remove a different deliveryTag from unackedMessageDeliveryTags and the recorded one never gets cleared.

The proposed solution is to call the AbstractMetricsCollector.basicAck method with the proper deliveryTag (the one including the offset) from the RecoveryAwareChannelN class; the same applies to basicNack and basicReject.
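
Here is a minimal, self-contained sketch of the bookkeeping mismatch (illustration only, not the client's actual code): the tag recorded on delivery includes the recovery offset, so removing the raw realTag on ack never clears the entry.

```java
import java.util.HashSet;
import java.util.Set;

// Illustration only: mimics the unackedMessageDeliveryTags bookkeeping,
// not the real AbstractMetricsCollector/RecoveryAwareChannelN classes.
public class DeliveryTagOffsetLeakSketch {

    public static void main(String[] args) {
        Set<Long> unackedMessageDeliveryTags = new HashSet<>();
        long activeDeliveryTagOffset = 100; // offset installed after a channel recovery
        long realTag = 1;                   // tag as sent on the wire after recovery

        // On delivery, the metrics collector records the offset tag.
        unackedMessageDeliveryTags.add(realTag + activeDeliveryTagOffset);

        // Current behaviour: the ack path reports the realTag (no offset),
        // so the recorded entry is never removed and the set grows forever.
        unackedMessageDeliveryTags.remove(realTag);
        System.out.println("after ack with realTag: " + unackedMessageDeliveryTags); // [101]

        // Proposed behaviour: report the same offset tag that was recorded.
        unackedMessageDeliveryTags.remove(realTag + activeDeliveryTagOffset);
        System.out.println("after ack with offset tag: " + unackedMessageDeliveryTags); // []
    }
}
```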

Types of Changes

What types of changes does your code introduce to this project?
Put an x in the boxes that apply

  • Bugfix (non-breaking change which fixes issue #NNNN)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation (correction or otherwise)
  • Cosmetics (whitespace, appearance)

Checklist

Put an x in the boxes that apply. You can also fill these out after creating
the PR. If you're unsure about any of them, don't hesitate to ask on the
mailing list. We're here to help! This is simply a reminder of what we are
going to look for before merging your code.

  • I have read the CONTRIBUTING.md document
  • I have signed the CA (see https://cla.pivotal.io/sign/rabbitmq)
  • All tests pass locally with my changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have added necessary documentation (if appropriate)
  • Any dependent changes have been merged and published in related repositories

Further Comments

If you want to reproduce this issue (a minimal setup sketch follows the steps):

  1. Enable the automatic recovery feature and set autoAck = false
  2. Start consuming messages
  3. Stop the RabbitMQ cluster
  4. Start the RabbitMQ cluster and let the channel recover
  5. Continue consuming messages
  6. Check that com.rabbitmq.client.impl.AbstractMetricsCollector$connectionState$channelState$unackedMessageDeliveryTags still contains deliveryTags that have already been ack'ed.
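
For steps 1 and 2, something along these lines should work (host and queue name are placeholders; connection/channel cleanup is omitted). The leaked unackedMessageDeliveryTags set is internal to AbstractMetricsCollector, so step 6 is checked with a heap dump rather than through the public metrics getters.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import com.rabbitmq.client.impl.StandardMetricsCollector;

import java.io.IOException;

public class RecoveryMetricsRepro {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                          // placeholder broker address
        factory.setAutomaticRecoveryEnabled(true);             // step 1: automatic recovery on
        factory.setMetricsCollector(new StandardMetricsCollector());

        Connection connection = factory.newConnection();
        final Channel channel = connection.createChannel();
        channel.queueDeclare("test-queue", true, false, false, null);

        // Step 2: consume with autoAck = false and ack every delivery.
        channel.basicConsume("test-queue", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                channel.basicAck(envelope.getDeliveryTag(), false);
            }
        });

        // Steps 3-5: restart the broker so the channel recovers, keep publishing
        // and consuming, then take a heap dump for step 6.
        Thread.sleep(Long.MAX_VALUE);
    }
}
```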

Best regards,
Taras

@pivotal-issuemaster

@Peshka1502 Please sign the Contributor License Agreement!


@michaelklishin
Contributor

Thanks, this seems reasonable. Please sign the CLA and we will QA this.

@pivotal-issuemaster

@Peshka1502 Thank you for signing the Contributor License Agreement!

@acogoluegnes acogoluegnes self-assigned this Jan 8, 2018
@acogoluegnes acogoluegnes added this to the 5.1.2 milestone Jan 8, 2018
@acogoluegnes acogoluegnes changed the base branch from master to 5.1.x-stable January 9, 2018 09:41
@acogoluegnes acogoluegnes changed the base branch from 5.1.x-stable to master January 9, 2018 09:42
@acogoluegnes acogoluegnes merged commit 9986476 into rabbitmq:master Jan 9, 2018
@acogoluegnes
Contributor

@Peshka1502 I added a test to check the fix, thanks!

@Peshka1502
Author

@acogoluegnes Great, thank you! Can this fix also be applied for the 4.x series?

@acogoluegnes
Contributor

acogoluegnes commented Jan 9, 2018

@Peshka1502 Yes, I'll apply the patch to 4.4.x and release a 4.4.2 RC soon after.

acogoluegnes added a commit that referenced this pull request Jan 9, 2018
acogoluegnes added a commit that referenced this pull request Jan 9, 2018
@acogoluegnes acogoluegnes modified the milestones: 5.1.2, 4.4.2 Jan 9, 2018