memory leak #201


Merged
merged 2 commits into rsocket:0.5.x on Dec 8, 2016

Conversation

robertroeser
Member

Problem
There is a race condition that causes a SocketAdder not to be removed from the list of available sockets when its server is removed from the list of available servers. This has the side effect of creating a memory leak, because the SocketAdder will continually try to connect but fail.

Modifications
After a SocketAdder errors out five times, it no longer adds itself to the list of active factories.

Result
No more memory leak; after five errors the SocketAdder is removed from the list.
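The fix described above can be sketched roughly as follows. This is a minimal illustration only: `SocketAdder`, `onConnectError`, and `activeFactories` echo the names used in this discussion, but the code is hypothetical, not the actual rsocket implementation.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SocketAdderSketch {
    static final int MAX_ERRORS = 5;

    // Hypothetical stand-in for the real SocketAdder: it only tracks
    // how many consecutive connect attempts have failed.
    static class SocketAdder {
        int errors = 0;
    }

    // Called when a connection attempt fails: re-add the adder only
    // while it has failed fewer than MAX_ERRORS times.
    static boolean onConnectError(SocketAdder adder, List<SocketAdder> activeFactories) {
        if (++adder.errors < MAX_ERRORS) {
            activeFactories.add(adder);
            return true;
        }
        // After five errors the adder is dropped, breaking the endless
        // reconnect loop that previously leaked memory.
        return false;
    }

    public static void main(String[] args) {
        List<SocketAdder> active = new CopyOnWriteArrayList<>();
        SocketAdder adder = new SocketAdder();
        for (int i = 0; i < 6; i++) {
            active.remove(adder);           // adder leaves the list to attempt a connect
            onConnectError(adder, active);  // connect fails; maybe re-add
        }
        System.out.println("re-added: " + active.contains(adder)); // prints "re-added: false"
    }
}
```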

@stevegury
Member

I think this is a workaround for the memory leak; it should be OK to retry connecting indefinitely to an invalid IP address.

@NiteshKant
Contributor

Probably a slightly better approach is to check the factory's availability: if it isn't > 0.0, do not add it back to the active factories.

if (++errors < 5) {
    activeFactories.add(factory);
} else {
    logger.warn("Exception count greater than 5, not re-adding factory {}", factory.toString());
}

nit: greater than 4?
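The availability-based alternative suggested above might look roughly like this. The `Factory` interface and method names here are illustrative assumptions, not the actual rsocket API:

```java
import java.util.ArrayList;
import java.util.List;

public class AvailabilityCheckSketch {
    // Hypothetical stand-in for a ReactiveSocket client factory that
    // reports its availability in [0.0, 1.0].
    interface Factory {
        double availability();
    }

    // Re-add the factory to the active list only while it still reports
    // availability > 0.0; returns true if it was re-added.
    static boolean maybeReAdd(Factory factory, List<Factory> activeFactories) {
        if (factory.availability() > 0.0) {
            activeFactories.add(factory);
            return true;
        }
        // Availability reached 0.0 (e.g. driven down by a failure-detecting
        // decorator), so the factory is dropped instead of retried forever.
        return false;
    }

    public static void main(String[] args) {
        List<Factory> active = new ArrayList<>();
        Factory healthy = () -> 1.0;
        Factory dead = () -> 0.0;
        System.out.println(maybeReAdd(healthy, active)); // prints "true"
        System.out.println(maybeReAdd(dead, active));    // prints "false"
    }
}
```

Compared to a fixed error count, this ties the removal decision to whatever availability policy decorates the factory, which is what makes it configurable.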

@stevegury
Member

@NiteshKant the load balancer (most of the time) won't select the factory if its availability is 0; it will only do so if there are no other available factories.

The bug is that there's a period (while we're connecting to a host), when the factory is not in activeFactories and not yet in activeSockets. If the server is removed from discovery at that time, the factory will never be removed. This may lead to unbounded attempts to connect to that host.

I agree that this is a bug, but disagree that it is the cause of the memory leak.
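The window described above can be illustrated sequentially (a toy sketch with plain strings and lists standing in for the real factory and socket collections; it is not the rsocket code):

```java
import java.util.ArrayList;
import java.util.List;

public class RaceWindowSketch {
    public static void main(String[] args) {
        List<String> activeFactories = new ArrayList<>();
        List<String> activeSockets = new ArrayList<>();
        String factory = "host-a";

        activeFactories.add(factory);

        // 1. The load balancer picks the factory and starts connecting:
        //    it leaves activeFactories but has not yet joined activeSockets.
        activeFactories.remove(factory);

        // 2. Discovery removes host-a while the connect is in flight.
        //    The removal scans both lists and finds nothing to drop.
        boolean removed = activeFactories.remove(factory) || activeSockets.remove(factory);
        System.out.println("removed by discovery: " + removed); // prints "removed by discovery: false"

        // 3. The connect fails and the orphaned factory re-adds itself,
        //    producing the unbounded retries this PR caps at five errors.
        activeFactories.add(factory);
    }
}
```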

@NiteshKant
Contributor

@stevegury thanks for the explanation of the bug!

I was just wondering whether that is a better condition than checking against a magic error count of 5 :)
Since one would decorate the factory with ReactiveSocketClients.detectFailure(), hence changing its availability, it also makes this configurable.

Anyway, if this change looks good to you, please approve!

@robertroeser
Member Author

@NiteshKant I think we should merge this, and add an issue to address this properly.

@NiteshKant
Contributor

@robertroeser Agreed.

@NiteshKant NiteshKant merged commit 7f3cf71 into rsocket:0.5.x Dec 8, 2016
@stevegury
Member

I think the work around is acceptable for now.
👍

NiteshKant pushed a commit to NiteshKant/reactivesocket-java that referenced this pull request Dec 14, 2016