[CI] Add integration tests for Net::LDAP#open behavior #133
Conversation
The nested query should match "user2", but with Net::LDAP#open, we see: <"user3"> expected but was <"user2">. And likewise with the outer search: <["user1", "user2"]> expected but was <["user1", "user3"]>. What's happening is that the nested query is being sent after the server has already sent more data to be read for the outer search, so when the inner search reads, it gets that result. And when it comes time for the outer search to handle the second result, it reads the next result off the wire, which is the result of the inner search. I was also able to get an infinite loop just by *always* performing an inner search instead of only searching on the first result.
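For reference, the failing pattern looks roughly like the sketch below. This is not the actual test case: the host, credentials, base DN, and the user1..user3 entries are assumptions made purely for illustration.

```ruby
require 'net/ldap'

# Hypothetical connection details; only the nested-search shape matters here.
ldap = Net::LDAP.new(host: 'localhost', port: 389,
                     auth: { method:   :simple,
                             username: 'cn=admin,dc=example,dc=com',
                             password: 'secret' })

ldap.open do |conn|
  outer = []
  conn.search(base:   'dc=example,dc=com',
              filter: Net::LDAP::Filter.eq('uid', 'user*')) do |entry|
    outer << entry.uid.first
    # Inner search issued while the outer search still has unread results
    # queued on the same socket; its read can consume the outer search's PDU.
    conn.search(base:   'dc=example,dc=com',
                filter: Net::LDAP::Filter.eq('uid', 'user2')) do |inner|
      puts "inner: #{inner.uid.first}"
    end
  end
  puts "outer: #{outer.inspect}"
end
```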
By using the changes in #131, I was able to get a dump of the PDUs sent from the server in response to the failing test case's queries: nothing out of the norm, meaning it looks like it's related to our handling, as I described in f16a4ff.

There are multiple considerations here: what the client side should allow the user to do, what the client side should handle from the server, what the server does in this situation, and possibly others I've not considered. We could easily add logic to raise an error if someone tries to search while a search is still in progress, but should we? @jch mentioned queueing nested searches, connection pooling, or other methods, which all sound reasonable but are far more complex. Do servers properly handle serving multiple queries at the same time over the same connection? One thing I noticed is that the

@jch @schaary @ruby-ldap/ruby-ldap-admins could use your thoughts on this. Still working through the problem; we need a solution that balances flexibility for the user, spec adherence, and practical application.

Speaking of practical: let's be honest, it's not really practical for someone to perform a subquery for every entry found: this is inviting O(n+1) or O(n*m) performance characteristics. We can't prevent our users from shooting themselves in the foot, but we can prevent the gun from firing accidentally in the wrong direction. Thoughts?
I don't think we should raise an exception, because I saw nothing in the RFC to suggest that it isn't supported. I think the key is to map Message IDs between requests and responses. I'll look at your dumps and failing test more closely.
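To sketch that idea (this is not the library's implementation; the PduDemux class, the injected reader, and the pdu.message_id accessor are assumptions for illustration), reads from the socket could be demultiplexed into per-message-ID queues:

```ruby
# Rough sketch of message-ID demultiplexing, not net-ldap's actual code.
class PduDemux
  def initialize(reader)
    @reader = reader                                  # callable returning the next PDU from the socket
    @queues = Hash.new { |hash, key| hash[key] = [] } # message_id => buffered PDUs
  end

  # Return the next PDU for message_id, buffering PDUs that belong to other
  # outstanding requests instead of handing them to the wrong caller.
  def next_pdu_for(message_id)
    return @queues[message_id].shift unless @queues[message_id].empty?

    loop do
      pdu = @reader.call
      return pdu if pdu.message_id == message_id
      @queues[pdu.message_id] << pdu                  # response for another request
    end
  end
end
```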
I looked at how the Apache LDAP API handles multiple requests over the same socket. Download link. I didn't run the code, but tracing through the documentation and method calls shows that the library does support executing multiple requests. I'm not proposing we follow this implementation, but it is interesting to see that multiple requests are supported elsewhere.
Shouldn't have unread messages but provide a way to measure it anyway.
Prioritize reads from queue by message ID
# Exceptional case: a search should never finish with unread messages still
# queued for this message_id, but surface it via instrumentation if it does.
unless messages.empty?
  instrument "search_messages_unread.net_ldap_connection",
             message_id: message_id, messages: messages
end
To double check that I understand this: this clears PDUs we've queue_read from the socket for this particular search, keyed by message_id, that weren't consumed. I think this is good cleanup, but when do we hit this scenario? Is this an exceptional case?
Yup, you understand it perfectly. It shouldn't happen; it would be an exceptional case. Didn't feel right to blow up, though.
A contrived failure would be if we didn't follow the spec and reused a message_id before that request was finished. Maybe some additional comments indicating this case shouldn't be reached?
@jch added a brief comment.
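As an aside, anyone wanting to watch for that exceptional case could subscribe to the event emitted above. A minimal sketch, assuming the connection is configured with ActiveSupport::Notifications as its instrumentation service; only the event name comes from the diff, the rest is illustrative:

```ruby
require 'active_support/notifications'
require 'net/ldap'

# Warn whenever a search finishes with unread responses still queued for its
# message ID; this should indicate a bug rather than normal operation.
ActiveSupport::Notifications.subscribe('search_messages_unread.net_ldap_connection') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  warn "unread LDAP messages for message_id=#{event.payload[:message_id]}: " \
       "#{Array(event.payload[:messages]).size}"
end

ldap = Net::LDAP.new(host: 'localhost', port: 389,
                     instrumentation_service: ActiveSupport::Notifications)
```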
Conflicts:
	lib/net/ldap/connection.rb
	test/test_ldap_connection.rb
🚢
[CI] Add integration tests for Net::LDAP#open behavior
I'm working through some ill-defined behavior (discussed here) with Net::LDAP#open and wanted to reproduce the problematic behavior in the most minimal way possible. This PR just starts by adding some basic integration tests for Net::LDAP#open behavior (and not even good integration tests, at that). I'll be following up with a reproduction of the bad behavior (hopefully).
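For a rough idea of the shape these tests take, here is a minimal sketch (not the tests from this PR; the test framework, server details, and fixture entries are assumptions):

```ruby
require 'minitest/autorun'
require 'net/ldap'

class TestLdapOpenIntegration < Minitest::Test
  def setup
    # Hypothetical test directory; host, credentials, and base are assumptions.
    @ldap = Net::LDAP.new(host: 'localhost', port: 389,
                          auth: { method:   :simple,
                                  username: 'cn=admin,dc=example,dc=com',
                                  password: 'secret' },
                          base: 'dc=example,dc=com')
  end

  def test_search_inside_open_reuses_one_connection
    entries = []
    @ldap.open do |ldap|
      ldap.search(filter: Net::LDAP::Filter.eq('uid', 'user*')) do |entry|
        entries << entry.uid.first
      end
    end
    refute_empty entries
  end
end
```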
cc @jch