Split `PeerManager::handle_message` to avoid explicit `mem::drop` #2967
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

@@            Coverage Diff             @@
##             main    #2967      +/-   ##
==========================================
- Coverage   89.50%   89.50%   -0.01%
==========================================
  Files         117      117
  Lines       95146    95167      +21
  Branches    95146    95167      +21
==========================================
+ Hits        85164    85177      +13
- Misses       7770     7775       +5
- Partials    2212     2215       +3

☔ View full report in Codecov by Sentry.
Sure, makes sense to me.
lightning/src/ln/peer_handler.rs (outdated)

// Conducts all message processing that doesn't require us to hold the `peer_lock`.
//
// Returns the message back to allow for further subsequent processing using move semantics.
We should mention that the return value here means "please broadcast", not just "more processing".
We mention this in the docs for `handle_message` already, but happy to add it here, too.
Sure, but this is where we actually make the decision.
lightning/src/ln/peer_handler.rs (outdated)

// Conducts all message processing that requires us to hold the `peer_lock`.
//
// Returns the message back to allow for further subsequent processing using move semantics.
Maybe be explicit that this returns `None` if we've fully processed the message, but `Some` if it needs to be handled by `do_handle_message_without_peer_lock`.
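To make the convention under discussion concrete, here is a minimal sketch of the two-stage handoff at the call site. All names and types here are hypothetical stand-ins (`dispatch`, `step_holding_lock`, `step_without_lock`, a unit `Message`), not the actual rust-lightning signatures:

```rust
// Placeholder message type standing in for `wire::Message<...>`.
struct Message;

// `None` from the lock-holding step means the message was fully
// processed there; `Some` hands it on. `Some` from the lock-free step
// means "please broadcast to all other peers".
fn dispatch(message: Message) -> Option<Message> {
    let message = match step_holding_lock(message) {
        Some(msg) => msg,     // needs further, lock-free processing
        None => return None,  // fully processed while holding the lock
    };
    step_without_lock(message)
}

fn step_holding_lock(message: Message) -> Option<Message> { Some(message) }
fn step_without_lock(message: Message) -> Option<Message> { Some(message) }
```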
Force-pushed from 8a11db1 to a8e5f5c, including the following doc changes:

> git diff-tree -U2 8a11db101 a8e5f5cdb
diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs
index f0dae530f..ecbc1f254 100644
--- a/lightning/src/ln/peer_handler.rs
+++ b/lightning/src/ln/peer_handler.rs
@@ -1610,5 +1610,7 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
// Conducts all message processing that requires us to hold the `peer_lock`.
//
- // Returns the message back to allow for further subsequent processing using move semantics.
+ // Returns `None` if the message was fully processed and otherwise returns the message back via
+ // move semantics to allow it to be subsequently processed by
+ // `do_handle_message_without_peer_lock`.
fn do_handle_message_holding_peer_lock<'a>(
&self,
@@ -1704,5 +1706,5 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
// Conducts all message processing that doesn't require us to hold the `peer_lock`.
//
- // Returns the message back to allow for further subsequent processing using move semantics.
+ // Returns the message back if it needs to be broadcasted to all other peers.
fn do_handle_message_without_peer_lock<'a>(
&self,
message: wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>
) -> Result<Option<wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>>, MessageHandlingError> {
    let their_node_id = peer_lock.their_node_id.clone().expect("We know the peer's public key by the time we receive messages").0;
    let logger = WithContext::from(&self.logger, Some(their_node_id), None);

    let message = match self.do_handle_message_holding_peer_lock(peer_lock, message, &their_node_id, &logger)? {
        Some(processed_message) => processed_message,
q: Inside `holding_peer_lock` I didn't see any modification of the message, so why not pass a reference? It would make it clear that it's not being mutated and that the reference is the same one.
For one, I wanted to preserve the general move semantics pattern used by `handle_message`, but more importantly it lets us avoid having to clone fields, e.g., for `msg.features`.
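As a rough illustration of the clone-avoidance point, here is a sketch with hypothetical, simplified types (`Init`, `InitFeatures`, `handle_by_value`, `consume_features` are stand-ins, not the actual rust-lightning definitions). Taking the message by value lets the handler move `features` out of it; a shared reference would force a clone, since Rust does not allow moving out of data behind `&Message`:

```rust
// Hypothetical stand-ins for the real message and features types.
struct InitFeatures(Vec<u8>);
struct Init { features: InitFeatures }

enum Message {
    Init(Init),
    // ... other variants elided
}

// By-value parameter: we can move `features` out without a clone.
fn handle_by_value(message: Message) {
    match message {
        Message::Init(init) => consume_features(init.features),
    }
}

// The downstream consumer takes ownership; with a `&Message` parameter
// above, `init.features` would have to be cloned to call this.
fn consume_features(_features: InitFeatures) {}
```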
Force-pushed from a8e5f5c to f2ecf8d with the following changes:

> git diff-tree -U2 a8e5f5cdb f2ecf8db1
diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs
index ecbc1f254..0188130d1 100644
--- a/lightning/src/ln/peer_handler.rs
+++ b/lightning/src/ln/peer_handler.rs
@@ -1595,6 +1595,6 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
peer_mutex: &Mutex<Peer>,
peer_lock: MutexGuard<Peer>,
- message: wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>
- ) -> Result<Option<wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>>, MessageHandlingError> {
+ message: wire::Message<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>
+ ) -> Result<Option<wire::Message<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>>, MessageHandlingError> {
let their_node_id = peer_lock.their_node_id.clone().expect("We know the peer's public key by the time we receive messages").0;
let logger = WithContext::from(&self.logger, Some(their_node_id), None);
@@ -1610,14 +1610,13 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
// Conducts all message processing that requires us to hold the `peer_lock`.
//
- // Returns `None` if the message was fully processed and otherwise returns the message back via
- // move semantics to allow it to be subsequently processed by
- // `do_handle_message_without_peer_lock`.
+ // Returns `None` if the message was fully processed and otherwise returns the message back to
+ // allow it to be subsequently processed by `do_handle_message_without_peer_lock`.
fn do_handle_message_holding_peer_lock<'a>(
&self,
mut peer_lock: MutexGuard<Peer>,
- message: wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>,
+ message: wire::Message<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>,
their_node_id: &PublicKey,
logger: &WithContext<'a, L>
- ) -> Result<Option<wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>>, MessageHandlingError>
+ ) -> Result<Option<wire::Message<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>>, MessageHandlingError>
{
peer_lock.received_message_since_timer_tick = true;
@@ -1710,8 +1709,8 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
&self,
peer_mutex: &Mutex<Peer>,
- message: wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>,
+ message: wire::Message<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>,
their_node_id: &PublicKey,
logger: &WithContext<'a, L>
- ) -> Result<Option<wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>>, MessageHandlingError>
+ ) -> Result<Option<wire::Message<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>>, MessageHandlingError>
{
if is_gossip_msg(message.type_id()) {
@@ -1916,5 +1915,5 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
}
- fn forward_broadcast_msg(&self, peers: &HashMap<Descriptor, Mutex<Peer>>, msg: &wire::Message<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>, except_node: Option<&PublicKey>) {
+ fn forward_broadcast_msg(&self, peers: &HashMap<Descriptor, Mutex<Peer>>, msg: &wire::Message<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>, except_node: Option<&PublicKey>) {
match msg {
wire::Message::ChannelAnnouncement(ref msg) => {
@@ -2308,5 +2307,5 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
// about to disconnect the peer and do it after we finish
// processing most messages.
- let msg = msg.map(|msg| wire::Message::<<<CMH as core::ops::Deref>::Target as wire::CustomMessageReader>::CustomMessage>::Error(msg));
+ let msg = msg.map(|msg| wire::Message::<<<CMH as Deref>::Target as wire::CustomMessageReader>::CustomMessage>::Error(msg));
peers_to_disconnect.insert(node_id, msg);
},
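The remaining hunks simply shorten the fully qualified `core::ops::Deref` paths, relying on `Deref` already being in scope via a `use` item in this module. A minimal sketch of the equivalence (hypothetical function names, standard library only):

```rust
use core::ops::Deref;

// With `Deref` imported, these two bounds are identical; the shorter
// form is what the force-push switches to.
fn verbose<T: core::ops::Deref<Target = str>>(t: T) -> usize { t.len() }
fn concise<T: Deref<Target = str>>(t: T) -> usize { t.len() }

fn main() {
    assert_eq!(verbose(String::from("abc")), concise(String::from("abc")));
}
```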
Previously, `handle_message` was a single large method consisting of two logical parts: one modifying the peer state and hence requiring us to hold the `peer_lock` `MutexGuard`, and, after calling `mem::drop(peer_lock)`, the remainder, which not only does *not* require holding the `MutexGuard`, but relies on it being dropped to avoid double-locking.

However, the `mem::drop` was easily overlooked, making reasoning about lock orders etc. a headache. Here, we therefore have `handle_message` call two sub-methods reflecting the two logical parts, allowing us to avoid the explicit `mem::drop`, while at the same time making it less error-prone due to the two methods' signatures.
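As a sketch of the signature-driven safety the commit message describes (simplified, hypothetical types and return values standing in for the real `Peer`, `wire::Message`, and `Result`-based signatures): because the first sub-method takes the `MutexGuard` by value, the lock is necessarily released by the time the second sub-method runs, with no `mem::drop` to forget.

```rust
use std::sync::{Mutex, MutexGuard};

struct Peer { received_message_since_timer_tick: bool }
struct Message; // stand-in for `wire::Message<...>`
struct PeerManager;

impl PeerManager {
    // Consumes `peer_lock`, so the guard is dropped (and the lock
    // released) when this method returns; it cannot leak to the caller.
    fn do_handle_message_holding_peer_lock(
        &self, mut peer_lock: MutexGuard<Peer>, message: Message,
    ) -> Option<Message> {
        peer_lock.received_message_since_timer_tick = true;
        Some(message) // `Some`: hand the message on for lock-free handling
    } // `peer_lock` dropped here

    // No guard parameter: the signature itself proves we no longer hold
    // the lock, so taking other locks here cannot double-lock the peer.
    fn do_handle_message_without_peer_lock(&self, message: Message) -> Option<Message> {
        Some(message) // `Some`: broadcast to all other peers
    }

    fn handle_message(&self, peer_mutex: &Mutex<Peer>, message: Message) -> Option<Message> {
        let peer_lock = peer_mutex.lock().unwrap();
        let message = self.do_handle_message_holding_peer_lock(peer_lock, message)?;
        self.do_handle_message_without_peer_lock(message)
    }
}
```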