Handle query_channel_range gossip queries #828


Merged — 11 commits into lightningdevkit:main from reply_channel_range, Mar 17, 2021

Conversation

bmancini55 (Contributor)

Finally back in the saddle on this. This PR implements the NetGraphMsgHandler method for processing inbound query_channel_range messages. I'm working on another PR to handle query_short_channel_ids as well.

Notes:

  • This implementation enqueues each reply message as an event. Right now this will generate at most a few events per query given the graph size, but it could generate a large number of events as the graph grows. I'm considering switching to a polling model like the init_sync code in do_attempt_write_data to be more aware of the write buffer. I wanted opinions before going down that path, though.
  • Sequencing metadata in replies (first_blocknum, number_of_blocks) is still necessary to be spec compliant even though it's ugly. I added notes on overflow safety; I'm happy to make that more explicit if that's preferable.

Creates a MessageSendEvent for sending a reply_channel_range message.
This event will be fired when handling inbound query_channel_range
messages in the NetGraphMessageHandler.
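
For illustration, a sketch of roughly what such an event variant could look like (the variant and field names here are assumptions based on the commit message, not copied from the code):

```rust
use bitcoin::secp256k1::PublicKey;
use lightning::ln::msgs::ReplyChannelRange;

/// A sketch of the kind of event variant the commit adds (names assumed
/// from the PR description, not the verbatim definition in util/events.rs).
enum MessageSendEvent {
    SendReplyChannelRange {
        /// The peer that sent the query and should receive the reply.
        node_id: PublicKey,
        /// The reply_channel_range message to enqueue.
        msg: ReplyChannelRange,
    },
    // ...other variants elided
}
```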
Overflow safe calculation of the ending block number for a query. Can
be used when processing the query.
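
Sketched below is one overflow-safe way to compute the end block (the helper name and saturating behavior are assumptions for illustration): first_blocknum and number_of_blocks are both u32 fields in query_channel_range, so naive addition can wrap.

```rust
/// first_blocknum and number_of_blocks are u32 fields of query_channel_range,
/// so their sum can exceed u32::MAX; saturate instead of wrapping around.
fn end_blocknum(first_blocknum: u32, number_of_blocks: u32) -> u32 {
    first_blocknum.checked_add(number_of_blocks).unwrap_or(u32::MAX)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn end_blocknum_saturates_on_overflow() {
        assert_eq!(end_blocknum(1_000, 100), 1_100);
        assert_eq!(end_blocknum(u32::MAX, 5), u32::MAX);
    }
}
```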
Util converts parts into u64 SCID
Util extracts block height from u64 SCID

Both will be used when processing gossip_query messages.
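
As a rough sketch of the two helpers (signatures are illustrative; the PR's scid_utils module may differ), using the BOLT 7 layout of 3 bytes block height, 3 bytes transaction index, and 2 bytes output index:

```rust
/// Builds a short_channel_id from its parts. Inputs are assumed to already
/// fit their fields; a validating version is discussed later in the PR.
pub fn scid_from_parts(block: u64, tx_index: u64, vout_index: u64) -> u64 {
    (block << 40) | (tx_index << 16) | vout_index
}

/// Extracts the block height from the upper 3 bytes of a short_channel_id.
pub fn block_from_scid(short_channel_id: u64) -> u32 {
    (short_channel_id >> 40) as u32
}
```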
Initial implementation of handling query_channel_range messages in
NetGraphMsgHandler. Enqueues a sequence of reply messages in the pending
message events buffer.
bmancini55 force-pushed the reply_channel_range branch from c2d0aec to 44ba52c on March 5, 2021 at 22:14

codecov bot commented Mar 5, 2021

Codecov Report

Merging #828 (7b842b6) into main (4894d52) will increase coverage by 1.38%.
The diff coverage is 92.67%.


@@            Coverage Diff             @@
##             main     #828      +/-   ##
==========================================
+ Coverage   90.97%   92.35%   +1.38%     
==========================================
  Files          48       51       +3     
  Lines       26452    32827    +6375     
==========================================
+ Hits        24064    30318    +6254     
- Misses       2388     2509     +121     
Impacted Files Coverage Δ
lightning/src/ln/peer_handler.rs 44.22% <0.00%> (-0.60%) ⬇️
lightning/src/util/events.rs 23.89% <0.00%> (+0.47%) ⬆️
lightning/src/ln/channel.rs 90.92% <75.00%> (+3.08%) ⬆️
lightning/src/routing/network_graph.rs 91.86% <98.22%> (+0.83%) ⬆️
lightning/src/ln/channelmanager.rs 88.62% <100.00%> (+3.33%) ⬆️
lightning/src/ln/msgs.rs 88.76% <100.00%> (+0.13%) ⬆️
lightning/src/util/scid_utils.rs 100.00% <100.00%> (ø)
lightning/src/ln/onchaintx.rs 86.98% <0.00%> (-7.34%) ⬇️
lightning/src/util/macro_logger.rs 87.09% <0.00%> (-1.24%) ⬇️
background-processor/src/lib.rs 100.00% <0.00%> (ø)
... and 15 more


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4894d52...7b842b6.

@TheBlueMatt (Collaborator) left a comment

Some pretty trivial nits, but overall looks good. Will get a second reviewer this week.

This implementation enqueues each reply message as an event. Right now this will generate at most a few events per query given the graph size, but it could generate a large number of events as the graph grows. I'm considering switching to a polling model like the init_sync code in do_attempt_write_data to be more aware of the write buffer. I wanted opinions before going down that path, though.

Hmm, right, ~5 messages for ~32KB is definitely not worth doing that for, but we will need it to respond to ReplyChannelRange messages, which could be a lot of additional messages and we can't block channel messages while sending them. For that we definitely need to pull the InitSyncTracker struct out into network_graph.rs and make it an associated type for the RoutingMessageHandler trait. I leave it up to you if you want to take that complexity here and also apply it to this code or not.
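
For context, the InitSyncTracker referenced here lives in peer_handler.rs and tracks how far an initial routing sync has progressed, so that do_attempt_write_data can pull more gossip only when the outbound buffer has room. A rough sketch of its shape (approximate, not the verbatim definition):

```rust
use bitcoin::secp256k1::PublicKey;

/// Approximate shape of the per-peer initial-sync state (a sketch, not the
/// exact enum in peer_handler.rs).
enum InitSyncTracker {
    NoSyncRequested,
    /// Syncing channel announcements; remembers the last short_channel_id
    /// sent so the next write pass can resume from there.
    ChannelsSyncing(u64),
    /// Channel sync finished; syncing node announcements from this node id.
    NodesSyncing(PublicKey),
}
```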

Modifies scid_from_parts to use u64 inputs allowing untruncated
validation. Adds public constants for limits.
Refactors validation and short_channel_id construction to use
the new scid_from_parts function.
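
A hedged sketch of what untruncated validation with u64 inputs can look like (the constant names and error type are assumptions): oversized values reach the range check intact instead of being silently truncated by a narrower parameter type.

```rust
// Field limits implied by the BOLT 7 SCID layout (names illustrative).
pub const MAX_SCID_BLOCK: u64 = 0x00ff_ffff;    // 3-byte block height
pub const MAX_SCID_TX_INDEX: u64 = 0x00ff_ffff; // 3-byte transaction index
pub const MAX_SCID_VOUT_INDEX: u64 = 0xffff;    // 2-byte output index

pub fn scid_from_parts(block: u64, tx_index: u64, vout_index: u64) -> Result<u64, ()> {
    // With u64 inputs, out-of-range values reach this check untruncated.
    if block > MAX_SCID_BLOCK || tx_index > MAX_SCID_TX_INDEX || vout_index > MAX_SCID_VOUT_INDEX {
        return Err(());
    }
    Ok((block << 40) | (tx_index << 16) | vout_index)
}
```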
Refactor to use an enumerator in NetGraphMsgHandler::handle_query_channel_range
bmancini55 (Contributor, Author) commented Mar 9, 2021

Thanks for the review, I updated with nit fixes!

but we will need it to respond to ReplyChannelRange messages, which could be a lot of additional messages and we can't block channel messages while sending them. For that we definitely need to pull the InitSyncTracker struct out into network_graph.rs and make it an associated type for the RoutingMessageHandler trait. I leave it up to you if you want to take that complexity here and also apply it to this code or not.

Do you mean reply to QueryShortChannelIds? If so, I was thinking the same thing regarding InitSyncTracker, thanks for confirming. I'd like to defer that complexity to the PR that addresses QueryShortChannelIds.

Clarifies u32 max value used as the default.
@ariard left a comment

Overall sounds good, mostly minor comments. I exercised the test coverage and couldn't find holes.

This implementation enqueues each reply message as an event. Right now this will generate at most a few events per query given the graph size, but it could generate a large number of events as the graph grows. I'm considering switching to a polling model like the init_sync code in do_attempt_write_data to be more aware of the write buffer. I wanted opinions before going down that path, though.

What are you thinking of, more exactly? I would say the rationale for the direction is justified, though I can't identify the init_sync code you're mentioning.

@TheBlueMatt (Collaborator)

Do you mean reply to QueryShortChannelIds? If so, I was thinking the same thing regarding InitSyncTracker, thanks for confirming. I'd like to defer that complexity to the PR that addresses QueryShortChannelIds.

Yes, and, ok, sounds good. Can you add a comment noting the amount of memory that can be used in these messages and at what point we should change the logic to avoid it?

bmancini55 (Contributor, Author) commented Mar 11, 2021

@ariard thanks for the review! Will clean up those points.

I would say the rationale for the direction is justified, though I can't identify the init_sync code you're mentioning.

I was referring to PeerHandler::do_attempt_write_data, where the InitSyncTracker enum is used to track the state of an initial sync. I was planning to apply this (or something similar) to process QueryShortChannelIds in another PR.

Modifies NetGraphMsgHandler::handle_query_channel_range to use a constant
max value in replies. Modifies tests to generate 8000 channels instead
of making this value configurable.
bmancini55 force-pushed the reply_channel_range branch from 40ef08d to 81ec069 on March 16, 2021 at 21:31
Cleans up NetGraphMsgHandler::handle_query_channel_range
Modify NetGraphMsgHandler::handle_query_channel_range to always use
first_blocknum=0 in replies. This is spec compliant after changes that
make sequence completion explicit using sync_complete.
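
A sketch of the simplified reply construction described above (the helper and batching are illustrative; field names follow the BOLT 7 reply_channel_range message): every reply covers the whole queried range starting at block 0, and only the final reply sets sync_complete.

```rust
use bitcoin::hash_types::BlockHash;
use lightning::ln::msgs::ReplyChannelRange;

/// Builds one reply per batch of SCIDs (batching strategy is illustrative).
fn build_replies(chain_hash: BlockHash, query_end: u32, mut batches: Vec<Vec<u64>>) -> Vec<ReplyChannelRange> {
    // The spec requires at least one reply even when no channels matched.
    if batches.is_empty() { batches.push(vec![]); }
    let last = batches.len() - 1;
    batches.into_iter().enumerate().map(|(i, short_channel_ids)| ReplyChannelRange {
        chain_hash,
        first_blocknum: 0,           // always 0 after the sync_complete change
        number_of_blocks: query_end, // the whole range covered by the query
        sync_complete: i == last,    // explicit completion signal
        short_channel_ids,
    }).collect()
}
```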
bmancini55 force-pushed the reply_channel_range branch from 81ec069 to 7b842b6 on March 16, 2021 at 23:38
@ariard left a comment

Code Review ACK 7b842b6

Thanks for simplifying sequencing of handle_query_channel_range. Reading the spec again, this new way to fulfill first_blocknum/number_of_blocks is compliant, all good.

TheBlueMatt merged commit 8799a2a into lightningdevkit:main on Mar 17, 2021
bmancini55 deleted the reply_channel_range branch on March 17, 2021 at 20:54