Sanitize and document incoming HTLC cltv_expiry handling #579
Conversation
Following up on this one offline, though note the full_stack_target is mad because it was (apparently) relying on sending bogus HTLCs. Happy to look into this for you if you want.
Seems like a fair clarification + check :)
Fixed full_stack's demo case for you at TheBlueMatt@83341b9; please squash that into the relevant commit.
Force-pushed from ead89eb to f8e0b84
Thanks for the review and the full_stack fix; please wait before taking it.
@TheBlueMatt @naumenkogs, I think we should double CLTV_CLAIM_BUFFER. As we realized with Gleb, under time-dilation attacks this timelock is the minimal one exploitable by an attacker: if anyone can pin you 6 blocks back in the past, your counterparty would be able to steal the HTLC via the timeout path after you reveal the preimage offchain.
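To make the time-dilation arithmetic concrete, here is a minimal standalone sketch (not from the PR; the constant value follows this thread, and remaining_claim_margin is a hypothetical helper):

// Minimal sketch: how many safety blocks remain once an attacker has pinned
// our chain view `dilation` blocks in the past.
const CLTV_CLAIM_BUFFER: u32 = 6; // value per this thread

fn remaining_claim_margin(dilation: u32) -> u32 {
    CLTV_CLAIM_BUFFER.saturating_sub(dilation)
}

fn main() {
    // A 6-block dilation consumes the whole buffer: the HTLC can time out
    // onchain before we react, after we have already revealed the preimage.
    assert_eq!(remaining_claim_margin(6), 0);
}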
lightning/src/ln/channelmanager.rs
Outdated
if msg.cltv_expiry <= cur_height + CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS as u32 { // expiry_too_soon
	break Some(("CLTV expiry is too close", 0x1000 | 14, Some(self.get_channel_update(chan).unwrap())));
}
if msg.cltv_expiry > cur_height + CLTV_FAR_FAR_AWAY as u32 { // expiry_too_far
	break Some(("CLTV expiry is too far in the future", 21, None));
}
// In theory, we would be safe against unintentional channel-closure if we only required a margin of LATENCY_GRACE_PERIOD_BLOCKS.
// But, to be safe against policy reception, we use a longer delay.
if (*outgoing_cltv_value) as u64 <= (cur_height + 2 * CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS) as u64 {
Why 2 * CLAIM_BUFFER? We never need two on-chain claim buffers. I could understand an argument for 2 * LATENCY_GRACE_PERIOD, because if we presume that LATENCY_GRACE_PERIOD is the max time it should take to do a commitment_signed dance then you could suggest we need two, but that's not quite true. We fail back after the expiry minus LATENCY_GRACE_PERIOD, not even GRACE_PERIOD + CLAIM_BUFFER.
Ultimately LATENCY_GRACE_PERIOD + CLTV_CLAIM_BUFFER seems like more than enough. For outbound stuff we don't really care; it's up to our counterparty to care about the height here, we just don't want to get force-closed on because we're behind by one block and are too tight.
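For reference, a standalone sketch of the fail-back timing described here (should_fail_back is hypothetical; the grace-period value of 3 follows the 6 + 3 = 9 arithmetic below):

// Sketch: an HTLC is failed back once the tip reaches expiry minus the
// grace period, before any on-chain claim buffer comes into play.
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;

fn should_fail_back(cltv_expiry: u32, cur_height: u32) -> bool {
    cur_height >= cltv_expiry.saturating_sub(LATENCY_GRACE_PERIOD_BLOCKS)
}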
2*CLAIM_BUFFER increases how long someone would have to mess with our onchain view before they could somehow steal the HTLC. But I guess this has implications for other implementations' outbound policy, so reverting for now.
And in fact CLAIM_BUFFER is 6, so we have 9 blocks, which is not so bad.
As for generally using a higher CLTV_CLAIM_BUFFER, I don't really have an objection; 6 seems pretty tight, but we also have to work within the reality of what other implementations use in practice.
Can you rebase on c292ef9? (Check the documentation I added there and make sure it looks right, then use that constant here.)
lightning/src/ln/channelmanager.rs
Outdated
// Final node can't rely on checking a CLTV_EXPIRY_DELTA which enforces to be superior to CLTV_CLAIM_BUFFER so make sure
// we don't accept incoming HTLC we wouldn't have time to claim with a worst-case broadcast scenario
// Also, ensure that, in the case of an unknown payment hash, our payment logic has enough time to fail the HTLC backward
// before our onchain logic triggers a channel closure.
The < operator here should be changed to <=, I think; at cltv_expiry == latest_block_height, the timeout transaction can be broadcast. Also please add 1, see overall comment.
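In standalone form, the corrected bound would read roughly as follows (a sketch, not the PR's code; the constant values of 6 and 3 follow this thread):

// Sketch: `<=` because at cltv_expiry == height the timeout tx is already
// broadcastable, plus one extra block of headroom as requested above.
fn final_expiry_too_soon(cltv_expiry: u32, cur_height: u32) -> bool {
    const CLTV_CLAIM_BUFFER: u32 = 6;
    const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;
    (cltv_expiry as u64) <= cur_height as u64 + (CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS) as u64 + 1
}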
Ah yes, updated with both remarks
Force-pushed from f8e0b84 to 17dc251
Updated at 17dc251; took your commit, just changed a few bits on (1), as the preimage flow was confusing IMO.
lightning/src/util/events.rs
Outdated
@@ -52,7 +52,8 @@ pub enum Event {
/// Indicates we've received money! Just gotta dig out that payment preimage and feed it to
/// ChannelManager::claim_funds to get it....
/// Note that if the preimage is not known or the amount paid is incorrect, you must call
/// ChannelManager::fail_htlc_backwards to free up resources for this HTLC before
/// LATENCY_GRACE_PERIOD_BLOCKS to avoid any channel-closure by onchain monitoring.
I don't think this is correct - we don't know what our counterparty's criteria for channel-closure is.
Clarified that it's our onchain monitoring, not the counterparty's.
But the sentence is all about when we don't have the preimage.
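For readers following along, the contract the doc comment is trying to express looks roughly like this. Everything below is a self-contained sketch: the stub types stand in for rust-lightning's, the PaymentReceived fields and ChannelManager methods only approximate the API of this era, and lookup_preimage is a hypothetical store.

struct PaymentHash([u8; 32]);
struct PaymentPreimage([u8; 32]);

enum Event {
    PaymentReceived { payment_hash: PaymentHash, amt: u64 },
}

struct ChannelManager;
impl ChannelManager {
    fn claim_funds(&self, _preimage: PaymentPreimage) {}
    fn fail_htlc_backwards(&self, _hash: &PaymentHash) {}
}

// Hypothetical invoice store.
fn lookup_preimage(_hash: &PaymentHash) -> Option<PaymentPreimage> { None }

fn handle_event(manager: &ChannelManager, event: Event) {
    match event {
        Event::PaymentReceived { payment_hash, amt: _ } => {
            match lookup_preimage(&payment_hash) {
                // Preimage known: pull the funds.
                Some(preimage) => manager.claim_funds(preimage),
                // Unknown hash (or wrong amount): fail back promptly, well
                // before the HTLC nears LATENCY_GRACE_PERIOD_BLOCKS of expiry.
                None => manager.fail_htlc_backwards(&payment_hash),
            }
        }
    }
}

fn main() {
    let manager = ChannelManager;
    handle_event(&manager, Event::PaymentReceived { payment_hash: PaymentHash([0; 32]), amt: 0 });
}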
lightning/src/ln/channelmanager.rs
Outdated
@@ -1039,7 +1039,11 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref> ChannelMan

// OUR PAYMENT!
// final_expiry_too_soon
if (msg.cltv_expiry as u64) < self.latest_block_height.load(Ordering::Acquire) as u64 + (CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS) as u64 {
// Final node can't rely on checking a CLTV_EXPIRY_DELTA which enforces to be superior to CLTV_CLAIM_BUFFER so make sure
// we don't accept incoming HTLC we wouldn't have time to claim with a worst-case broadcast scenario
nit: needs a period at the end.
lightning/src/ln/channelmanager.rs
Outdated
@@ -1039,7 +1039,11 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref> ChannelMan

// OUR PAYMENT!
// final_expiry_too_soon
if (msg.cltv_expiry as u64) < self.latest_block_height.load(Ordering::Acquire) as u64 + (CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS) as u64 {
// Final node can't rely on checking a CLTV_EXPIRY_DELTA which enforces to be superior to CLTV_CLAIM_BUFFER so make sure
I don't know what this means. Maybe just say, e.g., "We have to have some headroom to broadcast on chain if we have the preimage, so make sure we have at least HTLC_FAIL_BACK_BUFFER blocks to go."
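For context, the referenced constant is presumably just the sum of the two buffers discussed above (its real definition lives in the c292ef9 commit mentioned earlier); a sketch:

const CLTV_CLAIM_BUFFER: u32 = 6;
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;

/// Minimum blocks-to-expiry we require on a received HTLC: enough time to
/// fail it backwards, or to claim it onchain if we do have the preimage.
const HTLC_FAIL_BACK_BUFFER: u32 = CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS;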
Took your sentence
I cleaned up the language at 23bcfaa, though your changes had a grammar bug in them. Let me know if the new version there is easier to read.
Needs rebase on master, though it should be trivial - just drop the common commit.
Force-pushed from 17dc251 to a9142a1
Built, addressed, and rebased at a9142a1.
Force-pushed from 969e4fd to 5dea054
@@ -51,8 +51,6 @@ pub enum Event {
},
/// Indicates we've received money! Just gotta dig out that payment preimage and feed it to
/// ChannelManager::claim_funds to get it....
/// Note that if the preimage is not known or the amount paid is incorrect, you must call
/// ChannelManager::fail_htlc_backwards to free up resources for this HTLC.
Hmmm, I think we should still encourage the user to do this; waiting for the timeout is kinda bad for the network. It's possible waiting for the timeout is better from a privacy perspective, but hopefully such attacks aren't feasible. What do you think?
We want to avoid a third-party channel closure, where a random node, by sending us a payment expiring at the current height, would trigger our onchain logic to close the channel due to near-expiration.
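A sketch of the failure mode being avoided (within_force_close_window is hypothetical; the grace-period value follows the thread):

// Sketch: a payment expiring at or near the current height would sit
// immediately inside the window where onchain monitoring force-closes.
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;

fn within_force_close_window(htlc_expiry: u32, cur_height: u32) -> bool {
    cur_height + LATENCY_GRACE_PERIOD_BLOCKS >= htlc_expiry
}

fn main() {
    let cur_height = 700_000;
    assert!(within_force_close_window(cur_height, cur_height));
}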
Force-pushed from 5dea054 to 886223a
Keep in mind that a) ChannelMonitor may run with a different block provider than ChannelManager, so the two may not be in height sync, and b) a malicious peer may intentionally provoke a hash collision to try to trigger the onchain monitor logic.