
Sanitize and document incoming HTLC cltv_expiry handling #579


Merged

Conversation


@ariard ariard commented Apr 10, 2020

Keep in mind that a) ChannelMonitor may run with a different block provider than ChannelManager, so the two may not be in height sync, and b) a malicious peer may intentionally provoke a hash collision to try to trigger the onchain monitor logic.

@TheBlueMatt (Collaborator)

Following up on this one offline, though note that the full_stack_target is mad because it was (apparently) relying on sending bogus HTLCs. Happy to look into this for you if you want.

@valentinewallace (Contributor) left a comment

Seems like a fair clarification + check :)

@TheBlueMatt (Collaborator)

Fixed full_stack's demo case for you at TheBlueMatt@83341b9; please squash that into the relevant commit.

@ariard ariard force-pushed the 2020-04-sanitize-cltv-delay branch from ead89eb to f8e0b84 Compare April 18, 2020 00:54

ariard commented Apr 18, 2020

Thanks for the review and the full_stack fix; please wait before taking it.


ariard commented Apr 20, 2020

@TheBlueMatt @naumenkogs, I think we should double CLTV_CLAIM_BUFFER in would_broadcast_at_height when claiming incoming HTLCs for which we have the preimage.

As Gleb and I realized, with time-dilation attacks this timelock is the minimal one exploitable by an attacker: if anyone can pin you 6 blocks back in the past, your counterparty would be able to steal the HTLC by timeout after you reveal the preimage offchain.
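
A minimal sketch of the threshold being discussed, with illustrative constant values (the helper name and the values here are assumptions for illustration; the real logic lives in rust-lightning's would_broadcast_at_height and covers several HTLC classes):

```rust
// Illustrative constant; the authoritative value lives in rust-lightning's source.
const CLTV_CLAIM_BUFFER: u32 = 6;

/// Hypothetical predicate: should we go on chain to claim an incoming HTLC
/// whose preimage we know? Doubling CLTV_CLAIM_BUFFER widens the window an
/// attacker must time-dilate us by before the counterparty could steal the
/// HTLC via timeout.
fn should_claim_onchain(htlc_expiry: u32, current_height: u32) -> bool {
    htlc_expiry <= current_height + 2 * CLTV_CLAIM_BUFFER
}

fn main() {
    // At height 100, the doubled buffer reaches up to expiry 112.
    assert!(should_claim_onchain(112, 100));
    // An HTLC expiring at 113 is still outside the doubled buffer.
    assert!(!should_claim_onchain(113, 100));
    println!("ok");
}
```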

if msg.cltv_expiry <= cur_height + CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS as u32 { // expiry_too_soon
break Some(("CLTV expiry is too close", 0x1000 | 14, Some(self.get_channel_update(chan).unwrap())));
}
if msg.cltv_expiry > cur_height + CLTV_FAR_FAR_AWAY as u32 { // expiry_too_far
break Some(("CLTV expiry is too far in the future", 21, None));
}
// In theory, we would be safe against unintentional channel-closure if we only required a margin of LATENCY_GRACE_PERIOD_BLOCKS.
// But, to be safe against policy reception, we use a longer delay.
if (*outgoing_cltv_value) as u64 <= (cur_height + 2 * CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS) as u64 {
Collaborator

Why 2*CLAIM_BUFFER? We never need two on-chain claim buffers. I could understand an argument for 2*LATENCY_GRACE_PERIOD, because if we presume that LATENCY_GRACE_PERIOD is the max time it should take to do a commitment_signed dance then you could suggest we need two, but that's not quite true. We fail back after the expiry minus LATENCY_GRACE_PERIOD, not even GRACE_PERIOD + CLAIM_BUFFER.

Collaborator

Ultimately LATENCY_GRACE_PERIOD + CLTV_CLAIM_BUFFER seems like more than enough. For outbound stuff we don't really care; it's up to our counterparty to care about the height here. We just don't want to get force-closed on because we're behind by one block and are too tight.

Author

2*CLAIM_BUFFER increases how long someone would have to mess with our onchain view before they could steal an HTLC. But this has other implications for the outbound policy of other implementations, I guess, so reverting for now.

@ariard (Author), Apr 24, 2020

And in fact CLAIM_BUFFER is 6, so we have 9 blocks, which is not so bad.

@TheBlueMatt (Collaborator)

As for generally using a higher CLTV_CLAIM_BUFFER, I don't really have an objection; 6 seems pretty tight, but we also have to work within the reality of what other implementations use in practice.

@TheBlueMatt (Collaborator) left a comment

Can you rebase on c292ef9? (Check the documentation I added there and make sure it looks right, then use that constant in here.)

// The final node can't rely on checking a CLTV_EXPIRY_DELTA which is enforced to be greater than CLTV_CLAIM_BUFFER, so make sure
// we don't accept an incoming HTLC we wouldn't have time to claim in a worst-case broadcast scenario.
// Also, ensure that, in the case of an unknown payment hash, our payment logic has enough time to fail the HTLC backward
// before our onchain logic triggers a channel closure.
Collaborator

The < operator here should be changed to <=, I think: at cltv_expiry == latest_block_height, the timeout transaction can be broadcast. Also please add 1; see the overall comment.
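
The suggested change can be sketched as follows (a standalone sketch; the constant values are assumptions for illustration, and the real check lives in ChannelManager):

```rust
// Illustrative constants, not rust-lightning's authoritative values.
const CLTV_CLAIM_BUFFER: u32 = 6;
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;

/// Reject an incoming final-hop HTLC whose expiry is too close.
/// Using `<=` (rather than `<`) plus the extra +1 accounts for the fact
/// that at cltv_expiry == latest_block_height the timeout transaction is
/// already broadcastable.
fn final_expiry_too_soon(cltv_expiry: u32, latest_block_height: u32) -> bool {
    (cltv_expiry as u64)
        <= latest_block_height as u64
            + (CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS + 1) as u64
}

fn main() {
    // Boundary at height 100: 100 + 6 + 3 + 1 = 110.
    assert!(final_expiry_too_soon(110, 100)); // equal to boundary: rejected
    assert!(!final_expiry_too_soon(111, 100)); // one block of slack: accepted
    println!("ok");
}
```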

Author

Ah yes, updated with both remarks.

@ariard ariard force-pushed the 2020-04-sanitize-cltv-delay branch from f8e0b84 to 17dc251 Compare April 24, 2020 05:21

ariard commented Apr 24, 2020

Updated at 17dc251; I took your commit but changed a few bits on (1), as the preimage flow was confusing IMO.

@@ -52,7 +52,8 @@ pub enum Event {
/// Indicates we've received money! Just gotta dig out that payment preimage and feed it to
/// ChannelManager::claim_funds to get it....
/// Note that if the preimage is not known or the amount paid is incorrect, you must call
/// ChannelManager::fail_htlc_backwards to free up resources for this HTLC.
/// ChannelManager::fail_htlc_backwards to free up resources for this HTLC before
/// LATENCY_GRACE_PERIOD_BLOCKS to avoid any channel-closure by onchain monitoring.
Collaborator

I don't think this is correct: we don't know what our counterparty's criteria for channel-closure are.

Author

Clarified that it's our onchain monitoring, not the counterparty's.

Collaborator

But the sentence is all about the case when we don't have the preimage.

@@ -1039,7 +1039,11 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref> ChannelMan

// OUR PAYMENT!
// final_expiry_too_soon
if (msg.cltv_expiry as u64) < self.latest_block_height.load(Ordering::Acquire) as u64 + (CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS) as u64 {
// The final node can't rely on checking a CLTV_EXPIRY_DELTA which is enforced to be greater than CLTV_CLAIM_BUFFER, so make sure
// we don't accept an incoming HTLC we wouldn't have time to claim in a worst-case broadcast scenario.
Collaborator

nit: needs a period at the end.

@@ -1039,7 +1039,11 @@ impl<ChanSigner: ChannelKeys, M: Deref, T: Deref, K: Deref, F: Deref> ChannelMan

// OUR PAYMENT!
// final_expiry_too_soon
if (msg.cltv_expiry as u64) < self.latest_block_height.load(Ordering::Acquire) as u64 + (CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS) as u64 {
// The final node can't rely on checking a CLTV_EXPIRY_DELTA which is enforced to be greater than CLTV_CLAIM_BUFFER, so make sure
Collaborator

I don't know what this means. Maybe just say, eg, "We have to have some headroom to broadcast on chain if we have the preimage, so make sure we have at least HTLC_FAIL_BACK_BUFFER blocks to go."

Author

Took your sentence.
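
For reference, the shape of the resulting check can be sketched like this (a hedged sketch: the constant values are illustrative, and deriving HTLC_FAIL_BACK_BUFFER as the sum of the claim buffer and the grace period is an assumption here, not a quote of rust-lightning's definition):

```rust
// Sketch only: values and derivation below are assumptions for illustration.
const CLTV_CLAIM_BUFFER: u32 = 6;
const LATENCY_GRACE_PERIOD_BLOCKS: u32 = 3;
const HTLC_FAIL_BACK_BUFFER: u32 = CLTV_CLAIM_BUFFER + LATENCY_GRACE_PERIOD_BLOCKS;

/// Accept a final-hop HTLC only if there are at least HTLC_FAIL_BACK_BUFFER
/// blocks of headroom to either fail it back or broadcast on chain with the
/// preimage.
fn has_enough_headroom(cltv_expiry: u32, latest_block_height: u32) -> bool {
    (cltv_expiry as u64) > latest_block_height as u64 + HTLC_FAIL_BACK_BUFFER as u64
}

fn main() {
    // At height 100 the buffer reaches up to 109, so expiry 109 is too tight.
    assert!(!has_enough_headroom(109, 100));
    assert!(has_enough_headroom(110, 100));
    println!("ok");
}
```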

@TheBlueMatt (Collaborator)

I cleaned up the language at 23bcfaa, as your changes had a grammar bug in them. Let me know if the new version there is easier to read.

@TheBlueMatt (Collaborator)

Needs a rebase on master, though it should be trivial: just drop the common commit.

@ariard ariard force-pushed the 2020-04-sanitize-cltv-delay branch from 17dc251 to a9142a1 Compare April 24, 2020 20:29

ariard commented Apr 24, 2020

Built, addressed, and rebased at a9142a1.

@ariard ariard force-pushed the 2020-04-sanitize-cltv-delay branch 2 times, most recently from 969e4fd to 5dea054 Compare April 24, 2020 21:51
@@ -51,8 +51,6 @@ pub enum Event {
},
/// Indicates we've received money! Just gotta dig out that payment preimage and feed it to
/// ChannelManager::claim_funds to get it....
/// Note that if the preimage is not known or the amount paid is incorrect, you must call
/// ChannelManager::fail_htlc_backwards to free up resources for this HTLC.
Collaborator

Hmmm, I think we should still encourage the user to do this: waiting for the timeout is kinda bad for the network. It's possible waiting for the timeout is better from a privacy perspective, but hopefully such attacks aren't feasible. What do you think?

Antoine Riard added 2 commits April 24, 2020 18:30
We want to avoid a third-party channel closure, where a random node,
by sending us a payment expiring at the current height, would trigger
our onchain logic to close the channel due to near-expiration.
@ariard ariard force-pushed the 2020-04-sanitize-cltv-delay branch from 5dea054 to 886223a Compare April 24, 2020 22:31
@TheBlueMatt TheBlueMatt merged commit 4dc0dd1 into lightningdevkit:master Apr 24, 2020