Watch revoked HTLC-Success/Timeout outputs #519


Conversation

@ariard commented Feb 26, 2020

Builds on top of #489 + #462, fixes #411

(Likely will fix SpendableOutput and reverse the order of the PRs to make the test changes easier to read.)

@TheBlueMatt added this to the 0.0.11 milestone on Feb 27, 2020
@TheBlueMatt (Collaborator)

Needs rebase.

@ariard force-pushed the 2020-2-fix-remote-htlc-outputs-tracking branch 2 times, most recently from 28623a4 to 277e1e2 on March 10, 2020 20:56
codecov bot commented Mar 10, 2020

Codecov Report

Merging #519 into master will increase coverage by <.01%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #519      +/-   ##
==========================================
+ Coverage   90.07%   90.07%   +<.01%     
==========================================
  Files          34       34              
  Lines       19053    19058       +5     
==========================================
+ Hits        17162    17167       +5     
  Misses       1891     1891
Impacted Files Coverage Δ
lightning/src/ln/channelmonitor.rs 90.25% <100%> (-0.06%) ⬇️
lightning/src/ln/functional_tests.rs 96.29% <100%> (ø) ⬆️
lightning/src/ln/onchaintx.rs 92.75% <100%> (+0.33%) ⬆️
lightning/src/ln/functional_test_utils.rs 94.58% <100%> (ø) ⬆️

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

//TODO: send back new outputs to guarantee pending_claim_request consistency
let htlc_txid = tx.txid(); //TODO: This is gonna be a performance bottleneck for watchtowers!
Collaborator

I think this comment is bogus - it refers to the fact that watchtowers, at some other .txid() call sites, loop over each monitor and recalculate each transaction's txid again and again. In this case, check_spend_remote_htlc is only called if the prevout txid matches in block_connected, so you can drop the TODO.
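A minimal sketch of the callsite shape described above, assuming an illustrative map name; the real matching logic lives in channelmonitor.rs:

```rust
use std::collections::HashMap;
use bitcoin::{Transaction, Txid};

// Illustrative stand-in for the monitor's record of remote commitment
// txids already seen onchain.
struct Monitor {
    remote_commitment_txn_on_chain: HashMap<Txid, u64>,
}

impl Monitor {
    fn block_connected(&mut self, txn: &[Transaction]) {
        for tx in txn {
            // check_spend_remote_htlc is only reached when the prevout's
            // txid matches a known remote commitment tx, so the tx.txid()
            // call inside it runs once per relevant tx, not in a loop over
            // every monitor.
            let prevout_txid = tx.input[0].previous_output.txid;
            if self.remote_commitment_txn_on_chain.contains_key(&prevout_txid) {
                // ... self.check_spend_remote_htlc(tx, ...) ...
            }
        }
    }
}
```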

@TheBlueMatt (Collaborator) left a comment

Looks good, though I'm a bit confused by some of the test changes.

@@ -6964,7 +6966,7 @@ fn test_bump_penalty_txn_on_revoked_htlcs() {
connect_blocks(&nodes[0].block_notifier, 20, 145, true, header_145.bitcoin_hash());
{
let mut node_txn = nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap();
- assert_eq!(node_txn.len(), 1); //TODO: fix check_spend_remote_htlc lack of watch output
+ assert_eq!(node_txn.len(), 0); // Spending of revoked htlc output by claiming transaction remove request as expected
node_txn.clear();
Collaborator

You definitely don't need this now.

@@ -4182,7 +4184,7 @@ fn test_static_spendable_outputs_justice_tx_revoked_htlc_timeout_tx() {
check_closed_broadcast!(nodes[1], false);

let node_txn = nodes[1].tx_broadcaster.txn_broadcasted.lock().unwrap();
- assert_eq!(node_txn.len(), 4); // ChannelMonitor: justice tx on revoked commitment, justice tx on revoked HTLC-timeout, adjusted justice tx, ChannelManager: local commitment tx
+ assert_eq!(node_txn.len(), 5); // ChannelMonitor: justice tx on revoked commitment, justice tx on revoked HTLC-timeout, adjusted justice tx, bumped justice tx, ChannelManager: local commitment tx
Collaborator

Please try to avoid checking for transaction presence without checking what that transaction is. Either check_spends!() it or check the witness length, at least.
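For instance, a hedged sketch of the suggested style, following the check_spends! macro already used across functional_tests.rs (the transaction names and indices here are illustrative, not the exact ones in this test):

```rust
// Bind each broadcast tx to its parent rather than only counting broadcasts:
let node_txn = nodes[1].tx_broadcaster.txn_broadcasted.lock().unwrap();
assert_eq!(node_txn.len(), 5);
check_spends!(node_txn[0], revoked_local_txn[0]); // justice tx on the revoked commitment
check_spends!(node_txn[1], revoked_htlc_txn[0]);  // justice tx on the revoked HTLC-timeout
```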

Author

Ooooh, thanks for the acute comment. In fact, fixing the bug by tracking revoked HTLC outputs was introducing another bug by duplicating regeneration of the adjusted claim transaction (i.e. when an aggregated tx has one of its outpoints claimed, we MUST regenerate a new valid claim tx). See the new commit for more context.

@@ -6964,7 +6966,7 @@ fn test_bump_penalty_txn_on_revoked_htlcs() {
connect_blocks(&nodes[0].block_notifier, 20, 145, true, header_145.bitcoin_hash());
{
let mut node_txn = nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap();
- assert_eq!(node_txn.len(), 1); //TODO: fix check_spend_remote_htlc lack of watch output
+ assert_eq!(node_txn.len(), 0); // Spending of revoked htlc output by claiming transaction remove request as expected
Collaborator

I'm super confused as to why adding a watch output should ever result in fewer transactions broadcast. This comment doesn't really tell me anything - it follows up on the previous comment, but I'm confused about what it's trying to say.

Author

Okay, I added a comment explaining why the bugfix results in fewer transactions broadcast (and a reference to the issue). The reason is that watching the output now means that when our justice tx confirms onchain and spends the revoked HTLC output, this is reported to the bumping logic, which then stops generating new justice txn. Yeah, that's not straightforward, but I think that's the right behavior now.
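A minimal sketch of that behavior, with illustrative map names standing in for the claim-tracking state in onchaintx.rs:

```rust
use std::collections::HashMap;
use bitcoin::{OutPoint, Transaction, Txid};

// Illustrative claim-tracking state; the real fields carry richer types.
struct ClaimTracker {
    claimable_outpoints: HashMap<OutPoint, Txid>, // outpoint -> claim request id
    pending_claim_requests: HashMap<Txid, ()>,    // claim request id -> bump material
}

impl ClaimTracker {
    fn block_connected(&mut self, txn: &[Transaction]) {
        for tx in txn {
            for input in tx.input.iter() {
                // Our justice tx confirmed and spent a watched revoked-HTLC
                // outpoint: drop the matching claim request so the bumping
                // logic stops generating new justice txn for it.
                if let Some(claim_id) = self.claimable_outpoints.remove(&input.previous_output) {
                    self.pending_claim_requests.remove(&claim_id);
                }
            }
        }
    }
}
```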

Antoine Riard added 3 commits March 11, 2020 15:15
An adjusted tx occurs when a previous aggregated claim tx has
seen one of its outpoints partially claimed by a remote tx.
To pursue claiming of the remaining outpoints, an adjusted claim
tx is generated with the leftover claimable outpoints.

Previously, in case of a block rescan where a partial claim occurs,
we would generate duplicated adjusted txn, wrongly inflating the
feerate for next bumps. At rescan, if an input has already been
dropped from a claiming request's outpoints map, don't regenerate
an adjusted tx again.
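A hedged sketch of the dedup check, assuming a simplified ClaimRequest in place of the real bump material in onchaintx.rs:

```rust
use std::collections::HashMap;
use bitcoin::OutPoint;

// Simplified stand-in: the real per_input_material maps each outpoint to
// its witness-building data.
struct ClaimRequest {
    per_input_material: HashMap<OutPoint, ()>,
}

// Only regenerate an adjusted claim tx if the partially claimed outpoint is
// still tracked; on a block rescan a previous pass may already have dropped
// it, and regenerating again would duplicate the adjusted tx.
fn maybe_regenerate_adjusted_tx(request: &mut ClaimRequest, claimed: &OutPoint) -> bool {
    if request.per_input_material.remove(claimed).is_some() {
        // ... rebuild a claim tx from the leftover outpoints ...
        true
    } else {
        false // already handled on a prior connection of this block
    }
}
```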
Bumping of justice txn on revoked HTLC-Success/HTLC-Timeout outputs is
triggered until our claim is confirmed onchain with at least
ANTI_REORG_DELAY_SAFE. Before this patch, we weren't tracking them in
check_spend_remote_htlc, leading us to bump infinitely.

Fix lightningdevkit#411
@ariard force-pushed the 2020-2-fix-remote-htlc-outputs-tracking branch from 277e1e2 to 2a32921 on March 11, 2020 19:21
@ariard (Author) commented Mar 11, 2020

Updated in 2a32921, thanks for the review; this part of the code is really tricky :/

if tx.input.len() != 1 || tx.output.len() != 1 || tx.input[0].witness.len() != 5 {
-     return Vec::new()
+     return (Vec::new(), (htlc_txid, Vec::new()))
Collaborator

This is confusing - you're returning (value, ()) to imply "nothing to watch". Can you at least comment that and update the callsite in block_connected to not push anything to watch_outputs if there are no outputs set?
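A minimal sketch of the suggested guard, with a hypothetical helper standing in for the real check_spend_remote_htlc signature:

```rust
use bitcoin::{Transaction, TxOut, Txid};

// Hypothetical stand-in mirroring the new return type discussed above:
// an empty Vec in the second tuple member means "nothing to watch".
fn check_spend_remote_htlc(tx: &Transaction) -> (Vec<Transaction>, (Txid, Vec<TxOut>)) {
    (Vec::new(), (tx.txid(), Vec::new()))
}

fn block_connected_callsite(watch_outputs: &mut Vec<(Txid, Vec<TxOut>)>, tx: &Transaction) {
    let (_claims, (htlc_txid, new_outputs)) = check_spend_remote_htlc(tx);
    // Don't record an empty watch entry: only push when there are actual
    // outputs to monitor for future spends.
    if !new_outputs.is_empty() {
        watch_outputs.push((htlc_txid, new_outputs));
    }
}
```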

@TheBlueMatt (Collaborator)

Otherwise looks good; the Travis failure is #544, so don't worry about it here.

@TheBlueMatt (Collaborator)

Will take this as #546.
