Broadcast transactions only after their timelock is up #932

Merged

Conversation

TheBlueMatt
Collaborator

Built on #928, this avoids broadcasting transactions too early, which left us leaning on users to handle rebroadcasting. Instead, we wait until timelocks are up before broadcasting.
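
At a high level, the change gates broadcasting on the locktime of the transaction a claim request would produce. A minimal sketch of that gating decision, using stand-in types rather than the actual OnchainTxHandler internals:

```rust
use std::collections::BTreeMap;

// Stand-in for a claim request; the real `PackageTemplate` carries much more.
struct Package {
    locktime: u32,
}

struct Handler {
    // Claims we are not yet willing to broadcast, keyed by the locktime of the
    // transaction they would produce (mirrors the PR's `locktimed_packages`).
    locktimed_packages: BTreeMap<u32, Vec<Package>>,
}

impl Handler {
    // Returns true if the claim should be turned into a transaction and broadcast
    // now; otherwise the package is cached until its locktime becomes reachable.
    fn queue_or_broadcast(&mut self, req: Package, cur_height: u32) -> bool {
        if req.locktime > cur_height + 1 {
            // The resulting transaction could not confirm in the next block, so
            // don't hand it to the broadcaster yet.
            self.locktimed_packages.entry(req.locktime).or_insert_with(Vec::new).push(req);
            false
        } else {
            true
        }
    }
}
```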

@TheBlueMatt TheBlueMatt changed the title 2021 05 broadcast locktime delay Broadcast transactions only after their timelock is up May 27, 2021
@TheBlueMatt TheBlueMatt force-pushed the 2021-05-broadcast-locktime-delay branch 2 times, most recently from 7fa5a2c to d267f7b Compare May 27, 2021 19:26
@codecov

codecov bot commented May 27, 2021

Codecov Report

Merging #932 (d51d606) into main (df829a8) will increase coverage by 0.08%.
The diff coverage is 95.79%.

@@            Coverage Diff             @@
##             main     #932      +/-   ##
==========================================
+ Coverage   90.45%   90.54%   +0.08%     
==========================================
  Files          60       60              
  Lines       30595    30698     +103     
==========================================
+ Hits        27675    27795     +120     
+ Misses       2920     2903      -17     
Impacted Files Coverage Δ
lightning/src/ln/channelmanager.rs 83.15% <ø> (ø)
lightning/src/util/test_utils.rs 82.54% <40.00%> (-0.60%) ⬇️
lightning/src/chain/package.rs 92.51% <86.20%> (+0.10%) ⬆️
lightning/src/chain/channelmonitor.rs 91.04% <94.11%> (+0.01%) ⬆️
lightning/src/ln/functional_tests.rs 97.18% <97.03%> (+0.37%) ⬆️
lightning-background-processor/src/lib.rs 94.73% <100.00%> (ø)
lightning/src/chain/onchaintx.rs 94.14% <100.00%> (+0.46%) ⬆️
lightning/src/ln/chanmon_update_fail_tests.rs 97.78% <100.00%> (+<0.01%) ⬆️
lightning/src/ln/functional_test_utils.rs 94.81% <100.00%> (-0.09%) ⬇️
lightning/src/ln/reorg_tests.rs 99.62% <100.00%> (+<0.01%) ⬆️
... and 1 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update df829a8...d51d606. Read the comment docs.

@TheBlueMatt TheBlueMatt force-pushed the 2021-05-broadcast-locktime-delay branch from d267f7b to 6202aaa Compare May 27, 2021 20:30
log_debug!(logger, " Outpoint {}", outpoint);
}
self.locktimed_packages.entry(req.package_timelock()).or_insert(Vec::new()).push(req);
} else if let Some((new_timer, new_feerate, tx)) = self.generate_claim_tx(height, &req, &*fee_estimator, &*logger) {

Can we still broadcast delayed transactions in the following reorg scenario?

  • delayed transactions are cached in locktimed_packages
  • the expiration height is reached, so we generate and broadcast the delayed transactions
  • the chain is reorged out and the delayed transactions are evicted from mempools
  • before syncing to the new tip, we receive a preimage offchain and enter update_claims_view

I don't think we can regenerate and broadcast a non-propagating delayed transaction, as the regeneration timers are height-based. In the future, if we switch the regeneration timers to mempool feerate variations, we might broadcast such delayed transactions.

Otherwise, I can't think of other edge-case reorg scenarios.

Collaborator Author

In that case don't we still generate the relevant commitment + timeout/claim transactions when the block comes back in as we sync to the new tip? Either way we should bump after a few blocks if we haven't confirmed by then.

Clearly, not a big deal. It's just an edge case in the new feature added by this PR, in the case of a hypothetical future change. If we switch to, or add the option of, mempool-based timing, we'll have to split hairs on a few other things around OnchainTxHandler.

(Though I still think mempool-fluctuation-based timing might be more interesting for fee savings than a dumb rebroadcast after X blocks, if you're not near timelock expiration.)

@@ -576,13 +600,22 @@ impl PackageTemplate {
}
self.height_timer = cmp::min(self.height_timer, merge_from.height_timer);
}
pub(crate) fn package_amount(&self) -> u64 {
/// Gets the amount of all outptus being spent by this package, only valid for malleable

nit: *outputs an "only valid for now (legacy channel types)"

Collaborator Author

Hmm, not sure what you're suggesting? I'll let you update the docs for anchor in the anchor PRs.

@TheBlueMatt TheBlueMatt force-pushed the 2021-05-broadcast-locktime-delay branch from 6202aaa to 2f755b1 Compare May 28, 2021 00:13
@TheBlueMatt
Collaborator Author

Rebased after merge of #928 and addressed comments.

@ariard ariard left a comment

Code Review ACK 2f755b1

I went through all the fixup commits and am globally satisfied with the new comments. I had a look at the test changes; they look good overall.

Contributor

@jkczyz jkczyz left a comment

Need to take a closer look at the tests in the later commits.

Comment on lines 277 to 278
PackageSolvingData::HolderHTLCOutput(..) => { debug_assert!(false); 0 },
PackageSolvingData::HolderFundingOutput(..) => { debug_assert!(false); 0 },
Contributor

Would unreachable! be appropriate?

Collaborator Author

Actually, yea. I was avoiding doing it to keep the panic debug-only, but I think you're right: if we ever actually get there, it's probably an indication that we're going to generate an invalid spend transaction and/or burn funds to fees, which is not ok.
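
For illustration, the suggested change might look roughly like the following, using a stand-in enum rather than the real PackageSolvingData (names here are not the actual LDK ones); the point is just that unreachable!() diverges, so it type-checks in a value-returning match arm and panics in release builds too:

```rust
// Stand-in enum, not the actual LDK type.
enum SolvingData {
    CounterpartyOutput { value: u64 },
    HolderHTLCOutput,
    HolderFundingOutput,
}

fn output_amount(data: &SolvingData) -> u64 {
    match data {
        SolvingData::CounterpartyOutput { value } => *value,
        // Hitting either arm would mean we're about to build an invalid spend or
        // burn funds to fees, so panic unconditionally rather than
        // `debug_assert!(false); 0`.
        SolvingData::HolderHTLCOutput => unreachable!(),
        SolvingData::HolderFundingOutput => unreachable!(),
    }
}
```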

@@ -1499,16 +1499,17 @@ fn test_duplicate_htlc_different_direction_onchain() {

// Check we only broadcast 1 timeout tx
let claim_txn = nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().clone();
let htlc_pair = if claim_txn[0].output[0].value == 800_000 / 1000 { (claim_txn[0].clone(), claim_txn[1].clone()) } else { (claim_txn[1].clone(), claim_txn[0].clone()) };
Contributor

Was this unnecessary?

Collaborator Author

Entirely. The 800_000 wasn't even right anymore; now that we subtract fees, no output was ever exactly the 800k msat.

@@ -8689,6 +8700,9 @@ fn test_concurrent_monitor_claim() {
watchtower
};
let header = BlockHeader { version: 0x20000000, prev_blockhash: Default::default(), merkle_root: Default::default(), time: 42, bits: 42, nonce: 42 };
// Make the tx_broadcaster aware of enough blocks that it doesn't think we're violating
// transaction lock time requirements here.
chanmon_cfgs[0].tx_broadcaster.blocks.lock().unwrap().resize((CHAN_CONFIRM_DEPTH + 1 + TEST_FINAL_CLTV + LATENCY_GRACE_PERIOD_BLOCKS) as usize, (header, 0));
Contributor

Is there a need to use a Vec? Would storing the last block header and height suffice?

Collaborator Author

The Vec here is the same one that the block_disconnected stuff uses, so, yes.

Contributor

Ah, I didn't realize they were shared. Makes sense.
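
For context, the kind of check the test broadcaster is being set up for could be sketched like this, with stand-in types and field names rather than the actual test_utils implementation: the broadcaster shares a list of connected blocks with the test harness and rejects transactions whose locktime is not yet satisfiable.

```rust
use std::sync::{Arc, Mutex};

// Stand-ins: a "block" is just a placeholder here, and a "transaction" just its locktime.
struct TestBroadcaster {
    // Shared with the test's block connect/disconnect machinery, so the broadcaster
    // always agrees with the nodes about how many blocks have been connected.
    blocks: Arc<Mutex<Vec<()>>>,
}

impl TestBroadcaster {
    fn broadcast(&self, tx_locktime: u32) {
        // Treat the number of known blocks as the current height (good enough for a sketch).
        let height = self.blocks.lock().unwrap().len() as u32;
        // A transaction is worth broadcasting once it can confirm in the next block.
        assert!(tx_locktime <= height + 1, "broadcasted a transaction before its locktime");
    }
}
```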

Comment on lines 1913 to 1915
// Note that we add + 1 as transactions are broadcastable when they can be
// confirmed in the next block, not the current height.
Contributor

I'm not sure if I can parse this. If the current height is X then any transaction we broadcast won't confirm until X + 1 at the earliest by definition.

Collaborator Author

Right, it's just noting that we can always broadcast when the locktime is X + 1. I can drop it if you think it's obvious, but Antoine suggested it.

  • Fun fact of the day: Bitcoin Core had a bug for many years where it actually wouldn't accept transactions for relay until they were confirmable in the current block, not X+1.

Contributor

If the current block is X and the earliest the transaction can confirm is X + 1, then broadcasting at X should be fine, right? Because the next block is X + 1. I guess I'm partly wondering if the + 1 is needed. Maybe I'm misunderstanding cltv_expiry.

Collaborator Author

Right, if the current height is X, then the next block which will be mined is X + 1. Thus, we can always broadcast something with a CLTV expiry of X + 1 as it can go into the next block (but could not appear in the current best block).
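
Concretely, following the framing above (a transaction whose CLTV expiry is X + 1 can go into block X + 1), the check is a small sketch like this:

```rust
/// At chain height `cur_height`, the next block to be mined is `cur_height + 1`,
/// so anything whose locktime allows confirmation there is worth broadcasting.
fn broadcastable_at(locktime: u32, cur_height: u32) -> bool {
    locktime <= cur_height + 1
}

#[test]
fn locktime_plus_one() {
    assert!(broadcastable_at(101, 100));  // can go into the very next block (101)
    assert!(!broadcastable_at(102, 100)); // still one block too early
}
```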

Contributor

Ah, that makes sense. I think dropping "not the current height" would be beneficial as including it can lead to confusion.

@TheBlueMatt TheBlueMatt force-pushed the 2021-05-broadcast-locktime-delay branch 4 times, most recently from 4c6b0ac to ab7f150 Compare May 28, 2021 16:43
Contributor

@jkczyz jkczyz left a comment

Just a couple nits, but otherwise this is still outstanding: ff8ca02#r641682315

@TheBlueMatt
Collaborator Author

Hmm, your link isn't working for me now, but I did try to respond at ff8ca02#r641820784

@TheBlueMatt TheBlueMatt force-pushed the 2021-05-broadcast-locktime-delay branch from ab7f150 to a59b879 Compare May 28, 2021 21:36

@ariard ariard left a comment

Code Review ACK a59b879

Changes since last time address Jeff's comments.

This somewhat cleans up the public API of PackageSolvingData to make it harder to get an invalid amount and use it, adding further debug assertion to check it at test-time.

This allows us to interrogate a PackageTemplate for the CLTV timelock of the resulting transaction.

This cleans up some of the transaction format verification and docs to make it easier when we delay CLTV-locked transactions to update the tests.

This simplifies logic somewhat and avoids duplicating the storage of the current height in OnchainTxHandler.

This stores transaction templates temporarily until their locktime is reached, avoiding broadcasting (or RBF bumping) transactions prior to their locktime. For those broadcasting transactions (potentially indirectly) via Bitcoin Core RPC, this ensures no automated rebroadcast of transactions on the client side is required to get transactions confirmed.
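
A rough sketch of the release side of that caching, assuming a BTreeMap keyed by locktime and the "confirmable in the next block" rule discussed above (names are illustrative, not the exact OnchainTxHandler code):

```rust
use std::collections::BTreeMap;

// On connecting a block at `new_height`, pull out every cached package whose
// locktime can now be satisfied and return it for claim generation/broadcast.
fn take_broadcastable<P>(
    locktimed_packages: &mut BTreeMap<u32, Vec<P>>,
    new_height: u32,
) -> Vec<P> {
    // Keys >= new_height + 2 are still locked; everything else (locktime at most
    // new_height + 1) can confirm in the next block, so release it.
    let still_locked = locktimed_packages.split_off(&(new_height + 2));
    let releasable = std::mem::replace(locktimed_packages, still_locked);
    releasable.into_iter().flat_map(|(_, pkgs)| pkgs).collect()
}
```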
@TheBlueMatt TheBlueMatt force-pushed the 2021-05-broadcast-locktime-delay branch from a59b879 to a87a02f Compare May 28, 2021 23:57
@TheBlueMatt TheBlueMatt force-pushed the 2021-05-broadcast-locktime-delay branch from a87a02f to d51d606 Compare May 28, 2021 23:58
@TheBlueMatt
Collaborator Author

Tweaked the comment as suggested at #932 (comment), fixed one trivial error from new fixes, and squashed. Will merge after CI:

$ git diff-tree -U1 a59b8798 d51d606ba
diff --git a/lightning/src/chain/channelmonitor.rs b/lightning/src/chain/channelmonitor.rs
index 3df0111f5..26187ce34 100644
--- a/lightning/src/chain/channelmonitor.rs
+++ b/lightning/src/chain/channelmonitor.rs
@@ -1914,3 +1914,3 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 					// Note that we add + 1 as transactions are broadcastable when they can be
-					// confirmed in the next block, not the current height.
+					// confirmed in the next block.
 					continue;
diff --git a/lightning/src/ln/functional_tests.rs b/lightning/src/ln/functional_tests.rs
index 30b9d0145..eb4f6054b 100644
--- a/lightning/src/ln/functional_tests.rs
+++ b/lightning/src/ln/functional_tests.rs
@@ -5843,3 +5843,3 @@ fn test_key_derivation_params() {
 	// Ensure all nodes are at the same height
-	let node_max_height = nodes.iter().map(|node| node.blocks.borrow().len()).max().unwrap() as u32;
+	let node_max_height = nodes.iter().map(|node| node.blocks.lock().unwrap().len()).max().unwrap() as u32;
 	connect_blocks(&nodes[0], node_max_height - nodes[0].best_block_info().1);
$

@TheBlueMatt TheBlueMatt merged commit 8cc9410 into lightningdevkit:main May 29, 2021

3 participants