Add TapCtx #255

Merged · 8 commits · Oct 26, 2021
Conversation

sanket1729 (Member) commented Jul 26, 2021

This is still a draft/WIP for the taproot PR. There are several API changes and multiple design decisions, and I am open to other possible design opinions. In particular, feedback about changes to ToPublicKey and the ScriptContext trait is appreciated.

This does not include support for P2TR descriptors or the multi_a fragment in miniscript.

Still todo:

  • Track height during execution to make sure we don't exceed 1000 stack elements. This was not a concern previously because it was impossible to hit the limit without exceeding 201 opcodes, but with taproot this is now possible. See [Reminder issue] Tapscript changes to miniscript #198.

  • Implement support for the multi_a fragment in Taproot. This depends on support for new opcodes in rust-bitcoin. We can use NOPs in the meantime, as this is not a hard blocker.

  • Implement taproot descriptor. This would require some design considerations about the tree representation.

  • Integrate taproot with the interpreter and compiler. The interpreter is probably straightforward, but carrying around bitcoin::PublicKey and schnorr::PublicKey would be tricky.

  • Integrate with DescriptorPublicKey so that descriptor wallets like bdk can actually use it.

@sanket1729 sanket1729 marked this pull request as ready for review August 24, 2021 17:15
sanket1729 (Member Author) left a comment:

Ready for review now

@sanket1729 sanket1729 changed the title [DONOTMERGE] Add TapCtx Add TapCtx Aug 24, 2021
@sanket1729 sanket1729 requested a review from apoelstra August 24, 2021 18:50
@SarcasticNastik SarcasticNastik mentioned this pull request Aug 30, 2021
darosior (Contributor) commented Sep 1, 2021

CC'ing myself, as I'd like to (hopefully) review this soon :)

@@ -206,4 +222,12 @@ mod tests {
assert!(parse_num("+6").is_err());
assert!(parse_num("-6").is_err());
}

#[test]
#[ignore]
Member:

This #[ignore] can be removed. (The heavy_nest one cannot.)

Member Author:

Why? Are they not the same thing?

Member:

The heavy_nest one stack overflows on my system while this one does not.

I assumed the other stack overflow would be fixed in a future PR.

Member Author:

Interesting, does it show an overflow error, or does it just take a long time? There is no recursion here.

Member Author:

The reason for #[ignore] is that all ignored tests are run in release mode in CI. In debug mode, rustc keeps a lot of debug information, which makes the program really slow to execute.

In the first commit of the PR, both tests should pass within a few seconds in release mode. There is no bug fix deferred to a future commit.

Member Author:

Based on a discussion on IRC, I have reverted the commits, switching the implementation back to recursive ones.

/// Maximum stack + alt stack size during dissat execution
/// This does **not** include initial witness elements. This element only captures
/// the additional elements that are pushed during execution.
pub exec_stack_elem_count_dissat: Option<usize>,
Member:

Why not include initial witness elements? I think every fragment has a fixed number of initial stack elements that it consumes before pushing new things onto the stack

Member Author:

We are already tracking max_elem_stack_count for initial witness elements in a separate field.

I think every fragment has a fixed number of initial stack elements that it consumes before pushing new things onto the stack.

ors/threshes can have multiple satisfactions, which can have different heights.

Member:

Derp, yes, thank you!

Member:

Wait, no, I'm still confused... ors have multiple satisfactions, but the max stack depth will just be the maximum of the children. Similarly for thresh: you just take the sum of the k maximum-depth subitems.

sanket1729 (Member Author), Sep 8, 2021:

but the max stack depth will just be the maximum of the children

This is incorrect.
The stack depth while executing A in and_b(A,B) would be max_exec_A + initial_witness_A + initial_witness_B, but for B it would be max_exec_B + initial_witness_B, because A's witness would already have been consumed.

@@ -455,6 +506,10 @@ impl Property for ExtData {
stack_elem_count_dissat: self.stack_elem_count_dissat.map(|x| x + 1),
max_sat_size: self.max_sat_size.map(|(w, s)| (w + 2, s + 1)),
max_dissat_size: self.max_dissat_size.map(|(w, s)| (w + 1, s + 1)),
// TODO: fix dissat stack elem counting above in a later commit
// Technically max(1, self.exec_stack_elem_count_sat), same rationale as cast_dupif
Member:

I'm curious why you defer this to a later commit. You have cmp::max :)

Member Author:

That is a bug fix to existing code about max_stack_elem_count (different from the exec count), which should be a separate commit. Opened #268 for it.

apoelstra (Member) reviewed 9ae4d7b:

Mostly good, just a few nits.


ms.ext.exec_stack_elem_count_sat,
ms.ext.stack_elem_count_sat,
) {
if s + h > MAX_STACK_SIZE {
Member Author:

The check with initial elements + exec elements is done here.

Contributor:

Should we fail if we have one of these properties and not the other?

Member Author:

None here means that there is no satisfaction for this miniscript. In all cases, if one of them is Some, the other must also be Some

Contributor:

Maybe should be Some((x,y)) instead of Some(x), Some(y) then? Or assert that !a.xor(b).is_some()?

Member Author:

I will add the assert for now.
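The shape of the check under discussion can be sketched standalone as below. This is an illustration, not the PR's exact code: MAX_STACK_SIZE is BIP 342's 1000-element consensus limit on stack plus altstack, and the debug_assert encodes the both-Some-or-both-None invariant agreed above.

```rust
// Illustrative sanity check: initial witness elements plus elements pushed
// during execution must stay within the consensus stack-size limit.
const MAX_STACK_SIZE: usize = 1000;

fn within_stack_limit(
    stack_elem_count_sat: Option<usize>,      // initial witness elements
    exec_stack_elem_count_sat: Option<usize>, // extra elements during execution
) -> bool {
    // Invariant: both fields are Some, or both are None (no satisfaction).
    debug_assert_eq!(
        stack_elem_count_sat.is_some(),
        exec_stack_elem_count_sat.is_some()
    );
    match (stack_elem_count_sat, exec_stack_elem_count_sat) {
        (Some(s), Some(h)) => s + h <= MAX_STACK_SIZE,
        // None: the miniscript is unsatisfiable, so there is nothing to check.
        _ => true,
    }
}

fn main() {
    assert!(within_stack_limit(Some(900), Some(100)));
    assert!(!within_stack_limit(Some(900), Some(101)));
    assert!(within_stack_limit(None, None));
    println!("ok");
}
```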

@@ -109,7 +142,11 @@ impl fmt::Display for ScriptContextError {
/// For example, disallowing uncompressed keys in Segwit context
pub trait ScriptContext:
fmt::Debug + Clone + Ord + PartialOrd + Eq + PartialEq + hash::Hash + private::Sealed
where
Self::Key: MiniscriptKey<Hash = bitcoin::hashes::hash160::Hash>,
Contributor:

Is there a reason for splitting the trait bound between a where clause here and the type clause (below)?

/// Note that this includes the serialization prefix. Returns
/// 34/66 for Bare/Legacy based on key compressedness
/// 34 for Segwitv0, 33 for Tap
fn pk_len<Pk: MiniscriptKey>(pk: &Pk) -> usize;
Contributor:

I think this should not be generic; it should just be pk: &Key?

Member Author:

This should be generic because we want the ability to compute script lengths when Pk is abstract, without requiring those keys to be translated to consensus keys.
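The point can be illustrated with a toy version (the trait and functions below are simplified stand-ins, not the crate's real MiniscriptKey or ScriptContext definitions): an abstract placeholder key carrying no key material still has a known serialized length per context.

```rust
// Simplified stand-in for miniscript's MiniscriptKey trait.
trait MiniscriptKey {}

// An abstract placeholder key: just a name, no consensus key material.
struct AbstractKey(&'static str);
impl MiniscriptKey for AbstractKey {}

// Segwitv0: 1-byte push opcode + 33-byte compressed key.
fn segwitv0_pk_len<Pk: MiniscriptKey>(_pk: &Pk) -> usize {
    34
}

// Tapscript: 1-byte push opcode + 32-byte x-only key.
fn tap_pk_len<Pk: MiniscriptKey>(_pk: &Pk) -> usize {
    33
}

fn main() {
    // Script size can be estimated without translating "A" to a real key.
    let k = AbstractKey("A");
    assert_eq!(segwitv0_pk_len(&k), 34);
    assert_eq!(tap_pk_len(&k), 33);
    println!("ok");
}
```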

) -> Result<(), ScriptContextError> {
// No fragment is malleable in tapscript context.
// Certain fragments like Multi are invalid, but are not malleable
Ok(())
Contributor:

Does that include fragments that have a repeated key?

Member Author:

No, that check is carried out separately in another pass.

fn return_none<T>(_: usize) -> Option<T> {
None
}

/// Trait for parsing keys from byte slices
pub trait ParseableKey: Sized + ToPublicKey + private::Sealed {
Contributor:

Should ParseableKey require MiniscriptKey?

Member Author:

I will make it more explicit, but ToPublicKey has a MiniscriptKey bound.
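The way the bound propagates can be shown with a minimal sketch (the trait names mirror the crate's, but the definitions here are empty toy stand-ins):

```rust
// Toy stand-ins: ToPublicKey already requires MiniscriptKey, so any
// ParseableKey is transitively a MiniscriptKey via supertrait bounds.
trait MiniscriptKey {}
trait ToPublicKey: MiniscriptKey {}
trait ParseableKey: ToPublicKey {}

struct Key;
impl MiniscriptKey for Key {}
impl ToPublicKey for Key {}
impl ParseableKey for Key {}

fn needs_miniscript_key<K: MiniscriptKey>(_k: &K) -> bool {
    true
}

// No explicit MiniscriptKey bound needed: it comes via the supertraits.
fn takes_parseable<P: ParseableKey>(p: &P) -> bool {
    needs_miniscript_key(p)
}

fn main() {
    assert!(takes_parseable(&Key));
    println!("ok");
}
```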

},
// Note this does not collide with hash32 because they are always followed by EQUAL
// and would be parsed in a different branch. If we get a naked Bytes32, it must be
// an x-only key
Contributor:

FWIW this part kind of confuses me, and I don't feel certain there's never any ambiguity.

Can this logic be refactored to make the absence of a collision clearer, or can the documentation of parse be improved (e.g., perhaps in a comment here that gets PR'd separately)?

))?
},
Tk::Verify, Tk::Equal, Tk::Num(32), Tk::Size => {
non_term.push(NonTerm::Verify);
term.reduce0(Terminal::Hash160(
hash160::Hash::from_inner(hash)
hash160::Hash::from_slice(hash).expect("valid size")
Contributor:

Can we use a non-panicking form for these?

PublicKey::from_slice(bytes).map_err(Error::BadPubkey)?,
));
}
20 => ret.push(Token::Hash20(&bytes)),
Contributor:

see https://github.com/rust-bitcoin/rust-miniscript/pull/255/files#r729970708, you can make these be array refs instead of unsized slices.

Contributor:

you can do something like:

bytes.try_into::<[u8; 20]>.map(Token::Hash20).map(|v| ret.push(v));
bytes.try_into::<[u8; 32]>.map(Token::Bytes32).map(|v| ret.push(v));
bytes.try_into::<[u8; 33]>.map(Token::Bytes33).map(|v| ret.push(v));
bytes.try_into::<[u8; 65]>.map(Token::Bytes65).map(|v| ret.push(v));

Member Author:

Will do it once we have 1.34 MSRV.
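For reference, the suggested pattern relies on std's TryFrom&lt;&amp;[u8]&gt; impls for array references, available from Rust 1.34 for lengths up to 32 (hence the MSRV remark). A sketch with a hypothetical Token type, not the crate's:

```rust
use std::convert::TryFrom;

// Hypothetical token type standing in for the lexer's Token.
#[derive(Debug, PartialEq)]
enum Token<'a> {
    Hash20(&'a [u8; 20]),
    Bytes32(&'a [u8; 32]),
}

// Convert an unsized slice into a fixed-size array reference by length.
fn classify(bytes: &[u8]) -> Option<Token<'_>> {
    if let Ok(arr) = <&[u8; 20]>::try_from(bytes) {
        Some(Token::Hash20(arr))
    } else if let Ok(arr) = <&[u8; 32]>::try_from(bytes) {
        Some(Token::Bytes32(arr))
    } else {
        None
    }
}

fn main() {
    assert!(matches!(classify(&[0u8; 20]), Some(Token::Hash20(_))));
    assert!(matches!(classify(&[0u8; 32]), Some(Token::Bytes32(_))));
    assert_eq!(classify(&[0u8; 5]), None);
    println!("ok");
}
```

The 33- and 65-byte variants mentioned in the suggestion would need the const-generic TryFrom impls from a later Rust release, since the pre-const-generic impls stop at length 32.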

@@ -14,6 +14,7 @@

//! Abstract Policies

use std::collections::HashSet;
Contributor:

Prefer using BTreeSet, since it gives reproducible results, although this is less critical for abstract policies than for concrete ones.

Member Author:

Sorry, this was an unnecessary change that somehow got into this PR while I was testing the Bitcoin Core merge script.

JeremyRubin (Contributor):

Overall looking pretty fantastic, awesome work @sanket1729. A few minor points of cleanup, but nothing that IMO feels blocking for merge (although it would be worth fixing before release).

debug_assert!(!xor(
self.stack_elem_count_dissat,
self.exec_stack_elem_count_dissat
));
Member:

nit: this could probably be written more clearly as

debug_assert_eq!(self.stack_elem_count_sat.is_some(), self.exec_stack_elem_count_sat.is_some());

etc. without the xor helper.

apoelstra previously approved these changes Oct 21, 2021

apoelstra (Member) left a comment:

ACK 498bad2

I don't think any of the nits are enough to hold up merging.

Fixup: remove xor check

sanket1729 (Member Author):

Added a fixup commit from your ACK that should be easy to review. It removes the xor check and adds some comments about x-only decoding.

apoelstra (Member) left a comment:

ACK 96cb2ec

@apoelstra apoelstra merged commit f1220a2 into rust-bitcoin:master Oct 26, 2021
sanket1729 added a commit that referenced this pull request Jan 13, 2022
42e4c50 Cleanup context code with SigType enum (sanket1729)
e70e6a3 Fix fuzzer crash while allocating keys for multi_a (sanket1729)
f6b2536 Add satisfaction tests (sanket1729)
1a7e779 Change satisfier to support xonly sigs (sanket1729)
2c32c03 Use bitcoin::SchnorrSig type from rust-bitcoin (sanket1729)
e26d4e6 Use bitcoin::EcdsaSig from rust-bitcoin (sanket1729)
89e7cb3 add multi_a (sanket1729)

Pull request description:

  Adds support for MultiA script fragment. Builds on top of #255 .

  Do not merge this until we have a rust-bitcoin release

ACKs for top commit:
  apoelstra:
    ACK 42e4c50

Tree-SHA512: 10701266cf6fe436fe07359e67d16bc925ebbcd767d4e5a8d325db61b979bd76b1e0291dd9eae5b7d58f6a385f8f2e16c2657957a597ce18736382b776e20502