Add TapCtx #255
Ready for review now
CC'ing myself as I'd like to (hopefully) review this soon :)
Force-pushed from 125b059 to 887fde3.
src/expression.rs
```
@@ -206,4 +222,12 @@ mod tests {
        assert!(parse_num("+6").is_err());
        assert!(parse_num("-6").is_err());
    }

    #[test]
    #[ignore]
```
This `#[ignore]` can be removed. (The `heavy_nest` one cannot.)
Why? Are they not the same thing?
The `heavy_nest` one stack-overflows on my system while this one does not. I assumed the other stack overflow would be fixed in a future PR.
Interesting, does it show an overflow error? Or does it just take a long time, since there is no recursion here?
The reason for `#[ignore]` is that all ignored tests are run in release mode in CI. In debug mode, rustc keeps a lot of debug information, which makes the program really slow to execute.

As of the first commit of the PR, both of the tests should pass in a few seconds in release mode; there is no bug fix deferred to a future commit.
Based on a discussion in IRC, I have reverted the commits that switched the implementation back to recursive ones.
```rust
/// Maximum stack + alt stack size during dissat execution
/// This does **not** include initial witness elements. This element only captures
/// the additional elements that are pushed during execution.
pub exec_stack_elem_count_dissat: Option<usize>,
```
Why not include initial witness elements? I think every fragment has a fixed number of initial stack elements that it consumes before pushing new things onto the stack.
We are already tracking `max_elem_stack_count` for initial witness elements in a separate field.

> I think every fragment has a fixed number of initial stack elements that it consumes before pushing new things onto the stack.

`or`s/`thresh`es can have multiple satisfactions, which can have different heights.
Derp, yes, thank you!
Wait, no, I'm still confused... `or`s have multiple satisfactions, but the max stack depth will just be the maximum of the children. Similarly for `thresh`, you just take the sum of the `k` maximum-depth subitems.
> but the max stack depth will just be the maximum of the children

This is incorrect. The stack depth while executing `A` in `and_b(A,B)` would be `max_exec_A + initial_witness_A + initial_witness_B`. But for `B` it would be `max_exec_B + initial_witness_B`, because `A`'s witness would have been consumed by then.
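To make the accounting concrete, here is a minimal standalone sketch, not the crate's actual combinator; all names are made up, and the `+ 1` for the single result `A` leaves behind is my assumption, since the comment above omits it:

```rust
use std::cmp;

// Hypothetical helper illustrating the and_b(A,B) bound described above.
fn and_b_exec_stack_sat(
    max_exec_a: Option<usize>, // extra elements pushed while A executes
    wit_a: Option<usize>,      // initial witness elements for A
    max_exec_b: Option<usize>, // extra elements pushed while B executes
    wit_b: Option<usize>,      // initial witness elements for B
) -> Option<usize> {
    // While A executes, B's witness still sits underneath it on the stack.
    let during_a = max_exec_a? + wit_a? + wit_b?;
    // By the time B executes, A's witness has been consumed; only A's
    // single result element remains (the `+ 1`, an assumption on my part).
    let during_b = max_exec_b? + wit_b? + 1;
    Some(cmp::max(during_a, during_b))
}
```

For example, if `A` needs 2 witness elements and pushes up to 1 more, and `B` needs 1 witness element and pushes up to 2 more, the bound is `max(1 + 2 + 1, 2 + 1 + 1) = 4`.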
```
@@ -455,6 +506,10 @@ impl Property for ExtData {
            stack_elem_count_dissat: self.stack_elem_count_dissat.map(|x| x + 1),
            max_sat_size: self.max_sat_size.map(|(w, s)| (w + 2, s + 1)),
            max_dissat_size: self.max_dissat_size.map(|(w, s)| (w + 1, s + 1)),
            // TODO: fix dissat stack elem counting above in a later commit
            // Technically max(1, self.exec_stack_elem_count_sat), same rationale as cast_dupif
```
I'm curious why you defer this to a later commit. You have `cmp::max` :)
Reviewed 9ae4d7b. Mostly good, just a few nits.
Force-pushed from 9ae4d7b to 1f0cd5e.
```rust
    ms.ext.exec_stack_elem_count_sat,
    ms.ext.stack_elem_count_sat,
) {
    if s + h > MAX_STACK_SIZE {
```
The check with initial elements + exec elements is done here.
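For illustration, a standalone sketch of that check (a hypothetical function, not the crate's code; the 1000-element limit matches the tapscript stack limit discussed in this PR):

```rust
const MAX_STACK_SIZE: usize = 1000;

// Hypothetical sketch of the satisfaction stack-limit check quoted above.
fn within_stack_limit(
    exec_stack_elem_count_sat: Option<usize>, // elements pushed during execution
    stack_elem_count_sat: Option<usize>,      // initial witness elements
) -> bool {
    match (exec_stack_elem_count_sat, stack_elem_count_sat) {
        // Both bounds known: initial witness plus execution growth must fit.
        (Some(s), Some(h)) => s + h <= MAX_STACK_SIZE,
        // No satisfaction exists, so there is nothing to check.
        _ => true,
    }
}
```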
Should we fail if we have one of these properties and not the other?
`None` here means that there is no satisfaction for this miniscript. In all cases, if one of them is `Some`, the other must also be `Some`.
Maybe it should be `Some((x, y))` instead of `Some(x), Some(y)` then? Or assert that `!a.xor(b).is_some()`?
I will add the assert for now.
Force-pushed from 1f0cd5e to 7103b50.
```
@@ -109,7 +142,11 @@ impl fmt::Display for ScriptContextError {
/// For example, disallowing uncompressed keys in Segwit context
pub trait ScriptContext:
    fmt::Debug + Clone + Ord + PartialOrd + Eq + PartialEq + hash::Hash + private::Sealed
where
    Self::Key: MiniscriptKey<Hash = bitcoin::hashes::hash160::Hash>,
```
Is there a reason for splitting the trait bound into a `where` clause here and a type clause (below)?
```rust
/// Note that this includes the serialization prefix. Returns
/// 34/66 for Bare/Legacy based on key compressedness
/// 34 for Segwitv0, 33 for Tap
fn pk_len<Pk: MiniscriptKey>(pk: &Pk) -> usize;
```
I think this should not be generic; it should just be `pk: &Key`?
This should be generic because we want the ability to compute the length of scripts when `Pk` is abstract, without requiring translating those to consensus keys.
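A minimal self-contained sketch of that idea (hypothetical trait and types, not the crate's API): the length computation stays generic over the key type, so an abstract placeholder key can report a size without ever being converted to a consensus key.

```rust
// All names here are made up for illustration.
trait AbstractKeyLen {
    fn serialized_len(&self) -> usize;
}

// A placeholder key that has not been translated to a consensus key.
struct PlaceholderKey;

impl AbstractKeyLen for PlaceholderKey {
    fn serialized_len(&self) -> usize {
        33 // assume a compressed-key-sized placeholder
    }
}

// Generic, like the trait method quoted above: works for any key type,
// so script sizes can be estimated before real keys are filled in.
fn pk_len<K: AbstractKeyLen>(pk: &K) -> usize {
    1 + pk.serialized_len() // +1 for the push-length prefix
}

fn main() {
    assert_eq!(pk_len(&PlaceholderKey), 34); // matches the Segwitv0 case above
}
```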
```rust
) -> Result<(), ScriptContextError> {
    // No fragment is malleable in tapscript context.
    // Certain fragments like Multi are invalid, but are not malleable
    Ok(())
```
Does that include fragments that have a repeated key?
No, that check is carried out separately in another pass.
```rust
fn return_none<T>(_: usize) -> Option<T> {
    None
}

/// Trait for parsing keys from byte slices
pub trait ParseableKey: Sized + ToPublicKey + private::Sealed {
```
Should `ParseableKey` require `MiniscriptKey`?
I will make it more explicit, but `ToPublicKey` has a `MiniscriptKey` bound.
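A minimal sketch of the bound chain being referenced (simplified hypothetical traits, not the crate's definitions):

```rust
// Simplified stand-ins for the crate's traits.
trait MiniscriptKey {}
trait ToPublicKey: MiniscriptKey {}

// Because ToPublicKey already requires MiniscriptKey, any type bounded by
// ParseableKey is transitively a MiniscriptKey; restating the bound is
// redundant, though doing so would make it more explicit.
trait ParseableKey: Sized + ToPublicKey {}

fn takes_miniscript_key<K: MiniscriptKey>(_k: &K) {}

fn takes_parseable_key<K: ParseableKey>(k: &K) {
    takes_miniscript_key(k); // compiles: the bound is implied
}
```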
```rust
},
// Note this does not collide with hash32 because they are always followed by
// EQUAL and would be parsed in a different branch. If we get a naked Bytes32,
// it must be an x-only key.
```
FWIW this part kind of confuses me, and I don't feel certain there's no ambiguity ever. Can this logic be refactored to be clearer so there is no collision, or can the documentation of parse be improved (e.g., perhaps in a comment here that gets PR'd separately)?
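For what it's worth, here is a toy model of the disambiguation rule the comment describes (hypothetical types, not the crate's parser): a 32-byte push followed by an EQUAL-style opcode is a hash32 check, while a naked 32-byte push is an x-only key.

```rust
// Toy model of the rule; all names invented for this sketch.
enum Push32 {
    Hash32([u8; 32]),   // <32 bytes> EQUAL/EQUALVERIFY => hash comparison
    XOnlyKey([u8; 32]), // naked <32 bytes> => x-only public key
}

fn classify(bytes: [u8; 32], next_is_equal: bool) -> Push32 {
    if next_is_equal {
        Push32::Hash32(bytes)
    } else {
        Push32::XOnlyKey(bytes)
    }
}
```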
```diff
            ))?
        },
        Tk::Verify, Tk::Equal, Tk::Num(32), Tk::Size => {
            non_term.push(NonTerm::Verify);
            term.reduce0(Terminal::Hash160(
-               hash160::Hash::from_inner(hash)
+               hash160::Hash::from_slice(hash).expect("valid size")
```
Can we use a non-panicking form on these?
```rust
        PublicKey::from_slice(bytes).map_err(Error::BadPubkey)?,
    ));
}
20 => ret.push(Token::Hash20(&bytes)),
```
See https://github.com/rust-bitcoin/rust-miniscript/pull/255/files#r729970708; you can make these be array refs instead of unsized slices.
You can do something like:

```rust
// (sketch; the target array types would be inferred from the Token variants)
bytes.try_into().map(Token::Hash20).map(|v| ret.push(v));
bytes.try_into().map(Token::Bytes32).map(|v| ret.push(v));
bytes.try_into().map(Token::Bytes33).map(|v| ret.push(v));
bytes.try_into().map(Token::Bytes65).map(|v| ret.push(v));
```
Will do it once we have 1.34 MSRV.
src/policy/semantic.rs
```
@@ -14,6 +14,7 @@

//! Abstract Policies

use std::collections::HashSet;
```
Prefer using `BTreeSet`, since it gives you reproducible results, although this is less critical for abstract policies than concrete ones.
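To illustrate the reproducibility point, a standalone sketch (unrelated to the crate's code):

```rust
use std::collections::{BTreeSet, HashSet};

fn main() {
    let keys = ["pk_c", "pk_a", "pk_b"];

    // BTreeSet iterates in sorted order: output is identical across runs.
    let ordered: BTreeSet<&str> = keys.iter().copied().collect();
    println!("{:?}", ordered); // always {"pk_a", "pk_b", "pk_c"}

    // HashSet order depends on a randomly seeded hasher: output can vary
    // from run to run, which makes results non-reproducible.
    let unordered: HashSet<&str> = keys.iter().copied().collect();
    println!("{:?}", unordered); // order unspecified
}
```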
Sorry, this was an unnecessary change that somehow got into this PR while I was testing the Bitcoin Core merge script.
Overall looking pretty fantastic, awesome work @sanket1729. A few minor points of cleanup, but nothing that IMO feels blocking for merge (although it would be worth fixing before release).
src/miniscript/types/extra_props.rs
```rust
debug_assert!(!xor(
    self.stack_elem_count_dissat,
    self.exec_stack_elem_count_dissat
));
```
Nit: this could probably be more clearly written as `debug_assert_eq!(self.stack_elem_count_sat.is_some(), self.exec_stack_elem_count_sat.is_some());` etc., without the `xor` helper.
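As a standalone illustration of why the two checks are equivalent (hypothetical local values; the crate's `xor` was a custom helper, while this sketch uses the standard `Option::xor`):

```rust
fn main() {
    let stack_elem_count_dissat: Option<usize> = Some(3);
    let exec_stack_elem_count_dissat: Option<usize> = Some(2);

    // Option::xor is Some iff exactly one side is Some, so is_none()
    // asserts "both Some or both None".
    debug_assert!(stack_elem_count_dissat
        .xor(exec_stack_elem_count_dissat)
        .is_none());

    // The arguably clearer equivalent, without an xor helper:
    debug_assert_eq!(
        stack_elem_count_dissat.is_some(),
        exec_stack_elem_count_dissat.is_some()
    );
}
```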
ACK 498bad2
I don't think any of the nits are enough to hold up merging.
Fixup: remove xor check
Added a fixup commit after your ACK that should be easy to review. It removes the xor check and adds some comments about x-only decoding.
ACK 96cb2ec
42e4c50 Cleanup context code with SigType enum (sanket1729)
e70e6a3 Fix fuzzer crash while allocating keys for multi_a (sanket1729)
f6b2536 Add satisfaction tests (sanket1729)
1a7e779 Change satisfier to support xonly sigs (sanket1729)
2c32c03 Use bitcoin::SchnorrSig type from rust-bitcoin (sanket1729)
e26d4e6 Use bitcoin::EcdsaSig from rust-bitcoin (sanket1729)
89e7cb3 add multi_a (sanket1729)

Pull request description:

Adds support for the MultiA script fragment. Builds on top of #255. Do not merge this until we have a rust-bitcoin release.

ACKs for top commit:
apoelstra: ACK 42e4c50

Tree-SHA512: 10701266cf6fe436fe07359e67d16bc925ebbcd767d4e5a8d325db61b979bd76b1e0291dd9eae5b7d58f6a385f8f2e16c2657957a597ce18736382b776e20502
This is still a draft WIP for the taproot PR. There are several API changes and multiple design decisions, and I am open to other possible design opinions. In particular, feedback about changes to `ToPublicKey` and the `ScriptContext` trait is appreciated. This does not include support for P2TR descriptors or the multi_a fragment in miniscript.
Still todo:
- Track height during execution to make sure we don't exceed 1000 elements. This was not a concern previously because it was impossible to hit the limit without exceeding 201 opcodes. But with taproot, this is now possible. See [Reminder issue] Tapscript changes to miniscript #198.
- Implement support for the multi_a fragment in Taproot. Depends on support for new opcodes from rust-bitcoin. We can use NOPs meanwhile, as this is not a hard blocker.
- Implement the taproot descriptor. This would require some design considerations about the tree representation.
- Integrate taproot with the interpreter and compiler. The interpreter is probably straightforward, but carrying around `bitcoin::PublicKey` and `schnorr::PublicKey` would be tricky.
- Integrate with `DescriptorPublicKey` so that descriptor wallets like bdk can actually use it.