Achow101 [ARCHIVE] on Nostr:
📅 Original date posted:2018-06-26
📝 Original message:Hi,
On June 26, 2018 8:33 AM, matejcik via bitcoin-dev <bitcoin-dev at lists.linuxfoundation.org> wrote:
> hello,
>
> in general, I agree with my colleague Tomas, the proposed changes are
> good and achieve the most important things that we wanted. We'll review
> the proposal in more detail later.
>
> For now a couple minor things I have run into:
>
> - valid test vector 2 ("one P2PKH input and one P2SH-P2WPKH input")
> seems broken, at least its hex version; a delimiter seems to be missing,
> misplaced or corrupted
Fixed
> - at least the first signing vector is not updated, but you probably
> know that
I updated all of the tests yesterday so they should be correct now. I will be adding more tests
this week.
> - BIP32 derivation fields don't specify size of the "master public key",
> which would make it un-parsable :)
Oops, that's actually supposed to be master key fingerprint, not master public key. I have updated
the BIP to reflect this.
> - "Transaction format is specified as follows" and its table need to be
> updated.
Done.
> I'm still going to argue against the key-value model though.
>
> It's true that this is not significant in terms of space. But I'm more
> concerned about human readability, i.e., confusing future implementers.
> At this point, the key-value model is there "for historical reasons",
> except these aren't valid even before finalizing the format. The
> original rationale for using key-values seems to be gone (no key-based
> lookups are necessary). As for combining and deduplication, whether key
> data is present or not is now purely a stand-in for a "repeatable" flag.
> We could just as easily say, e.g., that the high bit of "type" specifies
> whether this record can be repeated.
>
> (Moreover, as I wrote previously, the Combiner seems like a weirdly
> placed role. I still don't see its significance and why is it important
> to correctly combine PSBTs by agents that don't understand them. If you
> have a usecase in mind, please explain.
There is a diagram in the BIP that explains this. The combiner's job is to combine two PSBTs that
are for the same transaction but contain different data such as signatures. It is easier to implement
a combiner that does not need to understand the types at all, and such combiners are forwards compatible,
so new types do not require new combiner implementations.
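A type-agnostic combiner as described can be sketched as a merge over key-to-value maps. This is an illustrative sketch, not the BIP's reference code; the data layout (one global map plus one map per input, with full serialized key bytes as dictionary keys) is an assumption for the example:

```python
def combine_maps(map_a, map_b):
    """Merge two PSBT key-value maps without interpreting any types.

    Keys are the full serialized key bytes (type byte plus key data);
    values are the raw value bytes. Only byte equality is needed, so
    the merge stays forwards compatible with types it has never seen.
    """
    combined = dict(map_a)
    for key, value in map_b.items():
        if key in combined and combined[key] != value:
            # Same key, different values: the two PSBTs disagree and
            # cannot be combined safely.
            raise ValueError("conflicting values for key %s" % key.hex())
        combined[key] = value
    return combined

def combine_psbts(psbt_a, psbt_b):
    """Combine two parsed PSBTs for the same transaction.

    Each PSBT is modeled here as a list of key-value maps: one global
    map followed by one map per input.
    """
    assert len(psbt_a) == len(psbt_b), "PSBTs must have the same shape"
    return [combine_maps(a, b) for a, b in zip(psbt_a, psbt_b)]
```

Because the merge never looks inside a key beyond byte equality, adding a new record type to the format requires no change to this code.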
> ISTM a Combiner could just as well combine based on whole-record
> uniqueness, and leave the duplicate detection to the Finalizer. In case
> the incoming PSBTs have incompatible unique fields, the Combiner would
> have to fail anyway, so the Finalizer might as well do it. Perhaps it
> would be good to leave out the Combiner role entirely?)
A PSBT that contains duplicate keys would be completely invalid. Furthermore, in the set of typed
records model, having more than one redeemScript or witnessScript should be invalid, so a combiner
would still need to understand the types in order to avoid this situation. Otherwise it would produce
an invalid PSBT.
I also dislike the idea of having type-specific rules like "only one redeemScript" where a more generic
rule would work.
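To illustrate the objection, a whole-record-uniqueness combiner over typed records ends up needing per-type rules anyway. The type numbers and set of unique types below are made up for the sketch, not taken from the BIP:

```python
# Hypothetical "at most one record of this type" rules that a
# record-set format would need (type values are illustrative).
UNIQUE_TYPES = {0x03, 0x04}  # e.g. redeemScript, witnessScript

def combine_records(records_a, records_b):
    """Combine two lists of (type, data) records by whole-record
    uniqueness, then enforce the type-specific uniqueness rules."""
    combined = list(records_a)
    for rec in records_b:
        if rec not in combined:
            combined.append(rec)
    # The generic merge alone is not enough: some types may appear at
    # most once, so the combiner must understand them after all.
    for t in UNIQUE_TYPES:
        if sum(1 for rtype, _ in combined if rtype == t) > 1:
            raise ValueError("multiple records of unique type 0x%02x" % t)
    return combined
```

In the key-value model, by contrast, "at most one redeemScript" falls out of ordinary map semantics with no type-specific code.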
> There's two remaining types where key data is used: BIP32 derivations
> and partial signatures. In case of BIP32 derivation, the key data is
> redundant ( pubkey = derive(value) ), so I'd argue we should leave that
> out and save space.
I think it is still necessary to include the pubkey, as not all signers who can sign for a given pubkey will
know its derivation path. For example, if a privkey is imported into a wallet, that wallet does not necessarily
know the master key fingerprint for that privkey, nor would it necessarily have the master key itself in
order to derive the privkey. But it still has the privkey and can still sign with it.
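Concretely, with the pubkey in the key data, a BIP32 derivation record can be laid out as below. The type byte and helper names are illustrative, not the BIP's exact values; the path elements are encoded as little-endian uint32s:

```python
import struct

BIP32_DERIVATION_TYPE = 0x03  # illustrative type value

def serialize_bip32_record(pubkey, fingerprint, path):
    """Build the key and value bytes for one BIP32 derivation record.

    pubkey:      33-byte compressed public key (goes in the key data)
    fingerprint: 4-byte master key fingerprint
    path:        list of uint32 child indexes
    """
    key = bytes([BIP32_DERIVATION_TYPE]) + pubkey
    value = fingerprint + b"".join(struct.pack("<I", i) for i in path)
    return key, value

def parse_bip32_value(value):
    """Split a derivation value back into fingerprint and path."""
    fingerprint, rest = value[:4], value[4:]
    path = [struct.unpack("<I", rest[i:i + 4])[0]
            for i in range(0, len(rest), 4)]
    return fingerprint, path
```

A signer that already holds the privkey for the pubkey in the key can ignore the value entirely, which is exactly the imported-key case described above.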
> Re breaking change, we are proposing breaking changes anyway, existing
> code will need to be touched, and given that this is a hand-parsed
> format, changing `parse_keyvalue` to `parse_record` seems like a small
> matter?
The point is to not make it difficult for existing implementations to change. Mostly what has been done now is just
moving things around, not an entire format change. Changing to a set of typed records model requires more
changes and is a complete restructuring of the format, not just moving things around.
Additionally, I think that the current model is fairly easy to hand parse. I don't think a record set model would make
it easier to follow. Furthermore, moving to Protobuf would make it harder to hand parse, as varints are slightly more
confusing in Protobuf than compact size uints are. And with the key-value model, you don't need to know the types
to know whether something is valid. You don't need to interpret any data.
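For comparison, here is a sketch of the two integer encodings being contrasted: Bitcoin's compact size uint, where the first byte tells you the total length up front, versus a Protobuf-style base-128 varint, which must be scanned byte by byte:

```python
def read_compact_size(data, offset=0):
    """Parse a Bitcoin compact size uint: one prefix byte says
    immediately how many bytes follow (0, 2, 4, or 8)."""
    first = data[offset]
    if first < 0xfd:
        return first, offset + 1
    elif first == 0xfd:
        return int.from_bytes(data[offset + 1:offset + 3], "little"), offset + 3
    elif first == 0xfe:
        return int.from_bytes(data[offset + 1:offset + 5], "little"), offset + 5
    else:
        return int.from_bytes(data[offset + 1:offset + 9], "little"), offset + 9

def read_protobuf_varint(data, offset=0):
    """Parse a Protobuf base-128 varint: 7 value bits per byte, with
    the high bit as a continuation flag, so the length is not known
    until a byte with a clear high bit is found."""
    result, shift = 0, 0
    while True:
        byte = data[offset]
        result |= (byte & 0x7f) << shift
        offset += 1
        if not byte & 0x80:
            return result, offset
        shift += 7
```

Both return the decoded value and the offset just past it; the compact size parser never needs to loop, which is part of what makes the format easy to follow by hand.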
Andrew
Published at 2023-06-07 18:13:16