Gregory Maxwell [ARCHIVE] on Nostr:
📅 Original date posted:2018-06-09
📝 Original message:
> So what's the cost in using
> the current filter (as it lets the client verify the filter if they want to,
An example of that cost is you arguing against specifying and
supporting the design that is closer to one that would be softforked,
which increases the time until we can make these filters secure
because it slows convergence on the design of what would get
committed.
>> I don't agree at all, and I can't see why you say so.
>
> Sure it doesn't _have_ to, but from my PoV as "adding more commitments" is
> on the top of every developers wish list for additions to Bitcoin, it would
> make sense to coordinate on an "ultimate" extensible commitment once, rather
> than special case a bunch of distinct commitments. I can see arguments for
> either really.
We have an extensible commitment style via BIP141 already. I don't see
why this in particular demands a new one.
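
For concreteness, here is a minimal sketch (in Python, with helper names of
my own choosing) of what that BIP141-style commitment looks like: an
OP_RETURN output in the coinbase carrying a tagged double-SHA256, where the
witness reserved value is the hook BIP141 leaves for committing to more
data later.

    import hashlib

    def dsha256(data: bytes) -> bytes:
        """Bitcoin's double SHA-256."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def witness_commitment_script(witness_merkle_root: bytes,
                                  witness_reserved_value: bytes = b'\x00' * 32) -> bytes:
        """Build the BIP141 commitment scriptPubKey placed in a coinbase
        output: OP_RETURN, then a 36-byte push consisting of the 0xaa21a9ed
        tag plus Double-SHA256(witness merkle root || witness reserved value)."""
        commitment = dsha256(witness_merkle_root + witness_reserved_value)
        return bytes([0x6a, 0x24]) + bytes.fromhex('aa21a9ed') + commitment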
> 1. The current filter format (even moving to prevouts) cannot be committed
> in this fashion as it indexes each of the coinbase output scripts. This
> creates a circular dependency: the commitment is modified by the
> filter,
Great point, but the filter should probably just exclude the coinbase's
OP_RETURN outputs. That would exclude the current BIP141-style commitment
and likely any other commitment scheme, breaking the circularity.
Should I start a new thread on excluding all OP_RETURN outputs from
BIP-158 filters, for all transactions? They can't be spent, so including
them just pollutes the filters.
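
For illustration, roughly the exclusion rule I have in mind, as a Python
sketch over just the output side of the filter (the transaction
representation and names here are mine, not BIP-158's actual code):

    OP_RETURN = 0x6a

    def filter_output_elements(block_txs):
        """Collect output scriptPubKeys for a BIP-158 style filter,
        skipping every OP_RETURN output: they're provably unspendable, so
        no wallet ever needs to match on them, and skipping the coinbase's
        OP_RETURN outputs also removes the commitment/filter circularity.
        block_txs is a list of transactions, each a list of output scripts."""
        elements = set()
        for tx in block_txs:
            for script in tx:
                if not script or script[:1] == bytes([OP_RETURN]):
                    continue  # empty or unspendable -- nothing to watch for
                elements.add(script)
        return elements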
> 2. Since the coinbase transaction is the first in a block, it has the
> longest merkle proof path. As a result, it may be several hundred bytes
> (and grows with future capacity increases) to present a proof to the
If 384 bytes is a concern, isn't ~3840 bytes (the filter-size difference
is in this ballpark) _much_ more of a concern? The path to the coinbase
transaction grows only logarithmically, so further capacity increases are
unlikely to matter much, while the filter size grows linearly with
capacity and is therefore the far bigger worry.
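
To put numbers on that, a back-of-the-envelope sketch (the ~4000
transactions per block is an assumption for illustration; 3840 bytes is
the ballpark filter difference mentioned above):

    import math

    def coinbase_branch_bytes(num_txs: int) -> int:
        """Merkle branch to the coinbase: one 32-byte hash per tree level."""
        return 32 * math.ceil(math.log2(num_txs))

    print(coinbase_branch_bytes(4000))  # 384 -- twelve 32-byte hashes
    print(coinbase_branch_bytes(8000))  # 416 -- doubling capacity adds one hash

The proof overhead grows by 32 bytes each time capacity doubles, while the
per-block filter overhead scales with the amount of data indexed.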
> In regards to the second item above, what do you think of the old Tier Nolan
> proposal [1] to create a "constant" sized proof for future commitments by
> constraining the size of the block and placing the commitments within the
> last few transactions in the block?
I think it's a fairly ugly hack, especially since it requires that mining
template code be able to stuff the block if it just doesn't know enough
actual transactions -- which means having a pool of spendable outputs in
order to mine, managing private keys, etc. It also requires downstream
software not to tinker with the transaction count (which I wish it didn't,
but as of today it does). A factor-of-two difference in capacity -- if you
constrain to get the smallest possible proof -- is pretty stark, and
optimal transaction selection under this cardinality constraint would be
pretty weird. Etc.
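
To illustrate the capacity swing, assuming for simplicity that the
constraint pins the transaction count to a power of two plus one slot for
the commitment (my simplification for illustration, not the proposal's
exact rule):

    def stuffing_needed(real_txs: int) -> int:
        """Transactions a miner must fabricate to pad out to the next
        power of two, keeping one slot for the commitment transaction."""
        n = 1
        while n < real_txs + 1:
            n *= 2
        return n - real_txs - 1

    print(stuffing_needed(2047))  # 0    -- a 2048-slot tree already fits
    print(stuffing_needed(2049))  # 2046 -- must pad all the way to 4096

Just under a power of two there is no waste; just over one, nearly half
the block is stuffing -- hence the factor-of-two swing.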
If the community considers tree depth for proofs like that to be enough of
a concern to take on technical debt for that structure, we should probably
be thinking about more drastic (incompatible) changes... but I don't think
it's actually that interesting.
> I don't think its fair to compare those that wish to implement this proposal
> (and actually do the validation) to the legacy SPV software that to my
> knowledge is all but abandoned. The project I work on that seeks to deploy
Yes, maybe it isn't. But then that just means we don't have good
information. When a lot of people were choosing Electrum over SPV wallets,
back when those SPV wallets weren't abandoned, sync time was frequently
cited as an actual reason. BIP158 makes that worse, not better. So while
I'm hopeful, I'm also somewhat sceptical. Certainly, things that reduce
the size of the BIP158 filters make them seem more likely to be a success
to me.
> too difficult to implement "full" validation, as they're bitcoin developers
> with quite a bit of experience.
::shrugs:: Above you're also arguing against fetching down to the coinbase
transaction to save a couple hundred bytes per block, which makes it
impossible to validate a half dozen other things (including, as mentioned
in the other threads, the depth fidelity of returned proofs). There are a
lot of reasons why things don't get implemented other than experience! :)
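
As one concrete example of what fetching the coinbase buys: the BIP34
height committed in the coinbase scriptSig lets a client cross-check the
depth a peer claims for a block. A rough sketch, ignoring CScriptNum
sign-byte edge cases, with helper names of my own:

    def coinbase_height(coinbase_scriptsig: bytes) -> int:
        """BIP34: the coinbase scriptSig begins with a push of the block
        height, serialized little-endian. Parse that first push."""
        push_len = coinbase_scriptsig[0]              # 0x01-0x08 length byte
        return int.from_bytes(coinbase_scriptsig[1:1 + push_len], 'little')

    def check_claimed_depth(coinbase_scriptsig: bytes,
                            tip_height: int, claimed_depth: int) -> bool:
        """Does the peer's claimed confirmation depth match the height
        committed in the coinbase it served us?"""
        return tip_height - coinbase_height(coinbase_scriptsig) + 1 == claimed_depth

    # height 500000 = 0x07a120 -> scriptSig begins with b'\x03\x20\xa1\x07'
    assert coinbase_height(b'\x03\x20\xa1\x07/miner tag/') == 500000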