Tomas [ARCHIVE] on Nostr:
📅 Original date posted:2017-04-11
📝 Original message:On Tue, Apr 11, 2017, at 03:44, Eric Voskuil wrote:
> As I understand it you would split tx inputs and outputs and send them
> independently, and that you intend this to be a P2P network
> optimization - not a consensus rule change. So my comments are based
> on those inferences. If we are talking about consensus changes this
> conversation will end up in an entirely different place.
> I don't agree with the input/output relevance statements above. When a
> tx is announced the entire tx is relevant. It cannot be validated as
> outputs only. If it cannot be validated it cannot be stored by the
> node. Validating the outputs only would require the node store invalid
> transactions.
Splitting transactions only happens *on storage* and is a minor
optimization compared to storing them in full (actually a very recent
change, with only marginally better results). This is simply because the
output scripts are read during script validation, and storing a
transaction's outputs separately gives better spatial locality of
reference (the inputs are just "in the way"). This is not relevant when
using a UTXO index, because there the outputs are stored directly in the
index, whereas Bitcrust has to read them from the transaction data.
It is not my intention to send them independently.
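The storage split described above can be sketched roughly as follows. This is a hypothetical illustration of the locality argument, not Bitcrust's actual on-disk format; all names are invented:

```python
# Hypothetical sketch: a transaction's output scripts are stored in a
# compact side table, separate from the full raw transaction. Script
# validation of a later spender only needs the spent output's script,
# so it reads the small, contiguous record instead of the whole tx.

class SplitStore:
    def __init__(self):
        self.full_txs = {}   # txid -> raw tx bytes (inputs "in the way")
        self.outputs = {}    # txid -> list of output scripts only

    def store(self, txid, raw_tx, output_scripts):
        self.full_txs[txid] = raw_tx
        # Outputs duplicated in a separate, denser structure.
        self.outputs[txid] = output_scripts

    def output_script(self, txid, index):
        # The hot path for script validation never touches full_txs.
        return self.outputs[txid][index]
```

With a UTXO index this side table would be redundant, since the index itself already holds the unspent outputs.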
> I do accept that a double-spend detection is not an optimal criteria
> by which to discard a tx. One also needs fee information. But without
> double-spend knowledge the node has no rational way to defend itself
> against an infinity of transactions that spend the minimal fee but
> also have conflicting inputs (i.e. risking the fee only once). So tx
> (pool) validation requires double-spend knowledge and at least a
> summary from outputs.
Double-spend information is still available to the network node and
could still be used for DoS protection, although I do believe
alternatives may exist.
>
> A reorg is conceptual and cannot be engineered out. What you are
> referring to is a restructuring of stored information as a consequence
> of a reorg. I don't see this as related to the above. The ability to
> perform reorganization via a branch pointer swap is based not on the
> order or factoring of validation but instead on the amount of
> information stored. It requires more information to maintain multiple
> branches.
>
> Transactions have confirmation states, validation contexts and spender
> heights for potentially each branch of an unbounded number of
> branches. It is this requirement to maintain that state for each
> branch that makes this design goal a very costly trade-off of space
> and complexity for reorg speed. As I mentioned earlier, it's the
> optimization for this scenario that I find questionable.
Sure, we can still call switching tips a "reorg". And it is indeed a
trade-off, as orphaned blocks are stored, but a block in the spend tree
takes only ~12 kb and contains the required state information.
I believe this trade-off reduces complexity, and the earlier parts of
the tree can be pruned.
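The reorg-as-pointer-swap idea under discussion can be sketched like this. It is a toy model of a spend tree, with invented names, meant only to show why switching tips needs no rewind when both branches are retained:

```python
# Hypothetical sketch: every block keeps a parent pointer and the set of
# outpoints it spends. Both sides of a fork stay in the structure, so a
# "reorg" is just repointing the tip; spentness is resolved by walking
# the parent chain from whichever tip is current.

class BlockNode:
    def __init__(self, parent, spends):
        self.parent = parent   # previous BlockNode, or None for genesis
        self.spends = spends   # set of outpoints spent in this block

class SpendTree:
    def __init__(self):
        self.tip = None

    def switch_tip(self, node):
        # The reorg: nothing is deleted or rewound, only the pointer moves.
        self.tip = node

    def is_spent(self, outpoint):
        # Orphan branches are ignored simply because they are not on the
        # path from the current tip back to genesis.
        node = self.tip
        while node is not None:
            if outpoint in node.spends:
                return True
            node = node.parent
        return False
```

The cost, as noted above, is that state for competing branches is kept around rather than discarded.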
> Because choosing the lesser amount of work is non-consensus behavior.
> Under the same circumstances (i.e. having seen the same set of blocks)
> two nodes will disagree on whether there is one confirmation or no
> confirmations for a given tx. This disagreement will persist (i.e. why
> take the weaker block only to turn around and replace it with the
> stronger block that arrives a few seconds or minutes later). It stands
> to reason that if one rejects a stronger block under a race condition,
> one would reorg out a stronger block when a weaker block arrives a
> little after the stronger block. Does this "optimization" then apply
> to chains of blocks too?
The blockchain is, by design, only eventually consistent across nodes.
Even if nodes used the same "tip-selection" rules, you cannot rely on
all blocks having propagated, and hence on each transaction having the
same number of confirmations across all nodes.
As a simpler example, if two miners both mine a block at approximately
the same time and send it to each other, then surely they would want to
continue mining on their own block. Otherwise they would be throwing
away their own reward.
And yes, this can also happen over multiple blocks, but the chances of
consistency are vastly increased with each confirmation.
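The miner behaviour described here amounts to a first-seen tip-selection rule; a minimal sketch (invented names, not any node's actual logic):

```python
# Hypothetical sketch: at equal chain work, keep the tip you already
# have (e.g. your own block) rather than reorganizing to a competitor
# of the same height; switch only for strictly more work.

def select_tip(current, candidate):
    # Each tip is a (chain_work, block_hash) pair.
    if candidate[0] > current[0]:
        return candidate   # strictly more work: switch tips
    return current         # equal or less work: keep the first-seen tip
```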
> Accepting a block that all previous implementations would have
> rejected under the same circumstance could be considered a hard fork,
> but you may be right.
I am not talking about rejecting blocks; I am only talking about
choosing which tip to mine on.
> > Frankly, I think this is a bit of an exaggeration. Soft forks are
> > counted on a hand, and I don't think there are many - if any -
> > transactions in the current chain that have changed compliance
> > based on height.
>
> Hope is a bug.
>
> If you intend this to be useful it has to help build the chain, not
> just rely on hardwiring checkpoints once rule changes are presumed to
> be buried deeply enough to do so (as the result of other implementations
> ).
>
> I understand this approach, it was ours at one time. There is a
> significant difference, and your design is to some degree based on a
> failure to fully consider this. I encourage you to not assume any
> consensus-related detail is too small.
I am not failing to consider this, and I don't consider this too small.
But ensuring contextual transaction validity by recording "validated =>
valid under rules X,Y,Z" and then checking the active rules (softfork
activation) during order validation will give logically the same results
as "validate with X,Y,Z => valid". This is not "hardwiring checkpoints"
at all.
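The equivalence claimed here can be sketched with rule flags; the flag names and helpers are illustrative only:

```python
# Hypothetical sketch: script validation records which softfork rule
# flags it ran with; contextual (order) validation later accepts the
# transaction only if the recorded flags cover every rule active at the
# confirming height. Flag names are invented for illustration.

P2SH, DERSIG, CLTV = 1, 2, 4   # bit flags for softfork rules

def record_validation(tx_store, txid, flags_used):
    # "validate => valid under rules X,Y,Z"
    tx_store[txid] = flags_used

def contextually_valid(tx_store, txid, active_flags):
    # Logically the same as "validate with X,Y,Z => valid": every rule
    # active in this context must already have been enforced.
    return (tx_store[txid] & active_flags) == active_flags
```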
> You cannot have a useful performance measure without full compliance.
I agree that the results are preliminary and I will post more if the
product reaches later stages.
> It's worth noting that many of your stated objectives, including
> modularity, developer platform, store isolation, consensus rule
> isolation (including optional use of libbitcoinconsensus) are
> implemented.
>
> It seems like you are doing some good work and it's not my intent to
> discourage that. Libbitcoin is open source, I don't get paid and I'm
> not selling anything. But if you are going down this path you should
> be aware of it and may benefit from our successes as well as some of
> the other stuff :). And hopefully we can get the benefit of your
> insights as well.
Thank you, I will definitely further dive into libbitcoin, and see what
insights I can use for Bitcrust.
Tomas
Published at 2023-06-07 17:59:36