Tomas [ARCHIVE] on Nostr:
📅 Original date posted:2017-04-08
📝 Original message:> I don’t fully understand your storage engine. So the following deduction
> is just based on common sense.
>
> a) It is possible to make unlimited number of 1-in-100-out txs
>
> b) The maximum number of 100-in-1-out txs is limited by the number of
> previous 1-in-100-out txs
>
> c) Since bitcrust does not perform well with 100-in-1-out txs, for anti-DoS
> purposes you should limit the number of previous 1-in-100-out txs.
>
> d) Limit 1-in-100-out txs == Limit UTXO growth
>
> I’m not surprised that you find a model more efficient than Core. But I
> don’t believe one could find a model that doesn’t become more efficient
> with UTXO growth limitation.
My efficiency claims are *only* with regard to order validation. If we
assume all transactions are already pre-synced and verified, bitcrust's
order validation is very fast, and only slightly negatively affected
by input counts.
Most of the total time is spent during base-load script validation, and
UTXO growth is definitely the limiting factor there, as the model here
isn't all that different from Core's.
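To make the asymmetry concrete, here is a toy model of UTXO-set churn for the two transaction shapes discussed above. This is only an illustrative sketch (a plain Python set standing in for the UTXO set), not Bitcrust's or Core's actual storage code; the outpoint names are made up:

```python
# Toy UTXO set: each outpoint is a string "txid:index".
def apply_tx(utxo_set, inputs, outputs):
    """Spend `inputs` and add `outputs`. Each input costs one
    lookup + delete; each output costs one insert."""
    for op in inputs:
        utxo_set.remove(op)
    utxo_set.update(outputs)

utxos = {"coinbase:0"}

# A 1-in-100-out tx: 1 delete, 100 inserts -> set grows by 99.
apply_tx(utxos, ["coinbase:0"], [f"fanout:{i}" for i in range(100)])
assert len(utxos) == 100

# A 100-in-1-out tx: 100 deletes, 1 insert -> set shrinks by 99.
apply_tx(utxos, [f"fanout:{i}" for i in range(100)], ["sweep:0"])
assert len(utxos) == 1
```

The point of the quoted deduction follows directly: sweeping txs can never outnumber the fan-out txs that funded them, so limiting fan-outs limits UTXO growth.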
> Maybe you could try an experiment with regtest? Make a lot of 1-in-100-out
> txs with many blocks, then spend all the UTXOs with 100-in-1-out txs.
> Compare the performance of bitcrust with core. Then repeat with
> 1-in-1-out chained txs (so the UTXO set is always almost empty)
>
Again, this really depends on whether we focus on full block validation,
in which case the 100-in-1 / 1-in-100 distinction will be similar to
Core's, or only on order validation, in which case Bitcrust will show
this odd reversal.
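A minimal illustration of order validation taken in isolation: checking that no outpoint is spent twice within a block, ignoring script execution entirely. This is a hypothetical sketch of the concept only, not Bitcrust's spend-tree data structure:

```python
# Each tx is (inputs, outputs), with outpoints as "txid:index" strings.
def order_valid(block_txs):
    """Return False if any outpoint is spent more than once."""
    spent = set()
    for inputs, _outputs in block_txs:
        for op in inputs:
            if op in spent:
                return False  # double-spend within the block
            spent.add(op)
    return True

block = [
    (["a:0"], [f"b:{i}" for i in range(100)]),   # 1-in-100-out
    ([f"b:{i}" for i in range(100)], ["c:0"]),   # 100-in-1-out
]
assert order_valid(block)
assert not order_valid(block + [(["a:0"], ["d:0"])])  # "a:0" spent twice
```

Note that this check does almost no work per input beyond a set lookup, which is why its cost profile can differ so sharply from script validation's.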
> One more question: what is the absolute minimum disk and memory usage in
> bitcrust, compared with the pruning mode in Core?
As bitcrust doesn't support this yet, I cannot give accurate numbers,
but I've provided some rough estimates earlier in the thread.
Rereading my post and these comments, I may have stepped on some toes
with regards to SegWit's model. I like SegWit (though I may have a
slight preference for BIP140), and I understand the reasons for the
"discount", so this was not my intention. I just think that the reversal
of costs during peak load order validation is a rather interesting
feature of using spend-tree based validation.
Tomas
Published at 2023-06-07 17:59:41