Johnson Lau [ARCHIVE] on Nostr:
📅 Original date posted: 2017-04-08
📝 Original message:
> On 9 Apr 2017, at 03:56, Tomas <tomas at tomasvdw.nl> wrote:
>
>
>> I don’t fully understand your storage engine. So the following deduction
>> is just based on common sense.
>>
>> a) It is possible to make unlimited number of 1-in-100-out txs
>>
>> b) The maximum number of 100-in-1-out txs is limited by the number of
>> previous 1-in-100-out txs
>>
>> c) Since bitcrust does not perform well with 100-in-1-out txs, for anti-DoS
>> purposes you should limit the number of previous 1-in-100-out txs.
>>
>> d) Limit 1-in-100-out txs == Limit UTXO growth
>>
>> I’m not surprised that you find a model more efficient than Core. But I
>> don’t believe one could find a model that doesn’t become more efficient
>> with a UTXO growth limitation.
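The arithmetic behind points (a)–(d) is easy to sketch. The following toy Python snippet (illustrative only; it is neither bitcrust nor Core code, and the function name is made up) just tracks the net UTXO-set delta of a transaction:

```python
# Toy illustration (not bitcrust or Core code): the net UTXO-set growth
# of a transaction is outputs created minus inputs spent.

def utxo_delta(n_inputs: int, n_outputs: int) -> int:
    """Net change in UTXO-set size caused by one transaction."""
    return n_outputs - n_inputs

# A 1-in-100-out tx adds 99 entries to the UTXO set;
# a 100-in-1-out tx removes 99.
assert utxo_delta(1, 100) == 99
assert utxo_delta(100, 1) == -99

# Every 100-in-1-out tx needs 100 existing unspent outputs to consume,
# so the number of such consolidations is bounded by prior fan-outs
# (point b); limiting fan-out txs therefore limits UTXO growth (point d).
```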
>
> My efficiency claims are *only* with regards to order validation. If we
> assume all transactions are already pre-synced and verified, bitcrust's
> order validation is very fast, and (only slightly) negatively affected
> by input counts.
Does pre-synced mean already in the mempool and verified? Then it sounds like we just need some mempool optimisation. The tx order in a block is not important, unless the transactions are dependent.
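The dependency rule referred to here can be sketched as follows (a hedged illustration with made-up names, not Core's actual validation code): a transaction spending an output created earlier in the same block must appear after its parent; otherwise intra-block order does not affect validity.

```python
# Sketch of the intra-block ordering rule: a tx spending an output
# created in the same block must come after the tx that created it.

def check_block_order(block_txs):
    """block_txs: list of (txid, [spent_txids]) in block order.
    Returns True iff every in-block parent precedes its child."""
    in_block = {txid for txid, _ in block_txs}
    seen = set()
    for txid, spends in block_txs:
        for parent in spends:
            if parent in in_block and parent not in seen:
                return False  # child placed before its in-block parent
        seen.add(txid)
    return True

# Parent "a" before child "b": valid order.
assert check_block_order([("a", []), ("b", ["a"])])
# Child before its in-block parent: invalid order.
assert not check_block_order([("b", ["a"]), ("a", [])])
# Parents outside the block impose no ordering constraint.
assert check_block_order([("c", ["x"])])
```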
>
>> One more question: what is the absolute minimum disk and memory usage in
>> bitcrust, compared with the pruning mode in Core?
>
> As bitcrust doesn't support this yet, I cannot give accurate numbers,
> but I've provided some rough estimates earlier in the thread.
>
>
> Rereading my post and these comments, I may have stepped on some toes
> with regards to SegWit's model. I like SegWit (though I may have a
> slight preference for BIP140), and I understand the reasons for the
> "discount", so this was not my intention. I just think that the reversal
> of costs during peak load order validation is a rather interesting
> feature of using spend-tree based validation.
>
> Tomas
Please, no conspiracy theories about stepping on someone’s toes; I believe it’s always nice to challenge the established model. However, as I’m trying to make a hardfork design, I intend to have a stricter UTXO growth limit. As you said, "protocol addressing the UTXO growth, might not be worth considering protocol improvements", it sounds like a UTXO growth limit wouldn’t be very helpful for your model, which I doubt.
Published at 2023-06-07 17:59:41