Anthony Towns [ARCHIVE] on Nostr
📅 Original date posted:2016-02-07
📝 Original message:On Fri, Feb 05, 2016 at 03:51:08PM -0500, Gavin Andresen via bitcoin-dev wrote:
> Constructive feedback welcome; [...]
> Summary:
> Increase block size limit to 2,000,000 bytes.
> With accurate sigop counting, but existing sigop limit (20,000)
> And a new, high limit on signature hashing
To me, it seems absurd to have a hardfork but not take the opportunity
to combine these limits into a single weighted sum.
I'd suggest:
0.5*blocksize + 50*accurate_sigops + 0.001*sighash < 2,000,000
That provides worst case blocksize of 4MB, worst case sigops of 40,000
and worst case sighash bytes of 2GB. Given the separate limit on sighash
bytes and the improvements from libsecp256k1 I think 40k sigops should
be fine, but I'm happy to be corrected.
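The weighted sum above can be sketched as follows (a minimal illustration only; the function and constant names are mine, not from any proposed implementation):

```python
# Hypothetical sketch of the proposed single weighted-sum block limit.
BLOCK_COST_LIMIT = 2_000_000

def block_cost(blocksize, sigops, sighash_bytes):
    """Weighted cost of a block under the combined limit."""
    return 0.5 * blocksize + 50 * sigops + 0.001 * sighash_bytes

def block_within_limit(blocksize, sigops, sighash_bytes):
    return block_cost(blocksize, sigops, sighash_bytes) < BLOCK_COST_LIMIT

# Worst cases, with the other two terms at zero:
#   2,000,000 / 0.5   = 4,000,000 bytes (4MB blocksize)
#   2,000,000 / 50    = 40,000 sigops
#   2,000,000 / 0.001 = 2,000,000,000 bytes (2GB) hashed for signatures
assert not block_within_limit(4_000_000, 0, 0)   # exactly at the limit fails
assert block_within_limit(3_999_999, 0, 0)       # just under passes
assert not block_within_limit(0, 40_000, 0)      # sigop worst case
```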
For a regular transaction of, say, 380 bytes with 2 sigops and hashing
about 800 bytes, that uses up about 291 units of the limit, meaning
that if a block was full of transactions of that form, the limit would
be 6872 tx or 2.6MB per block (along with 13.7k sigops and ~5.5MB hashed
for signatures). Those weightings could probably be improved by doing
some detailed analysis and measurements, but I think they're pretty
reasonable for round figures.
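The arithmetic in that example can be checked directly (illustrative only, using the round figure of 291 units per transaction):

```python
# Re-deriving the worked example: a 380-byte tx with 2 sigops hashing
# about 800 bytes under the 0.5/50/0.001 weightings.
tx_cost = 0.5 * 380 + 50 * 2 + 0.001 * 800
assert abs(tx_cost - 290.8) < 1e-9     # i.e. about 291 units

n_tx = 2_000_000 // 291                # transactions in a full block
assert n_tx == 6872
assert n_tx * 380 == 2_611_360         # ~2.6MB of transaction data
assert n_tx * 2 == 13_744              # ~13.7k sigops
assert n_tx * 800 == 5_497_600         # ~5.5MB hashed for signatures
```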
The main advantage is that it would prevent blocks being cheaply filled
up due to hitting one of the secondary limits but only paying for the
contribution to the primary limit (presumably block size), which avoids
denial of service spam attacks.
I think having the limit take UTXO increase (or decrease) into account
would be helpful too, but I don't have a specific suggestion. If it's
just a matter of making the limit stronger (e.g. adding "0.25*max(0,change
in UTXO bytes)" to the formula on the left, but not changing the limit on
the right), that would be a soft-forking change that could be introduced
later, and maybe that's fine.
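That tightening could be sketched like this (my own illustration; the 0.25 weighting is the email's example figure, not a tuned value):

```python
# Hedged sketch of the suggested soft-fork extension: charge for UTXO-set
# growth, but give no credit for shrinkage (the max(0, ...) term).
def block_cost_with_utxo(blocksize, sigops, sighash_bytes, utxo_delta_bytes):
    base = 0.5 * blocksize + 50 * sigops + 0.001 * sighash_bytes
    return base + 0.25 * max(0, utxo_delta_bytes)
```

Because the new term only ever adds to the left-hand side while the 2,000,000 limit stays fixed, every block valid under the extended rule is also valid under the original rule, which is what makes it a soft fork.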
If there was time to actually iterate on this proposal, rather than an
apparent aim to get it out the door in the next month or two, I think it
would be good to also design it so that the parameters of the weighted
sum could be adjusted by a soft-fork in future rather than requiring a
hard fork every time a limit is reached or a weighting needs to be relaxed.
But I don't think that's feasible to design within a few weeks, so I
think it's off the table given the activation goal.
Cheers,
aj
Published at 2023-06-07 17:48:48