Matt Corallo [ARCHIVE] on Nostr
📅 Original date posted: 2017-05-09
📝 Original message: There is something in between throwing the SegWit goals out the window
(as Sergio seems to be advocating for) and having a higher discount
ratio (which is required for the soft fork version to be useful).
When I first started looking at the problem I very much wanted to reduce
the worst-case block size (though have come around to caring a bit less
about that thanks to all the work in FIBRE and other similar systems
over the past year or two), but rapidly realized that just reducing the
SegWit discount wasn't really the right solution here.
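The trade-off in the paragraph above can be sketched with simple arithmetic (a back-of-the-envelope illustration, not a consensus rule: the 1 MB base figure and the treatment of an all-witness block as the worst case are assumptions drawn from how SegWit's weight limit works):

```python
# Sketch: how the witness discount sets the gap between worst-case and
# typical block size. Under a weight limit of BASE_LIMIT * discount,
# witness bytes cost 1 weight unit each, so a block stuffed with witness
# data can reach the full weight limit in raw bytes.

BASE_LIMIT = 1_000_000  # 1 MB of base (non-witness) data, as under SegWit


def worst_case_size(discount):
    # All-witness worst case: raw bytes equal the weight limit itself.
    return BASE_LIMIT * discount


for d in (4, 2):
    print(f"discount {d}: worst case {worst_case_size(d):,} bytes")
```

With SegWit's discount of 4 the worst case is 4,000,000 bytes; halving the discount to 2 halves the worst case, which is the intuition behind "just reduce the discount" that Matt argues is not the right fix on its own.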
You might as well take the real win and reduce the cost of the input
prevout itself so that average inputs are less expensive than outputs
(which SegWit doesn't quite achieve due to the large prevout size - 40
bytes). This way you can reduce the discount, still get the SegWit goal,
and get a lower ratio between worst-case and average-case block size,
though, frankly, I'm less interested in the last one these days, at
least for reasonable parameters. If you're gonna look at hard forks,
limiting yourself to just the parameters that we can tweak in a soft
fork seems short-sighted, at best.
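The claim that SegWit doesn't quite make average inputs cheaper than outputs can be checked with BIP 141 weight arithmetic (a sketch; the P2WPKH byte counts below are typical figures, not consensus constants, and the 40 bytes Matt cites is roughly the 36-byte prevout plus the 4-byte sequence):

```python
# Sketch: per-input vs per-output weight under BIP 141 for a typical
# P2WPKH spend. Base bytes count 4 weight units each; witness bytes 1.

WITNESS_SCALE = 4


def weight(base_bytes, witness_bytes):
    """BIP 141 weight: base size * 4 + witness size."""
    return base_bytes * WITNESS_SCALE + witness_bytes


# Typical P2WPKH input: 36-byte prevout + 1-byte empty scriptSig length
# + 4-byte sequence = 41 base bytes; witness (sig + pubkey) ~107 bytes.
input_weight = weight(41, 107)

# Typical P2WPKH output: 8-byte value + 23-byte scriptPubKey with its
# length prefix = 31 base bytes, no witness component.
output_weight = weight(31, 0)

print(input_weight, output_weight)
```

This works out to roughly 271 WU per input versus 124 WU per output, so spending a coin still weighs about twice as much as creating one, which is why Matt suggests a hard fork could cut the cost of the prevout itself.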
Matt
On 05/09/17 19:30, Gregory Maxwell wrote:
> On Tue, May 9, 2017 at 7:15 PM, Sergio Demian Lerner via bitcoin-dev
> <bitcoin-dev at lists.linuxfoundation.org> wrote:
>> The capacity of Segwit(50%)+2MbHF is 50% more than Segwit, and the maximum
>> block size is the same.
>
> And the UTXO bloat potential is twice as large and the cost of that
> UTXO bloat is significantly reduced. So you're basically gutting
> most of the gain from weight, making something incompatible, etc.
>
> I'm not sure what to explain-- even that page on segwit.org explains
> that the values are selected to balance worst case costs not to
> optimize one to the total exclusion of others. Raw size is not very
> relevant in the long run, but if your goal were to optimize for it
> (which it seems to be), then the limit should be pure size.
>
Published at 2023-06-07 18:00:59