jl2012 [ARCHIVE] on Nostr
📅 Original date posted: 2015-12-17
📝 Original message: I know my reply is a long one, but please read it before
you hit send. I have two proposals: a fast BIP102 plus a slow SWSF, and a
fast SWSF only. I guess no one here is arguing against doing segwit; it is
at the top of my wish list. My main argument (maybe also Jeff's) is that
segwit is too complicated and may not be a viable short-term solution, for
the reasons I listed and don't want to repeat.
I also don't agree with you that BIP102 is *strictly* inferior to segwit.
We have never had a complex softfork like segwit, but we did have a
successful simple hardfork (BIP50), and BIP102 is very simple. (Details are
in my last post; I'm not going to repeat them.)
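To illustrate what I mean by "very simple": BIP102 is essentially a
one-parameter change to the consensus block-size check, gated on an
activation point. A minimal sketch in Python (illustrative only, not the
actual BIP102 patch; the names and the activation height are placeholders):

OLD_MAX_BLOCK_SIZE = 1_000_000   # current consensus limit, bytes
NEW_MAX_BLOCK_SIZE = 2_000_000   # the proposed 2MB limit, bytes
ACTIVATION_HEIGHT = 0            # placeholder; BIP102 defines its own trigger

def max_block_size(height: int) -> int:
    # Before activation the old limit applies; afterwards the new one does.
    return NEW_MAX_BLOCK_SIZE if height >= ACTIVATION_HEIGHT else OLD_MAX_BLOCK_SIZE

def check_block_size(serialized_block: bytes, height: int) -> bool:
    # The hard-fork change is confined to this single limit check;
    # the rest of block validation is untouched.
    return len(serialized_block) <= max_block_size(height)

Compare that with segwit, which touches transaction serialization, script
verification, and size/fee accounting at the same time.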
Mark Friedenbach wrote on 2015-12-17 04:33:
> There are many reasons to support segwit beyond it being a soft-fork.
> For example:
>
> * the limitation of non-witness data to no more than 1MB makes the
> quadratic scaling costs in large transaction validation no worse than
> they currently are;
> * redeem scripts in witness use a more accurate cost accounting than
> non-witness data (further improvements to this beyond what Pieter has
> implemented are possible); and
> * segwit provides features (e.g. opt-in malleability protection) which
> are required by higher-level scaling solutions.
>
> With that in mind I really don't understand the viewpoint that it
> would be better to engage a strictly inferior proposal such as a
> simple adjustment of the block size to 2MB.
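As an aside on the quadratic-hashing point in the first bullet, here is a
rough back-of-the-envelope model. The sizes below are my own assumptions
(typical P2PKH input/output sizes and an approximate BIP143-style preimage
size), not exact serializations:

INPUT_SIZE = 148    # assumed size of a typical legacy P2PKH input, bytes
OUTPUT_SIZE = 34    # assumed size of a typical P2PKH output, bytes
TX_OVERHEAD = 10    # version, locktime, input/output counts

def tx_size(n_inputs: int, n_outputs: int = 1) -> int:
    return TX_OVERHEAD + n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE

def legacy_hashed_bytes(n_inputs: int) -> int:
    # Legacy SIGHASH_ALL re-serializes and hashes (almost) the whole
    # transaction once per input signature, so total hashing grows
    # roughly quadratically with the number of inputs.
    return n_inputs * tx_size(n_inputs)

def segwit_hashed_bytes(n_inputs: int, preimage_size: int = 180) -> int:
    # A BIP143-style sighash hashes a roughly fixed-size preimage per input
    # (reusing midstates for prevouts, sequences and outputs), so total
    # hashing grows linearly. 180 bytes is an approximation.
    return n_inputs * preimage_size

for n in (10, 100, 1000):
    print(n, legacy_hashed_bytes(n), segwit_hashed_bytes(n))

At 1,000 inputs the legacy scheme hashes on the order of 100 MB while the
segwit-style scheme stays under 200 KB, which is the sense in which capping
non-witness data at 1MB keeps the worst case no worse than today. The
"more accurate cost accounting" in the second bullet refers to witness bytes
being counted at a discount relative to base bytes; as the quote notes, the
exact parameters in Pieter's implementation could still be improved.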
Published at 2023-06-07 17:46:37