Matt Corallo [ARCHIVE] on Nostr:
📅 Original date posted: 2016-02-09
📝 Original message: Thanks for keeping on-topic, replying to the proposal, and being civil!
Replies inline.
On 02/09/16 09:00, Anthony Towns via bitcoin-dev wrote:
> On Mon, Feb 08, 2016 at 07:26:48PM +0000, Matt Corallo via bitcoin-dev wrote:
>> As what a hard fork should look like in the context of segwit has never
>> (!) been discussed in any serious sense, I'd like to kick off such a
>> discussion with a (somewhat) specific proposal.
>
>> Here is a proposed outline (to activate only after SegWit and with the
>> currently-proposed version of SegWit):
>
> Is this intended to be activated soon (this year?) or a while away
> (2017, 2018?)?
It's intended to activate once we have clear and broad consensus across
the community around a hard-fork proposal.
>> 1) The segregated witness discount is changed from 75% to 50%. The block
>> size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
>> maximum block size of 3MB and a "network-upgraded" block size of roughly
>> 2.1MB. This still significantly discounts script data which is kept out
>> of the UTXO set, while keeping the maximum-sized block limited.
>
> This would mean the limits go from:
>
>   pre-segwit   segwit pkh   segwit 2/2 msig   worst case
>   1MB          -            -                 1MB
>   1MB          1.7MB        2MB               4MB
>   1.5MB        2.1MB        2.2MB             3MB
>
> That seems like a fairly small gain (20% for pubkeyhash, which would
> last for about 3 months if your growth rate means doubling every 9
> months), so this probably makes the most sense as a "quick cleanup"
> change, that also safely demonstrates how easy/difficult doing a hard
> fork is in practice?
>
> On the other hand, if segwit wallet deployment takes longer than
> hoped, the 50% increase for pre-segwit transactions might be a useful
> release-valve.
>
> Doing a "2x" hardfork with the same reduction to a 50% segwit discount
> would (I think) look like:
>
>   pre-segwit   segwit pkh   segwit 2/2 msig   worst case
>   1MB          -            -                 1MB
>   1MB          1.7MB        2MB               4MB
>   2MB          2.8MB        2.9MB             4MB
>
> which seems somewhat more appealing, without making the worst-case any
> worse; but I guess there's concern about the relay network scaling
> above around 2MB per block, at least prior to IBLT/weak-blocks/whatever?
The goal isn't really to get a "gain" here... it's mostly to decrease the
worst case (4MB is pretty crazy) and keep the total size in line with
what the network can handle. Getting 1MB blocks through the network in
under a second is already incredibly difficult... 2MB is pretty scary and
will take lots of work... 3MB is past the bound of "yeah, we can be
pretty sure we can get that to work well".
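
For concreteness, here is a minimal sketch of the arithmetic behind those
sizes (illustrative only; the 45/55 base/witness split is just an assumed
ratio for a block of typical segwit pay-to-pubkey-hash transactions, not a
number from the proposal):

#include <stdio.h>

#define LIMIT 1500000.0   /* item 1: base + witness/2 must not exceed this */

int main(void) {
    /* Worst case: a block that is (almost) entirely witness data. */
    double worst_witness = 2.0 * LIMIT;                   /* witness/2 == LIMIT */
    printf("worst case:    %.1f MB\n", worst_witness / 1e6);

    /* Assumed mix: 45% of bytes in the base block, 55% in witnesses. */
    double base_frac = 0.45, wit_frac = 0.55;
    double total = LIMIT / (base_frac + wit_frac / 2.0);  /* largest block fitting the limit */
    printf("typical block: %.1f MB\n", total / 1e6);
    return 0;
}
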
>> 2) In order to prevent significant blowups in the cost to validate
>> [...] and transactions are only allowed to contain
>> up to 20 non-segwit inputs. [...]
>
> This could potentially make old, signed, but time-locked transactions
> invalid. Is that a good idea?
Hmmmmmm... you make a valid point. I was trying to avoid breaking old
transactions, but didn't think too much about time-locked ones.
Hmmmmmm... we could apply the limits to transactions that don't have at
least one "newer than the fork" input, but I'm not sure I like that either.
>> Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
>> 1-per-50-bytes across the entire block to a per-transaction limit which
>> is slightly looser (though not too much looser - even with libsecp256k1
>> 1-per-50-bytes represents 2 seconds of single-threaded validation in
>> just sigops on my high-end workstation).
>
> I think turning MAX_BLOCK_SIGOPS and MAX_BLOCK_SIZE into a combined
> limit would be a good addition, ie:
>
> #define MAX_BLOCK_SIZE 1500000
> #define MAX_BLOCK_DATA_SIZE 3000000
> #define MAX_BLOCK_SIGOPS 50000
>
> #define MAX_COST 3000000
> #define SIGOP_COST (MAX_COST / MAX_BLOCK_SIGOPS)
> #define BLOCK_COST (MAX_COST / MAX_BLOCK_SIZE)
> #define DATA_COST (MAX_COST / MAX_BLOCK_DATA_SIZE)
>
> if (utxo_data * BLOCK_COST + bytes * DATA_COST + sigops * SIGOP_COST
>         > MAX_COST)
> {
>     block_is_invalid();
> }
>
> Though I think you'd need to bump up the worst-case limits somewhat to
> make that work cleanly.
There is a clear goal here of NOT using block-based limits and instead
switching to transaction-based limits. By switching to transaction-based
limits, we avoid nasty issues with mining code either getting complicated
or enforcing overly strict limits on individual transactions.
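
As a sketch of the shape such a transaction-based limit could take (the
1-per-40-bytes ratio and the flat allowance are made-up numbers, purely
illustrative):

#include <stddef.h>

struct tx_summary {
    size_t size_bytes; /* serialized size of the transaction */
    size_t sigops;     /* signature operations the transaction executes */
};

/* Hypothetical per-transaction rule: allow one sigop per 40 bytes of
 * transaction data, plus a small flat allowance so tiny transactions are
 * not unfairly constrained. Miners can check this per transaction, with
 * no block-wide bookkeeping. */
int tx_sigops_ok(const struct tx_summary *tx)
{
    return tx->sigops <= tx->size_bytes / 40 + 4;
}
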
>> 4) Instead of requiring the first four bytes of the previous block hash
>> field be 0s, we allow them to contain any value. This allows Bitcoin
>> mining hardware to reduce the required logic, making it easier to
>> produce competitive hardware [1].
>> [1] Simpler here may not be entirely true. There is potential for
>> optimization if you brute force the SHA256 midstate, but if nothing
>> else, this will prevent there being a strong incentive to use the
>> version field as nonce space. This may need more investigation, as we
>> may wish to just set the minimum difficulty higher so that we can add
>> more than 4 nonce-bytes.
>
> Could you just use leading non-zero bytes of the prevhash as additional
> nonce?
>
> So to work out the actual prev hash, set leading bytes to zero until
> you hit a zero. Conversely, to add nonce info to a hash, if there are
> N leading zero bytes, fill up the first N-1 (or fewer) of them with
> non-zero values.
>
> That would give a little more than 255**(N-1) possible values
> ((255**N - 1)/254, to be exact). That would actually scale automatically
> with difficulty, and seems easy enough to make use of in an ASIC?
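
If it helps to see it spelled out, a minimal sketch of the recovery side
(byte order and the exact header encoding are assumptions here, not part of
the suggestion above):

#include <stdint.h>
#include <stddef.h>

/* The miner stuffs nonce data into the leading bytes of the previous-block
 * hash field that difficulty forces to zero; a validator recovers the real
 * hash by zeroing leading bytes until it reaches the first zero byte.
 * hash[0] is taken to be the most significant byte of the displayed hash. */
void strip_prevhash_nonce(uint8_t hash[32])
{
    for (size_t i = 0; i < 32 && hash[i] != 0; i++)
        hash[i] = 0;
}

With N forced-zero leading bytes, leaving at least one of them zero gives
1 + 255 + ... + 255**(N-1) = (255**N - 1)/254 distinct fillings, which is
where the count above comes from.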