Anthony Towns [ARCHIVE] on Nostr:
📅 Original date posted:2016-02-09
📝 Original message:On Mon, Feb 08, 2016 at 07:26:48PM +0000, Matt Corallo via bitcoin-dev wrote:
> As what a hard fork should look like in the context of segwit has never
> (!) been discussed in any serious sense, I'd like to kick off such a
> discussion with a (somewhat) specific proposal.
> Here is a proposed outline (to activate only after SegWit and with the
> currently-proposed version of SegWit):
Is this intended to be activated soon (this year?) or a while away
(2017, 2018?)?
> 1) The segregated witness discount is changed from 75% to 50%. The block
> size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
> maximum block size of 3MB and a "network-upgraded" block size of roughly
> 2.1MB. This still significantly discounts script data which is kept out
> of the UTXO set, while keeping the maximum-sized block limited.
This would mean the limits go from:

    pre-segwit   segwit pkh   segwit 2/2 msig   worst case
       1MB           -              -              1MB
       1MB         1.7MB          2MB              4MB
      1.5MB        2.1MB          2.2MB            3MB
That seems like a fairly small gain (20% for pubkeyhash, which would
last for about 3 months if your growth rate means doubling every 9
months), so this probably makes the most sense as a "quick cleanup"
change, one that also safely demonstrates how easy/difficult doing a
hard fork is in practice?
On the other hand, if segwit wallet deployment takes longer than
hoped, the 50% increase for pre-segwit transactions might be a useful
release-valve.
Doing a "2x" hardfork with the same reduction to a 50% segwit discount
would (I think) look like:
    pre-segwit   segwit pkh   segwit 2/2 msig   worst case
       1MB           -              -              1MB
       1MB         1.7MB          2MB              4MB
       2MB         2.8MB          2.9MB            4MB
which seems somewhat more appealing, without making the worst-case any
worse; but I guess there's concern about the relay networking scaling
above around 2MB per block, at least prior to IBLT/weak-blocks/whatever?
> 2) In order to prevent significant blowups in the cost to validate
> [...] and transactions are only allowed to contain
> up to 20 non-segwit inputs. [...]
This could potentially make old, signed, but time-locked transactions
invalid. Is that a good idea?
> Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
> 1-per-50-bytes across the entire block to a per-transaction limit which
> is slightly looser (though not too much looser - even with libsecp256k1
> 1-per-50-bytes represents 2 seconds of single-threaded validation in
> just sigops on my high-end workstation).
I think turning MAX_BLOCK_SIGOPS and MAX_BLOCK_SIZE into a combined
limit would be a good addition, ie:
    #define MAX_BLOCK_SIZE      1500000
    #define MAX_BLOCK_DATA_SIZE 3000000
    #define MAX_BLOCK_SIGOPS      50000

    #define MAX_COST   3000000
    #define SIGOP_COST (MAX_COST / MAX_BLOCK_SIGOPS)
    #define BLOCK_COST (MAX_COST / MAX_BLOCK_SIZE)
    #define DATA_COST  (MAX_COST / MAX_BLOCK_DATA_SIZE)

    if (utxo_data * BLOCK_COST + bytes * DATA_COST + sigops * SIGOP_COST
            > MAX_COST)
    {
        block_is_invalid();
    }
Though I think you'd need to bump up the worst-case limits somewhat to
make that work cleanly.
> 4) Instead of requiring the first four bytes of the previous block hash
> field be 0s, we allow them to contain any value. This allows Bitcoin
> mining hardware to reduce the required logic, making it easier to
> produce competitive hardware [1].
> [1] Simpler here may not be entirely true. There is potential for
> optimization if you brute force the SHA256 midstate, but if nothing
> else, this will prevent there being a strong incentive to use the
> version field as nonce space. This may need more investigation, as we
> may wish to just set the minimum difficulty higher so that we can add
> more than 4 nonce-bytes.
Could you just use leading non-zero bytes of the prevhash as additional
nonce?
So to work out the actual prev hash, zero out the leading bytes until
you hit one that is already zero. Conversely, to add nonce info to a
hash with N leading zero bytes, fill up the first N-1 (or fewer) of
them with non-zero values.
That would give a little more than 255**(N-1) possible values
((255**N - 1)/254 to be exact). That would actually scale automatically
with difficulty, and seems easy enough to make use of in an ASIC?
Cheers,
aj