Anthony Towns [ARCHIVE] on Nostr:
📅 Original date posted:2015-12-08
📝 Original message:
> On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell wrote:
> > If widely used this proposal gives a 2x capacity increase
> > (more if multisig is widely used),

So from IRC, this doesn't seem quite right -- capacity is constrained as

    base_size + witness_size/4 <= 1MB

rather than

    base_size <= 1MB and base_size + witness_size <= 4MB

or similar. So if you have a 500B transaction and move 250B into the
witness, you're still using up 250B+250B/4 of the 1MB limit, rather than
just 250B of the 1MB limit.
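
To spell out that accounting, here's a rough sketch (Python; the /4
discount and the 1MB budget are just the numbers discussed above):

    # Rough sketch of the cost accounting above: a transaction's
    # contribution to the 1MB budget is base_size + witness_size/4.
    def cost(base_size, witness_size):
        return base_size + witness_size / 4.0

    print(cost(500, 0))    # 500.0  -- the 500B tx, nothing in the witness
    print(cost(250, 250))  # 312.5  -- 250B moved to the witness, not 250
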
In particular, if you use as many p2pkh transactions as possible, you'd
have 800kB of base data plus 800kB of witness data, and for a block
filled with 2-of-2 multisig p2sh transactions, you'd hit the limit at
670kB of base data and 1.33MB of witness data.

That would be 1.6MB and 2MB of total actual data if you hit the limits
with real transactions, so it's more like a 1.8x increase for real
transactions afaics, even with substantial use of multisig addresses.
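
Working those figures through: assume each transaction's witness is
roughly the same size as its base data (p2pkh-ish) or roughly twice it
(2-of-2 p2sh multisig-ish) -- those ratios are just the ones implied by
the numbers above -- and solve for the block that exactly hits the limit:

    # Solve base + witness/4 = 1MB for a block whose transactions all
    # have witness_size = r * base_size (r is a rough, assumed ratio).
    def block_at_limit(r, limit_kb=1000.0):
        base = limit_kb / (1 + r / 4.0)
        return base, r * base, base + r * base  # base, witness, total (kB)

    print(block_at_limit(1))  # p2pkh-ish:   (800.0, 800.0, 1600.0)
    print(block_at_limit(2))  # 2-of-2-ish:  (~666.7, ~1333.3, 2000.0)
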
The 4MB consensus limit could only be hit by having a single trivial
transaction using as little base data as possible, then a single huge
4MB witness. So people trying to abuse the system have 4x the blocksize
for 1 block's worth of fees, while people using it as intended only get
1.6x or 2x the blocksize... That seems kinda backwards.
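
(Continuing the same arithmetic with made-up ratios: as the witness:base
ratio r grows, total block size approaches 4MB while the base data --
and hence, roughly, the number of fee-paying transactions -- shrinks
towards zero.)

    # base + witness/4 = 1MB, with witness = r * base:
    for r in (1, 2, 10, 100, 1000):
        base = 1000.0 / (1 + r / 4.0)  # kB of base data
        total = base * (1 + r)         # kB of base + witness data
        print(r, round(base, 1), round(total, 1))
    # r=1: 800kB base / 1600kB total; r=1000: ~4kB base / ~3988kB total.
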
Having a cost function rather than separate limits does make it easier to
build blocks (approximately) optimally, though (ie, just divide the fee by
(base_bytes+witness_bytes/4) and sort). Are there any other benefits?
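
By "easier" I mean something like this rough sketch (the transaction
list is invented purely for illustration):

    # Greedy block building with a single cost function: sort by fee per
    # unit of cost, then fill until the 1MB cost budget runs out.
    txs = [
        # (fee in satoshi, base bytes, witness bytes) -- made-up values
        (10000, 250, 110),
        (30000, 400, 800),
        (5000, 200, 0),
    ]

    def cost(base, witness):
        return base + witness / 4.0

    budget, used, block = 1000000.0, 0.0, []
    key = lambda t: t[0] / cost(t[1], t[2])
    for fee, base, wit in sorted(txs, key=key, reverse=True):
        if used + cost(base, wit) <= budget:
            block.append((fee, base, wit))
            used += cost(base, wit)
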
But afaics, you could just have fixed consensus limits and use the cost
function for building blocks, though? ie sort txs by fee divided by [B +
S*50 + W/3] (where B is base bytes, S is sigops and W is witness bytes)
then just fill up the block until one of the three limits (1MB base,
20k sigops, 3MB witness) is hit?
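
ie, something like (coefficients and limits as above, transactions again
made up):

    # Keep three independent consensus limits, but use a single heuristic
    # cost purely for ordering; stop as soon as any limit would be hit.
    MAX_BASE, MAX_SIGOPS, MAX_WITNESS = 1000000, 20000, 3000000

    def heuristic_cost(base, sigops, witness):
        return base + sigops * 50 + witness / 3.0

    def build_block(txs):  # txs: list of (fee, base, sigops, witness)
        block, base, sigops, witness = [], 0, 0, 0
        key = lambda t: t[0] / heuristic_cost(t[1], t[2], t[3])
        for tx in sorted(txs, key=key, reverse=True):
            if (base + tx[1] > MAX_BASE or sigops + tx[2] > MAX_SIGOPS
                    or witness + tx[3] > MAX_WITNESS):
                break  # one of the three limits hit
            block.append(tx)
            base, sigops, witness = base + tx[1], sigops + tx[2], witness + tx[3]
        return block
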
(Doing a hard fork to make *all* the limits -- base data, witness data,
and sigop count -- part of a single cost function might be a win; I'm
just not seeing the gain in forcing witness data to trade off against
block data when filling blocks is already a 2D knapsack problem)

Cheers,
aj