Stefan Thomas [ARCHIVE] on Nostr
📅 Original date posted:2012-06-15
📝 Original message:I do agree that changing/lifting the block size limit
is a hard-fork measure, but Mike raised the point, and I do believe that
whatever we decide to do now will be informed by our long-term plan as
well. So I think it is relevant to the discussion.
> Can someone please help quantify the situation? kthanks :)
According to BlockChain.info we seem to have lots of small blocks of
0-50 KB and some larger 200-300 KB blocks. So as a near-term measure, one
thing I would like to know is why no miners at all are fully exhausting
the available block size despite thousands of transactions in the memory
pool. I'm not too familiar with the default inclusion rules, so that
would certainly be interesting to understand. There is probably some
low-hanging fruit here.
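For context on those default inclusion rules: around this time the
reference client reserved part of each block for transactions ranked by a
"priority" metric, roughly the sum of each input's value times its age in
blocks, divided by the transaction's size. A minimal sketch of that
metric follows; the function and field names are illustrative, not the
client's actual code:

```python
# Sketch of the reference client's "priority" metric (circa 2012):
#   priority = sum(input_value_satoshis * input_age_blocks) / tx_size_bytes
# Names and structure here are illustrative only.

def tx_priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_satoshis, confirmations) pairs."""
    coin_age = sum(value * confirmations for value, confirmations in inputs)
    return coin_age / tx_size_bytes

# Example: a 1 BTC input with 144 confirmations (~1 day) in a
# 250-byte transaction.
p = tx_priority([(100_000_000, 144)], 250)
```

Under this scheme, old and large inputs could earn free block space,
which is one reason small high-turnover transactions (like SatoshiDice's)
depended on fees instead.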
The fact that SatoshiDice is able to afford to pay 0.0005 BTC fees and
fill up the memory pool means that either users who care about speedy
confirmation have to pay higher fees, the average actual block size has
to go up, or prioritization has to get smarter. If load increases
further, we will need more of these three remedies as well. (Note that
the last one is only a very limited fix, because as the high-priority
transactions get confirmed faster, the low-priority ones take even
longer.)
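That limitation can be made concrete with a toy queueing model: under a
fixed per-block byte cap, a transaction confirms only once the
higher-paying traffic ahead of it drains, and never confirms if that
traffic alone fills every block. All numbers and the model itself are
invented for illustration:

```python
# Toy model: each block clears `block_cap` bytes, taking higher-fee
# transactions first; `inflow_per_block` bytes of higher-paying traffic
# arrive each block. How long does a low-fee transaction wait?

def blocks_until_confirmed(backlog_bytes, block_cap, inflow_per_block):
    """Blocks until a tx confirms, with `backlog_bytes` of higher-paying
    transactions already ahead of it in the queue. Returns None if the
    higher-paying inflow alone fills every block (tx never confirms)."""
    if inflow_per_block >= block_cap:
        return None
    blocks = 0
    ahead = backlog_bytes
    while ahead >= 0:
        ahead -= block_cap - inflow_per_block  # net drain per block
        blocks += 1
    return blocks

# 300 KB of higher-fee backlog, 250 KB blocks, 200 KB/block of
# higher-fee inflow: the net drain is only 50 KB per block.
wait = blocks_until_confirmed(300_000, 250_000, 200_000)
```

The point is that smarter prioritization redistributes delay rather than
removing it: every byte of higher-priority traffic confirmed sooner is a
byte of lower-priority traffic confirmed later.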
On 6/15/2012 7:17 PM, Jeff Garzik wrote:
> On Fri, Jun 15, 2012 at 12:56 PM, Stefan Thomas <moon at justmoon.de> wrote:
>> The artificial limits like the block size limit are essentially putting
> [...]
>
> Changing the block size is an item for the hard-fork list. The chance
> of the block size limit changing in the short term seems rather low...
> it is a "nuclear option."
>
> Hard-fork requires a very high level of community buy-in, because it
> shuts out older clients who will simply refuse to consider >1MB blocks
> valid.
>
> Anything approaching that level of change would need some good, hard
> data indicating that SatoshiDice was shutting out the majority of
> other traffic. Does anyone measure mainnet "normal tx" confirmation
> times on a regular basis? Any other hard data?
>
> Clearly SatoshiDice is a heavy user of the network, but there is a
> vast difference between a good stress test and a network flood that is
> shutting out non-SD users.
>
> Can someone please help quantify the situation? kthanks :)
>
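One way to approach Jeff's measurement question is a first-seen tracker:
record when each txid first appears in the local memory pool, then
compute the delay when it shows up in a block. A sketch with the data
sources left abstract; these callbacks are hypothetical, not an existing
API:

```python
# Sketch: measure confirmation delay as (block time - first-seen time).
# How mempool and block events are obtained is left abstract.
import statistics

first_seen = {}  # txid -> unix time first observed in the mempool

def on_mempool_tx(txid, now):
    """Call when a transaction is first relayed to us."""
    first_seen.setdefault(txid, now)

def on_block(block_time, txids):
    """Call when a block arrives; returns the median confirmation
    delay (seconds) for transactions we had previously seen."""
    delays = [block_time - first_seen.pop(txid)
              for txid in txids if txid in first_seen]
    return statistics.median(delays) if delays else None
```

Run continuously, this would give the "regular basis" numbers Jeff asks
for, e.g. median delay split by fee paid or by whether a transaction
involves SatoshiDice addresses.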
Published at 2023-06-07 10:14:54