Luke-Jr [ARCHIVE] on Nostr:
📅 Original date posted: 2011-08-24
🗒️ Summary of this message: A discussion of Bitcoin's block size limit, difficulty adjustment, and value precision, with suggestions for dynamic adaptation and fractional amounts, and concerns about timestamp gaming, redundant encodings, and denominator growth.
📝 Original message: On Wednesday, August 24, 2011 12:46:42 PM Gregory Maxwell wrote:
> On Wed, Aug 24, 2011 at 12:15 PM, Luke-Jr <luke at dashjr.org> wrote:
> > - Replace hard limits (like 1 MB maximum block size) with something that
> > can dynamically adapt with the times. Maybe based on difficulty so it
> > can't be gamed?
>
> Too early for that.
A dynamically adapting limit would, by design, never be too early or too late.
Changing away from a fixed 1 MB will fork the block chain, and that kind of
event should be kept to a minimum.
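
A minimal sketch of one possible difficulty-scaled rule, in Python (the
reference difficulty constant and the exact scaling are illustrative
assumptions, not a concrete proposal):

    BASE_LIMIT = 1000000         # bytes: the fixed 1 MB cap under discussion
    BASE_DIFFICULTY = 1690906.0  # illustrative reference point, not a real constant

    def max_block_size(current_difficulty):
        # Let the cap grow with difficulty, so no later hard fork is needed.
        scale = max(1.0, current_difficulty / BASE_DIFFICULTY)
        return int(BASE_LIMIT * scale)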
> > - Adjust difficulty every block, without limits, based on a N-block
> > sliding window. I think this would solve the issue when the hashrate
> > drops overnight, but maybe also add a block time limit, or perhaps
> > include the "current block" in the difficulty calculation?
>
> The quantized scheme limits the amount of difficulty skew miners can
> create by lying about timestamps to about a half a percent. A rolling
> window with the same time constant would allow much more skew.
Depends on the implementation, I'd think.
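
A minimal sketch of one such rolling-window retarget, in Python (the 144-block
window and the absence of clamping are assumptions for illustration; Bitcoin's
actual rule retargets once every 2016 blocks, which is the quantized scheme
referred to above):

    TARGET_SPACING = 600  # seconds: one block every ten minutes

    def next_difficulty(timestamps, difficulties, n=144):
        # Retarget every block from the last n inter-block intervals.
        window = timestamps[-(n + 1):]
        actual = max(window[-1] - window[0], 1)        # observed elapsed time
        expected = TARGET_SPACING * (len(window) - 1)  # ideal elapsed time
        # Blocks arriving too fast (actual < expected) push difficulty up.
        return difficulties[-1] * expected / actual

The objection above applies to the observed elapsed time here: without
quantized boundaries, dishonest timestamps can stretch or compress every
window rather than only the retarget edges, so an implementation would need
to clamp or filter them somehow.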
> > Replacing the "Satoshi" 64-bit integers with
> > "Satoshi" variable-size fractions (ie, infinite numerator + denominator)
>
> Increasing precision I would agree with but, sadly, causing people to
> need more than 64 bit would create a lot of bugs.
>
> infinite numerator + denominator is absolutely completely and totally
> batshit insane. For one, it has weird consequences that the same value
> can have redundant encodings.
So? You can already have redundant transactions simply by changing the order
of inputs/outputs. A good client would minimize the transaction size by
reducing the fractions to lowest terms, of course.
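
For illustration, Python's standard Fraction type does exactly this
normalization, collapsing redundant encodings to lowest terms on construction:

    from fractions import Fraction

    # 2/4 and 500/1000 are redundant encodings of the same value.
    assert Fraction(2, 4) == Fraction(500, 1000) == Fraction(1, 2)
    print(Fraction(500, 1000))  # prints 1/2: stored in lowest terms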
> Most importantly, it suffers factor inflation: If you spend inputs
> 1/977 1/983 1/991 1/997 the smallest denominator you can use for the
> output is 948892238557.
I already tried to address this in my original mail. If I had those 4 coins, I
would use a denominator of 987 and discard the difference as fees.
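
Worked out with Python's Fraction type (the denominator 987 follows the reply
above; the round-down-and-discard fee rule is illustrative):

    from fractions import Fraction

    inputs = [Fraction(1, d) for d in (977, 983, 991, 997)]
    total = sum(inputs)  # exact; denominator is 977*983*991*997 = 948892238557
    # Largest k with k/987 <= total: round the output down to the chosen denominator.
    k = total.numerator * 987 // total.denominator
    output = Fraction(k, 987)  # 4/987
    fee = total - output       # the tiny remainder, discarded as fees
    print(total.denominator, output, fee)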