Roy Badami [ARCHIVE] on Nostr:
📅 Original date posted:2015-06-01
📝 Original message:> What do other people think? Would starting at a max of 8 or 4 get
> consensus? Scaling up a little less than Nielsen's Law of Internet
> Bandwidth predicts for the next 20 years? (I think predictability is
> REALLY important).
TL;DR: Personally I'm in favour of doing something relatively
uncontroversial (say, a simple increase in the block size limit to
something in the 4-8MB range) with no further increases without a
further hard fork.
I'm not sure how relevant Nielsen's Law really is. The only relevant
data points Nielsen has really boil down to a law about how the speed
of his cable modem connection changed during the period 1998-2014.

Interesting though that is, it's not hugely relevant to
bandwidth-intensive operations like running a full node. The problem
is he's only looking at the actual speed of his connection in Mbps,
not the amount of data usage in GB/month that his provider permits -
and there's no particular reason to expect those two figures to
follow the same curve. In particular, we're more interested in the
cost of backhaul and IP transit (which is what drives the GB/month
figure) than we are in improvements in DOCSIS technology, which have
little relevance to node operators even on cable modem, and none to
any other kind of full node operator, be it on DSL or in a
datacentre.
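[Editor's note: to make the GB/month point concrete, here is a rough
back-of-the-envelope sketch (an illustration, not from the original
post) of what larger blocks would mean for a full node's monthly data
usage, assuming roughly one block every ten minutes and a hypothetical
relay_factor to model re-uploading blocks to peers.]

```python
# Rough back-of-the-envelope: monthly block data for a full node.
# Assumes ~one block per 10 minutes; all figures are illustrative.
BLOCKS_PER_MONTH = 6 * 24 * 30  # 4320 blocks in a 30-day month

def monthly_block_gb(block_size_mb, relay_factor=1):
    """Approximate GB/month of block data.

    relay_factor > 1 models a node that also serves blocks to
    several peers, multiplying its transfer beyond the initial
    download.
    """
    return block_size_mb * BLOCKS_PER_MONTH * relay_factor / 1024

for size_mb in (1, 4, 8):
    print(f"{size_mb} MB blocks: ~{monthly_block_gb(size_mb):.1f} "
          f"GB/month downloaded")
```

Even at 8MB blocks the raw download is only a few tens of GB/month,
but multiplying by a realistic relay factor is what runs into
provider data caps - which is why the GB/month allowance, not the
Mbps line speed, is the figure that matters here.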
More importantly, I also think a scheduled ramp up is an unnecessary
complication. Why do we need to commit now to future block size
increases perhaps years into the future? I'd rather schedule an
uncontroversial hard fork now (if such a thing is possible) even if
there's a very real expectation - even an assumption - that by the
time the fork has taken place, it's already time to start discussing
the next one. Any curve or schedule of increases that stretches years
into the future is inevitably going to be controversial - and more so
the further into the future it stretches - simply because the
uncertainties around the Bitcoin landscape are going to be greater the
further ahead we look.
If a simple increase from 1MB to 4MB or 8MB will solve the problem for
now, why not do that? Yes, it's quite likely we'll have to do it
again, but we'll be able to make that decision in the light of the
2016 or 2017 landscape and can again make a simple, hopefully
uncontroversial, increase in the limit at that time.
So, with the proviso that I think this is all bike shedding, if I had
to pick my favourite colour for the bike shed, it would be to schedule
a hard fork that increases the 1MB limit (to something in the 4-8MB
range) but with no further increases without a further hard fork.
Personally I think trying to pick the best value of the 2035 block
size now is about as foolish as trying to understand now the economics
of Bitcoin mining many halvings hence.
NB: this is not saying that I think we shouldn't go above 8MB in the
relatively foreseeable future; quite the contrary, I strongly expect
that we will. I just don't see the need to pick the 2020 block size
now when we can easily make a far better informed decision as to the
2020 block size in 2018 or even 2019.
As to knowing what the block size is going to be for the next 20 years
being "REALLY important"? 100% disagree. I also think it's
impossible, because even if you manage to get consensus on a block
size increase schedule that stretches out to 2035 (and my prediction
is you won't), the reality is that the schedule will have been
modified by a future hard fork long before we get to 2035.
What I personally think is REALLY important is that the Bitcoin
community demonstrates an ability to react appropriately to changing
requirements and conditions - and we'll only be able to react to those
conditions when we know what they are! My expectation is that there
will be several (hopefully _relatively_ uncontroversial) scheduled
hard forks between now and 2035, and each of those will be discussed
in suitable detail before being agreed. And that's as it should be.
roy