Emin Gün Sirer
📅 Original date posted: 2015-11-13
📝 Original message:

By now, we have seen quite a few proposals for the block size increase.
It's hard not to notice that there are potentially infinitely many
functions for future block size increases. One could, for instance, double
every N years for any rational number N; increase linearly; double
initially and then increase linearly; ask the miners to vote on the size;
couple the block size increase to halvings; and so on. Without judging any
of the proposals on the table, one can see that there are countless
alternative functions one could imagine creating.
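
To make that space of alternatives concrete, here is a minimal sketch (in
Python; all parameter values are illustrative choices of mine, not drawn
from any actual proposal) of a few such schedules, written as functions
from elapsed time to a maximum block size:

    # A minimal sketch of candidate block size schedules, expressed as
    # functions from years elapsed to a maximum block size in MB.
    # All constants are illustrative; none are taken from an actual BIP.

    BASE_MB = 1.0  # starting limit, 1 MB

    def double_every_n_years(t, n=2.0):
        """Double the limit every n years (n may be any positive rational)."""
        return BASE_MB * 2 ** (t / n)

    def linear(t, slope_mb_per_year=0.5):
        """Grow the limit linearly over time."""
        return BASE_MB + slope_mb_per_year * t

    def double_then_linear(t, cutoff=4.0, slope_mb_per_year=0.5):
        """Double until `cutoff` years, then continue linearly."""
        if t < cutoff:
            return double_every_n_years(t)
        return double_every_n_years(cutoff) + slope_mb_per_year * (t - cutoff)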

I'd like to ask a question that is one notch higher: Can we enunciate what
grand goals a truly perfect function would achieve? That is, if we could
look into the future and know all the improvements to come in network
access technologies, see the expansion of the Bitcoin network across the
globe, and precisely know the placement and provisioning of all future
nodes, what metrics would we care about as we craft a function to fit what
is to come?

To be clear, I'd like to avoid discussing any specific block size increase
function. That's very much the tangible (non-meta) block size debate, and
everyone has their opinion and best good-faith attempt at what that
function should look like. I've purposefully stayed out of that issue,
because there are too many options and no metrics for evaluating proposals.

Instead, I'm asking to see if there is some agreement on how to evaluate a
good proposal. So, the meta-question: if we were looking at the best
possible function, how would we know? If we have N BIPs to choose from,
what criteria do we look for?

To illustrate, a possible meta goal might be: "increase the block size,
while ensuring that large miners never have an advantage over small miners
that [they did not have in the preceding 6 months, in 2012, pick your time
frame, or else specify the advantage in an absolute fashion]." Or "increase
block size as much as possible, subject to the constraint that 90% of the
nodes on the network are no more than 1 minute behind one of the tails of
the blockchain 99% of the time." Or "do not increase the block size until
at least date X." Or "the increase function should be monotonic." And it's
quite OK (and probably likely) to have a combination of these kinds of
metrics and constraints.
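
Purely as illustration, meta-criteria of this kind could be written down
as executable predicates over the output of a network simulation. A
sketch, where SimResult and its fields are hypothetical stand-ins for
whatever a real simulator would report:

    # Hypothetical sketch: meta-criteria as executable predicates over the
    # output of a network simulation. SimResult and its fields are invented
    # here purely to show the shape of the idea.

    from dataclasses import dataclass

    @dataclass
    class SimResult:
        # Block size limit sampled at regular intervals over the run.
        sizes_mb: list
        # Per sample: fraction of nodes no more than 60 seconds behind
        # some tail (tip) of the blockchain at that moment.
        frac_nodes_within_60s: list

    def monotonic(result):
        """Criterion: the increase function should be monotonic."""
        s = result.sizes_mb
        return all(a <= b for a, b in zip(s, s[1:]))

    def propagation_ok(result, node_frac=0.90, time_frac=0.99):
        """Criterion: 90% of nodes within 1 minute of a tail, 99% of the time."""
        samples = result.frac_nodes_within_60s
        good = sum(1 for f in samples if f >= node_frac)
        return good >= time_frac * len(samples)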

For disclosure, I personally do not have a horse in the block size debate,
besides wanting to see Bitcoin evolve and get more widely adopted. I ask
because as an academic, I'd like to understand if we can use various
simulation and analytic techniques to examine the proposals. A second
reason is that it is very easy to have a proliferation of block size
increase proposals, and good engineering would ask that we define the
meta-criteria first and then pick. To do that, we need some criteria for
judging proposals other than gut feeling.
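
In that spirit, once the meta-criteria exist, the selection step itself
becomes mechanical. A toy harness, reusing the hypothetical sketches above
and assuming some simulator is available:

    # Toy selection harness: simulate each proposal and keep those that
    # satisfy every agreed-upon criterion. `simulate` stands in for a real
    # network simulator, which is the hard part and is assumed here.

    def evaluate(proposals, criteria, simulate):
        """Return names of proposals whose simulated outcome passes all criteria."""
        passing = []
        for name, schedule in proposals.items():
            result = simulate(schedule)  # expected to return a SimResult
            if all(criterion(result) for criterion in criteria):
                passing.append(name)
        return passing

    # e.g.: evaluate({"double-every-2y": double_every_n_years},
    #                [monotonic, propagation_ok], simulate=my_simulator)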

Of course, even with meta-criteria in hand, there will be room for lots of
disagreement because we do not actually know the future and reasonable
people can disagree on how things will evolve. Even so, I think this is
good: agreeing on meta-criteria should still be easier than agreeing on an
actual, specific function for increasing the block size.

It looks like some specific meta-level criteria would help more at this
point than new proposals, each exploring a different variant of block size
increase schedules.

Best,

- egs


P.S. This message is an offshoot of this blog post:

http://hackingdistributed.com/2015/11/13/suggestion-for-the-blocksize-debate/