Daniele Pinna [ARCHIVE] on Nostr
2023-06-07 17:54:53

šŸ“… Original date posted: 2016-12-10
šŸ“ Original message:

We have models for estimating the probability that a block is orphaned
given average network bandwidth and block size.

The question is, do we have objective measures of these two quantities?
Couldn't we target an orphan_rate < max_rate?
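
For concreteness, here is a minimal sketch of one such model (an
illustration added here, not from the original mail; the bandwidth figure
in the example is an assumption): block arrivals are treated as Poisson
with a 600-second mean interval, so a block that takes tau seconds to
reach the rest of the hash power is orphaned with probability roughly
1 - exp(-tau / 600).

import math

BLOCK_INTERVAL = 600.0  # seconds; expected time between blocks

def orphan_probability(block_size_bytes, bandwidth_bytes_per_s, latency_s=0.0):
    """Rough orphan-risk estimate: propagation time = latency + size / bandwidth,
    and a competing block is found during that window with probability
    1 - exp(-propagation_time / BLOCK_INTERVAL)."""
    tau = latency_s + block_size_bytes / bandwidth_bytes_per_s
    return 1.0 - math.exp(-tau / BLOCK_INTERVAL)

# Example: a 1 MB block over an effective 1 Mbit/s path (125,000 bytes/s)
print(orphan_probability(1_000_000, 125_000))  # ~0.013, i.e. about 1.3%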



On Dec 10, 2016 1:01 PM, <bitcoin-dev-request at lists.linuxfoundation.org>
wrote:



Today's Topics:

1. Managing block size the same way we do difficulty (aka Block75) (t. khan)
2. Re: Managing block size the same way we do difficulty (aka Block75) (s7r)


----------------------------------------------------------------------

Message: 1
Date: Mon, 5 Dec 2016 10:27:32 -0500
From: "t. khan" <teekhan42 at gmail.com>
To: bitcoin-dev at lists.linuxfoundation.org
Subject: [bitcoin-dev] Managing block size the same way we do
difficulty (aka Block75)
Message-ID:
<CAGCNRJqdu7DMC+AMR4mYKRAYStRMKVGqbnjtEfmzcoeMij5u=A at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

BIP Proposal - Managing Bitcoin's block size the same way we do difficulty
(aka Block75)

The every two-week adjustment of difficulty has proven to be a reasonably
effective and predictable way of managing how quickly blocks are mined.
Bitcoin needs a reasonably effective and predictable way of managing the
maximum block size.

It's clear at this point that human beings should not be involved in the
determination of max block size, just as they're not involved in deciding
the difficulty.

Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
passing the decision to miners/pool operators, the max block size should be
adjusted every two weeks (2016 blocks) using a system similar to how
difficulty is calculated.

Put another way: let's stop thinking about what the max block size should
be and start thinking about how full we want the average block to be
regardless of size. Over the last year, we've had averages of 75% or
higher, so aiming for 75% full seems reasonable, hence naming this concept
"Block75".

The target capacity over 2016 blocks would be 75%. If the last 2016 blocks
are more than 75% full, add the difference to the max block size. Like this:

MAX_BLOCK_BASE_SIZE = 1000000
TARGET_CAPACITY = 750000
AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
TARGET_CAPACITY

To check if a block is valid: block size ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)

For example, if the last 2016 blocks are 85% full (average block is 850
KB), add 10% to the max block size. The new max block size would be 1,100
KB until the next 2016 blocks are mined, then reset and recalculate. The
1,000,000 byte limit that exists currently would remain, but would
effectively be the minimum max block size.

Another two weeks go by; the last 2016 blocks are again 85% full, but now
that means they average 935 KB out of the 1,100 KB max block size. This is
93.5% of the 1,000,000 byte limit, so 18.5% would be added to that, making
the new max block size 1,185 KB.

Another two weeks passes. This time, the average block is 1,050 KB. The new
max block size is calculated to be 1,300 KB (blocks were 105% full; subtracting
the 75% capacity target leaves 30% to add to the max block size).

Repeat every 2016 blocks, forever.

If Block75 had been applied at the difficulty adjustment on November 18th,
the max block size would have been 1,080 KB, as the average block during
that period was 83% full, so 8% is added to the 1,000 KB limit. The current
size, after the December 2nd adjustment, would be 1,150 KB.
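
For illustration only (this sketch is not part of the proposal text and the
function name is arbitrary), the rule above can be written out and checked
against the worked examples:

MAX_BLOCK_BASE_SIZE = 1_000_000   # bytes; per the proposal, also the minimum max block size
TARGET_CAPACITY = 750_000         # 75% of the base size, in bytes

def block75_max_size(avg_block_size):
    """Max block size for the next 2016 blocks, given the average block size
    (in bytes) over the previous 2016 blocks."""
    average_over_cap = avg_block_size - TARGET_CAPACITY
    return max(MAX_BLOCK_BASE_SIZE, MAX_BLOCK_BASE_SIZE + average_over_cap)

print(block75_max_size(850_000))    # 1,100,000 -> blocks averaged 85% full
print(block75_max_size(935_000))    # 1,185,000 -> 85% full of the 1,100 KB limit
print(block75_max_size(1_050_000))  # 1,300,000 -> average block of 1,050 KB
print(block75_max_size(830_000))    # 1,080,000 -> the November 18th example (83% full)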

Block75 would allow the max block size to grow (or shrink) in response to
transaction volume, and it does so predictably, reasonably quickly, and in a
manner that prevents wild swings in block size or transaction fees. It
attempts to keep blocks at 75% of total capacity over each two-week period,
the same way the difficulty adjustment tries to keep blocks being mined every
ten minutes. It also keeps blocks as small as possible.

Thoughts?

-t.k.

------------------------------

Message: 2
Date: Sat, 10 Dec 2016 12:44:31 +0200
From: s7r <s7r at sky-ip.org>
To: bitcoin-dev at lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] Managing block size the same way we do
difficulty (aka Block75)
Message-ID: <c318f76d-0904-2e1b-453b-60179f8209bb at sky-ip.org>
Content-Type: text/plain; charset="utf-8"

t. khan via bitcoin-dev wrote:
> BIP Proposal - Managing Bitcoin's block size the same way we do
> difficulty (aka Block75)
>
> [snip: full proposal quoted above]
>

I like the idea. It is good with respect to growing the max block size
automatically without human action, but the main problem (or question)
is not how to grow this number; it is what number the network can
handle, considering both miners and users. While disk space requirements
might not be a big problem, block propagation time is. The time required
for a block to propagate in the network (or at least to all the miners)
is directly dependent on its size. If blocks take too much time to
propagate in the network, the orphan rate will increase in unpredictable
ways. For example, if the internet speed in China is worse than in
Europe, and miners in China have more than 50% of the hashing power,
blocks mined by European miners might get orphaned.
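
As a rough worked example, under the same Poisson approximation used in the
sketch near the top of this page (the propagation times are assumptions):

import math

# orphan probability ~= 1 - exp(-propagation_time / 600)
for tau in (8.0, 64.0):  # seconds for a 1 MB block to propagate, at two assumed effective bandwidths
    print(tau, round(1.0 - math.exp(-tau / 600.0), 4))
# ~1.3% orphan risk at 8 s versus ~10% at 64 s: slower-connected miners
# bear most of the orphan risk as blocks grow.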

The system as described can also be gamed by filling the network with
transactions. Miners have a monetary interest in including as many
transactions as possible in a block in order to collect the fees.
Regardless of how you think about it, there has to be a maximum block size
that the network will allow as a consensus rule. Increasing it
dynamically based on transaction volume will eventually reach a point where
the number gets big enough that it breaks things. Bitcoin, because of its
fundamental design, can scale by using off-chain solutions.
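
As a hypothetical illustration of that concern (a sketch, not from the
original mail): if blocks are kept full every adjustment period, each
period's average equals the previous period's limit, so the Block75 limit
ratchets upward without bound.

MAX_BLOCK_BASE_SIZE = 1_000_000
TARGET_CAPACITY = 750_000

limit = MAX_BLOCK_BASE_SIZE
for period in range(1, 11):
    avg = limit  # fee-collecting (or spamming) miners fill every block to the limit
    limit = max(MAX_BLOCK_BASE_SIZE, MAX_BLOCK_BASE_SIZE + (avg - TARGET_CAPACITY))
    print(f"period {period}: max block size = {limit:,} bytes")
# After ten two-week periods the limit has grown from 1,000,000 to 3,500,000 bytes,
# and it keeps rising by 250,000 bytes per period for as long as blocks stay full.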


------------------------------



End of bitcoin-dev Digest, Vol 19, Issue 4
******************************************
Author Public Key
npub16z2msvydkmd6xsjyrggnavcwrlpevfwq74x5avxrkg2dpq7cg6js5jg59c