Gregory Maxwell [ARCHIVE] on Nostr:
📅 Original date posted:2015-09-26
📝 Original message:On Wed, Sep 23, 2015 at 9:37 PM, Gavin Andresen <gavinandresen at gmail.com> wrote:
>> Avoiding this is why I've always previously described this idea as a
>> merged mined block DAG (with blocks of arbitrary strength) which are
>> always efficiently differentially coded against prior state. A new
>> solution (regardless of who creates it) can still be efficiently
>> transmitted even if it differs in arbitrary ways (though the
>> efficiency is less the more different it is).
>
> Yup, although I don't get the 'merge mined' bit; the weak blocks are
> ephemeral, probably purged out of memory as soon as a few full blocks are
> found...
Unless the weak block transaction list can be a superset of the block
transaction list, size-proportional propagation costs are not totally
eliminated.
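The propagation saving here can be sketched as a set difference: a peer
that already holds a weak block only needs the transactions the final
block adds on top of it. This is an illustrative sketch, not Bitcoin Core
code; the function name and txid representation are assumptions.

```python
def encode_block_delta(block_txids, weak_txids):
    """Encode a block relative to an already-propagated weak block.

    Returns only the txids a peer holding the weak block is missing,
    so the data sent at block time scales with the diff, not with the
    full block size.
    """
    weak_set = set(weak_txids)
    return [txid for txid in block_txids if txid not in weak_set]

# A block built mostly from a recent weak block's transactions:
weak = ["a", "b", "c", "d"]
block = ["a", "b", "c", "d", "e"]   # one transaction added on top
print(encode_block_delta(block, weak))  # only "e" must travel
```

If the weak block list were allowed to be a superset, the diff could be
empty even when the final block omits some of the weak block's
transactions.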
Even if the weak block criterion is MUCH lower than the block
criterion (which would become problematic in its own right at some
point), the network will sometimes find blocks when there hasn't been
any weak block priming at all (e.g. all prior priming has already made
it into blocks).
So if the weak block commitment must be exactly the block commitment,
you end up having to add a small number of transactions to your block
above and beyond the latest well-propagated weak blocks... This could
still work, but it creates pressure to crank up the weak block
overhead, which would better be avoided.
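The trade-off between the two commitment rules can be made concrete. In
this hypothetical comparison (the lists and names are illustrative
assumptions, not protocol data), an exact-commitment weak block leaves a
newly arrived transaction unprimed, while a superset weak block that
speculatively included it does not.

```python
def fresh_txs(block_txids, weak_txids):
    """Transactions a peer holding the weak block still needs at block time."""
    weak_set = set(weak_txids)
    return [t for t in block_txids if t not in weak_set]

# Exact rule: the weak block's list had to equal an earlier block candidate.
weak_exact = ["a", "b", "c"]
# Superset rule: the miner padded the weak block with speculative txs.
weak_superset = ["a", "b", "c", "e", "f"]

block = ["a", "b", "c", "e"]  # final block picked up one newer transaction

print(fresh_txs(block, weak_exact))     # ["e"] -- must be sent with the block
print(fresh_txs(block, weak_superset))  # []   -- fully primed in advance
```

Under the exact rule, keeping the unprimed remainder small means mining
weak blocks more often, which is exactly the extra overhead the text
argues is better avoided.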
Published at 2023-06-07 17:41:07