Matt Corallo [ARCHIVE] on Nostr:
📅 Original date posted:2012-06-15
📝 Original message:On Thu, 2012-06-14 at 13:52 +0200, Mike Hearn wrote:
> > filterinit(false positive rate, number of elements): initialize
> > filterload(data): input a serialized bloom filter table metadata and data.
>
> Why not combine these two?
I believe it's because it allows the node which will have to use the
bloom filter to scan transactions to choose how much effort it wants to
put into each transaction on behalf of the SPV client. Though it's
generally a small amount of CPU time/memory, if we end up with a drastic
split between SPV nodes and only a few large network nodes, those nodes
may wish to limit the CPU/memory usage each peer is allowed to consume,
which may be important if you are serving 1000 SPV peers. It offers a
sort of negotiation between SPV client and full node instead of letting
the client specify the filter parameters outright.
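
For concreteness, here is a rough sketch (not part of the proposal, just
the standard Bloom filter sizing math) of how a full node might turn the
client's requested false-positive rate and element count into a table
size it is willing to pay for; MAX_FILTER_BYTES is an assumed
server-side cap, not something specified anywhere:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical sketch: derive the Bloom filter table size and number of
// hash functions from the client's requested false-positive rate and
// element count, then clamp to a server-side ceiling so the full node
// bounds its own per-transaction scan cost.
static const uint32_t MAX_FILTER_BYTES = 36000; // assumed cap

struct FilterParams {
    uint32_t nBytes;     // size of the bit table in bytes
    uint32_t nHashFuncs; // hash functions applied per element
};

FilterParams NegotiateFilterParams(double fpRate, uint32_t nElements)
{
    // Standard Bloom filter sizing: m = -n*ln(p)/(ln 2)^2 bits,
    // k = (m/n)*ln 2 hash functions.
    double nBits = -1.0 * nElements * std::log(fpRate) /
                   (std::log(2.0) * std::log(2.0));
    uint32_t nBytes = std::min((uint32_t)std::ceil(nBits / 8.0),
                               MAX_FILTER_BYTES);
    uint32_t nHashFuncs = std::max(1u,
        (uint32_t)(nBytes * 8.0 / nElements * std::log(2.0) + 0.5));
    FilterParams params;
    params.nBytes = nBytes;
    params.nHashFuncs = nHashFuncs;
    return params;
}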
>
> > 'filterload' and 'filteradd' enable special behavior changes for
> > 'mempool' and existing P2P commands, whereby only transactions
> > matching the bloom filter will be announced to the connection, and
> > only matching transactions will be sent inside serialized blocks.
>
> Need to specify the format of how these arrive. It means that when a
> new block is found instead of inv<->getdata<->block we'd see something
> like inv<->getdata<->merkleblock where a "merkleblock" structure is a
> header + list of transactions + list of merkle branches linking them
> to the root. I think CMerkleTx already knows how to serialize this,
> but it redundantly includes the block hash which would not be
> necessary for a merkleblock message.
A series of CMerkleTx's might also end up redundantly encoding branches
of the merkle tree, so, yes, as part of the BIP/implementation, I would
say we probably want a CFilteredBlock or similar.
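
Roughly what I have in mind, as a sketch only (the name and fields are
placeholders, not an actual implementation): one structure carrying the
header, the matched transactions, and a single shared partial merkle
proof rather than a separate branch per transaction:

#include <vector>
#include "uint256.h" // assumed Bitcoin headers for uint256,
#include "main.h"    // CBlockHeader and CTransaction

// Hypothetical "CFilteredBlock": the block header plus the transactions
// that matched the filter, with one shared partial merkle proof instead
// of a per-transaction CMerkleTx branch, which would repeat interior
// nodes of the tree.
class CFilteredBlock
{
public:
    CBlockHeader header;                  // 80-byte block header
    std::vector<CTransaction> vMatchedTx; // transactions matching the filter
    // One combined proof for all matched transactions: the interior
    // hashes needed to recompute hashMerkleRoot, plus flag bits saying
    // which nodes are supplied vs. recomputed, so shared branches are
    // sent only once.
    std::vector<uint256> vMerkleHashes;
    std::vector<bool> vFlags;
};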