Adam Back [ARCHIVE] on Nostr:
Original date posted: 2015-02-20
Original message: The idea is not mine; some random guy appeared in #bitcoin-wizards one
day and said something about it, and lots of people reacted: wow, why
didn't we think of that before.
It goes something like this: each block contains a commitment to a bloom
filter that has all of the addresses in the block stored in it.
Now the user downloads the headers and bloom data for all blocks. They
know the bloom data is correct in an SPV sense because of the
commitment. They can scan it offline and locally by searching for
addresses from their wallet in it. Not sure offhand what the most
efficient strategy is; probably it's pretty fast locally anyway.
Now they know (modulo false positives) which addresses of theirs may be
in the block.
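A minimal sketch of the committed filter and the client's local scan, under assumed parameters (the filter size, hash count, and salted-SHA-256 indexing here are illustrative, not part of the proposal):

```python
import hashlib

M_BITS, K = 1024, 4  # assumed filter parameters, for illustration only

def bit_positions(addr: str):
    # Derive K bit positions from salted SHA-256 digests of the address.
    for i in range(K):
        d = hashlib.sha256(bytes([i]) + addr.encode()).digest()
        yield int.from_bytes(d[:4], "big") % M_BITS

def build_filter(block_addresses):
    # Miner side: set the K bits for every address in the block.
    bits = bytearray(M_BITS // 8)
    for addr in block_addresses:
        for p in bit_positions(addr):
            bits[p // 8] |= 1 << (p % 8)
    return bytes(bits)

def might_contain(bits: bytes, addr: str) -> bool:
    # True for every inserted address; may also be true spuriously
    # (the false positives mentioned above).
    return all(bits[p // 8] & (1 << (p % 8)) for p in bit_positions(addr))

# Miner commits to the filter (e.g. a hash of its bytes in the block).
filter_bytes = build_filter(["addr1", "addr2", "addr3"])
commitment = hashlib.sha256(filter_bytes).digest()

# Client: verify the downloaded filter against the commitment, then
# scan locally for wallet addresses -- no queries leak to the server.
assert hashlib.sha256(filter_bytes).digest() == commitment
hits = [a for a in ["addr2", "addr_other"] if might_contain(filter_bytes, a)]
```

The point of the commitment is that the client need not trust whichever node served the filter bytes: any tampering changes the hash.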
So now they ask a full node for merkle paths + transactions for those
addresses from the UTXO set, from the block(s) they were found in.
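The merkle-path check the client then performs is the standard SPV one; a toy sketch (Bitcoin-style double SHA-256, four-leaf tree and path layout are illustrative):

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    # Bitcoin-style double SHA-256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_path(leaf: bytes, path, root: bytes) -> bool:
    """Fold a leaf hash up to the root.

    `path` is a list of (sibling_hash, sibling_is_right) pairs, one per
    tree level, as a full node would serve them.
    """
    h = leaf
    for sibling, sibling_is_right in path:
        h = sha256d(h + sibling) if sibling_is_right else sha256d(sibling + h)
    return h == root

# Toy 4-leaf tree standing in for a block's transaction tree.
leaves = [sha256d(bytes([i])) for i in range(4)]
level1 = [sha256d(leaves[0] + leaves[1]), sha256d(leaves[2] + leaves[3])]
root = sha256d(level1[0] + level1[1])

# The path a full node would serve to prove leaf 0 is in the tree.
path0 = [(leaves[1], True), (level1[1], True)]
```

A client holding only headers can run `verify_merkle_path` against the merkle root in the header, so the full node cannot forge inclusion.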
Separately UTXO commitments could optionally be combined to improve
security in two ways:
- the normal SPV increase: you can also see that the transaction
is actually in the last block's UTXO set.
- to avoid withholding by the full node: if the UTXO commitment is a
trie (sorted), they can expect a merkle path to the lexically adjacent
nodes on either side of where the claimed missing address would be, as
a proof that there really are no transactions for that address in the
block (distinguishing a false positive from node withholding).
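The adjacency argument in the second point could be sketched like this (the leaf set and helper names are hypothetical; a real proof would also carry a merkle path for each bracketing leaf to the committed trie root, which is elided here):

```python
import bisect

# Hypothetical sorted leaf set of a block's UTXO trie commitment:
# addresses kept in lexical order, so "adjacent" is well defined.
sorted_addrs = sorted(["addr_a", "addr_c", "addr_f", "addr_m"])

def absence_proof(addr: str):
    """Prover side: return the lexically adjacent pair bracketing `addr`.

    Either side is None at the edges of the key space.
    """
    i = bisect.bisect_left(sorted_addrs, addr)
    if i < len(sorted_addrs) and sorted_addrs[i] == addr:
        raise ValueError("address is present; no absence proof exists")
    left = sorted_addrs[i - 1] if i > 0 else None
    right = sorted_addrs[i] if i < len(sorted_addrs) else None
    return left, right

def check_absence(addr: str, left, right) -> bool:
    # Verifier side: the queried address must fall strictly between the
    # two leaves. In the full scheme the verifier also checks each leaf's
    # merkle path and that the leaves are adjacent in the committed trie,
    # so the node cannot omit a leaf in between.
    return (left is None or left < addr) and (right is None or addr < right)
```

If the node can produce such a bracketing pair, the bloom hit was a false positive; if it cannot, the client knows the node is withholding.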
Adam
On 20 February 2015 at 17:43, Mike Hearn <mike at plan99.net> wrote:
> Ah, I see, I didn't catch that this scheme relies on UTXO commitments
> (presumably with Mark's PATRICIA tree system?).
>
> If you're doing a binary search over block contents then does that imply
> multiple protocol round trips per synced block? I'm still having trouble
> visualising how this works. Perhaps you could write down an example run for
> me.
>
> How does it interact with the need to download chains rather than individual
> transactions, and do so without round-tripping to the remote node for each
> block? Bloom filtering currently pulls down blocks in batches without much
> client/server interaction and that is useful for performance.
>
> Like I said, I'd rather just junk the whole notion of chain scanning and get
> to a point where clients are only syncing headers. If nodes were calculating
> a script->(outpoint, merkle branch) map in LevelDB and allowing range
> queries over it, then you could quickly pull down relevant UTXOs along with
> the paths that indicated they did at one point exist. Nodes can still
> withhold evidence that those outputs were spent, but the same is true today
> and in practice this doesn't seem to be an issue.
>
> The primary advantage of that approach is it does not require a change to
> the consensus rules. But there are lots of unanswered questions about how it
> interacts with HD lookahead and so on.
>
Published at 2023-06-07 15:30:44