Gregory Maxwell [ARCHIVE] on Nostr:
📅 Original date posted:2018-05-17
📝 Original message:On Thu, May 17, 2018 at 8:19 PM, Jim Posen <jim.posen at gmail.com> wrote:
> In my opinion, it's overly pessimistic to design the protocol in an insecure
> way because some light clients historically have taken shortcuts.
Any non-committed form is inherently insecure. A nearby network
attacker (or eclipse attacker) or whatnot can moot whatever kind of
comparisons you make, and non-comparison-based validation doesn't seem
like it would be useful without mooting all the bandwidth improvements,
unless I'm missing something.
It isn't a question of 'some lite clients' -- I am aware of no
implementation of these kinds of measures in any cryptocurrency ever.
The same kind of comparison to the block could have been done with
BIP37 filtering, but no one has implemented that. (Similarly, the
whitepaper suggests doing that for all network rules when a
disagreement has been seen; though that isn't practical for every
network rule, it could be done for many of them -- but again, no
implementation or, AFAIK, any interest in implementing it.)
> If the
> protocol can provide clients the option of getting additional security, it
> should.
Sure, but at what cost? And "additional", while nice, doesn't
necessarily translate into a meaningful increase in delivered security
for any particular application.
I think we might be speaking too generally here.
What I'm suggesting would still allow a lite client to verify that
multiple parties are offering the same map for a given block (by
asking them for the map hash). It would still allow a future
commitment so that a lite client could verify that the hashpower they're
hearing from agrees that the map they got is the correct corresponding
map for the block. It would still allow downloading a block and
verifying that all the outpoints in the block were included. So still
a lot better than BIP37.
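
(For concreteness, a minimal sketch of that cross-peer comparison, in
Python. Peer.get_filter_hash / get_filter are hypothetical interfaces
standing in for whatever query messages the protocol ends up with, and
plain SHA256 stands in for the real filter-hash construction.)

    import hashlib

    # Hypothetical sketch: accept a block's filter ("map") only if every
    # queried peer advertises the same hash for it and the delivered
    # bytes actually match that hash.
    def cross_check_filter(peers, block_hash):
        advertised = {p.get_filter_hash(block_hash) for p in peers}
        if len(advertised) != 1:
            raise ValueError("peers disagree about this block's filter")
        filter_bytes = peers[0].get_filter(block_hash)
        if hashlib.sha256(filter_bytes).digest() not in advertised:
            raise ValueError("delivered filter doesn't match advertised hash")
        return filter_bytes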
What it would not permit is for a lite client to download a whole
block and completely verify the filter (they could only tell if the
filter at least told them about all the outputs in the block, but if
extra bits were set or inputs were omitted, they couldn't tell).
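
(A sketch of that partial check, assuming a filter object with a
Golomb-coded-set membership test called matches(); the point is that an
omitted output is provably detectable, while extra bits or omitted
inputs are not.)

    # Partial verification against a fully downloaded block: the filter
    # must claim every scriptPubkey the block creates. matches() stands
    # in for a real GCS membership test.
    def partially_verify(filter_gcs, block):
        for tx in block.transactions:
            for out in tx.outputs:
                if not filter_gcs.matches(out.script_pubkey):
                    return False  # provably bad: an output was omitted
        return True  # complete for outputs; extra bits or missing
                     # inputs still undetectable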
But in exchange the filters for a given FP rate would probably be
about half the current size (actual measurements would be needed,
because the figure depends on how much scriptpubkey reuse there is; it
could plausibly be anywhere between 1/3 and 2/3 of the current size).
In some applications it would likely have better anonymity properties
as well, because a client that always filters for both an output and
an input as distinct items (and then leaks matches by fetching blocks)
is more distinguishable.
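
(Back-of-the-envelope, using the usual Golomb-coded-set cost of roughly
(P + 1.5) bits per element at a 2^-P false-positive rate; the per-block
counts and reuse fractions below are made-up numbers, purely to show
how the range arises.)

    # Illustrative size estimate only; real figures need measurement.
    def gcs_size_bits(n_elements, p=20):
        return n_elements * (p + 1.5)  # ~(P + 1.5) bits/element at FP = 2^-P

    outputs, inputs = 5000, 5000       # made-up per-block element counts
    for reuse in (0.5, 0.75, 1.0):     # fraction of input scripts that dedupe away
        deduped = outputs + int(inputs * (1 - reuse))
        print(reuse, gcs_size_bits(deduped) / gcs_size_bits(outputs + inputs))
    # prints ratios 0.75, 0.625, 0.5: more reuse, smaller filter; reuse
    # among the outputs themselves would shrink it further still.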
I think this trade-off is at least worth considering: if you always
verify by downloading, you wash out the bandwidth gains, and strong
verification will eventually need a commitment in any case. A client
can still partially verify, and can still multi-party comparison
verify... and you get a big reduction in filter bandwidth.
Monitoring inputs by scriptPubkey vs input-txid also has a massive
advantage for parallel filtering: you can usually know your pubkeys
well in advance, but if you have to change what you're watching block
N+1 for based on the txids that paid you in block N, you can't filter
them in parallel.
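
(Sketched below: with a fixed script watchlist, every block's filter
can be tested concurrently, while outpoint-watching serializes on the
previous block's matches. matches_any() and the helper that extracts
newly received outpoints are hypothetical.)

    from concurrent.futures import ThreadPoolExecutor

    def scan_by_script(filters, watched_scripts):
        # Watchlist is known up front, so all filters can be tested at once.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(lambda f: f.matches_any(watched_scripts),
                                 filters))

    def scan_by_outpoint(filters, watched, fetch_block, outpoints_paying_us):
        # Inherently serial: a match in block N can add outpoints that
        # must be watched for in block N+1.
        hits = []
        for f in filters:
            hit = f.matches_any(watched)
            hits.append(hit)
            if hit:
                watched |= outpoints_paying_us(fetch_block(f.block_hash))
        return hits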
> On the general topic, Peter makes a good point that in many cases filtering
> by txid of spending transaction may be preferable to filtering by outpoint
> spend, which has the nice benefit that there are obviously fewer txs in a
> block than txins. This wouldn't work for malleable transactions though.
I think Peter missed Matt's point that you can monitor for a specific
transaction's confirmation by monitoring for any of the outpoints that
transaction contains. Because the txid commits to the outpoints, there
shouldn't be any case where the txid is knowable but (an) outpoint is
not. Removing the txid and monitoring for any one of the outpoints
should be a strict reduction in the false positive rate for a given
filter size (the filter will contain strictly fewer elements and the
client will match for the same (or, usually, fewer) number).
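
(Concretely: anyone able to compute a txid already knows the
transaction's inputs, since the txid hashes over them, so one spent
outpoint can stand in for the txid as a watch item. tx.inputs and its
fields are an assumed in-memory shape; the 36-byte txid||vout encoding
is Bitcoin's standard outpoint serialization.)

    import struct

    def outpoint_key(prev_txid, prev_vout):
        # Standard 36-byte outpoint: 32-byte txid plus little-endian vout.
        return prev_txid + struct.pack("<I", prev_vout)

    def confirmation_watch_item(tx):
        # The txid commits to all inputs, so any one spent outpoint
        # suffices; take the first for determinism.
        first = tx.inputs[0]
        return outpoint_key(first.prev_txid, first.prev_vout)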
I _think_ dropping txids as Matt suggests is an obvious win that costs
nothing. Replacing inputs with scripts as I suggested has some
trade-offs.