Alex Myers [ARCHIVE] on Nostr:
Original date posted: 2022-05-27
Original message:
> > The update contains a block number. Let's say we allow an update every
> > 100 blocks. This must be <= current block height (and presumably, newer
> > than height - 2016).
> >
> > If you send an update number 600000, and then 600100, it will propagate.
> > 600099 will not.
>
>
> Ah, this is an additional proposal on top, and requires a gossip "hard fork", which means your new
> protocol would only work for taproot channels, and any old/unupgraded channels will have to be
> propagated via the old mechanism. I'd kinda prefer to be able to rip out the old gossip sync code
> sooner than a few years from now :(.
I viewed it as a soft fork: if you want to use set reconciliation, anything added to the set would be subject to a constrained ruleset - in this case the gossip must be accompanied by a blockheight TLV (or otherwise reference a blockheight) and must not replace a message within the current 100-block range.
It doesn't necessarily have to reference blockheight, but that would simplify many edge cases. The key is merely that a node is responsible for limiting its own gossip to a predefined interval, and that this must be easily verifiable for any other nodes building and reconciling sketches. Given that we have access to a timechain, this just made the most sense.
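The admission rule sketched above can be expressed compactly. This is a hypothetical illustration, assuming the 100-block interval and a ~2016-block staleness window; the names and exact bounds here are mine, not from any BOLT:

```python
# Hypothetical admission check for set reconciliation. All names are
# illustrative, not from any spec.
INTERVAL = 100    # minimum blocks between a node's accepted updates
LOOKBACK = 2016   # drop updates referencing heights older than this

def admissible(update_height, last_height, chain_tip):
    """Return True if an update referencing `update_height` may enter
    the reconciliation set, given the height referenced by the last
    accepted update (None if none) and the current chain tip."""
    if update_height > chain_tip:              # references a future block
        return False
    if update_height <= chain_tip - LOOKBACK:  # too stale
        return False
    if last_height is not None and update_height - last_height < INTERVAL:
        return False                           # violates the rate limit
    return True
```

Under these rules, 600100 following 600000 is admitted while 600099 is not, matching Rusty's example.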
> > If some nodes have 600000 and others have 600099 (because you broke the
> > ratelimiting recommendation, and propagated both approx the same
> > time), then the network will split, sure.
>
>
> Right, so what do you do in that case, though? AFAIU, in your proposed sync mechanism if a node does
> this once, you're stuck with all of your gossip reconciliations with every peer "wasting" one
> difference "slot" for a day or however long it takes before the peer does a sane update. In my
> proposed alternative it only appears once and then you move on (or maybe once more on startup, but
> we can maybe be willing to take on some extra cost there?).
This case may not be all that difficult. The easiest answer is to offer a spam proof to your peer: send both messages, signed by the offending node as proof they violated the set reconciliation rate limit, then remove both from your sketch. You may want to keep the evidence in your data store, at least until it's superseded by the next valid update, but there's no reason it must occupy a slot of the sketch. Meanwhile, feel free to use the message as you wish, just keep both out of the sketch. It's not perfect, but the sketch capacity is not compromised and the second instance of spam should not propagate at all. (It may be possible to keep one, but this is the simplest answer.)
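The eviction step might look something like the following. The update dicts, the `sketch` (modeled as a set of message ids), and the `evidence` store are all hypothetical stand-ins for whatever a real implementation uses:

```python
# Sketch of evicting a rate-limit violation while keeping the evidence.
INTERVAL = 100

def msg_id(upd):
    # Illustrative id; in practice this would be a short hash of the message.
    return (upd["node_id"], upd["height"])

def handle_spam_proof(sketch, evidence, upd_a, upd_b):
    """If two updates from the same node fall within one rate-limit
    interval, remove both from the sketch and retain the pair as proof
    to forward to peers. Returns True if a violation was recorded."""
    if upd_a["node_id"] != upd_b["node_id"]:
        return False
    if abs(upd_a["height"] - upd_b["height"]) >= INTERVAL:
        return False
    sketch.discard(msg_id(upd_a))   # neither message occupies a slot
    sketch.discard(msg_id(upd_b))
    evidence.setdefault(upd_a["node_id"], []).append((upd_a, upd_b))
    return True
```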
> Heh, I'm surprised you'd complain about this - IIUC your existing gossip storage system keeps this
> as a side-effect so it'd be a single integer for y'all :p. In any case, if it makes the protocol a
> chunk more efficient I don't see why its a big deal to keep track of the set of which invoices have
> changed recently, you could even make it super efficient by just saying "anything more recent than
> timestamp X except a few exceptions that we got with some lag against the update timestamp".
The benefit of a single global sketch is less overhead in adding additional gossip peers, though looking at the numbers, sketch decoding time seems to be a more significant factor than rebuilding sketches (when they're incremental). I also like maximizing the utility of the sketch by adding the full gossip store if possible.
I still think shifting the rate-limit responsibility to the originating node would be a win in either case. Spam will chew into sketch capacity regardless.
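The capacity concern can be seen with a toy model (this is not minisketch, just an illustration of the constraint it shares): a sketch of capacity c only decodes when the symmetric difference between two peers' sets has at most c elements.

```python
# Toy model of sketch capacity: reconciliation succeeds only when the
# symmetric difference fits within the sketch's capacity.
def reconcilable(set_a, set_b, capacity):
    return len(set_a ^ set_b) <= capacity
```

So a single rate-limit violation that splits the network (half holding one update, half holding the other) adds two elements to every cross-partition difference, burning capacity until the next valid update supersedes both.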
-Alex
------- Original Message -------
On Thursday, May 26th, 2022 at 5:19 PM, Matt Corallo <lf-lists at mattcorallo.com> wrote:
>
> On 5/26/22 1:25 PM, Rusty Russell wrote:
>
> > Matt Corallo lf-lists at mattcorallo.com writes:
> >
> > > > > I agree there should be some rough consensus, but rate-limits are a locally-enforced thing, not a
> > > > > global one. There will always be races and updates you reject that your peers dont, no matter the
> > > > > rate-limit, and while I agree we should have guidelines, we can't "just make them the same" - it
> > > > > both doesn't solve the problem and means we can't change them in the future.
> > > >
> > > > Sure it does! It severly limits the set divergence to race conditions
> > > > (down to block height divergence, in practice).
> > >
> > > Huh? There's always some line you draw, if an update happens right on the line (which they almost
> > > certainly often will because people want to update, and they'll update every X hours to whatever the
> > > rate limit is), then ~half the network will accept the update and half won't. I don't see how you
> > > solve this problem.
> >
> > The update contains a block number. Let's say we allow an update every
> > 100 blocks. This must be <= current block height (and presumably, newer
> > than height - 2016).
> >
> > If you send an update number 600000, and then 600100, it will propagate.
> > 600099 will not.
>
>
> Ah, this is an additional proposal on top, and requires a gossip "hard fork", which means your new
> protocol would only work for taproot channels, and any old/unupgraded channels will have to be
> propagated via the old mechanism. I'd kinda prefer to be able to rip out the old gossip sync code
> sooner than a few years from now :(.
>
> > If some nodes have 600000 and others have 600099 (because you broke the
> > ratelimiting recommendation, and propagated both approx the same
> > time), then the network will split, sure.
>
>
> Right, so what do you do in that case, though? AFAIU, in your proposed sync mechanism if a node does
> this once, you're stuck with all of your gossip reconciliations with every peer "wasting" one
> difference "slot" for a day or however long it takes before the peer does a sane update. In my
> proposed alternative it only appears once and then you move on (or maybe once more on startup, but
> we can maybe be willing to take on some extra cost there?).
>
> > > > Maybe. What's a "non-update" based sketch? Some huge percentage of
> > > > gossip is channel_update, so it's kind of the thing we want?
> > >
> > > Oops, maybe we're on very different pages, here - I mean doing sketches based on "the things that
> > > I received since the last sync, ie all the gossip updates from the last hour" vs doing sketches
> > > based on "the things I have, ie my full gossip store".
> >
> > But that requires state. Full store requires none, keeping it
> > super-simple
>
>
> Heh, I'm surprised you'd complain about this - IIUC your existing gossip storage system keeps this
> as a side-effect so it'd be a single integer for y'all :p. In any case, if it makes the protocol a
> chunk more efficient I don't see why its a big deal to keep track of the set of which invoices have
> changed recently, you could even make it super efficient by just saying "anything more recent than
> timestamp X except a few exceptions that we got with some lag against the update timestamp".
>
> Better, the state is global, not per-peer, and a small fraction of the total state of the gossip
> store anyway, so its not like its introducing some new giant or non-constant-factor blowup.
>
> Matt
Published at 2023-06-09 13:06:05