Christian Decker [ARCHIVE] on Nostr: 📅 Original date posted:2018-02-09 📝 Original message: Rusty Russell <rusty at ...
📅 Original date posted:2018-02-09
📝 Original message:
Rusty Russell <rusty at rustcorp.com.au> writes:
> Finally catching up. I prefer the simplicity of the timestamp
> mechanism, with a more ambitious mechanism TBA.
Fabrice and I had a short chat a few days ago and decided that we'll
simulate both approaches and see which consumes less bandwidth. With
zombie channels and the chances of missing channels during a weak form
of synchronization, it's not clear to us which one has the better
tradeoff. With some numbers behind it, it may become easier to decide :-)
> Deployment suggestions:
>
> 1. This should be a feature bit pair. As usual, even == 'support this or
> disconnect', and odd == 'ok even if you don't understand'.
If we add the timestamp to the end of the `init` message, instead of
introducing a new message altogether, we are forced to use the required
bit; otherwise we'd make any future field appended to the `init`
message unparseable to non-supporting nodes. Say we later add another
field that the peer supports, but it follows the timestamp, which the
peer does not. The peer doesn't know how many bytes to skip (if any)
for the timestamp field it doesn't understand in order to get to the
bytes it does know how to parse. I'm slowly coming to like the extra
message more, since it also allows a number of cute tricks later.
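The parsing ambiguity can be sketched as follows (a minimal sketch; the field layout and names are assumptions for illustration, not the actual BOLT #1 wire format):

```python
import struct

# Hypothetical sketch: optional fields appended to `init` without per-field
# lengths. A node that does not understand the timestamp feature cannot tell
# where later optional fields begin.

def parse_trailing(payload: bytes, knows_timestamp: bool) -> dict:
    """Parse the optional bytes trailing the fixed `init` fields."""
    offset = 0
    fields = {}
    if knows_timestamp and len(payload) - offset >= 4:
        (fields["routing_sync_timestamp"],) = struct.unpack_from(">I", payload, offset)
        offset += 4
    # Without a length prefix, a non-supporting node leaves `offset` at 0
    # and misreads the timestamp bytes as part of any future field.
    fields["rest"] = payload[offset:]
    return fields

# 4-byte timestamp followed by a hypothetical future 2-byte field.
payload = struct.pack(">I", 1518134400) + b"\x01\x02"
assert parse_trailing(payload, True)["rest"] == b"\x01\x02"
assert parse_trailing(payload, False)["rest"] == payload  # misparsed
```

A separate message (or a per-field length prefix) removes the ambiguity, since unknown trailing bytes can then be skipped wholesale.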
> 2. This `timestamp_routing_sync`? feature overrides `initial_routing_sync`.
> That lets you decide what old nodes do, using the older
> `initial_routing_sync` option. Similarly, a future `fancy_sync` would
> override `timestamp_routing_sync`.
So you'd set both bits, and if the peer knows `timestamp_routing_sync`
then that force-sets `initial_routing_sync`? Sounds ok, if we allow
optional implementations, even though I'd like to avoid feature
interactions as much as possible.
> 3. We can append an optional 4 byte `routing_sync_timestamp` field to
> `init` without issues, since all lengths in there are explicit. If you
> don't offer the `timestamp_sync` feature, this Must Be Zero (for appending
> more stuff in future).
That'd still require the peer to know that it has to skip 4 bytes to get
to any future fields, which is why I am convinced that either forcing it
to be mandatory, or adding a new message, is the better way to go, even
if for now everybody implements it correctly.
> Now, as to the proposal specifics.
>
> I dislike the re-transmission of all old channel_announcement and
> node_announcement messages, just because there's been a recent
> channel_update. Simpler to just say 'send anything >=
> routing_sync_timestamp`.
I'm afraid we can't really omit the `channel_announcement` since a
`channel_update` that isn't preceded by a `channel_announcement` is
invalid and will be dropped by peers (especially because the
`channel_update` doesn't contain the necessary information for
validation).
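The ordering constraint can be sketched like this (a minimal sketch; the data layout and names are assumptions, not actual c-lightning structures): when forwarding updates newer than the peer's sync timestamp, the matching announcement has to go first, even if it is older than the timestamp.

```python
# Hypothetical sketch: a channel_update without a preceding
# channel_announcement is invalid and gets dropped, so the announcement
# is prepended on first use, regardless of its own age.

def messages_to_send(announcements, updates, sync_ts):
    out, announced = [], set()
    for upd in sorted(updates, key=lambda u: u["timestamp"]):
        if upd["timestamp"] < sync_ts:
            continue
        scid = upd["short_channel_id"]
        if scid not in announced:
            out.append(announcements[scid])  # may predate sync_ts
            announced.add(scid)
        out.append(upd)
    return out
```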
> Background: c-lightning internally keeps a tree of gossip in the order
> we received them, keeping a 'current' pointer for each peer. This is
> very efficient (though we don't remember if a peer sent us a gossip msg
> already, so uses twice the bandwidth it could).
We can solve that by keeping a filter of the messages we received from
the peer. It's more of an optimization than anything; other than the
bandwidth cost, it doesn't hurt.
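Such a per-peer filter could look like this (a minimal sketch with assumed names; a real implementation might use a probabilistic filter to bound memory):

```python
import hashlib

# Hypothetical sketch: remember which gossip messages were already seen
# from (or sent to) a peer, so we never echo gossip back, halving the
# bandwidth at the cost of some per-peer memory.

class PeerGossipFilter:
    def __init__(self):
        self._seen = set()

    def _msg_id(self, raw: bytes) -> bytes:
        return hashlib.sha256(raw).digest()

    def note_received(self, raw: bytes):
        """Record a message the peer sent us."""
        self._seen.add(self._msg_id(raw))

    def should_send(self, raw: bytes) -> bool:
        """True exactly once per distinct message."""
        mid = self._msg_id(raw)
        if mid in self._seen:
            return False
        self._seen.add(mid)
        return True
```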
> But this isn't *quite* the same as timestamp order, so we can't just set
> the 'current' pointer based on the first entry >=
> `routing_sync_timestamp`; we need to actively filter. This is still a
> simple traverse, however, skipping over any entry less than
> routing_sync_timestamp.
>
> OTOH, if we need to retransmit announcements, when do we stop
> retransmitting them? If a new channel_update comes in during this time,
> are we still to dump the announcements? Do we have to remember which
> ones we've sent to each peer?
That's more of an implementation detail. In c-lightning we can just
remember the index at which the initial sync started, and send
announcements along until the index is larger than the initial sync
index.
A more general approach would be to have two timestamps, a high-water
and a low-water mark. Anything in between these marks will be forwarded
together with all associated announcements (node / channel); anything
newer than the high-water mark will only have its update forwarded. The
two-timestamps approach, combined with a new message, would also allow
us to send multiple `timestamp_routing_sync` messages, e.g., first sync
the last hour, then the last day, then the last week, etc. It gives the
syncing node control over which time window to request, inverting the
current initial sync.
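The two-timestamp window could be sketched as follows (a minimal sketch; names and data layout are assumptions). The syncing node repeats the request with a widening window: last hour, then last day, then last week.

```python
# Hypothetical sketch: updates inside the (low, high) window are sent with
# their channel_announcements; updates newer than `high` are sent alone,
# since the peer is assumed to already have those announcements.

def window_sync(announcements, updates, low, high):
    out, announced = [], set()
    for upd in sorted(updates, key=lambda u: u["timestamp"]):
        ts = upd["timestamp"]
        scid = upd["short_channel_id"]
        if low <= ts <= high:
            if scid not in announced:
                out.append(announcements[scid])
                announced.add(scid)
            out.append(upd)
        elif ts > high:
            out.append(upd)  # update only, no announcement
    return out
```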
Cheers,
Christian
Published at 2023-06-09 12:48:52