Matt Corallo [ARCHIVE] on Nostr:
📅 Original date posted:2022-04-22
📝 Original message:
On 4/22/22 9:15 AM, Alex Myers wrote:
> Hi Matt,
>
> Appreciate your responses. Hope you'll bear with me as I'm a bit new to this.
>
> Instead of trying to make sure everyone’s gossip acceptance matches exactly, which, as you point
> out, seems like a quagmire, why not (a) do a sync on startup and (b) do syncs of the *new* things.
>
> I'm not opposed to this technique, and maybe it ends up as a better solution. The rationale for not
> going the full Erlay route was that it's far less overhead to maintain a single sketch than to
> maintain a per-peer sketch and associated state for every gossip peer. In this way there's very
> little cost to adding additional gossip peers, which further encourages propagation and convergence
> of the gossip network.
I'm not sure what you mean by per-node state here - I'd think you can implement it with a simple
"list of updates that happened since time X" data, instead of having to maintain per-peer state.
> IIUC Erlay's design was concerned for privacy of originating nodes. Lightning gossip is public by
> nature, so I'm not sure we should constrain ourselves to the same design route without trying the
> alternative first.
Part of the design of Erlay, especially the insight of syncing updates instead of full mempools, was
actually driven by this precise issue - Bitcoin Core nodes differ in policy for a number of reasons
(especially across updates), and thus syncing the full mempool will result in degenerate cases of
trying over and over and over again to sync stuff your peer is rejecting. At least if I recall
correctly.
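A toy model of the degenerate case (not Erlay itself - just two sets and a policy filter): if you
reconcile *full* sets while one side rejects some items on policy grounds, the rejected items land
in the set difference on every round, forever.

    def reconcile_full(ours, theirs, we_accept):
        """Toy full-set reconciliation: each side learns what the other
        has, but only keeps what its own policy accepts."""
        theirs |= (ours - theirs)       # they accept everything here
        for item in theirs - ours:
            if we_accept(item):         # policy-rejected items never stick
                ours.add(item)

    ours, theirs = {"a", "b"}, {"a", "spam1", "spam2"}
    we_accept = lambda item: not item.startswith("spam")

    for rnd in range(3):
        reconcile_full(ours, theirs, we_accept)
        print(rnd, theirs - ours)  # {'spam1', 'spam2'} every single round

Reconcile only the recent deltas instead, and the same rejected items are paid for once rather
than on every sync.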
> if we're gonna add a minisketch-based sync anyway, please let's also use it for initial sync
> after restart
>
> This was out of the scope of what I had in mind, but I will give this some thought. I could see how
> a block_height reference coupled with set reconciliation could provide some better options here.
> This may not be all that difficult to shoe-horn in.
>
> Regardless of single sketch or per-peer set reconciliation, it should be easier to implement with
> tighter rules on rate-limiting. (Keep in mind, the node's graph can presumably be updated
> independently of the gossip it rebroadcasts if desired.) As a thought experiment, if we consider a
> CLN-LDK set reconciliation, and that each node is gossiping with 5 other peers at evenly spaced
> intervals, we would currently see 42.8 commonly accepted channel_updates over an average 60s window
> along with 11 more updates which LDK accepts and CLN rejects (spam).[1] Assuming the other 5 peers
> have shared 5/6ths of this gossip before the CLN/LDK set reconciliation, we're left with CLN seeing
> 7 updates to reconcile, while LDK sees 18. Already we've lost 60% efficiency due to lack of a
> common rate-limit heuristic.
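Working your numbers through as I read them (42.8 and 11 are your figures; I'm assuming the spam
wasn't pre-shared, since the CLN-side peers reject it):

    common = 42.8 / 6   # ~7.1 accepted updates left after 5/6ths pre-shared
    spam = 11           # LDK accepts these, CLN rejects them
    ldk_side = common + spam   # ~18.1 items in LDK's view of the difference
    print(round(common), round(ldk_side), f"{spam / ldk_side:.0%}")
    # -> 7 18 61%: ~60% of the set difference is pure policy disagreement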
I do not believe that we will ever form a strong agreement on exactly what the rate-limits should
be. And even if we do, we still have the issue of upgrades, where a simple change to the rate-limits
causes sync to suddenly blow up and hit degenerate cases all over the place. Unless we can make the
sync system relatively robust against slightly different policies, I think we're kinda screwed.
Worse, what happens if someone sends updates at exactly the limit of the rate-limiters? Presumably
people will do this because "that's what the limit is and I want to send updates as often as I can
because...". Now you'll still have similar issues, I believe.
> I understand gossip traffic is manageable now, but I'm not sure it will be that long before it
> becomes an issue. Furthermore, any particular set reconciliation technique would benefit from a
> simple common rate-limit heuristic, not to mention originating nodes, who may not currently realize
> their channel updates are being rejected by a portion of the network due to differing criteria
> across implementations.
Yes, I agree there is definitely a concern with differing criteria resulting in nodes not realizing
their gossip is not propagating. I agree guidelines would be nice, but guidelines don't solve the
issue for sync, sadly, I think. Luckily lightning does provide a mechanism to bypass the rejection -
send an update back with an HTLC failure. If you're trying to route an HTLC and a node has new
parameters for you, it'll helpfully let you know when you try to use the old parameters.
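(That's the BOLT 4 behavior where failures like temporary_channel_failure or fee_insufficient
carry the failing hop's latest channel_update. The receiving side, sketched with made-up names:)

    from dataclasses import dataclass

    @dataclass
    class ChannelUpdate:
        short_channel_id: int
        timestamp: int
        fee_base_msat: int  # the new parameters the failing hop wants used

    def apply_update_from_failure(update, graph):
        """Apply a channel_update embedded in an HTLC failure to our local
        graph, so the next routing attempt uses the fresh parameters -
        even if the gossip network never delivered this update to us."""
        known = graph.get(update.short_channel_id)
        if known is None or update.timestamp > known.timestamp:
            graph[update.short_channel_id] = update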
Matt