Matt Corallo [ARCHIVE] on Nostr:
📅 Original date posted:2022-04-22
📝 Original message:
On 4/21/22 7:20 PM, Rusty Russell wrote:
> Matt Corallo <lf-lists at mattcorallo.com> writes:
>> Sure, if you’re rejecting a large % of channel updates in total
>> you’re gonna end up hitting degenerate cases, but we can consider
>> tuning the sync frequency if that becomes an issue.
>
> Let's be clear: it's a problem.
>
> Allowing only 1 a day, ended up with 18% of channels hitting the spam
> limit. We cannot fit that many channel differences inside a set!
>
> Perhaps Alex should post his more detailed results, but it's pretty
> clear that we can't stay in sync with this many differences :(
Right, the fact that most nodes don't do any limiting at all and y'all have a *very* aggressive (by
comparison) limit is going to be an issue in any context. We could set some guidelines and improve
things, but luckily regular-update-sync bypasses some of these issues anyway - if we sync once per
block and your limit is once per block, getting 1000 updates per block for some channel doesn't
result in multiple failures in the sync. Sure, multiple peers sending different updates for that
channel can still cause some failures, but it's still much better.
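The composition being described above can be sketched roughly like this (hypothetical names, not any implementation's actual API): if sync runs once per block and the spam limit is also once per block, collapsing a block's worth of gossip to the newest update per channel means a noisy channel contributes at most one entry to the set difference, no matter how many updates it emitted.

```python
# Illustrative sketch: keep only the newest update per channel within one
# block interval, so a channel emitting 1000 updates per block adds a single
# entry to the reconciliation set rather than 1000 repeated sync failures.

def latest_per_channel(updates):
    """updates: iterable of (channel_id, timestamp, payload) tuples seen
    during one block interval. Returns the newest payload per channel."""
    latest = {}
    for channel_id, timestamp, payload in updates:
        prev = latest.get(channel_id)
        if prev is None or timestamp > prev[0]:
            latest[channel_id] = (timestamp, payload)
    return {cid: payload for cid, (ts, payload) in latest.items()}

updates = [
    ("chan_a", 100, "u1"),
    ("chan_a", 105, "u2"),  # supersedes u1 within the same block
    ("chan_b", 101, "u3"),
]
# latest_per_channel(updates) -> {"chan_a": "u2", "chan_b": "u3"}
```

As the message notes, this doesn't fully solve the multi-peer case: two peers holding *different* updates for the same channel can still produce set mismatches.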
>> gossip queries is broken in at least five ways.
>
> Naah, it's perfect if you simply want to ask "give me updates since XXX"
> to get you close enough on reconnect to start using set reconciliation.
> This might allow us to remove some of the other features?
Sure, but that's *just* the "gossip_timestamp_filter" message, there's several other messages and a
whole query system that we can throw away if we just want that message :)
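The "give me updates since XXX" operation boils down to serving a timestamp window, in the spirit of BOLT 7's gossip_timestamp_filter (which carries a first_timestamp and a timestamp_range). A minimal sketch of the server side, assuming a simple in-memory store rather than any real node's storage:

```python
# Illustrative sketch of serving a timestamp-window gossip query
# (first_timestamp + timestamp_range, as in gossip_timestamp_filter).
# The store here is a plain list of (timestamp, message) pairs; a real
# node would query its gossip database instead.

def filter_gossip(store, first_timestamp, timestamp_range):
    """Return messages whose timestamp falls inside the half-open window
    [first_timestamp, first_timestamp + timestamp_range)."""
    end = first_timestamp + timestamp_range
    return [msg for ts, msg in store if first_timestamp <= ts < end]

store = [(90, "old_update"), (120, "recent_update"), (200, "future_update")]
# filter_gossip(store, 100, 50) -> ["recent_update"]
```

This single operation gets a reconnecting node close enough to current that set reconciliation can take over, which is the point being made above: the rest of the gossip-queries machinery isn't needed for that.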
> But we might end up with a gossip2 if we want to enable taproot, and use
> blockheight as timestamps, in which case we could probably just support
> that one operation (and maybe a direct query op).
>
>> Like eclair, we don’t bother to rate limit and don’t see any issues with it, though we will skip relaying outbound updates if we’re saturating outbound connections.
>
> Yeah, we did as a trial, and in some cases it's become limiting. In
> particular, people restarting their LND nodes once a day resulting in 2
> updates per day (which, in 0.11.0, we now allow).
What do you mean "it's become limiting"? As in you hit some reasonably-low CPU/disk/bandwidth limit
in doing this? We have a pretty aggressive bandwidth limit for this kinda stuff (well, indirect
bandwidth limit) and it very rarely hits in my experience (unless the peer is very overloaded and
not responding to pings, which is a somewhat separate thing...)
Matt
Published at 2023-06-09 13:05:54