Stefan Richter [ARCHIVE] on Nostr
📅 Original date posted: 2021-08-17

Good morning Zmn!


ZmnSCPxj <ZmnSCPxj at protonmail.com> wrote on Mon, 16 Aug 2021, 10:27:

>
> A reason why I suggest this is that the cost function in actual
> implementation is *already* IMO overloaded.
>
> In particular, actual implementations will have some kind of conversion
> between cltv-delta and fees-at-node.
>

That's an interesting aspect. Would this conversion lead to a constant cost
per edge if incorporated into the cost function? If so, this would give us
another generally hard problem, which, again, needs to be explored further
in the concrete cases we have here to see whether we can still solve or
approximate it.
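For concreteness, here is a sketch of the kind of per-edge cost that includes a cltv-to-fee conversion of the sort ZmnSCPxj describes. The function name, parameter names, and the blocks-per-year constant are my illustration, not any implementation's actual API:

```python
def edge_cost(amount_msat, base_fee_msat, ppm, cltv_delta, riskfactor):
    """Toy per-edge cost: the routing fee plus a risk term that converts
    the edge's timelock delta into msat. Illustrative only; real
    implementations differ in names and details."""
    # Ordinary channel fee: base fee plus proportional part.
    fee = base_fee_msat + amount_msat * ppm // 1_000_000
    # Capital locked up for cltv_delta blocks is priced at `riskfactor`
    # (treated here as an annualized percentage), which turns the
    # timelock into a fee-like quantity in the same msat units.
    BLOCKS_PER_YEAR = 52_596  # assumed ~10-minute blocks
    risk = amount_msat * cltv_delta * riskfactor // (BLOCKS_PER_YEAR * 100)
    return fee + risk
```

Note that the risk term is proportional to the amount, so folding it in does not by itself produce a constant per edge; it behaves like an extra proportional fee.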

> However, I think that in practice, most users cannot intuitively understand
> `riskfactor`.
>

I don't think they have to. Only people like you who write actual software
probably need to.


> Similarly, I think it is easier for users to think in terms of "fee
> budget" instead.
>
> Of course, algorithms should try to keep costs as low as possible, if
> there are two alternate payment plans that are both below the fee budget,
> the one with lower actual fee is still preferred.
> But perhaps we should focus more on payment success *within some fee and
> timelock budget*.
>
> Indeed, as you point out, the real-world experiments you have done have
> involved only probability as cost.
> However, by the paper you claim to have sent 40,000,000,000msat for a cost
> of 814,000msat, or 0.002035% fee percentage, far below the 0.5% default
> `maxfeepercent` we have, which I think is fairly reasonable argument for
> "let us ignore fees and timelocks unless it hits the budget".
> (on the other hand, those numbers come from a section labelled
> "Simulation", so that may not reflect the real world experiments you had
> --- what numbers did you get for those?)
>

René is going to publish those results very soon.

Regarding payment success *within some fee and timelock budget*: the
situation is a little more complex than it appears. As you have pointed
out, at the moment, most of the routes are very cheap (too cheap, IMHO), so
you have to be very unlucky to hit an expensive flow. So in the current
environment, your approach seems to work pretty well, which is also why we
first thought about it.
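For what it's worth, the budget idea as I understand it amounts to something like the following sketch. The plan attributes and the function itself are hypothetical, just to make the selection rule concrete:

```python
def pick_plan(candidates, fee_budget_msat, timelock_budget):
    """Among candidate payment plans (each a dict with 'fee_msat',
    'max_cltv', and 'success_prob'), keep those within both budgets
    and prefer the most probable plan, breaking ties by lower fee.
    Purely illustrative; no implementation exposes this interface."""
    feasible = [p for p in candidates
                if p['fee_msat'] <= fee_budget_msat
                and p['max_cltv'] <= timelock_budget]
    if not feasible:
        return None  # this is where the hard question starts: retry how?
    return max(feasible, key=lambda p: (p['success_prob'], -p['fee_msat']))
```

The easy part is choosing among plans you already have; the hard part, as discussed below, is what to do when nothing is feasible.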

Unfortunately, as you know, we have to think adversarially in this domain.
And it is clear that if we simply disregarded fees in routing, people would
try to take advantage of this. If we just set a fee budget and try again
when it is missed, I see some problems arise: first, which edges do you
exclude in the next try? Where is that boundary? Second, I am pretty sure
an adversary could design a DoS vector this way by forcing people to go
through exponentially many min-cost flow rounds (which are not cheap
anyway) while excluding only a few edges per round.
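To make the concern concrete, the naive retry loop would look roughly like this. The solver interface and the exclusion policy are placeholders, and the exclusion step is exactly where the open questions sit:

```python
def route_with_budget(solve, graph, fee_budget_msat, max_rounds=100):
    """Repeatedly call an (expensive) min-cost-flow solver, excluding
    the flow's most expensive edge whenever the result busts the fee
    budget. `solve(graph, excluded)` returns None or a dict with
    'fee_msat' and 'edge_fees' (edge -> fee). Everything here is a
    placeholder to show the loop's shape, not any implementation."""
    excluded = set()
    for _ in range(max_rounds):
        flow = solve(graph, excluded)
        if flow is None:
            return None                  # no route left at all
        if flow['fee_msat'] <= fee_budget_msat:
            return flow                  # within budget: done
        # Which edges to exclude, and how many, is the open question:
        # excluding too few lets an adversary force many solver rounds.
        worst = max(flow['edge_fees'], key=flow['edge_fees'].get)
        excluded.add(worst)
    return None
```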

Indeed, if you read the paper closely you will have seen that this kind of
problem (optimizing for one cost while staying under a budget for a second
cost) is (weakly) NP-hard even in the single-path case. So there is some
intuition that this is not as simple as it might appear. I personally
think that the Lagrangian style of combining the costs in a linear fashion
is very promising, but more direct methods might be successful as well.
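A minimal sketch of that Lagrangian style, assuming a toy list of candidate paths in place of a real shortest-path search: fold the fee into the probability cost with a multiplier, then bisect on the multiplier until the chosen path fits the budget. All names are illustrative, and this is not the paper's algorithm:

```python
import math

def best_path_for_lambda(paths, lam):
    """Pick the path minimizing -log(p) + lam * fee. `paths` is a list
    of (success_prob, fee_msat) pairs standing in for the output of a
    shortest-path search."""
    return min(paths, key=lambda pf: -math.log(pf[0]) + lam * pf[1])

def solve_with_budget(paths, fee_budget_msat, lam_hi=1.0, iters=50):
    """Bisect on the multiplier: larger lam penalizes fees harder.
    Weak NP-hardness means this may only approximate the true
    budget-constrained optimum."""
    lam_lo, best = 0.0, None
    for _ in range(iters):
        lam = (lam_lo + lam_hi) / 2
        p, fee = best_path_for_lambda(paths, lam)
        if fee <= fee_budget_msat:
            best = (p, fee)
            lam_hi = lam   # feasible: try caring less about fees
        else:
            lam_lo = lam   # infeasible: penalize fees harder
    return best
```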

> Is my suggestion not reasonable in practice?
> Is the algorithm runtime too high?
>

See above. I don't know, but I believe it would be hard to make it safe
against adversaries. Including the fees in the cost function appears to me
to be the more holistic approach, since min-cost flow algorithms always
give you a globally optimized answer.
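As a toy illustration of that global optimization, consider splitting an amount across parallel routes with linear per-unit fees: filling the cheapest route first is exactly what a min-cost flow reduces to on such a network (real topologies need a full flow solver; the numbers and names below are made up):

```python
def optimal_split(amount_msat, routes):
    """Split `amount_msat` across parallel routes, each given as
    (capacity_msat, fee_ppm), filling the cheapest first. For linear
    per-unit fees on a parallel network this greedy split matches the
    min-cost flow optimum; it is a toy sketch, not a general solver."""
    plan, total_fee = [], 0
    for cap, fee_ppm in sorted(routes, key=lambda r: r[1]):
        take = min(cap, amount_msat)
        if take > 0:
            plan.append((take, fee_ppm))
            total_fee += take * fee_ppm // 1_000_000
            amount_msat -= take
    if amount_msat > 0:
        return None, None  # not enough total capacity
    return plan, total_fee
```

The point is that the answer is a globally optimal *split* across routes, not a sequence of per-path greedy choices.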

> While we certainly need to defer to economic requirements, we *also* need
> to defer to engineering requirements (else Lightning cannot be implemented
> in practice, so any economic benefits it might provide are not achievable
> anyway).
>

Yes, I wholeheartedly agree. However, I prefer watering down a
mathematically correct solution as needed over building increasingly
complex ad hoc heuristics.

> As I understand the argument of Matt, we may encounter an engineering
> reason to charge some base fee (or something very much like it), so
> encouraging #zerobasefee *now* might not be the wisest course of action, as
> a future engineering problem may need to be solved with non-zero basefee
> (or something very much like it).
>

If we encountered such a reason, we could still encourage something else,
IMHO. I do agree that we should not narrow our options by making a
protocol change at this time.

Best regards,

Stefan

P.S.: I have been using Clboss for some time now and I am very impressed.
Thank you for your amazing work! I would love a zerobasefee flag, though ;)