
📅 Original date posted:2021-08-16
📝 Original message:
Good morning Stefan,

> > I propose that the algorithm be modified as such, that is, it *ignore* the fee scheme.
>
> We actually started out thinking like this in the event we couldn't find a proper way to handle fees, and the real world experiments we've done so far have only involved probability costs, no fees at all. 
>
> However, I think it is non-trivial to deal with the many cases in which too high fees could occur, and in the end the most systematic way of dealing with them is actually including them in the cost function. 

A reason why I suggest this is that the cost function in actual implementations is, IMO, *already* overloaded.

In particular, actual implementations will have some kind of conversion between cltv-delta and fees-at-node.

This conversion implies some kind of "conversion rate" between blocks-locked-up and fees-at-node.
For example, in C-Lightning this is the `riskfactor` argument to `getroute`, which is also exposed at `pay`.
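Roughly, the kind of conversion I mean looks like the sketch below (Python, purely illustrative; the constant and the exact formula are assumptions for illustration, not necessarily the arithmetic C-Lightning actually uses):

```python
# Illustrative only: convert blocks of lockup into a fee-like "risk cost"
# so that timelock and fees collapse into one scalar channel cost.
# The formula and constant are assumptions, not the exact C-Lightning code.
BLOCKS_PER_YEAR = 52596  # roughly one block every ten minutes

def risk_cost_msat(amount_msat: int, cltv_delta: int, riskfactor: float) -> float:
    # riskfactor acts like an annual percentage cost of having the amount locked up.
    return amount_msat * riskfactor * cltv_delta / (BLOCKS_PER_YEAR * 100)

def channel_cost_msat(amount_msat: int, fee_msat: int, cltv_delta: int,
                      riskfactor: float) -> float:
    # The single, overloaded cost: actual fee plus imputed lockup cost.
    return fee_msat + risk_cost_msat(amount_msat, cltv_delta, riskfactor)
```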

However, I think that in practice, most users cannot intuitively understand `riskfactor`.
I myself cannot; when I write my own `pay` (e.g. in CLBOSS) I just set `riskfactor` to the default value from the manual, then tweak it higher if the total lockup time exceeds some maximum CLTV budget for the payment, and call `getroute` again.
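In rough Python, the loop I describe is something like the below; `getroute` here is only a stand-in for the RPC call, and the default value, budget, and doubling step are placeholders for illustration rather than what CLBOSS actually does:

```python
# Illustrative sketch of the riskfactor-tweaking loop described above.
DEFAULT_RISKFACTOR = 10     # placeholder "manual default"
CLTV_BUDGET = 2016          # placeholder maximum total lockup, in blocks

def route_within_cltv_budget(getroute, destination, amount_msat):
    riskfactor = DEFAULT_RISKFACTOR
    while riskfactor < 1_000_000:
        route = getroute(destination, amount_msat, riskfactor=riskfactor)
        total_lockup = sum(hop["delay"] for hop in route)
        if total_lockup <= CLTV_BUDGET:
            return route
        # Lockup too long: penalize locked-up blocks more heavily and retry.
        riskfactor *= 2
    raise RuntimeError("no route found within the CLTV budget")
```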

Similarly, I think it is easier for users to think in terms of "fee budget" instead.

Of course, algorithms should still try to keep costs as low as possible: if there are two alternate payment plans that are both below the fee budget, the one with the lower actual fee is still preferred.
But perhaps we should focus more on payment success *within some fee and timelock budget*.
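Concretely, the selection rule I have in mind is something like the sketch below (field names and the budgets are made up for illustration): the budgets are hard constraints, success probability is what we optimize, and the fee only breaks ties among plans that fit.

```python
# Illustrative only: budgets as hard constraints, success probability as the
# objective, fee as a tie-breaker. Field names and semantics are assumptions.
def pick_plan(plans, fee_budget_msat, cltv_budget):
    affordable = [p for p in plans
                  if p["fee_msat"] <= fee_budget_msat
                  and p["total_cltv"] <= cltv_budget]
    if not affordable:
        return None  # caller decides: give up, or relax the budgets
    # Highest success probability wins; among equals, the cheaper plan.
    return max(affordable,
               key=lambda p: (p["success_probability"], -p["fee_msat"]))
```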

Indeed, as you point out, the real-world experiments you have done so far have involved only probability as the cost.
However, in the paper you claim to have sent 40,000,000,000msat for a cost of 814,000msat, i.e. a 0.002035% fee, far below the 0.5% default `maxfeepercent` we have, which I think is a fairly reasonable argument for "let us ignore fees and timelocks unless they hit the budget".
(on the other hand, those numbers come from a section labelled "Simulation", so that may not reflect the real world experiments you had --- what numbers did you get for those?)
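For reference, the 0.002035% above is simply the cost divided by the amount:

```python
# Quick check of the fee percentage quoted from the paper.
fee_msat = 814_000
amount_msat = 40_000_000_000
fee_percent = 100 * fee_msat / amount_msat   # = 0.002035
assert fee_percent < 0.5  # well under the default maxfeepercent
```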


>
> That said, I agree with Matt that more research needs to be done about the effect of  base fees on these computations. We do know they make the problem hard in general, but we might find a way to deal with them reasonably in practice. 

Is my suggestion not reasonable in practice?
Is the algorithm runtime too high?

>
> I tend to agree with AJ, that I don't  believe the base fee is economically helpful, but I also think that the market will decide that rather than the devs (though I would argue for default Zerobasefee in the implementations). 
>
> In my view, nobody is really earning any money with the base fee, so the discussion is kind of artificial. On the other hand, I would estimate our approach should lead to liquidity being priced correctly in the proportional fee instead of the price being undercut by hobbyists as is the case now. So in the long run I expect our routing method to make running a well-stocked LN router much more profitable.

While we certainly need to defer to economic requirements, we *also* need to defer to engineering requirements (else Lightning cannot be implemented in practice, so any economic benefits it might provide are not achievable anyway).
As I understand Matt's argument, we may encounter an engineering reason to charge some base fee (or something very much like it), so encouraging #zerobasefee *now* might not be the wisest course of action, as a future engineering problem may need to be solved with a non-zero base fee (or something very much like it).


Regards,
ZmnSCPxj