ZmnSCPxj [ARCHIVE] on Nostr:
📅 Original date posted: 2021-08-16
📝 Original message:
Good morning Stefan,
> > I propose that the algorithm be modified as such, that is, it *ignore* the fee scheme.
>
> We actually started out thinking like this in the event we couldn't find a proper way to handle fees, and the real world experiments we've done so far have only involved probability costs, no fees at all.
>
> However, I think it is non-trivial to deal with the many cases in which too high fees could occur, and in the end the most systematic way of dealing with them is actually including them in the cost function.
A reason why I suggest this is that the cost function in actual implementations is, IMO, *already* overloaded.
In particular, actual implementations will have some kind of conversion between cltv-delta and fees-at-node.
This conversion implies some kind of "conversion rate" between blocks-locked-up and fees-at-node.
For example, in C-Lightning this is the `riskfactor` argument to `getroute`, which is also exposed at `pay`.
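For concreteness, my understanding of this conversion (a sketch, not a quote of the C-Lightning source; the exact constants and formula may differ) is that `riskfactor` is treated as an annual percentage cost of the funds being locked up:

    BLOCKS_PER_YEAR = 52596  # roughly, at one block per ten minutes

    def risk_cost_msat(amount_msat, cltv_delta, riskfactor_percent):
        # Fee-equivalent cost of locking `amount_msat` for `cltv_delta` blocks,
        # with `riskfactor_percent` read as an annual cost of locked-up funds.
        return amount_msat * cltv_delta * (riskfactor_percent / 100.0) / BLOCKS_PER_YEAR

    def hop_cost_msat(amount_msat, base_fee_msat, fee_ppm, cltv_delta, riskfactor_percent):
        # Overloaded cost: the actual fee plus the riskfactor-converted lockup cost.
        fee = base_fee_msat + amount_msat * fee_ppm / 1_000_000
        return fee + risk_cost_msat(amount_msat, cltv_delta, riskfactor_percent)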
However, I think that in practice, most users cannot intuitively understand `riskfactor`.
I myself cannot; when I write my own `pay` (e.g. in CLBOSS) I just start `riskfactor` at the default value from the manual, then tweak it higher and call `getroute` again if the total lockup time exceeds some maximum cltv budget for the payment.
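In rough pseudocode, that loop looks something like the following (names are hypothetical and `getroute` stands in for the actual RPC call; this is a simplified sketch, not the CLBOSS code):

    def route_within_cltv_budget(getroute, destination, amount_msat,
                                 cltv_budget, riskfactor=10.0, max_riskfactor=1000.0):
        while riskfactor <= max_riskfactor:
            route = getroute(destination, amount_msat, riskfactor=riskfactor)
            # Assuming, as in C-Lightning's getroute output, that the first
            # hop's delay reflects the total lockup requested of us.
            total_lockup = route[0]["delay"]
            if total_lockup <= cltv_budget:
                return route
            riskfactor *= 2  # penalize lockup more heavily and retry
        raise RuntimeError("no route found within the cltv budget")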
Similarly, I think it is easier for users to think in terms of "fee budget" instead.
Of course, algorithms should still try to keep costs as low as possible: if there are two alternative payment plans that are both below the fee budget, the one with the lower actual fee is still preferred.
But perhaps we should focus more on payment success *within some fee and timelock budget*.
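A minimal sketch of what I mean, with hypothetical names: filter candidate payment plans by both budgets, then rank by success probability and use the fee only as a tie-breaker:

    def pick_plan(candidates, fee_budget_msat, cltv_budget):
        # Keep only plans that respect both the fee budget and the timelock budget.
        feasible = [c for c in candidates
                    if c.total_fee_msat <= fee_budget_msat
                    and c.total_cltv <= cltv_budget]
        if not feasible:
            return None  # caller may raise the budget or give up
        # Highest success probability wins; among ties, the lower fee is preferred.
        return max(feasible, key=lambda c: (c.success_probability, -c.total_fee_msat))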
Indeed, as you point out, the real-world experiments you have done involved only probability as the cost.
However, in the paper you claim to have sent 40,000,000,000 msat for a cost of 814,000 msat, a fee percentage of 0.002035%, far below the 0.5% default `maxfeepercent` we have, which I think is a fairly reasonable argument for "let us ignore fees and timelocks unless they hit the budget".
(On the other hand, those numbers come from a section labelled "Simulation", so they may not reflect the real-world experiments you ran --- what numbers did you get for those?)
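Just to double-check the arithmetic on the quoted numbers:

    fee_msat = 814_000
    amount_msat = 40_000_000_000
    fee_percent = 100 * fee_msat / amount_msat
    print(round(fee_percent, 6))   # 0.002035
    print(fee_percent < 0.5)       # True: well under the default maxfeepercent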
>
> That said, I agree with Matt that more research needs to be done about the effect of base fees on these computations. We do know they make the problem hard in general, but we might find a way to deal with them reasonably in practice.
Is my suggestion not reasonable in practice?
Is the algorithm runtime too high?
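(As an aside, my understanding of why the base fee makes the problem hard in general: with a base fee, the per-route cost becomes a fixed-charge function of the amount, so splitting a payment across more routes adds a discrete jump per extra route, which breaks the linearity that min-cost-flow style methods rely on. A toy illustration with made-up numbers:)

    def route_fee_msat(amount_msat, base_fee_msat, fee_ppm):
        # Fixed-charge cost: zero at zero, but jumps by the base fee for any nonzero amount.
        return 0 if amount_msat == 0 else base_fee_msat + amount_msat * fee_ppm // 1_000_000

    amount = 1_000_000_000  # 1,000,000 sat, in msat
    one_route = route_fee_msat(amount, 1000, 100)
    two_routes = 2 * route_fee_msat(amount // 2, 1000, 100)
    print(one_route, two_routes)  # 101000 102000: the proportional part is identical,
                                  # the extra 1000 msat is purely the second base fee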
>
> I tend to agree with AJ, that I don't believe the base fee is economically helpful, but I also think that the market will decide that rather than the devs (though I would argue for default Zerobasefee in the implementations).
>
> In my view, nobody is really earning any money with the base fee, so the discussion is kind of artificial. On the other hand, I would estimate our approach should lead to liquidity being priced correctly in the proportional fee instead of the price being undercut by hobbyists as is the case now. So in the long run I expect our routing method to make running a well-stocked LN router much more profitable.
While we certainly need to defer to economic requirements, we *also* need to defer to engineering requirements (else Lightning cannot be implemented in practice, so any economic benefits it might provide are not achievable anyway).
As I understand Matt's argument, we may encounter an engineering reason to charge some base fee (or something very much like it), so encouraging #zerobasefee *now* might not be the wisest course of action, as a future engineering problem may need to be solved with a non-zero base fee (or something very much like it).
Regards,
ZmnSCPxj
Published at 2023-06-09 13:03:29