ZmnSCPxj [ARCHIVE] on Nostr:

πŸ“… Original date posted:2018-03-02
πŸ“ Original message:
Good morning Rene,

Please consider the recent discussion about AMP, atomic multi-path. https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/000993.html

Note that only the source (payer) can split the payment into multiple smaller payments; we cannot safely let intermediaries split the payment, as an intermediary may very well decide to send it through a ridiculously high-fee channel. So the payer makes multiple payments that can only be merged at the destination; each sub-payment has a single route and cannot itself be split further unless the payer decides to split it.
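A minimal sketch of this source-side splitting, under the assumption (not from the thread) that the payer already knows a set of candidate routes with estimated capacities; all names are hypothetical:

```python
# Sketch of payer-side (AMP-style) payment splitting: only the payer
# chooses the split, so no intermediary can redirect funds through a
# high-fee channel. Routes and capacities are hypothetical.

def split_payment(amount_msat, routes):
    """Greedily assign the payment across payer-chosen routes.

    Each route is (path, capacity_msat); every sub-payment follows a
    single fixed route and is only merged again at the destination.
    """
    remaining = amount_msat
    sub_payments = []
    # Prefer higher-capacity routes to minimize the number of parts.
    for path, capacity in sorted(routes, key=lambda r: -r[1]):
        if remaining == 0:
            break
        part = min(remaining, capacity)
        sub_payments.append((path, part))
        remaining -= part
    if remaining > 0:
        raise ValueError("insufficient total capacity on known routes")
    return sub_payments

routes = [
    (["A", "B", "D"], 60_000),
    (["A", "C", "D"], 50_000),
]
print(split_payment(100_000, routes))
```

This is only the payer's bookkeeping; in the actual proposal each sub-payment would still be a normal onion-routed HTLC, with the merge enforced cryptographically at the destination.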

> Not sure however how the impacts to the HTLC would be and if it would actually be possible to fragment a payment that is encapsulated within the onion routing.

The timeouts in particular would be impossible to handle. At any point the payment should reach the payee within some N blocks, and each hop reduces that margin by a small amount (14 blocks for c-lightning, if I remember correctly). It is likely that there will not be enough time if the payment goes through a detour, unless the detour has an equal or smaller total reduction (delay) than the original hop with insufficient monetary capacity.
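The arithmetic behind this can be illustrated with a toy calculation; the per-hop delta of 14 blocks comes from the message above, everything else is an assumed example:

```python
# Rough illustration of the timeout-budget problem: each hop consumes
# its expiry delta from the payment's total timeout budget, so a detour
# with extra hops can leave too little (or negative) margin at the payee.
# Budget and hop counts are hypothetical; 14 blocks/hop is the figure
# mentioned for c-lightning in the message.

def expiry_at_payee(total_expiry_delta, hop_deltas):
    """Blocks of margin left when the payment reaches the payee."""
    remaining = total_expiry_delta
    for delta in hop_deltas:
        remaining -= delta
    return remaining

direct = [14, 14]            # two hops on the planned route
detour = [14, 14, 14, 14]    # same endpoints, two extra hops around a dry channel

budget = 50
print(expiry_at_payee(budget, direct))   # positive margin: payment can settle
print(expiry_at_payee(budget, detour))   # negative: the detour breaks the timeout
```

Since the payer fixes the total expiry when constructing the onion, an intermediary rerouting onto a longer path has no way to buy back the consumed blocks.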

Regards,
ZmnSCPxj

Sent with [ProtonMail](https://protonmail.com) Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On March 1, 2018 11:26 PM, RenΓ© Pickhardt via Lightning-dev <lightning-dev at lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> disclaimer: I am new here and do not have a full understanding of the complete specs yet - however, since I decided to participate in lightning-dev I will just be brave and try to add my ideas on the problem jimpo posed. So even in case my ideas are complete bs, please just tell me in a friendly way, and I know I need to read more code and specs in order to participate.
>
> Reading about the described problem, techniques like IP fragmentation ( https://en.wikipedia.org/wiki/IP_fragmentation ) come to my mind. The setting is a little bit different, but from my current understanding it should still be applicable, and also be the favorable solution in comparison to the proposed ping:
>
> 1.) IP setting: In IP fragmentation one would obviously just split the IP packet in order to utilize a link-layer protocol that doesn't have enough bandwidth for a larger packet.
> 2.) Lightning case: When the capacity of a channel during routing is not high enough - which means that the channel balance on that side is somewhere between 0 and the size of the payment - one would have to send the second part of the fragmented payment over a different route. This is obvious, since the insufficient channel balance cannot come out of thin air (as it can in IP routing).
>
> My first intuition was that this would become a problem for onion routing, since the router in question does not know the final destination but only the next hop, which can't be utilized because the channel doesn't have enough funds. However, I imagine one could just encapsulate the second part of the fragmented payment in a new onion-routed package that goes on a detour (using different payment channels) to the original "next" hop and progresses from there as originally intended.
>
> Not sure however how the impacts to the HTLC would be and if it would actually be possible to fragment a payment that is encapsulated within the onion routing.
>
> If possible, the advantage in comparison to your proposed ping method is that fragmentation would be highly dynamic and would still work if a channel runs out of funds while routing a payment. In your ping scenario it could very well happen that you do a ping for a designated route, everything looks great, you send a payment, but while it is on its way a channel along that route could run dry.
>
> best Rene
>
> On Thu, Mar 1, 2018 at 3:45 PM, Jim Posen <jim.posen at gmail.com> wrote:
>
>> My understanding is that the best strategy for choosing a route to send funds over is to determine all possible routes, rank them by estimated fees based on channel announcements and number of hops, then try them successively until one works.
>>
>> It seems inefficient to me to actually do a full HTLC commitment handshake on each hop just to find out that the last hop in the route didn't have sufficient remaining capacity in the first place. Depending on how many people are using the network, I could also foresee situations where this creates more payment failures, because bandwidth is locked up in HTLCs that are about to fail anyway.
>>
>> One idea that would likely help is the ability to send a ping over an onion route asking "does every hop have capacity to send X msat?" Every hop would forward the onion request if the answer is yes, or immediately send the response back up the circuit if the answer is no. This should reveal no additional information about the channel capacities that the sender couldn't determine by sending a test payment to themself (assuming they could find a loop). Additionally, the hops could respond with the latest fee rate in case channel updates are slow to propagate.
>>
>> The main benefit is that this should make it quicker to send a successful payment because latency is lower than sending an actual payment and the sender could ping all possible routes in parallel, whereas they can't send multiple payments in parallel. The main downside I can think of is that, by the same token, it is faster and cheaper for someone to extract information about channel capacities on the network with a binary search.
>>
>> -jimpo
>>
>> _______________________________________________
>> Lightning-dev mailing list
>> Lightning-dev at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
> --
> Skype: rene.pickhardt
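Jim's capacity-ping proposal quoted above can be modeled in a few lines; this is a toy model with a hypothetical channel graph, not a protocol implementation:

```python
# Toy model of the proposed capacity "ping": forward the probe while
# every hop can cover X msat, otherwise fail fast back to the sender
# without committing any HTLCs. Channel balances are hypothetical.

def probe_route(channel_balance, route, amount_msat):
    """Return (ok, failing_edge): ok is True if every hop has capacity."""
    for hop in range(len(route) - 1):
        edge = (route[hop], route[hop + 1])
        if channel_balance.get(edge, 0) < amount_msat:
            # The real proposal would send this failure back up the circuit
            # immediately, without forwarding the onion further.
            return False, edge
    return True, None

balances = {("A", "B"): 80_000, ("B", "C"): 30_000, ("C", "D"): 90_000}
print(probe_route(balances, ["A", "B", "C", "D"], 50_000))  # fails at B->C
print(probe_route(balances, ["A", "B", "C", "D"], 20_000))  # every hop succeeds
```

The model also makes the stated downside concrete: repeating the probe while bisecting `amount_msat` recovers a channel's balance to arbitrary precision, which is the privacy cost Jim notes.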
Author Public Key
npub1g5zswf6y48f7fy90jf3tlcuwdmjn8znhzaa4vkmtxaeskca8hpss23ms3l