ZmnSCPxj [ARCHIVE] on Nostr:
Original date posted: 2020-10-11
Original message:
Good morning t-bast,
> Hey Zman,
>
> > raising the minimum payment size is another headache
>
> It's true that it may (depending on the algorithm) lower the success rate of MPP-split.
> But it's already a parameter that node operators can configure at will (at channel creation time),
> so IMO it's a complexity we have to deal with anyway. Making it dynamic shouldn't have a high
> impact on MPP algorithms (apart from failures while `channel_update`s are propagating).
Right, it should not have much impact.
For the most part, when considering the possibility of splicing in the future, we should assume that parameters like this will largely have to be made changeable anyway.
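
To make the effect of the minimum concrete, here is a rough sketch (Python, with purely illustrative names, not any implementation's actual API) of how a channel's `htlc_minimum_msat` bounds how finely an MPP payment can be split, and why raising it dynamically via `channel_update` shrinks the splitter's options:

```python
# Illustrative only: how htlc_minimum_msat constrains MPP splitting.

def max_parts(amount_msat: int, htlc_minimum_msat: int) -> int:
    """Upper bound on the number of MPP parts, since every part
    must carry at least htlc_minimum_msat."""
    if htlc_minimum_msat <= 0:
        raise ValueError("htlc_minimum_msat must be positive")
    return amount_msat // htlc_minimum_msat

def split_evenly(amount_msat: int, parts: int, htlc_minimum_msat: int) -> list[int]:
    """Split amount_msat into `parts` shares, each >= htlc_minimum_msat,
    or fail if the channel's minimum makes that split impossible."""
    if parts > max_parts(amount_msat, htlc_minimum_msat):
        raise ValueError("htlc_minimum_msat too high for this many parts")
    base, rem = divmod(amount_msat, parts)
    return [base + (1 if i < rem else 0) for i in range(parts)]

# Example: 100_000 msat over a channel advertising a 30_000 msat minimum
# can be split into at most 3 parts; if a channel_update raises the
# minimum to 60_000 msat, only a single part fits.
print(split_evenly(100_000, 3, 30_000))   # [33334, 33333, 33333]
print(max_parts(100_000, 60_000))         # 1
```
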
>
> To be fully honest, my (maybe unpopular) opinion about MPP is that it's not necessary on the
> network's backbone, only at its edges. Once the network matures, I expect channels between
> "serious" routing nodes to be way bigger than the size of individual payments. The only places
> where there may be small or almost-empty channels are between end-users (wallets) and
> routing nodes.
> If something like Trampoline were to be implemented, MPP would only be needed to reach a
> first routing node (short route), that routing node would aggregate the parts and forward as a
> single HTLC to the next routing node. It would be split again once it reaches the other edge
> of the network (for a short route as well). In a network like this, the MPP routes would only have
> to be computed on a small subset of the network, which makes brute-force algorithms completely
> reasonable and the success rate higher.
This makes me wonder if we really need the onions-per-channel model we currently use.
For instance, Tor is basically two-layer: there is a lower-level TCP/IP layer where packets are sent out to specific nodes on the network and this layer is completely open about where the packet should go, but there is a higher layer where onion routing between nodes is used.
We could imitate this, with HTLC packets that openly show the next destination node; once all parts reach that destination node, it decodes them and finds an onion to be sent to the next destination node, so the current destination node is just another forwarder.
HTLC packets could be split arbitrarily, and later nodes could potentially merge them, using the lowest CLTV among the merged parts for subsequent hops.
Or not, *shrug*.
It has the bad problem of being more expensive on average than purely source-based routing, and probably having worse payment latency.
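
For concreteness, here is a rough sketch of the merge-at-the-destination behaviour described above. The helpers `decode_onion` and `forward_htlc` and the constant `CLTV_DELTA` are hypothetical placeholders, not proposed wire-level detail; the point is only that parts are collected per payment hash, the outgoing CLTV is the minimum over the merged parts, and the onion is peeled only once the full amount has arrived:

```python
# Illustration of merging split HTLC parts at an openly-named destination
# node, then peeling one onion layer and forwarding.  Not a real API.

from dataclasses import dataclass, field

CLTV_DELTA = 40  # hypothetical per-hop CLTV delta

@dataclass
class Part:
    amount_msat: int
    cltv_expiry: int
    onion: bytes          # same onion on every part of this payment

@dataclass
class PendingPayment:
    total_msat: int       # total the sender committed to deliver here
    parts: list = field(default_factory=list)

pending: dict[bytes, PendingPayment] = {}

def decode_onion(onion: bytes):
    """Hypothetical: peel one layer, returning (next_node_id, inner_onion)."""
    raise NotImplementedError

def forward_htlc(node_id: bytes, amount_msat: int, cltv_expiry: int, onion: bytes):
    """Hypothetical: send an HTLC carrying `onion` to node_id."""
    raise NotImplementedError

def on_part(payment_hash: bytes, part: Part, total_msat: int):
    """Collect an incoming part; merge and forward once the total arrives."""
    p = pending.setdefault(payment_hash, PendingPayment(total_msat))
    p.parts.append(part)
    if sum(x.amount_msat for x in p.parts) >= p.total_msat:
        merge_and_forward(payment_hash, p)

def merge_and_forward(payment_hash: bytes, p: PendingPayment):
    # The outgoing hop cannot be given more time than the most
    # constrained incoming part allows, hence min() over CLTVs.
    out_cltv = min(x.cltv_expiry for x in p.parts) - CLTV_DELTA
    next_node, next_onion = decode_onion(p.parts[0].onion)
    # Could split again arbitrarily toward next_node; a single HTLC here.
    forward_htlc(next_node, p.total_msat, out_cltv, next_onion)
    del pending[payment_hash]
```
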
For your proposal, how sure is the receiver that the input end of the trampoline node is "nearer" to the payer than itself?
Regards,
ZmnSCPxj
Published at 2023-06-09 13:00:57