fiatjaf [ARCHIVE] on Nostr:

πŸ“… Original date posted:2019-08-05
πŸ“ Original message:
No. My question was more like: why does Alice decide to build a route that goes through T1 and RT2, and not only through one trampoline router she knows?

That makes sense to me in the context of ZmnSCPxj's virtual space idea, but not necessarily in the current network conditions. You also said we're going to need some hierarchy, but what is that? Is it required?

Anyway, I'm probably missing something, but another way of putting my
question would be: why does your example use 2 trampolines instead of 1?

On Monday, August 5, 2019, Bastien TEINTURIER <bastien at acinq.fr> wrote:
> Good morning fiatjaf,
> This is a good question, I'm glad you asked.
> As ZmnSCPxj points out, Alice doesn't know. By not syncing the full network graph, Alice has to accept "being in the dark" for some decisions. She is merely hoping that RT2 can find a route to Bob. Note that it's quite easy to help Alice make informed decisions by providing routing hints in the invoice and in gossip messages (which we already do for "normal" routing).
> The graph today is strongly connected, so it's quite a reasonable assumption (and Alice can easily retry with another choice of trampoline node if the first one fails - just like we do today with normal payments).
> I fully agree with ZmnSCPxj though that in the future this might not be true anymore. When/if the network becomes too large we will likely lose its strongly connected nature. When that happens, the Lightning Network will need some kind of hierarchical / packet-switched routing architecture and we won't require trampoline nodes to know the whole network graph and be able to route to mostly anyone.
> I argue that trampoline routing is a first step towards enabling that. It's a good engineering trade-off between ease of implementation and deployment, fixing a problem we have today and enabling future scaling for problems we'll have tomorrow. It's somewhat easy once we have trampoline payments to evolve that to a system closer to the internet's packet switching infrastructure, so we'll deal with that once the need for it becomes obvious.
> Does that answer your question?
> Cheers,
> Bastien
> On Sat, Aug 3, 2019 at 05:48, ZmnSCPxj <ZmnSCPxj at protonmail.com> wrote:
>>
>> Good morning fiatjaf,
>>
>> I proposed before that we could institute a rule where nodes are mapped to some virtual space, and nodes should preferably retain the part of the network graph that connects itself to those nodes near to it in this virtual space (and possibly prefer to channel to those nodes).
>>
>> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-April/001959.html
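>>
>> (Purely for illustration, not from the linked proposal: a rough Python sketch of one possible "virtual space" rule, here using XOR distance between node IDs; the metric, radius and names are all made up.)
>>
>>     def xor_distance(node_a: bytes, node_b: bytes) -> int:
>>         return int.from_bytes(node_a, "big") ^ int.from_bytes(node_b, "big")
>>
>>     def should_retain_channel(my_id: bytes, peer_id: bytes, radius: int) -> bool:
>>         # Keep graph data for channels whose far end is near us in the virtual space.
>>         return xor_distance(my_id, peer_id) <= radius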
>>
>> Thus Alice might **not** know that some route exists between T1 and T2.
>>
>> T1 itself might not know of a route from itself to T2.
>> But if T1 knows a route to T1.5, and it knows that T1.5 is nearer to T2 than to itself in the virtual space, it can **try** to route through T1.5 in the hope T1.5 knows a route from itself to T2.
>> This can be done if T1 can remove itself from the trampoline route and replace itself with T1.5, offering in exchange some of the fee to T1.5.
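>>
>> (Continuing the same toy, reusing xor_distance from the sketch above; the fee split and names are made up, not part of any spec.)
>>
>>     def pick_substitute(my_id: bytes, target_id: bytes,
>>                         reachable: list[bytes], fee_budget_msat: int):
>>         # Candidates strictly closer to the target than we are in the virtual space.
>>         closer = [n for n in reachable
>>                   if xor_distance(n, target_id) < xor_distance(my_id, target_id)]
>>         if not closer:
>>             return None  # no progress possible, fail the payment back
>>         best = min(closer, key=lambda n: xor_distance(n, target_id))
>>         # Offer part of our fee budget to the substitute trampoline.
>>         return best, fee_budget_msat // 2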
>>
>> Other ways of knowing some distillation of the public network without remembering the channel-level details are also possible.
>> My recent pointlessly long spam email for example has a section on Hierarchical Maps.
>>
>>
>> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002095.html
>>
>> Regards,
>> ZmnSCPxj
>>
>>
>> Sent with ProtonMail Secure Email.
>>
>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>> On Saturday, August 3, 2019 12:29 AM, fiatjaf <fiatjaf at alhur.es> wrote:
>>
>> > Ok, since you seem to imply each question is valuable, here's mine: how does Alice know RT2 has a route to Bob? If she knows that, can she also know T1 has a route to Bob? In any case, why can't she just build her small onion with Alice -> T1 -> Bob? I would expect that to be the most common case, am I right?
>> >
>> > On Friday, August 2, 2019, Bastien TEINTURIER <bastien at acinq.fr> wrote:
>> >
>> > > Good morning list,
>> > >
>> > > I realized that trampoline routing has only been briefly described to this list (credits to cdecker and pm47 for laying out the foundations). I just published an updated PR [1] and want to take this opportunity to present the high-level view here and the parts that need a concept ACK and more feedback.
>> > >
>> > > Trampoline routing is conceptually quite simple. Alice wants to send a payment to Bob, but she doesn't know a route to get there because Alice only keeps a small area of the routing table locally (Alice has a crappy phone, damn it Alice, sell some satoshis and buy a real phone). However, Alice has a few trampoline nodes in her friends-of-friends and knows some trampoline nodes outside of her local area (but she doesn't know how to reach them). Alice would like to send a payment to a trampoline node she can reach and defer calculation of the rest of the route to that node.
>> > >
>> > > The onion routing part is very simple now that we have variable-length onion payloads (thanks again cdecker!). Just like Russian dolls, we simply put a small onion inside a big onion. And the HTLC management follows very naturally.
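>> > >
>> > > (Toy illustration only - the real onions are Sphinx packets, but the nesting idea looks roughly like this; 'wrap' is a made-up helper, not from the spec.)
>> > >
>> > >     def wrap(payloads):
>> > >         # Each layer is just (payload_for_this_hop, rest_of_onion).
>> > >         onion = None
>> > >         for p in reversed(payloads):
>> > >             onion = (p, onion)
>> > >         return onion
>> > >
>> > >     # The small trampoline onion rides inside the payload of the last
>> > >     # hop of the big (normal) onion.
>> > >     inner = wrap(["trampoline hop 1", "trampoline hop 2", "final"])
>> > >     outer = wrap(["normal hop 1", "normal hop 2", {"trampoline": inner}])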
>> > >
>> > > It's always simpler with an example. Let's imagine that Alice can reach three trampoline nodes: T1, T2 and T3. She also knows the details of many remote trampoline nodes that she cannot reach: RT1, RT2, RT3 and RT4. Alice selects T1 and RT2 to use as trampoline hops. She builds a small onion that describes the following route:
>> > >
>> > > Alice -> T1 -> RT2 -> Bob
>> > >
>> > > She finds a route to T1 and builds a normal onion to send a payment to T1:
>> > >
>> > > Alice -> N1 -> N2 -> T1
>> > >
>> > > In the payload for T1, Alice puts the small trampoline onion.
>> > > When T1 receives the payment, he is able to peel one layer of the trampoline onion and discover that he must forward the payment to RT2. T1 finds a route to RT2 and builds a normal onion to send a payment to RT2:
>> > >
>> > > T1 -> N3 -> RT2
>> > >
>> > > In the payload for RT2, T1 puts the peeled small trampoline onion.
>> > > When RT2 receives the payment, he is able to peel one layer of the trampoline onion and discover that he must forward the payment to Bob. RT2 finds a route to Bob and builds a normal onion to send a payment:
>> > >
>> > > RT2 -> N4 -> N5 -> Bob
>> > >
>> > > In the payload for Bob, RT2 puts the peeled small trampoline onion.
>> > > When Bob receives the payment, he is able to peel the last layer of the trampoline onion and discover that he is the final recipient, and fulfills the payment.
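>> > >
>> > > (A runnable toy walk-through of that exact sequence - nested tuples standing in for onions, prints standing in for HTLCs; not real protocol code.)
>> > >
>> > >     # Trampoline onion for Alice -> T1 -> RT2 -> Bob, one layer per trampoline hop.
>> > >     onion = ("forward to RT2", ("forward to Bob", ("final", None)))
>> > >     node = "T1"
>> > >     while True:
>> > >         payload, onion = onion                      # peel one layer
>> > >         if payload == "final":
>> > >             print(node, "is the final recipient and fulfills the payment")
>> > >             break
>> > >         nxt = payload.split()[-1]                   # "RT2", then "Bob"
>> > >         print(node, "finds a normal route to", nxt, "and forwards the peeled onion")
>> > >         node = nxt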
>> > >
>> > > Alice has successfully sent a payment to Bob, deferring route calculation to some chosen trampoline nodes.
>> > > That part was simple and (hopefully) not controversial, but it left out some important details:
>> > >
>> > > 1. How do trampoline nodes specify their fees and cltv requirements?
>> > > 2. How does Alice sync the fees and cltv requirements for her remote trampoline nodes?
>> > >
>> > > To answer 1., trampoline nodes need to estimate a fee and cltv that allows them to route to (almost) any other trampoline node. This is likely going to increase the fees paid by end-users, but they can't eat their cake and have it too: by not syncing the whole network, users are trading fees for ease of use and payment reliability.
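>> > >
>> > > (Back-of-the-envelope sketch of what such an estimate could look like; the percentile approach and field names are mine, not from the PR.)
>> > >
>> > >     def estimate_trampoline_params(observed_routes):
>> > >         # Advertise a fee/cltv that covers most recently observed routes
>> > >         # to other trampoline nodes (here: the 95th percentile).
>> > >         fees = sorted(r["fee_msat"] for r in observed_routes)
>> > >         cltvs = sorted(r["cltv_delta"] for r in observed_routes)
>> > >         i = int(0.95 * (len(fees) - 1))
>> > >         return {"fee_msat": fees[i], "cltv_expiry_delta": cltvs[i]}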
>> > >
>> > > To answer 2., we can re-use the existing gossip infrastructure to exchange a new node_update message that contains the trampoline fees and cltv. However Alice doesn't want to receive every network update because she doesn't have the bandwidth to support it (damn it again Alice, upgrade your mobile plan). My suggestion is to create a filter system (similar to BIP37) where Alice sends gossip filters to her peers, and peers only forward to Alice updates that match these filters. This doesn't have the issues BIP37 has for Bitcoin because it has a cost for Alice: she has to open a channel (and thus lock funds) to get a connection to a peer. Peers can refuse to serve filters if they are too expensive to compute, but the filters I propose in the PR are very cheap (a simple xor or a node distance comparison).
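>> > >
>> > > (A sketch of the kind of cheap per-update check a peer could run before forwarding gossip to Alice - purely illustrative, the actual filter formats are in the PR; field names here are made up.)
>> > >
>> > >     def matches_filter(node_id: bytes, center: bytes, max_distance: int) -> bool:
>> > >         # XOR "distance" between the announcing node and the filter Alice sent.
>> > >         d = int.from_bytes(node_id, "big") ^ int.from_bytes(center, "big")
>> > >         return d <= max_distance
>> > >
>> > >     def updates_to_forward(updates, alice_filter):
>> > >         return [u for u in updates
>> > >                 if matches_filter(u["node_id"], alice_filter["center"],
>> > >                                   alice_filter["max_distance"])]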
>> > >
>> > > If you're interested in the technical details, head over to [1].
>> > > I would really like to get feedback from this list on the concept itself, and especially on the gossip and fee estimation parts. If you made it that far, I'm sure you have many questions and suggestions ;).
>> > >
>> > > Cheers,
>> > > Bastien
>> > >
>> > > [1] https://github.com/lightningnetwork/lightning-rfc/pull/654
>>
>>
>