Bastien TEINTURIER:

📅 Original date posted: 2020-10-14
📝 Original message:
To be honest the current protocol can be hard to grasp at first (mostly
because it's hard to reason
about two commit txs being constantly out of sync), but from an
implementation's point of view I'm
not sure your proposals are simpler.

One of the benefits of the current HTLC state machine is that once you
describe your state as a set
of local changes (proposed by you) plus a set of remote changes (proposed
by them), where each of
these is split between proposed, signed and acked updates, the flow is
straightforward to implement
and deterministic.
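
A minimal sketch of that decomposition in Python (illustrative names, not
taken from any particular implementation):

    # Per-channel state as described above: two sets of changes (ours
    # and the peer's), each split by how far it has progressed through
    # the commit_sig / revoke_and_ack exchange.
    from dataclasses import dataclass, field

    @dataclass
    class ChangeSet:
        proposed: list = field(default_factory=list)  # sent/received, not yet signed
        signed: list = field(default_factory=list)    # covered by a commit_sig
        acked: list = field(default_factory=list)     # that commit_sig was revoked

    @dataclass
    class ChannelState:
        local_changes: ChangeSet = field(default_factory=ChangeSet)   # proposed by us
        remote_changes: ChangeSet = field(default_factory=ChangeSet)  # proposed by them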

The only tricky part (where we've seen recurring compatibility issues) is
what happens on
reconnections. But it seems to me that the only missing requirement in the
spec is on the order of
messages sent, and more specifically that if you are supposed to send a
`revoke_and_ack`, you must
send that first (or at least before sending any `commit_sig`). Adding test
scenarios in the spec
could help implementers get this right.
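
A sketch of that ordering rule (hypothetical helper; the real retransmission
logic also depends on the commitment numbers exchanged in
`channel_reestablish`):

    # On reconnection, a pending revoke_and_ack must be retransmitted
    # before any commit_sig, per the ordering requirement above.
    def retransmit_on_reconnect(owe_revocation, owe_commit_sig, send):
        if owe_revocation:
            send("revoke_and_ack")  # always goes out first
        if owe_commit_sig:
            send("commit_sig")      # only after any pending revocation

    # Example: we owed both messages when the connection dropped.
    retransmit_on_reconnect(True, True, print)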

It's a bit tricky to get it right at first, but once you get it right you
don't need to touch that
code again and everything runs smoothly. We're pretty close to that state,
so why would we want to
start from scratch? Or am I missing something?

Cheers,
Bastien

On Tue, Oct 13, 2020 at 1:58 PM, Christian Decker <decker.christian at gmail.com>
wrote:

> I wonder if we should just go with the tried-and-tested leader-based
> mechanism:
>
> 1. The node with the lexicographically lower node_id is determined to
> be the leader.
> 2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
> 3. The leader applies the changes locally and streams them to the peer.
> 4. Either node can initiate a commitment by proposing a `flush` change.
> 5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).
>
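
A minimal sketch of the leader-based flow in steps 1-5 above (illustrative
Python, not a concrete spec):

    # The leader (lower node_id) serializes changes from both sides into
    # a single logical sequence, applies them locally, and streams them.
    def is_leader(our_node_id: bytes, their_node_id: bytes) -> bool:
        return our_node_id < their_node_id            # step 1

    class Leader:
        def __init__(self, send):
            self.send = send         # stream to the peer
            self.sequence = []       # single ordered log of changes

        def propose(self, change):
            # Steps 2-3: proposals from either side are ordered here,
            # applied locally, then streamed to the peer.
            self.sequence.append(change)
            self.send(change)

        def flush(self):
            # Steps 4-5: either side may propose a "flush"; on seeing it,
            # both nodes build the commitment tx over the sequence so far
            # and exchange signatures.
            self.propose("flush")
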
> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.
>
> On the other hand, a token-passing approach (which I think is what you
> propose) requires a synchronous token handover whenever the direction
> of the updates changes. This is assuming I didn't misunderstand the
> turn mechanics of your proposal :-)
>
> Cheers,
> Christian
>
> Rusty Russell <rusty at rustcorp.com.au> writes:
> > Hi all,
> >
> > Our HTLC state machine is optimal, but complex[1]; the Lightning
> > Labs team recently did some excellent work finding another place the spec
> > is insufficient[2]. Also, the suggestion for more dynamic changes makes
> > it
> > more difficult, usually requiring forced quiescence.
> >
> > The following protocol returns to my earlier thoughts, at the cost of
> > added latency in some cases.
> >
> > 1. The protocol is half-duplex, with each side taking turns; opener
> > first.
> > 2. It's still the same form, but it's always one-direction so both sides
> > stay in sync.
> > update+-> commitsig-> <-revocation <-commitsig revocation->
> > 3. A new message pair "turn_request" and "turn_reply" lets you request
> > the turn when it's not your turn.
> > 4. If you get an update in reply to your turn_request, you lost the race
> > and have to defer your own updates until after the peer is finished.
> > 5. On reconnect, you send two flags: send-in-progress (if you have
> > sent the initial commitsig but not the final revocation) and
> > receive-in-progress (if you have received the initial commitsig
> > but not received the final revocation). If either is set,
> > the sender (as indicated by the flags) retransmits the entire
> > sequence.
> > Otherwise, (arbitrarily) opener goes first again.
> >
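
A sketch of the turn mechanics in points 1-4 above (hypothetical names, no
error handling):

    # Half-duplex turns: only the side holding the turn may send updates.
    # Requesting the turn is what adds the 1 RTT mentioned in the cons.
    class TurnState:
        def __init__(self, we_are_opener):
            self.our_turn = we_are_opener    # point 1: opener goes first
            self.requested = False

        def want_to_send(self, send):
            if not self.our_turn and not self.requested:
                send("turn_request")         # point 3: ask for the turn
                self.requested = True

        def on_turn_reply(self):
            self.our_turn = True             # peer yielded the turn to us
            self.requested = False

        def on_update_received(self):
            # Point 4: an update in reply to our turn_request means we
            # lost the race; our request stays pending until the peer's
            # turn completes and they send turn_reply.
            pass
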
> > Pros:
> > 1. Way simpler. There is only ever one pair of commitment txs for any
> > given commitment index.
> > 2. Fee changes are now deterministic. No worrying about the case where
> > the peer's changes are also in flight.
> > 3. Dynamic changes can probably happen more simply, since we always
> > negotiate both sides at once.
> >
> > Cons:
> > 1. If it's not your turn, it adds 1 RTT latency.
> >
> > Unchanged:
> > 1. Database accesses are unchanged; you need to commit when you send or
> > receive a commitsig.
> > 2. You can use the same state machine as before, but one day (when
> > this would be compulsory) you'll be able to significantly simplify;
> > you'll need to record the index at which HTLCs were changed
> > (added/removed) in case the peer wants you to rexmit, though.
> >
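
A sketch of the bookkeeping in point 2 of the "Unchanged" list (illustrative;
the index here is the commitment number at which the change took effect):

    # Record, per HTLC, the commitment index at which it was added or
    # removed, so the update sequence for a given index can be replayed
    # when the peer's reconnect flags ask for retransmission.
    htlc_changes = {}  # htlc_id -> {"added_at": int, "removed_at": int or None}

    def on_htlc_added(htlc_id, commit_index):
        htlc_changes[htlc_id] = {"added_at": commit_index, "removed_at": None}

    def on_htlc_removed(htlc_id, commit_index):
        htlc_changes[htlc_id]["removed_at"] = commit_index

    def updates_at(commit_index):
        # Everything that changed at this index, for retransmission.
        return [(h, c) for h, c in htlc_changes.items()
                if commit_index in (c["added_at"], c["removed_at"])]
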
> > Cheers,
> > Rusty.
> >
> > [1] This is my fault; I was persuaded early on that optimality was more
> > important than simplicity in a classic nerd-snipe.
> > [2] https://github.com/lightningnetwork/lightning-rfc/issues/794