Christian Decker [ARCHIVE] on Nostr:
📅 Original date posted:2020-10-26
📝 Original message:
Rusty Russell <rusty at rustcorp.com.au> writes:
>> This is in stark contrast to the leader-based approach, where both
>> parties can just keep queuing updates without silent times while
>> transferring the token from one end to the other.
>
> You've swayed me, but it needs new wire msgs to indicate "these are
> your proposals I'm reflecting to you".
>
> OTOH they don't need to carry data, so we can probably just have:
>
> update_htlcs_ack:
> * [`channel_id`:`channel_id`]
> * [`u16`:`num_added`]
> * [`num_added*u64`:`added`]
> * [`u16`:`num_removed`]
> * [`num_removed*u64`:`removed`]
>
> update_fee can stay the same.
>
> Thoughts?
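For concreteness, a minimal sketch (in Python, purely illustrative) of how
a body with those fields could be serialized. The type number below is a
placeholder, not an assigned value, and the big-endian packing just mirrors
the usual Lightning wire convention.

    import struct

    MSG_TYPE = 0xFFFF  # placeholder; no type number has been assigned

    def encode_update_htlcs_ack(channel_id: bytes,
                                added: list[int],
                                removed: list[int]) -> bytes:
        # channel_id (32 bytes), then num_added/added, then num_removed/removed
        assert len(channel_id) == 32
        msg = struct.pack(">H", MSG_TYPE) + channel_id
        msg += struct.pack(">H", len(added))
        msg += b"".join(struct.pack(">Q", i) for i in added)
        msg += struct.pack(">H", len(removed))
        msg += b"".join(struct.pack(">Q", i) for i in removed)
        return msg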
So this would pretty much be a batch-ack, sent after a whole series of
changes were proposed to the leader, and referenced by their `htlc_id`,
correct? This is one optimization step further than what I was thinking,
but it can work. My proposal would have been to either reflect the whole
message (nodes need to remember proposals they've sent anyway in case of
disconnects, so matching incoming changes with the pending ones should
not be too hard), or send back individual acks containing the hash of
the message if we want to save on bytes transferred. Alternatively we
could also reference the change by its htlc_id.
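As a rough sketch of the hash-based variant (all names here are
illustrative, not a wire proposal): the sender keeps its proposals queued
until the matching ack comes back, and since acks arrive in the order the
leader applied our updates, the oldest pending proposal is always the one
being acknowledged.

    import hashlib
    from collections import deque

    pending = deque()  # raw update messages we proposed, oldest first

    def on_send_update(raw_msg: bytes) -> None:
        pending.append(raw_msg)

    def on_update_ack(acked_hash: bytes) -> None:
        # The leader applies our updates in order, so the ack must match
        # the oldest proposal we still have outstanding.
        expected = hashlib.sha256(pending[0]).digest()
        if expected != acked_hash:
            raise ValueError("ack does not match oldest pending update")
        pending.popleft()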
Referencing by htlc_id, however, means that we are now tightly binding the
linearization protocol (in which order the changes should be applied)
with the internals of these changes (namely we look into the change and
reference its htlc_id). My goal ultimately is to introduce better
layering between the change proposal/commitment scheme and the
semantics of the individual changes ("which order" vs. "what").
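To illustrate the layering I have in mind (names purely illustrative): the
ordering layer treats every change as an opaque blob and only assigns it a
position, while the semantics layer is the only place that ever parses the
update itself.

    from dataclasses import dataclass

    @dataclass
    class OrderedChange:
        seq: int        # "which order": position assigned by the leader
        payload: bytes  # "what": opaque update message, never parsed here

    def linearize(next_seq: int, payload: bytes) -> OrderedChange:
        # No htlc_id or other update-specific field leaks into this layer.
        return OrderedChange(seq=next_seq, payload=payload)

    def apply_in_order(changes, apply_update) -> None:
        for change in sorted(changes, key=lambda c: c.seq):
            apply_update(change.payload)  # semantics live in apply_update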
I wonder what the performance increase of the batching would be compared
to just acking each update individually. My expectation would be that in
most cases we'd be acking a batch of size 1 :-)
Personally I think just reflecting the changes as a whole, interleaving
my updates with yours, is likely the simplest protocol, with the least
implied state that can get out of sync and cause nodes to drift apart,
as we have seen a number of times ("bad signature", anyone? ^^). And looking
(much, much) further ahead, it is also a feasible protocol for multiparty
channels with eltoo or similar constructions, where the leader
reflecting my own changes back to me is more of a special case than the
norm.
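A toy sketch of the follower side under that scheme (again illustrative
only): every change in the leader's stream is applied in the order
received, and a change matching our oldest pending proposal is simply
recognized as our own update coming back to us. That is also why the
multiparty case falls out naturally; a reflected change of mine is just
one kind of change in the stream.

    def process_reflected(change: bytes, pending: list, apply_update) -> None:
        if pending and change == pending[0]:
            pending.pop(0)        # our own proposal, reflected back in order
        apply_update(change)      # applied in the leader's order either way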
Cheers,
Christian