Rusty Russell [ARCHIVE] on Nostr:
📅 Original date posted:2018-10-24
📝 Original message:
Conner Fromknecht <conner at lightning.engineering> writes:
> In light of this, and if I'm following along, it seems our hand is forced in
> splicing via a single on-chain transaction. In my book, this is preferable
> anyway. I'd much rather push complexity off-chain than having to do a
> multi-stage splicing pipeline.
Agreed. As Christian pointed out, at least our design space is reduced now?
> I would propose sending a distinct message, which references the
> `active_channel_id` and a `splice_channel_id` for the pending splice:
>
> 1. type: XXX (`commitment_splice_signed`) (`option_splice`)
> 2. data:
> * [`32`:`active_channel_id`]
> * [`32`:`splice_channel_id`]
> * [`64`:`signature`]
> * [`2`:`num_htlcs`]
> * [`num_htlcs*64`:`htlc_signature`]
>
> This more directly addresses handling multiple pending splices, as well as
> preventing us from running into any size constraints. The purpose of
> including the `active_channel_id` would be to help the remote node locate
> the spliced channel, since it may not be populated in indexes containing
> active channels. If we don't want to include this, the existing message
> can be used without modification.
Yes, I like this! I don't think the `splice_channel_id` helps us much,
since we need to wait until we receive all pending commitment_splice_signed
messages before sending revoke_and_ack, and I think we should simply insist
they be in splice order, which makes implementation easier (simple counter).
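For concreteness, a rough Python sketch of framing the message as proposed
above (the type number is a placeholder, fields exactly as in your list):

    import struct

    COMMITMENT_SPLICE_SIGNED = 0xFFFF  # placeholder type number, not a real assignment

    def encode_commitment_splice_signed(active_channel_id, splice_channel_id,
                                        signature, htlc_signatures):
        """Frame the proposed message: 2-byte type, two 32-byte channel ids,
        a 64-byte commitment signature, then a 2-byte count of 64-byte HTLC sigs."""
        assert len(active_channel_id) == 32 and len(splice_channel_id) == 32
        assert len(signature) == 64 and all(len(s) == 64 for s in htlc_signatures)
        msg = struct.pack(">H", COMMITMENT_SPLICE_SIGNED)
        msg += active_channel_id + splice_channel_id + signature
        msg += struct.pack(">H", len(htlc_signatures))
        msg += b"".join(htlc_signatures)
        return msg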
>> We shouldn't allow more than one pending splice operation anyway, as
>> stated in your proposal initially. We are already critically reliant on our
>> transaction being confirmed on-chain, so I don't see this as much of an
>> added issue.
>
> IMO there's no reason to limit ourselves to one pending splice at the
> message level. I think it'd be an oversight not to plan ahead with RBF in
> mind, given that funding transactions have gone unconfirmed precisely
> because of improperly chosen fee rates. Arguably, funding flow should be
> extended to support this as well.
Good reminder re: RBF and funding. I've put this on the brainstorming
list with your name next to it ;)
> Adding a splice-reject message/error code should be sufficient to allow
> implementations to signal that their local tolerance for the number of
> pending splices has been reached. It's likely we'd all start with getting
> one splice working, but then the messages won't need to be modified if we
> want to implement additional pending splices via RBF.
>
> A node that wants to RBF but receives a reject can then proceed with CPFP
> as a last resort.
>
> Are there any downsides I'm overlooking with this approach?
No, I think you've covered it.
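As a sketch of that reject policy (the limit and names are made up, not from
any spec):

    MAX_PENDING_SPLICES = 1  # local tolerance; start with one, raise it once RBF splices land

    def handle_splice_proposal(pending_splices, proposal):
        """Accept another pending (RBF) splice only while under our local limit;
        otherwise signal a reject so the peer falls back to CPFP."""
        if len(pending_splices) >= MAX_PENDING_SPLICES:
            return "splice_reject"  # hypothetical reject message/error code
        pending_splices.append(proposal)
        return "accept"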
>> | Bit Position | Name                      | Field               |
>> | ------------ | ------------------------- | ------------------- |
>> | 0            | `option_channel_htlc_max` | `htlc_maximum_msat` |
>> | 1            | `option_channel_moving`   | `moving_txid`       |
>>
>> The `channel_update` gains the following field:
>> * [`32`:`moving_txid`] (option_channel_moving)
>
> Do we actually need to send the `moving_txid` via a channel update? I think
> it's enough for both parties to send `channel_update`s with the
> `option_channel_moving` bit set, and continue to keep the channel in our
> routing table.
It helps because they can't broadcast the new channel for 6 confirms.
OTOH, that's probably not too long to wait.
> If we later receive two `channel_update`s whose `short_channel_id`s
> reference the spending transaction (and the node pubkeys are the same), we
> assume the splice was successful and that this channel has been
> subsumed.
So the rule would be: if we've seen both channel_updates with
option_channel_moving set, we remember the txid which closed it, and
start a 100-block countdown to the "real close". If we see
a (valid) channel_announce for that closing tx with the same node pubkeys,
we simply delete the 100-block countdown.
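A rough sketch of that, assuming a hypothetical per-channel gossip-store
entry (names purely illustrative):

    CLOSE_GRACE_BLOCKS = 100

    class ChannelEntry:
        def __init__(self):
            self.moving_flags = set()   # node ids that sent option_channel_moving
            self.closing_txid = None    # txid that spent the funding output
            self.close_deadline = None  # height at which we treat the close as final

    def on_channel_update_moving(chan, node_id):
        """Record that this side flagged option_channel_moving."""
        chan.moving_flags.add(node_id)

    def on_funding_spent(chan, spending_txid, current_height):
        """Both sides flagged the move and the funding output was spent:
        remember the closing txid and start the 100-block countdown."""
        if len(chan.moving_flags) == 2:
            chan.closing_txid = spending_txid
            chan.close_deadline = current_height + CLOSE_GRACE_BLOCKS

    def on_channel_announcement(chan, funding_txid, node_ids):
        """A valid channel_announce for the closing tx with the same node
        pubkeys means the splice worked: delete the countdown."""
        if funding_txid == chan.closing_txid and node_ids == chan.moving_flags:
            chan.close_deadline = None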
> I think this works so long as the spending transaction doesn't contain
> multiple funding outputs, though I think the current proposal is fallible
> to this as well.
I think the variant above works even in that case?
> To me, this proposal has the benefit of not bloating gossip bandwidth with
> an extra field that would need to be parsed indefinitely, and gracefully
> supporting RBF down the road. Otherwise we'd need to gossip and store each
> potential txid.
>
> With regards to forwarding, both `short_channel_id`s would be accepted by
> the splicers for up to 100 blocks (after splice confirm?), at which point
> they can both forget the prior `short_channel_id`.
Technically, they need to remember for some grace period after they
announce the new channel. We have a similar recommendation for old fee
values, though it's soft. 100 blocks seems overkill.
I think we can assume gossip will propagate widely within 6 blocks and
say they should accept it at least up to 6 blocks after announcing? Or
1 hour, though I prefer using the blockchain as a clock in general.
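Sketching that forwarding rule with the 6-block figure (field names
hypothetical):

    GRACE_BLOCKS = 6  # keep honouring the old short_channel_id briefly after announcing

    def accept_short_channel_id(scid, channel, current_height):
        """Forward on either scid, but only honour the pre-splice one for a
        short grace period after the new channel_announcement."""
        if scid == channel.new_scid:
            return True
        if scid == channel.old_scid:
            return current_height <= channel.announce_height + GRACE_BLOCKS
        return False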
> ## Shachain
>
>> I thought about restarting the revocation sequence, but it seems like
>> that only saves a tiny amount since we only store log(N) entries. We
>> can drop old HTLC info post-splice though, and (after some delay for
>> obscurity) tell watchtowers to drop old entries I think.
>
> I agree the additional state isn't too burdensome, and that we would still
> be able to drop watchtower state after some delay as you mentioned.
>
> On one hand, it does seem like the opportune time to remove such state if
> desired.
>
> OTOH, it is _really_ nice from an atomicity perspective that the current
> channel and (potentially) N pending channels can be revoked using a single
> commitment secret and message. Doing so would mean we don't have to
> modify the `revoke_and_ack` or `channel_reestablish` messages. The receiver
> would just apply the commitment secrets/points to the current channel and
> any pending splices.
Agreed; on balance, it's fine to avoid reset.
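To spell out why that's nice: a single revoke_and_ack revokes the old state
on the active channel and every pending splice at once, with no message
changes (sketch, hypothetical names):

    def on_revoke_and_ack(active_channel, pending_splices, per_commitment_secret,
                          next_per_commitment_point):
        """One secret revokes commitment N everywhere, since the active channel
        and the pending spliced channels share the same shachain."""
        for chan in [active_channel, *pending_splices]:
            chan.revocation_store[chan.remote_commit_index] = per_commitment_secret
            chan.remote_next_point = next_per_commitment_point
            chan.remote_commit_index += 1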
> ## Misc
>
>> Any reason to now make the splicing_add_* messages allow one to add
>> several inputs in a single message? Given "acceptable" constraints for how
>> large the witness and pkScripts can be, we can easily enforce an upper
>> limit on the number of inputs/outputs to add.
>
> Yes, I prefer this simplification.
Just harder to write the spec that way :) I'll come up with something.
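Maybe something like this (purely illustrative, not a proposal): lead with a
count, then that many input descriptors:

    import struct

    def encode_splice_add_inputs(inputs):
        """Each input: 32-byte prev txid, 4-byte output index, and a
        length-prefixed scriptPubKey; the message leads with a 2-byte count."""
        msg = struct.pack(">H", len(inputs))
        for txid, vout, script in inputs:
            assert len(txid) == 32
            msg += txid + struct.pack(">I", vout)
            msg += struct.pack(">H", len(script)) + script
        return msg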
>> Additionally, as the size of the channel is either expanding or
>> contracting, both sides should be allowed to modify things like the CSV
>> param, reserve, max accepted htlc's, max htlc size, etc. Many of these
>> parameters, like the CSV value, should scale with the size of the channel;
>> not allowing these parameters to be re-negotiated could result in odd
>> scenarios like still maintaining a 1 week CSV when the channel size has
>> dipped from 1 BTC to 100k satoshis.
>
> Agreed!
"CSV should scale with value" seems like voodoo, though. It make us
feel better that we're being conservative with large amounts of money,
but it makes no sense from a time-value-of-money perspective. Sure,
bigger amounts are more important, but it's also more painful to have
them locked up.
I'd really like most of these parameters to go away, rather than
introducing yet another negotiation pain point. See other post.
>> These all seem marginal to me. I think if we start hitting max values,
>> we should discuss increasing them.
>
> Doesn't this defeat the goal of firewalling funds against individual channel
> failures?
That's kind of true, but you should be more concerned about node
failure, and thus diversify your channels between different nodes.
That's better for everyone.
> Splice out,
> Conner
Nice touch :)
Cheers,
Rusty.