Christian Decker [ARCHIVE] on Nostr:
📅 Original date posted:2016-09-06
📝 Original message:
On Mon, Sep 05, 2016 at 11:55:22AM +0930, Rusty Russell wrote:
> Christian Decker <decker.christian at gmail.com> writes:
> > I'd like to pick up the conversation about the onion routing protocol
> > again, since we are close to merging our implementation into the
> > lightningd node.
> >
> > As far as I can see we mostly agree on the spec, with some issues that
> > should be deferred until later/to other specs:
> >
> > - Key-rotation policies
>
> OK, I've been thinking about the costs of key-rotation.
>
I forgot that we have two potential key-rotations:
- Rotating the key used in transactions that hit the Bitcoin network
- Rotating the public key used for the DH shared secret generation
for the onion routing protocol
For the moment I was concentrating on the latter.
> Assumptions:
> 1) We simply use a single pubkey for everything about a node, aka its ID.
> 2) Medium-scale public network, 250,000 nodes and 1M channels.
> 3) Every node knows the entire public network.
>
> Each node ID is 33 bytes (pubkey) each channel is 6 bytes (blocknum +
> txnum). You need to associate channels -> ids, say another 8 bytes per
> channel.
>
> That's 22.25MB each node has to keep.
>
> The proofs are larger: to prove which IDs own a channel, each one needs
> a merkle proof (12 x 32 bytes) plus the funding tx (227 bytes, we can
> skip some though), the two pubkeys (66 bytes), and a signature of the ID
> using those pubkeys (128 bytes, schnorr would be 64?).
>
> That's an additional 800M each node has to download to completely
> validate, and of course some nodes will have to keep this so we can
> download it from somewhere. That's even bigger than Pokemon Go :(
>
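For reference, plugging the stated assumptions (250,000 nodes, 1M
channels) into a quick back-of-the-envelope check reproduces those
figures:

    nodes = 250_000
    channels = 1_000_000

    # Base data set every node keeps
    id_bytes = nodes * 33                  # 33-byte pubkey per node ID
    chan_bytes = channels * (6 + 8)        # blocknum+txnum plus channel -> ID mapping
    print((id_bytes + chan_bytes) / 1e6)   # 22.25 MB

    # Per-channel ownership proof: merkle proof + funding tx + pubkeys + signatures
    proof_bytes = channels * (12 * 32 + 227 + 66 + 128)
    print(proof_bytes / 1e6)               # ~805 MB, i.e. the "800M" above
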
> Change Assumptions:
> 1) We use a "comms" key for each node instead of its ID.
> 2) Nodes send out a new comms key, signed by ID.
>
> That's another 33 bytes each to keep, or 8.25MB. To rotate a comms key,
> we need the new key (33 bytes), and a signature from the id (64 bytes),
> and probably a timestamp (4 bytes); that's 25.25MB.
>
> That's not too bad if we rotate daily. Probably not if we rotate
> hourly...
>
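The same check for the comms-key variant, including what the
announcement traffic adds up to per day:

    nodes = 250_000

    comms_keys = nodes * 33               # extra comms key per node: 8.25 MB
    rotation_msg = 33 + 64 + 4            # new key + ID signature + timestamp
    per_rotation = nodes * rotation_msg   # 25.25 MB if every node rotates once
    print(per_rotation / 1e6, "MB/day with daily rotation")
    print(per_rotation * 24 / 1e6, "MB/day with hourly rotation")  # ~606 MB/day
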
A node's public key used for DH shared secret generation exists
independently of its channels, so I don't think we should tie the
rotation of the key we use to talk to a node to any particular one of
its channels. However, it does make sense to require that a node have
at least one active channel before we care about it at all :-)
The comms key approach is in line with what I was thinking as well.
We can tie the new communication key to the channel's existence by
showing a derivation path from the node's (fixed) public key to the
new key. A node wanting to rotate its communication key then just
sends: "I am <pubkey> (33 bytes), please use key <derivation number>
(~4 bytes), and here is a <signature> (64 bytes) signing off on this
rotation." The communication overhead is identical to your proposal,
but since you send only the new key, I think in your proposal we'd
have to churn through all known node IDs to find which one signed the
rotation, or were you also using timestamp-based derivation?
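To make the derivation concrete, here is a minimal sketch of one way
it could work; the tweak construction (SHA256 over the node key and a
4-byte index, in the style of unhardened BIP32 derivation) is purely
illustrative. Anyone who knows the node's public key and the announced
index can compute the new comms key, while only the node itself can
compute the matching private key:

    import hashlib

    # secp256k1 parameters
    P = 2**256 - 2**32 - 977
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def point_add(a, b):
        if a is None: return b
        if b is None: return a
        if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
        if a == b:
            lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
        else:
            lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
        x = (lam * lam - a[0] - b[0]) % P
        return (x, (lam * (a[0] - x) - a[1]) % P)

    def point_mul(k, pt):
        res = None
        while k:
            if k & 1: res = point_add(res, pt)
            pt = point_add(pt, pt)
            k >>= 1
        return res

    def ser(pt):  # compressed 33-byte point encoding
        return bytes([2 + (pt[1] & 1)]) + pt[0].to_bytes(32, 'big')

    def derive_comms_key(node_pub, index, node_priv=None):
        """tweak = H(node_pub || index); comms_pub = node_pub + tweak*G."""
        tweak = int.from_bytes(hashlib.sha256(ser(node_pub) +
                               index.to_bytes(4, 'big')).digest(), 'big') % N
        comms_pub = point_add(node_pub, point_mul(tweak, G))
        comms_priv = (node_priv + tweak) % N if node_priv is not None else None
        return comms_pub, comms_priv

    # Toy node identity (NOT a real key)
    node_priv = 0x1111111111111111111111111111111111111111111111111111111111111111
    node_pub = point_mul(node_priv, G)

    # The rotation announcement essentially says "use derivation index 7 from now on"
    pub7, priv7 = derive_comms_key(node_pub, 7, node_priv)

    # A peer knowing only node_pub and the announced index derives the same key
    peer_view, _ = derive_comms_key(node_pub, 7)
    assert peer_view == pub7 and point_mul(priv7, G) == pub7
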
Another option we could consider is passive rotation: when an
endpoint announces a channel's existence it also sends its rotation
interval along. Every <rotation interval>, nodes simply derive the new
key and use it for DH shared secret generation should they want to
talk to that node. Nodes would have a switchover window in which they
accept both keys (which would be necessary for active rotation as
well, due to delays). Passive rotation incurs no communication
overhead and can be bound to the node's channels: as long as we
believe at least one of its channels exists, we keep rotating its
keys.
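On the receiving side, the passive schedule could look something like
this (interval, window, and anchor values here are placeholders):

    import time

    def accepted_indices(anchor_ts, rotation_interval, switchover, now=None):
        """The current derivation index is the number of whole rotation
        intervals since the anchor (e.g. the channel announcement); around
        a boundary the neighbouring index is accepted as well."""
        now = int(time.time()) if now is None else now
        elapsed = now - anchor_ts
        idx = elapsed // rotation_interval
        offset = elapsed % rotation_interval
        indices = {idx}
        if offset < switchover:                      # just rotated: still accept the old key
            indices.add(idx - 1)
        if rotation_interval - offset < switchover:  # about to rotate: already accept the new key
            indices.add(idx + 1)
        return sorted(i for i in indices if i >= 0)

    # Example: daily rotation with a one-hour switchover window,
    # queried ten minutes after the fifth rotation
    print(accepted_indices(anchor_ts=1_473_120_000, rotation_interval=86_400,
                           switchover=3_600, now=1_473_120_000 + 5 * 86_400 + 600))
    # -> [4, 5]
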
Possibly a mix of active and passive would make sense, with active
rotation enabling emergency rotations in case a key is compromised,
though we're in a lot of trouble at that point anyway :-)
> Cheers,
> Rusty.
Cheers,
Christian