Christian Decker

📅 Original date posted:2016-08-08
📝 Original message:
On Thu, Aug 4, 2016 at 8:24 PM Olaoluwa Osuntokun <laolu32 at gmail.com> wrote:

> > I'm going back and forth about including the payloads in the header
> HMAC. I
> > think we have three options here:
> >
> > 1) Include the payload in the header HMAC computation
>
> I'd say personally, I prefer the first option. This results in "fail fast"
> behavior w.r.t packet forwarding, and additionally adds the smallest
> overhead.
>
> > we also lose the ability to do anonymous rendezvous meetings, where the
> final
> > recipient provides half of the route in the form of a precompiled header
> > (something that Hornet is using).
>
> It doesn't appear that we lose the ability to do rendezvous routing if we
> follow through with the first option. The final recipient can still
> provide a
> precompiled header which is the e2e payload sent from the source to the
> rendezvous node. As the source knows the exact nested mix-header when
> sending,
> it can still be protected under the mix-header wide MAC.
>
> Additionally, in order to hide the next-hop after the rendezvous node from
> the
> source node, the destination can wrap the nested header in a layer of
> encryption, decryptable only by the rendezvous node.
>

Sounds good; however, I'm not clear on how the final recipient can provide a
valid precompiled header whose HMACs include the per-hop payloads and the
end-to-end payload without knowing them upfront.
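
To illustrate the dependency, here is a minimal Python sketch assuming
option (1); the field names and layout are hypothetical, not the draft's
actual format:

    import hmac, hashlib

    def per_hop_mac(hop_key, routing_info, per_hop_payload, e2e_payload):
        # Under option (1) the MAC commits to both payloads, so whoever
        # precompiles this half of the header must already know them.
        return hmac.new(hop_key,
                        routing_info + per_hop_payload + e2e_payload,
                        hashlib.sha256).digest()

If the payloads are only chosen by the source after it receives the
precompiled half, the recipient cannot have computed these MACs in advance.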

>
> > Both the per-hop checkable schemes, combined with nodes signing the
> packets
> > they forward, would give us the ability to identify misbehaving nodes and
> > denounce them: if we receive a packet we check the integrity and if it
> doesn't
> > match then we can prove to others that the node forwarded something
> broken
> > with its signature, or it did not check the packet itself.
>
> Great observation. However, it seems like this is currently out of scope
> (the
> implications of "denouncing" a node) and should be re-visited at a future
> time
> when we brainstorm some sort of "reputation" scheme.
>

Happy to shelve the idea for now; I'll add it to my future-topics list :-)

>
> > There is a tradeoff between small packets and keeping the size uniform.
> I think
> > we could bucketize sizes, e.g., have multiples of 32 bytes or 64 bytes
> for the
> > fields, to have packets with similar sized payloads have the same packet
> size.
> > That would allow us to also drop the e2e payload by setting a size of 0,
> and
> > still be able to forward it, should we ever find a use for it.
>
> Good point, we may have uses for non-uniform sizes as far as mix-headers
> in the
> future. So with this, then it appears there may be 3 types of mix-header
> formats:
> 1. Regular. Meaning no e2e payload, weighing in at 1234 bytes.
> 2. Extended. Meaning bearing the e2e payload with a size of 2468 bytes.
> 3. Rendezvous. Which nests another mix-header within the end-to-end
> payload,
> with a size which is double that of the regular.
>
> If we like this taxonomy, then we may want to reserve the first 2 version
> bytes
> within the draft. A version 0 packet would encompass processing the first
> two
> types, while a version 1 packet denotes that this is a rendezvous packet.
> The
> rendezvous case needs to be distinct as it modifies the
> processing/forwarding
> at the final hop.
>
> Alternatively, we can use solely a version of 0 in the initial spec, with
> the
> final hop checking if the [1:34] bytes of the payload (if one is present)
> are a
> point on the curve. If so, this triggers the rendezvous forwarding, with
> the
> mid-point node processing the packet again as normal, completing the
> rendezvous
> route.
>

Enumerating types of packets sounds like a good tradeoff between
flexibility and packet size. However, size and semantics are orthogonal,
and keeping them separate might be the cleaner choice.
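
As a rough sketch of the bucketing idea (Python; the 64-byte bucket size is
an assumption, not a spec value):

    def bucketize(payload_len, bucket=64):
        # Round the payload field up to the next multiple of `bucket` bytes;
        # a dropped e2e payload simply becomes a zero-size field.
        return -(-payload_len // bucket) * bucket

    # e.g. bucketize(0) == 0, bucketize(100) == 128, bucketize(1234) == 1280

This keeps the packet size a pure function of the payload length, so the
semantic type (regular / extended / rendezvous) can be signalled separately,
e.g. via the version byte.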

I'd prefer having a rendezvous scheme that merges the two ends of the route
in a seamless way, which should not be too hard to do, unless we keep the
per-hop checkable HMACs.

>
> > We have to be careful when using timestamps in the packet as it makes
> individual hops collatable.
>
> Excellent observation. Assuming we roll out a reasonably efficient
> solution for
> the collatable HTLC R-values across hops, naively selecting timestamps
> would
> present another correlation vector.
>
> > So my proposal would be to include a timestamp rounded up to the closest
> hour
> > and have a sliding window of accepted timestamps of +/- 1 hour,
> remembering the
> > secrets for that period and rejecting anything that is too far in the
> future or
> > too far in the past.
>
> This seems reasonable, also the size of the sliding window can easily be
> tuned
> in the future should we find it too large or small.
>
> > The more coarse the timestamp and the bigger the window, the less likely
> > an attacker is to guess which packets belong to the same route, but the
> > more storage is required on the node's side.
>
> Yep, there's a clear trade off between the window size of the accepted
> timestamps, and a node's storage overhead. We can tune this value to a
> ballpark
> estimate of the number of HTLCs/sec a large node with high frequency
> bi-directional throughput may forward at peak times.
>
> Let's say a HFB (High-Frequency Bitcoin) node on the network at peak
> forwards
> 5k HTLCs per second: (5000/sec * 32 bytes) * 3600 sec = 576MB, if nodes
> are
> required to wait 1 hour between log prunings, and 288MB if we use a
> 30-minute
> interval. Even with such a high throughput value, that seems reasonable.
>

Do we need both a timestamped backlog of secrets and key rotation? If key
rotation happens quickly enough, it's probably sufficient to simply keep
all secrets for the current key, especially if we use Bloom filters to
store the seen secrets.
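
A minimal sketch of that "one log per key period" variant (Python; the
hour-long period and SHA-256 tagging are assumptions, and a real node would
likely swap the set for a Bloom filter):

    import hashlib, time

    class SeenSecrets:
        def __init__(self, period_seconds=3600):
            self.period = period_seconds
            self.epoch = int(time.time()) // period_seconds
            self.seen = set()  # a Bloom filter would trade memory for false positives

        def check_and_add(self, shared_secret):
            now_epoch = int(time.time()) // self.period
            if now_epoch != self.epoch:
                # Key rotated: packets built for the old key are no longer
                # valid anyway, so the old log can simply be dropped.
                self.epoch, self.seen = now_epoch, set()
            tag = hashlib.sha256(shared_secret).digest()
            if tag in self.seen:
                return False   # replayed packet, reject
            self.seen.add(tag)
            return True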


> > We could just use the announced key, i.e., the one that participated in
> the
> > channel setup, as a root key for HD key derivation. The derivation path
> could
> > then be based on floor(blockheight / 144) so that we rotate keys every
> day, and
> > don't need any additional communication to announce new public keys.
>
> Great suggestion! However, I think instead of directly using the key which
> participated in the channel setup, we'd use a new independent key as the
> root
> for this HD onion derivation. This new independent key would then be
> authenticated via a signature of a schnorr multi-sign of the channel
> multi-sig
> key and the node's identity key (or alternatively two sigs). This
> safeguards
> against the compromise of one of the derived private keys leading to
> compromise
> of the master root HD priv key which would allow possibly stealing a node's
> coins. Additionally, a node can switch keys more easily, avoiding a channel
> tear down.
>
> However, the HD Onion Key approach can potentially cancel out the forward
> secrecy benefits. If an attacker gains access to the root HD pubkey, along
> with
> any of the child derived onion keys, then they can compute the root
> privkey.
> This allows the attacker to derive all the child priv keys, giving them the
> ability to decrypt all mix-headers encrypted since the HD Onion Key was
> published.
>
> I think we can patch this exploit by adding some precomputation for each
> node,
> and introducing an intermediate onion derivation point. Assuming we rotate
> every 144+ (1 day) blocks, then using the HD Onion PrivKey, each node
> pre-generates 365 (or a smaller batch size) keys. Then, generates an
> independent "onion derivation" key. The OD key then combined with each of
> the
> child onion keys, produces the final child onion key (C_i = final onion
> key,
> B_i = intermediate child key, A = OD):
> * C_i = B_i + A
>
> After the precomputation, the OD key (A) should be *destroyed*. If so,
> even if
> an attacker gains access to one of the intermediate child onion keys,
> they're
> unable to derive the final child onion key as the OD key has been
> destroyed.
> This safeguards the forward secrecy of the scheme in the face of the HD
> root+child exploit. As before, in the case of a root/child compromise the
> original node can simply authenticate a new HD Onion Key.
>
> So perhaps we can combine the two approaches, publishing a blockhash
> (buried
> under a "safe" re-org depth), along with an authenticated HD root pubkey.
> With
> this new scheme we're able to push key rotation out to the edges in a
> non-interactive manner. Having the blockhash as an anchor will reduce the
> amount of guessing required by a node to fetch the correct onion key.
>

That's a great idea; I hadn't thought about forward secrecy. I like the
non-interactive nature of the scheme, since we'll be communicating enough
as it is, even without every node broadcasting new keys on each switch.
Potentially there is also a way for a node to define its own key-rotation
period in the channel establishment announcement, so that low-memory
devices can rotate at a higher rate, trading slightly higher failure rates
for memory savings.
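
A toy sketch of the private-scalar side of that combined scheme (Python,
working mod the secp256k1 group order; the hash-based child derivation here
merely stands in for a proper HD scheme, and the authentication signatures
are omitted):

    import hashlib

    # secp256k1 group order
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def child_scalar(root_priv, blockheight):
        # One intermediate child key B_i per 144-block period (roughly daily),
        # so peers can index the current key by floor(blockheight / 144)
        # without any announcement.
        period = blockheight // 144
        data = root_priv.to_bytes(32, 'big') + period.to_bytes(4, 'big')
        return int.from_bytes(hashlib.sha256(data).digest(), 'big') % N

    def precompute_onion_keys(root_priv, od_priv, periods=365):
        # C_i = B_i + A: combine each intermediate child with the one-time
        # onion-derivation key A, which is destroyed after this batch, so
        # compromising the root or an intermediate child alone does not
        # reveal the final onion keys.
        return [(child_scalar(root_priv, i * 144) + od_priv) % N
                for i in range(periods)]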

>
> > I'm also trying to figure out how to enable intermediate nodes to reply
> to a
> > packet, e.g., if capacities are insufficient or the next node is
> unreachable,
> > by recycling the routing info.
>
> Yeah I've also attempted to tackle this issue a bit myself. The inclusion
> of
> onion routing certainly makes certain classes of failures harder to
> reconcile.
> There has been a bit of discussion of this in the past, at the time called
> "unrolling the onion". In a similar vein it's also more difficult to
> ascribe blame to nodes which directly cause a payment to fail.
>
> One of my naive ideas was to include a "backwards" mix-header within each
> node's per-hop payload (though I hadn't included the per-hop payload in my
> mental model at the time), however this would result in a quadratic blow-up
> in space complexity, parametrized by our max-hop limit. Essentially, we'd
> include
> a SURB within the mix-header for each hop in the route.
>
> > Maybe we could continue blinding the ephemeral key on the return path,
> and
> > have a mechanism to tell the node the total blinding factor along the
> path so
> > that it can encrypt something in the routing info for the return path?
> That
> > would neatly combine Hornet and Sphinx, eliminating the initial
> roundtrip to
> > setup forwarding segments.
>
> Could you elaborate on this a bit more? It seems that this alone is
> insufficient to allow "backwards" replies to the source w/o revealing the
> source's identity.
>
> It seems the primary question is: how can we re-use the information
> present at
> a hop, post-processing to reply to the sender without an additional round
> trip?
> If we can solve this, then we can greatly increase the robustness of onion
> routing within the network. I think they may be worth spinning some cycles
> on,
> although I don't consider it blocking w.r.t the initial specification.
>

I don't think this is a high-priority issue for the routing spec, since we
have to keep the HTLC information around anyway. I was thinking along the
lines of sending a factor along with the header that tells each hop that the
next time it sees this packet, the current ephemeral key will have been
blinded by this factor. The hop could then compute its shared secret and
write routing info into its position in the header before rotating it to
the back. The factor would then be divided by the blinding factor applied
to the ephemeral key before forwarding to the next hop. On the return path
the ephemeral key is the one we precomputed, so we can decrypt the info we
stored in the header earlier.
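
To make the bookkeeping concrete, a toy sketch of just the factor arithmetic
(Python 3.8+ for the modular inverse, scalars mod the secp256k1 order; this
only mirrors the description above and is not a worked-out, leak-free
scheme):

    # secp256k1 group order
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def announce_return_factors(total_return_blinding, hop_blindings):
        # Walk the forward path: each hop is told the factor by which the
        # ephemeral key it currently sees will be blinded when the reply
        # comes back; that factor is then divided by the hop's own blinding
        # before the packet is forwarded.
        factors = []
        f = total_return_blinding
        for b in hop_blindings:
            factors.append(f)
            f = (f * pow(b, -1, N)) % N  # divide out this hop's blinding
        return factors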

So far all my attempts either did not work or were leaking too much
information about shared secrets or blinding factors. But then again I'm
stuck at Crypto 101 :-)

Cheers,
Christian

>
> -- Laolu
>
>