Rusty Russell [ARCHIVE] on Nostr:

📅 Original date posted:2016-08-05
📝 Original message:
Olaoluwa Osuntokun <laolu32 at gmail.com> writes:
>> I'm going back and forth about including the payloads in the header HMAC. I
>> think we have three options here:
>>
>> 1) Include the payload in the header HMAC computation
>
> I'd say personally, I prefer the first option. This results in "fail fast"
> behavior w.r.t packet forwarding, and additionally adds the smallest
> overhead.

Agreed. I worry about forwarding corrupted packets leading to the
ability for malicious nodes to probe routes.
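
To make the "fail fast" behaviour concrete, here's a minimal Python sketch
(key handling and field layout are purely illustrative, not the spec): the
per-hop MAC covers header and payload together, so a corrupted packet is
dropped at the first honest hop rather than forwarded.

    import hmac, hashlib

    def hop_mac(per_hop_key: bytes, header: bytes, payload: bytes) -> bytes:
        # Option 1: the MAC commits to the payload as well as the header.
        return hmac.new(per_hop_key, header + payload, hashlib.sha256).digest()

    def check_packet(per_hop_key: bytes, header: bytes, payload: bytes,
                     received_mac: bytes) -> None:
        if not hmac.compare_digest(hop_mac(per_hop_key, header, payload),
                                   received_mac):
            # Any corruption (header or payload) is caught here: drop the
            # packet instead of forwarding it onward.
            raise ValueError("bad MAC, dropping packet")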

>> we also lose the ability to do anonymous rendezvous meetings, where the final
>> recipient provides half of the route in the form of a precompiled header
>> (something that Hornet is using).
>
> It doesn't appear that we lose the ability to do rendezvous routing if we
> follow through with the first option. The final recipient can still provide a
> precompiled header which is the e2e payload sent from the source to the
> rendezvous node. As the source knows the exact nested mix-header when sending,
> it can still be protected under the mix-header wide MAC.
>
> Additionally, in order to hide the next-hop after the rendezvous node from the
> source node, the destination can wrap the nested header in a layer of
> encryption, decryptable only by the rendezvous node.

In practice, you can do this one level up: simply agree with a rendezvous
node that a given H-hash is to be forwarded to you. Then direct the payer
to the rendezvous node.

So I don't think it's worth any complexity in the routing protocol.
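
Concretely (a toy sketch, not a wire format; the names are made up): the
rendezvous node just needs a lookup table agreed out of band, and the payer
only needs a route to the rendezvous node.

    # H-hash -> node to forward matching HTLCs to, agreed out of band.
    rendezvous_table: dict[bytes, str] = {}

    def register_rendezvous(h_hash: bytes, recipient_node_id: str) -> None:
        rendezvous_table[h_hash] = recipient_node_id

    def next_hop_for(h_hash: bytes, default_next_hop: str) -> str:
        # If the hash was registered, override normal routing and forward the
        # HTLC to whoever asked for it; otherwise route as usual.
        return rendezvous_table.get(h_hash, default_next_hop)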

>> There is a tradeoff between small packets and keeping the size uniform. I think
>> we could bucketize sizes, e.g., have multiples of 32 bytes or 64 bytes for the
>> fields, to have packets with similar sized payloads have the same packet size.
>> That would allow us to also drop the e2e payload by setting a size of 0, and
>> still be able to forward it, should we ever find a use for it.
>
> Good point, we may have uses for non-uniform sizes as far as mix-headers in the
> future. So with this, then it appears there may be 3 types of mix-header formats:
> 1. Regular. Meaning no e2e payload, weighing in at 1234 bytes.
> 2. Extended. Meaning bearing the e2e payload with a size of 2468 bytes.
> 3. Rendezvous. Which nests another mix-header within the end-to-end payload,
> with a size which is double that of the regular.
>
> If we like this taxonomy, then we may want to reserve the first 2 version bytes
> within the draft. A version 0 packet would encompass processing the first two
> types, while a version 1 packet denotes that this is a rendezvous packet.

Keep it simple; let's just support regular for now. Nodes will have to
broadcast what extensions they support, and this can be used for
extended formats later. Including ones we *didn't* think of yet...
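
Something like this is all the sender needs (a sketch only; no format names
or feature bits are actually defined anywhere yet):

    # Formats this node understands; everything else is a future extension.
    SUPPORTED_FORMATS = {"regular"}

    def usable_formats(peer_advertised: set) -> set:
        # Only use packet formats that both ends have advertised.
        return SUPPORTED_FORMATS & set(peer_advertised)

    print(usable_formats({"regular", "extended", "rendezvous"}))  # {'regular'}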

>> We have to be careful when using timestamps in the packet as it makes
>> individual hops collatable.
>
> Excellent observation. Assuming we roll out a reasonably efficient solution for
> the collatable HTLC R-values across hops, naively selecting timestamps would
> present another correlation vector.
>
>> So my proposal would be to include a timestamp rounded up to the closest hour
>> and have a sliding window of accepted timestamps of +/- 1 hour, remembering the
>> secrets for that period and rejecting anything that is too far in the future or
>> too far in the past.
>
> This seems reasonable, also the size of the sliding window can easily be tuned
> in the future should we find it too large or small.
>
>> The more coarse the timestamp (the bigger the window), the less likely an
>> attacker is to guess which packets belong to the same route, but the more
>> storage is required on the node's side.
>
> Yep, there's a clear trade off between the window size of the accepted
> timestamps, and a node's storage overhead. We can tune this value to a ballpark
> estimate of the number of HTLCs/sec a large node with high frequency
> bi-directional throughput may forward at peak times.
>
> Let's say an HFB (High-Frequency Bitcoin) node on the network at peak forwards
> 5k HTLCs per second: (5000/sec * 32 bytes) * 3600 sec = 576MB, if nodes are
> required to wait 1 hour between log prunings, and 288MB if we use a 30-minute
> interval. Even with such a high throughput value, that seems reasonable.

I think we're over-designing. How about: daily key rotation (which we
want anyway), remember all onions for the current and previous key.

Remember: if we switch from C-hash to C-point, then it's simpler: we
only need to guard against retransmissions for *unresolved* htlcs. If
someone retransmits an HTLC for which we already know the C-point value,
they risk us redeeming it immediately and not forwarding at all.

(We need to remember all previous HTLCs anyway, so off the top of my
head checking this is not too hard...).
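
For reference, the arithmetic quoted above is easy to sanity-check
(assuming, as stated, 32 bytes remembered per forwarded HTLC):

    BYTES_PER_ENTRY = 32  # per remembered onion/HTLC, per the estimate above

    def storage_bytes(htlcs_per_sec: int, window_secs: int) -> int:
        return htlcs_per_sec * BYTES_PER_ENTRY * window_secs

    print(storage_bytes(5000, 3600))  # 576000000 bytes ~= 576MB (1-hour window)
    print(storage_bytes(5000, 1800))  # 288000000 bytes ~= 288MB (30-minute window)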

>> We could just use the announced key, i.e., the one that participated in the
>> channel setup, as a root key for HD key derivation. The derivation path could
>> then be based on floor(blockheight / 144) so that we rotate keys every day, and
>> don't need any additional communication to announce new public keys.
>
> Great suggestion! However, I think instead of directly using the key which
> participated in the channel setup, we'd use a new independent key as the root
> for this HD onion derivation. This new independent key would then be
> authenticated via a signature of a Schnorr multi-sign of the channel multi-sig
> key and the node's identity key (or alternatively two sigs). This safeguards
> against the compromise of one of the derived private keys leading to compromise
> of the master root HD priv key which would allow possibly stealing a node's
> coins. Additionally, a node can switch keys more easily, avoiding a channel
> tear down.
>
> However, the HD Onion Key approach can potentially cancel out the forward
> secrecy benefits. If an attacker gains access to the root HD pubkey, along with
> any of the child derived onion keys, then they can compute the root privkey.
> This allows the attacker to derive all the child priv keys, giving them the
> ability to decrypt all mix-headers encrypted since the HD Onion Key was
> published.

Broadcasting new node keys (up to?) once a day is probably fine.
Perhaps include a validity time range with each key, so you can spot if
you're missing one. Recommend allowing 12 hours overlap or something.

It'd be great to avoid it, but that seems complex enough to push to a
future spec.

To summarize the keys for each node:

1. channel key: bitcoin key used to sign commitment txs. One per channel.
2. id key: used to tie channels together ("I own these channels"). Signed
by channel keys (or could use OP_RETURN, but that's a bit spammy), and
signs channel keys.
3. comms key: rotated key for onion messages. Signed by id key.
4. (various ephemeral keys for inter-node comms).

id and comms keys don't have to be bitcoin keys; could be Schnorr. But
not much point AFAICT: the big win is making the channel keys
(ie. bitcoin) use Schnorr so they can all compactly sign the id key.
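
Purely as illustration of the comms-key rotation (nothing here is a wire
format; the 12-hour figure is the overlap suggested above), spotting a
missing announcement is just a window check:

    from dataclasses import dataclass

    @dataclass
    class CommsKey:
        pubkey: bytes
        valid_from: int   # unix time
        valid_until: int  # unix time; successive keys should overlap

    def missing_announcement(old: CommsKey, new: CommsKey,
                             min_overlap: int = 12 * 3600) -> bool:
        # If the new key starts less than min_overlap before the old one
        # expires, we probably missed a broadcast in between.
        return new.valid_from > old.valid_until - min_overlap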

>> I'm also trying to figure out how to enable intermediate nodes to reply to a
>> packet, e.g., if capacities are insufficient or the next node is unreachable,
>> by recycling the routing info.
>
> Yeah I've also attempted to tackle this issue a bit myself. The inclusion of
> onion routing certainly makes certain classes of failures harder to reconcile.

Yeah, this one's troubling. In particular, it'd be nice to prove that
a node is misbehaving:

(1) When a node gives a fail message, we want to be able to publish it
to prove (eg) it's lying about its fees. That means that the
failure msg needs to be tied to the request so both can be
published.

(2) If a node corrupts a fail message on return, we want to prove that.

Caveats:
1. We don't want to expose the source of the fail message (ie. leak
the route).
2. Ideally the proof can be published in a way which minimizes data
exposure for the originator.
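
A very rough sketch of (1): have the failure carry a digest of the request
it answers, so the two can be published as a pair (how the failing node
authenticates that pair, e.g. signing it with its node key, is left out
here, as is hiding the failure's source per the caveats above):

    import hashlib

    def request_digest(onion_packet: bytes) -> bytes:
        return hashlib.sha256(onion_packet).digest()

    def make_failure(onion_packet: bytes, reason: bytes) -> dict:
        # The failure commits to the request it is answering.
        return {"request_digest": request_digest(onion_packet), "reason": reason}

    def failure_matches_request(failure: dict, onion_packet: bytes) -> bool:
        # Anyone holding both can check they belong together before
        # publishing the pair as evidence.
        return failure["request_digest"] == request_digest(onion_packet)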

> If we can solve this, then we can greatly increase the robustness of onion
> routing within the network. I think they may be worth spinning some cycles on,
> although I don't consider it blocking w.r.t the initial specification.

I look forward to what you come up with!

Yay!
Rusty.