Why Nostr? What is Njump?
2023-06-09 12:44:51

Rusty Russell [ARCHIVE] on Nostr: ๐Ÿ“… Original date posted:2015-10-18 ๐Ÿ“ Original message: Mats Jerratsch <matsjj at ...

๐Ÿ“… Original date posted:2015-10-18
๐Ÿ“ Original message:
Mats Jerratsch <matsjj at gmail.com> writes:
> Other post was quite long already and they are actually dealing with
> two different issues.
>
>
> So currently I can think of 3 different broadcast messages, that are
> differently dynamic and needs different handling, so I attach them
> each with a new signature (which bloats a lot unfortunately).

Indeed, I think this breakdown is correct.

> (1) Pubkey-Channel-Relationships (see other post on ML)
> Very static, relayed every 10 days
> 264 Bytes
>
> (2) Node addresses/IP
> Depends on the nodes (dynamic/static IP), approx every 12h
> 133 Bytes (some estimate, as I want to support addresses too, not just IPs)
>
> (3) Channel-Status (capacity, fee, ...)
> Highly depending on actual traffic and node usage - once an hour?
> 176 Bytes (estimated, depends on actual content)

These estimates seem about the right ballpark to me. But once per hour
may be extremely optimistic when channels approach exhaustion. That's
because (1) it's logical for fees to rise significantly at that point,
and (2) you want to know if capacity is sufficient for the amount you're
sending.

A random beacon model has the advantage of requiring only partial
topology knowledge, which makes these numbers sacle much better.
However, it introduces another factor.

> I think we can either realise it as some kind of gossip protocol (easy
> to implement, overhead of an efficient gossip protocol can be very
> low) or use some DHT approach (difficult to bootstrap, has to be
> designed to be highly resistant to failure/highly redundant).

Bram Cohen was supportive of the idea of using BitTorrent's DHT. I
think that's the most sensible approach if we are going to go that route
for (1 and 2).

For #3, we need our own inline protocol.

> A new node would want to retrieve the full dataset similar to the
> blockchain before actually opening a channel with a new node. So we
> need to design a way of retrieving the full dataset for fresh nodes,
> probably in some load-distributed way, although 330MB isn't too much
> to retrieve from 1-5 nodes. (and 100k nodes is a pretty optimistic
> view of the network too currently, although rusty usually starts even
> higher...)

Yeah, my design point has been 1M nodes. Ideally, on a cell phone :)

In the very short term, Bitcoin used IRC as the peer protocol. It has
the advantage of being really easy to debug, and trivial to implement,
so I'm going to aim at that while we research our more ambitious
proposals...

Cheers,
Rusty.
Author Public Key
npub1zw7cc8z78v6s3grujfvcv3ckpvg6kr0w7nz9yzvwyglyg0qu5sjsqhkhpx