ZmnSCPxj [ARCHIVE] on Nostr:

📅 Original date posted:2019-04-03
📝 Original message:
Good morning list,

> One way you could have both determinism and encourage a diverse distribution of network maps is to treat it as a spatial indexing problem, where the space we use is the lexicographical space of the node ids (or hashes of), borrowing some similarities from DHTs.
>
> If for example, we take a quadtree, you can take the 2 most-significant bits of the public key hash, which would put your node into one of 4 buckets. Nodes could advertise a feature bit indicating that they are committed to keeping the entire routemap of the bucket which matches their own public key hash, which would be all of the nodes in the same bucket - and all of the channels with one or both of their endpoints in the bucket.

I would be hesitant to divide the world in such a manner.
I understand that in typical computer science, splitting your objects up into smaller parts is a long-accepted method of doing things.

However, when it comes to finances and political power (the power to censor or disrupt), such splitting is highly undesirable.

Thus I am strongly allergic to things that would "hierarchical"ize or "bucket"ize or even just have a separation between "endpoint" and "core" network.

I would rather propose for acceptance into BOLT such proposals as would keep the network homogeneous.

Various nodes may have different resources (BTC, CPU, memory, bandwidth, latency).
Such inequalities are inevitable in this universe.

But these nodes should still, as much as we can, remain peers on the network.

(I am not responding to m.a.holden specifically, but am addressing the entire list)


Thus let me propose instead:

1. Nodes already have "addresses" - the public key that identifies the node.
2. We can hash this address, then treat the hash as a 256-bit signed number in two's complement notation.
3. From this, we can synthesize an artificial "distance" measure: dist(x, y) = abs(as_signed(hash(x)) - as_signed(hash(y))).
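
A minimal Python sketch of this distance measure, assuming SHA-256 as the hash (the above only says "hashes of", so the choice of hash and the helper names here are illustrative, not normative):

    import hashlib

    def as_signed(digest: bytes) -> int:
        # Interpret a 32-byte digest as a 256-bit two's-complement signed integer.
        return int.from_bytes(digest, "big", signed=True)

    def dist(x: bytes, y: bytes) -> int:
        # Artificial "distance" between two node ids (public keys).
        hx = hashlib.sha256(x).digest()
        hy = hashlib.sha256(y).digest()
        return abs(as_signed(hx) - as_signed(hy))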


Then, we can have the following global rule:

* Given three nodes X, Y, and Z, where dist(X, Z) < dist(Y, Z), X SHOULD be more likely to know a route from itself to Z than Y is to know a route from itself to Z.

This rule can be probabilistically fulfilled by having each node N know *at least* routes to some nodes Ls..., where for all L <- Ls, dist(N, L) < some constant.
The constant here can be selected by each node independently, depending on its memory and CPU capacity.
(the node knowing more routes than that is perfectly fine)
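
For illustration, a rough Python sketch of how a node might select the routes it commits to keeping under this rule, reusing the dist() helper above; known_nodes and radius are hypothetical names, with radius being the self-chosen constant:

    def must_retain(own_id: bytes, known_nodes, radius: int) -> set:
        # The set of nodes whose routes this node commits to keeping.
        # A node with more memory/CPU simply chooses a larger radius;
        # keeping routes to nodes beyond the radius is still allowed.
        return {n for n in known_nodes if dist(own_id, n) < radius}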


Thus, suppose some node N is selected as a trampoline.
It is unable to locate, in its own routemap, the next node D indicated in the trampoline onion.
However, it knows some nodes Ls in its local routemap.
It looks for any node L <- Ls such that dist(L, D) < dist(N, D).
In fact, it can just sort those nodes according to dist(L, D) and start with the lowest distance, and fail once it reaches a dist(L, D) that exceeds its own dist(N, D).

Then it can delegate the routing towards D to L, by rewrapping the onion to tell L to forward it to D.
If node L itself does not know D, it does the same algorithm.

The above algorithm converges since dist(N, D) for each N that is delegated to will progressively get smaller until we reach some N that knows the destination D.
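
A rough Python sketch of the candidate-selection step, again reusing dist() from above; local_routemap_nodes is a hypothetical iterable of node ids the trampoline already knows routes to:

    def delegation_candidates(own_id: bytes, dest_id: bytes, local_routemap_nodes):
        # Yield known nodes in order of increasing dist(L, D); stop (i.e. fail
        # the payment) as soon as a candidate is no closer to D than we are.
        own_dist = dist(own_id, dest_id)
        for l in sorted(local_routemap_nodes, key=lambda n: dist(n, dest_id)):
            if dist(l, dest_id) >= own_dist:
                break
            yield l

The trampoline would then rewrap the onion towards the first candidate it can actually reach; since each delegate is strictly closer to D, the process makes progress as described above.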


At the same time, nodes are equal and there are no a priori divisions/distinctions between nodes.
All nodes remain peers, there are no special "flare nodes", no special "knows the entire map" nodes, no special "knows my octant" nodes, no "endpoint" nodes, no hubs and spokes.
The network remains homogeneous and all are still peers; some might have smaller routemaps than others, but all are still equal in the network, as all things should be.


It does require that trampoline routing allow a trampoline to delegate searching for the next trampoline to another node.


--


Now, we want to consider how each node N can prune its routemap Ls such that it prioritizes keeping routes to those nodes with low dist(N, L).

I present here an algorithm inspired by mark-and-sweep garbage collection.
Improvements in GC algorithms might be possible to adapt to this algorithm.
For instance, it might be useful to be able to perform the marking operation concurrently with gossiping.

We define two thresholds, T1 and T2, on the number of channels + nodes in the routemap.
(this can be changed to the number of bytes each channel/node takes up in the routemap, plus whatever bytes are needed for the bookkeeping structures used for fast routing through the routemap, but the principle remains the same)

T1 and T2 must be reasonably far from each other, with T1 < T2.

Each channel and node includes a "mark" bit.

While gossiping, we allocate new channels and nodes and keep track of how many are allocated.
Once the number reaches T2, we trigger the mark-and-sweep algorithm.

1. Clear all mark bits and set a "number of marked objects" variable to 0.
2. Sort nodes from smallest to largest dist(N, L), where N is your own node and L is the node being sorted.
3. While number of marked objects < T1:
   1. Get the next node L in the sorted sequence of nodes.
   2. Find a route from L to N.
   3. If route found:
      1. Mark all channels and nodes on that route.
         Increment the number of marked objects if changing from unmarked to marked.
   4. If route not found:
      1. Mark all channels and nodes within some channel distance of L on the routemap.
         Increment the number of marked objects if changing from unmarked to marked.
4. Prune all unmarked nodes and channels.
   Set the number of allocated nodes and channels correctly from the marked nodes and channels.
(the node might keep at least the node addresses of unmarked nodes around so that it has a larger pool of nodes to put in a trampoline onion, but that can be kept in a flat array without pointers to its channels)
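
A rough Python sketch of the whole pruning pass, reusing dist() from above; here the routemap is represented as a plain adjacency dict (node id -> {peer id: channel id}), the route search is an unweighted breadth-first search, and the fallback case keeps a 1-hop neighbourhood, all of which are simplifying assumptions rather than anything mandated above:

    from collections import deque

    def find_route_objects(graph, src, dst):
        # Breadth-first search over graph (node id -> {peer id: channel id}).
        # Returns (nodes on route, channels on route), or None if unreachable.
        prev = {src: None}
        queue = deque([src])
        while queue and dst not in prev:
            cur = queue.popleft()
            for peer, chan in graph.get(cur, {}).items():
                if peer not in prev:
                    prev[peer] = (cur, chan)
                    queue.append(peer)
        if dst not in prev:
            return None
        nodes, chans, cur = {dst}, set(), dst
        while prev[cur] is not None:
            cur, chan = prev[cur]
            nodes.add(cur)
            chans.add(chan)
        return nodes, chans

    def mark_and_sweep(own_id, graph, T1):
        # Step 1: start with everything unmarked.
        marked_nodes, marked_chans = set(), set()
        # Step 2: consider known nodes in order of increasing dist from us.
        for l in sorted(graph, key=lambda n: dist(own_id, n)):
            # Step 3: stop marking once (roughly) T1 objects are marked.
            if len(marked_nodes) + len(marked_chans) >= T1:
                break
            found = find_route_objects(graph, l, own_id)
            if found is not None:
                nodes, chans = found
            else:
                # Fallback: keep l plus its immediate neighbourhood
                # ("within some channel distance of L", here 1 hop).
                nodes = {l} | set(graph.get(l, {}))
                chans = set(graph.get(l, {}).values())
            marked_nodes |= nodes
            marked_chans |= chans
        # Step 4: sweep, rebuilding the routemap from marked objects only.
        return {n: {peer: chan for peer, chan in graph.get(n, {}).items()
                    if chan in marked_chans}
                for n in marked_nodes}

A node would then run this whenever its count of allocated nodes + channels reaches T2 during gossip, and reset the count from whatever survives the sweep.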

The above algorithm results in a limit on the routemap size, but allows dynamic changes in the routemap to occur.



Regards,
ZmnSCPxj