CJP [ARCHIVE] on Nostr:
📅 Original date posted: 2015-07-07
📝 Original message:
The routing design has important implications for privacy, but also for
the enforcement of regulations on the Lightning network.
Imagine, for instance, that a couple of large nodes start requiring
their neighbors to provide identity information (KYC-style regulation),
and then require them to recursively provide identity information for
all their neighbors' neighbors, and so on. If intermediate nodes can see
which other nodes participate in a transaction, this
would cause the Lightning network to split into a regulated and a
non-regulated part: nobody would dare to interface between the two,
since that would prove to the regulated side that they illegally provide
connectivity on the non-regulated side.
So, I don't want nodes to explicitly know the shape of the entire
network. Based on how Wikipedia explains source routing to me, I think
it is incompatible with what I want. Please also note that IP almost
never uses source routing.
Also, as a counter-measure against censorship (or persecution) based on
destination address, I think the function of "destination address of a
route" should be de-coupled from the function of "payer endpoint" or
"payee endpoint" of a route. In many cases, the "payer endpoint" or
especially the "payee endpoint" will also fulfill the role of
"destination address", but they may also choose a neutral "meeting
point" node in the middle, and both route towards its address. This will
allow nodes to secretly interface between regulated and non-regulated
parts of the network, for transactions going in both directions.
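
To make the decoupling concrete, here is a minimal sketch in Python. The
plan_payment and toy_find_route helpers and the toy topology are purely
illustrative assumptions (this is not the Amiko Pay API or any specified
protocol): the payer builds a half-route towards the meeting point, the
payee builds a half-route from the meeting point back to itself, and the
meeting point merely splices the two halves without learning who pays whom.

from collections import deque

def plan_payment(payer, payee, meeting_point, find_route):
    # The payer routes towards the meeting point's destination address...
    payer_half = find_route(payer, meeting_point)
    # ...and the payee routes from the meeting point back to itself. In
    # practice each side would compute its own half on its own node.
    payee_half = find_route(meeting_point, payee)
    # The meeting point only learns that it must forward from the last hop
    # of payer_half to the first hop of payee_half, not who pays whom.
    return payer_half, payee_half

def toy_find_route(src, dst, graph):
    # Naive breadth-first search, standing in for whatever heuristic
    # route-finding a real node would use.
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy topology: A pays E via the neutral meeting point M.
graph = {"A": ["B"], "B": ["M"], "M": ["D"], "D": ["E"], "E": []}
finder = lambda s, d: toy_find_route(s, d, graph)
print(plan_payment("A", "E", "M", finder))   # (['A', 'B', 'M'], ['M', 'D', 'E'])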
The time-out value is a bit of a problem in this concept, since it is an
indication of the number of hops from the payee endpoint. However, if
nodes are free to choose the time-out increment for themselves, they
could choose to make that increment smaller, to be able to route through
a node that provides an interface to the regulated part.
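
As a rough illustration of the leak (a toy model, not any specified
protocol): if every hop adds the same fixed increment, the remaining
time-out divided by that increment directly reveals the distance to the
payee, whereas freely chosen, smaller increments blur that estimate.

def remaining_timeouts(increments, payee_timeout=0):
    # Time-out each hop sees, walking backwards from the payee.
    # increments[i] is the extra time hop i insists on having to settle
    # its own HTLC before the one it forwarded expires (values illustrative).
    timeouts = []
    t = payee_timeout
    for inc in reversed(increments):
        t += inc
        timeouts.append(t)
    return list(reversed(timeouts))

# With a uniform increment, time-out / increment is simply the hop distance:
print(remaining_timeouts([144, 144, 144, 144]))   # [576, 432, 288, 144]

# If the node bridging into the regulated part picks a smaller increment,
# its position (and the extra hops behind it) is much less obvious:
print(remaining_timeouts([144, 36, 36, 144]))     # [360, 216, 180, 144]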
An additional advantage of separating destination addresses from the
payment endpoints is that routing tables can be much smaller. Most
consumers, and a lot of small shops, can choose not to have their own
destination address, but instead route through the destination address
of their Lightning provider (a bit like a NAT router's IP address).
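
The NAT analogy can be sketched as follows; the class names and the idea of
an opaque per-client tag are illustrative assumptions, not part of any
proposal in this thread. Only the provider owns a routable destination
address, and the tag is meaningless to the rest of the network.

class LightningProvider:
    def __init__(self, destination_address):
        self.destination_address = destination_address   # the only public address
        self.clients = {}                                 # opaque tag -> channel

    def register(self, tag, client_channel):
        # Comparable to a NAT port mapping: the tag only means something here.
        self.clients[tag] = client_channel

    def deliver(self, tag, htlc):
        # The network routed the payment to us; the last hop is purely local.
        self.clients[tag].forward(htlc)

class ToyChannel:
    def __init__(self, name):
        self.name = name
    def forward(self, htlc):
        print(f"{self.name} receives {htlc}")

provider = LightningProvider(destination_address="hub_pubkey_123")
provider.register("shop-42", ToyChannel("corner shop"))
# A payer only needs a routing-table entry for "hub_pubkey_123"; the tag
# "shop-42" travels inside the payment request, not in the routing tables.
provider.deliver("shop-42", htlc={"amount": 1000, "hash": "R-hash"})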
In my view, routing tables are a sort of heuristic that tells you how
likely a payment (of a certain amount!) to/from a certain destination
address is to succeed on one of your interfaces. It is an optimization
over the dumb algorithm of simply trying out all your interfaces one by
one(*). It is TBD how to determine these heuristics, and how to exchange
them between nodes.
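
A minimal sketch of what such a heuristic table could look like, with an
illustrative success-ratio update rule; since the text deliberately leaves
the actual heuristic and its exchange between nodes open, everything below
is an assumption for illustration only.

from collections import defaultdict

class HeuristicRoutingTable:
    def __init__(self, interfaces):
        self.interfaces = list(interfaces)
        # (destination, amount bucket) -> interface -> [successes, attempts]
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    @staticmethod
    def _bucket(amount):
        # The amount matters (a channel that passes 10 satoshi may fail at
        # 10^6), so keep separate statistics per order of magnitude.
        return len(str(int(amount)))

    def ranked_interfaces(self, destination, amount):
        key = (destination, self._bucket(amount))
        def score(iface):
            ok, tried = self.stats[key][iface]
            return ok / tried if tried else 0.5    # untried: neutral prior
        # Best guess first; untried interfaces are still included, so this
        # degrades gracefully into "try all interfaces one by one".
        return sorted(self.interfaces, key=score, reverse=True)

    def record(self, destination, amount, iface, success):
        key = (destination, self._bucket(amount))
        ok, tried = self.stats[key][iface]
        self.stats[key][iface] = [ok + (1 if success else 0), tried + 1]

table = HeuristicRoutingTable(["alice", "bob", "carol"])
table.record("dest_X", 5000, "bob", success=True)
table.record("dest_X", 5000, "alice", success=False)
print(table.ranked_interfaces("dest_X", 4000))   # ['bob', 'carol', 'alice']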
This is probably quite different from how routing on the Internet works,
and I'm not sure how it will scale and how it can be defended against
DoS attacks, but it sort of follows automatically from the desire to
keep the network free.
CJP
(*) Which is currently the only routing method implemented in Amiko Pay.
Rusty Russell wrote on Fri 03-07-2015 at 11:40 [+0930]:
> Hi all,
>
> One of the fun open questions for LN is how routing will work.
> I'd like to kick off that discussion now, to see if we can create a
> strawman which doesn't immediately collapse.
>
> Assumptions:
> 1. I'm assuming each node is known by its pubkey.
>
> 2. Source routing seems the easiest route; best for privacy, best for
> any tradeoffs between reliability/price etc.
>
> 3. We should do onion routing: each node knows the source and next step.
> This is not perfect: R values trivially identify connections (if
> you own two nodes on the path, you can connect them), and the timeout
> implies a minimum TTL.
>
> 4. A recipient gives the payer 100 routes from some nodes to them. The
> payer hopefully can route to one of the mentioned nodes (probably the
> cheapest). This also means that if the payer has to do some route
> query it doesn't trivially reveal who the recipient is.
>
> Route broadcast is more fun. It's not like BGP where you have useful
> subnets; even if you did, you need the pubkey of every node.
>
> My original idea was a subset of hubs (a few thousand?) to which you
> would connect: that makes full discovery routing fairly easy within that
> network, and you report your address as "client XXXXX via hub <pubkey>".
> Your hub(s) keep the routing tables, you just query them mostly.
>
> A more ambitious idea would be to select N "beacons" based on the block
> hash which every node figures out their best routes to/from. That's
> actually really efficient for broadcasting: you can guess whether a node
> is a likely beacon based on previous results, and only broadcast likely
> winners. It also means each node only has to remember N * 144 routes
> each way if we want beacons to expire after a day.
>
> But it could also result in the beacons (and their neighbors) getting
> slammed. Maybe beacons only become usable after 10 blocks, so they have
> a chance to boost their connections in preparation? I'd have to
> simulate it...
>
> Joseph also pointed out that the anchor transactions in the blockchain
> imply the network topology. That's kind of cool, but I'll let him
> explore that.
>
> Cheers,
> Rusty
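
[Editor's sketch of the quoted "beacon" idea, purely as one possible
reading: the N beacons for a block could be the nodes whose pubkey, hashed
together with the block hash, yields the smallest value. The selection rule
is an assumption; the message above does not specify it. The point is that
every node can compute the winners locally, so only routes towards likely
beacons need to be broadcast.]

import hashlib

def beacon_score(block_hash: bytes, pubkey: bytes) -> int:
    return int.from_bytes(hashlib.sha256(block_hash + pubkey).digest(), "big")

def select_beacons(block_hash: bytes, pubkeys, n: int):
    return sorted(pubkeys, key=lambda pk: beacon_score(block_hash, pk))[:n]

nodes = [bytes([i]) * 33 for i in range(1, 8)]    # toy 33-byte "pubkeys"
print(select_beacons(b"\x00" * 32, nodes, n=3))
# With beacons rotating per block and expiring after a day, a node keeps at
# most N * 144 route sets each way, as the quoted message notes.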