Michael Grønager [ARCHIVE] on Nostr:
📅 Original date posted:2011-12-29
🗒️ Summary of this message: The trickle algorithm in CNode::SendMessages sends only 1/4 of transaction-invs, but the other 3/4 are carried around from round to round. A suggestion is made to remove the redundant vInvWait stuff or change the algorithm.
📝 Original message:In CNode::SendMessages there is a trickle algorithm. Judging from the comments it is supposed to:
* at each update round a new (random) trickle node is chosen; with 120 nodes and an average round time of 100ms (the sleep), we will have cycled through roughly all nodes every 12-15 seconds.
* when a node is the trickle node it will get to send all its pending addresses to its corresponding peer.
* when a node is not the trickle node (all the other nodes), we send transaction-invs, but only 1/4 of them - the rest are pushed back to wait for the next round and would eventually get sent.
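The per-round behaviour described above can be sketched roughly like this (hypothetical names and toy 64-bit hashes - the real code is CNode::SendMessages, this is only a model of the bullets):

```cpp
#include <cstdint>
#include <vector>

// Toy model of one SendMessages round for a single peer, following the
// three bullets above. Names and types are simplified stand-ins, not the
// real CNode/CInv code.
struct Peer {
    bool fTrickle = false;                  // is this the round's trickle node?
    std::vector<uint64_t> vAddrToSend;      // pending addresses (toy values)
    std::vector<uint64_t> vInventoryToSend; // pending tx-inv hashes (toy values)
};

// Returns the inv hashes actually sent this round.
static std::vector<uint64_t> send_round(Peer& pto, uint64_t hashSalt) {
    std::vector<uint64_t> sent;
    if (pto.fTrickle) {
        // Trickle node: flush all pending addresses to this peer.
        pto.vAddrToSend.clear();
        return sent;
    }
    // Any other node: relay only the invs where ((hash ^ salt) & 3) == 0;
    // the remaining 3/4 go back into vInventoryToSend via vInvWait.
    std::vector<uint64_t> vInvWait;
    for (uint64_t h : pto.vInventoryToSend) {
        if (((h ^ hashSalt) & 3) == 0)
            sent.push_back(h);
        else
            vInvWait.push_back(h);
    }
    pto.vInventoryToSend = vInvWait;
    return sent;
}
```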
However, the way the 1/4 of the invs is chosen is by:
	((inv.getHash() ^ hashSalt) & 3) == 0
As hashSalt is a constant (static, generated at startup) and the hash of an inv is likewise constant for that inv, the other 3/4 will never get sent, and hence it does not make sense to carry them around from round to round:
if (fTrickleWait) vInvWait.push_back(inv);
and:
pto->vInventoryToSend = vInvWait;
The hashSalt will be different for each node in the peer-to-peer network, and hence, as long as we have many more than 4 nodes, all txes will still be sent around.
Ironically, this (wrong?) implementation divides the inv forwarding hash space into 4, along the same lines as we discussed last week for DHTs...
I suggest either keeping the algorithm as is but removing the redundant vInvWait stuff, or changing the algorithm to e.g. push the txes into a multimap (invHash ^ hashSalt, invHash) and choose the first 25% in each round.
The last alternative is that I have misunderstood the code... - if so please correct me ;)
Happy New Year!
Michael
Published at
2023-06-07 02:53:53