Jeff Garzik [ARCHIVE] on Nostr:
📅 Original date posted: 2011-06-14
🗒️ Summary of this message: Large block-download messages are hitting the flood-control limit in Bitcoin's P2P code. Proposed fixes include raising the limit or lowering the batch size.
📝 Original message: On Tue, Jun 14, 2011 at 12:44 PM, Mike Hearn <mike at plan99.net> wrote:
> Block sizes have started to get quite large once again. Whilst testing
> chain download today I was disconnected due to going over the 10mb
> flood control limit. Infuriatingly, I can't reproduce this reliably.
> But at 500 blocks an average of 20kb per block will cause this. As we
> can see from the block explorer, the average is probably quite close
> to that.
>
> The flood control seems like a pretty serious scalability limitation.
> I can see a few solutions. One is to raise the limit again. Another is
> to raise the limit and simultaneously lower the batch size. 500 blocks
> in one message means very large messages no matter how big the flood
> control limit is. Going down to 100 or even 50 would hurt chain
> download speed quite a bit in high latency environments, but chain
> download is already a serious bottleneck.
The main goal was not flood control but preventing an internal buffer
memory explosion. We already have the block chain on disk, so in
theory, if we can -eliminate- the outgoing network buffer and simply
use a pointer into the block chain file, we can send as much data as
we want.
HTTP servers certainly don't buffer huge amounts in memory; they would
keel over if they did. HTTP servers have in fact been moving in the
opposite direction: pushing data with the sendfile(2) syscall and
similar zero-copy optimizations.
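The zero-copy approach described above can be sketched as follows. This is not bitcoind's actual code, just an illustration: a temp file stands in for an on-disk block file, a socketpair stands in for a peer connection, and `os.sendfile` streams the data kernel-to-kernel so no large outgoing buffer is ever held in userspace.

```python
# Sketch (illustrative, not from bitcoind): serve block data straight
# from the on-disk file with sendfile, holding no userspace buffer.
import os
import socket
import tempfile

def send_from_file(sock, file_fd, offset, count):
    """Push `count` bytes of `file_fd`, starting at `offset`, to `sock`
    without copying them through an application-level buffer."""
    sent = 0
    while sent < count:
        n = os.sendfile(sock.fileno(), file_fd, offset + sent, count - sent)
        if n == 0:          # EOF on the file
            break
        sent += n
    return sent

# Demo: a temp file plays the role of the block chain file, and one end
# of a socketpair plays the role of a remote peer.
data = b"B" * 4096
with tempfile.TemporaryFile() as f:
    f.write(data)
    f.flush()
    a, b = socket.socketpair()
    assert send_from_file(a, f.fileno(), 0, len(data)) == len(data)
    received = b.recv(len(data), socket.MSG_WAITALL)
    a.close()
    b.close()

assert received == data
```

The point of the design is that the kernel, not the application, owns the in-flight data: a "pointer into the block chain file" (file descriptor plus offset) replaces a multi-megabyte send queue.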
This is an unfortunate relic of how the bitcoin P2P code is written. If
the remote side has reduced their TCP window to zero, bitcoin will
still buffer the data, so that it may continue processing P2P traffic
from other nodes. That makes sense for tiny, 31-byte address
messages -- one must handle the case of a half-sent message when
write(2) refuses additional data -- but not for huge block chain
download messages.
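The partial-write situation described above can be sketched like this. All names are illustrative, not from the bitcoind sources: a non-blocking send is attempted, the kernel accepts only what fits (or nothing at all, as when the peer's TCP window is zero), and the remainder is kept in a per-peer buffer so the node can move on to servicing other peers.

```python
# Sketch of the per-peer send pattern (illustrative names): write what
# the kernel will take now, buffer the rest, never block on one peer.
import socket

def try_send(sock, pending):
    """Send as much of `pending` as the kernel accepts right now;
    return the bytes still owed to this peer."""
    if not pending:
        return b""
    try:
        n = sock.send(pending)
    except BlockingIOError:   # send buffer completely full: owe it all
        return pending
    return pending[n:]

# Demo: shrink the send buffer and never read on the far side, so the
# kernel refuses most of a large write -- the "stalled peer" case.
a, b = socket.socketpair()
a.setblocking(False)
a.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)

message = b"x" * (4 * 1024 * 1024)      # far larger than the buffer
pending = try_send(a, message)

# Only part of the message was accepted; the rest must be buffered.
assert 0 < len(pending) < len(message)
a.close()
b.close()
```

For a 31-byte address message the leftover `pending` is tiny and cheap to hold; for a 500-block download message it is the multi-megabyte buffer explosion described above.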
The P2P code just wasn't written for huge amounts of streaming data,
and needs some serious thinking... I agree 100% that it is an issue
we will start bumping into, if we haven't already.
--
Jeff Garzik
exMULTI, Inc.
jgarzik at exmulti.com
Published at 2023-06-07 01:20:08