📅 Original date posted: 2014-04-08
📝 Original message:
Isn't that just conceding that p2p protocol A is better than p2p protocol B?
Can't Bitcoin Core's block fetching be improved to get performance similar to a torrent + import?
Currently it's hard to go wide on data fetching because headers-first is still pretty 'beefy'. The headers can be compressed, which would get you about 50% savings.
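For example (just a sketch of one possible encoding, nothing concrete): since each header's prev_hash is simply the hash of the header before it, a contiguous run of headers could be shipped without that 32-byte field and reconstructed on the receiving side:

    import hashlib

    def dsha256(b):
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def compress_header_run(headers):
        # headers: consecutive 80-byte serialized headers forming a chain.
        # Keep the first prev_hash as an anchor, then send only the 48
        # non-redundant bytes (version, merkle_root, time, bits, nonce).
        out = [headers[0][4:36]]
        for h in headers:
            out.append(h[0:4] + h[36:80])
        return b"".join(out)

    def decompress_header_run(blob):
        prev, headers = blob[0:32], []
        for i in range(32, len(blob), 48):
            chunk = blob[i:i + 48]
            full = chunk[0:4] + prev + chunk[4:48]
            headers.append(full)
            prev = dsha256(full)  # next header's prev_hash is this header's hash
        return headers

Dropping prev_hash alone takes a header from 80 bytes to 48; delta-coding timestamps and sending bits once per retarget period might squeeze it further.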
Also, maybe add a layer that groups block headers under a single hash (say, 2016 headers per group), so those (possibly compressed) header 'blocks' can be fetched from multiple sources in parallel. Block fetches could then be fanned out even further, favoring fast nodes.
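Something along these lines, say (group size and naming purely illustrative):

    import hashlib

    def dsha256(b):
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def group_hashes(headers, group_size=2016):
        # Hash each batch of consecutive serialized headers so peers can
        # exchange a short list of group hashes up front, then download the
        # (possibly compressed) groups from different peers in parallel and
        # verify each downloaded group against its hash.
        return [dsha256(b"".join(headers[i:i + group_size]))
                for i in range(0, len(headers), group_size)]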
Just thinking out loud.
jp
> On Apr 7, 2014, at 8:44 PM, Jeff Garzik <jgarzik at bitpay.com> wrote:
>
> Being Mr. Torrent, I've held open the "80% serious" suggestion to
> simply refuse to serve blocks older than X (3 months?).
>
> That forces download by other means (presumably torrent).
>
> I do not feel it is productive for any nodes on the network to waste
> time/bandwidth/etc. serving static, ancient data. There remain, of
> course, issues of older nodes and "getting the word out" that prevent
> this switch from being flipped on tomorrow.
>
>
>
>> On Mon, Apr 7, 2014 at 2:49 PM, Gregory Maxwell <gmaxwell at gmail.com> wrote:
>>> On Mon, Apr 7, 2014 at 11:35 AM, Tamas Blummer <tamas at bitsofproof.com> wrote:
>>> BTW, did we already agree on the service bits for an archive node?
>>
>> I'm still very concerned that a binary archive bit will cause extreme
>> load hot-spotting, and the kind of binary "use lots of resources: YES or
>> NO" choice that I think we're already suffering from to some extent, but
>> at that point enshrined in the protocol.
>>
>> It would be much better to extend the addr messages so that nodes can
>> indicate a range or two of blocks that they're serving, so that all
>> nodes can contribute fractionally according to their means. E.g. you
>> could offer up 8 GB of distributed storage and contribute to the
>> availability of the blockchain without having to swallow the whole
>> 20, 30, 40 ... gigabyte pill.
>>
>> We already need that kind of distributed storage for the most recent
>> blocks to prevent extreme bandwidth load on archive nodes; extending it
>> to arbitrary ranges is only more complicated in that there is currently
>> no room to signal it.
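(To illustrate the idea; the encoding and field names below are made up for the sake of the example, not an actual proposal. An addr-style entry could carry one or two height ranges the node is willing to serve:)

    import struct
    from collections import namedtuple

    BlockRange = namedtuple("BlockRange", "start_height end_height")

    def serialize_ranges(ranges):
        # count byte followed by (start_height, end_height) pairs, little-endian
        out = struct.pack("<B", len(ranges))
        for r in ranges:
            out += struct.pack("<II", r.start_height, r.end_height)
        return out

    def deserialize_ranges(data):
        (count,) = struct.unpack_from("<B", data, 0)
        ranges, offset = [], 1
        for _ in range(count):
            start, end = struct.unpack_from("<II", data, offset)
            ranges.append(BlockRange(start, end))
            offset += 8
        return ranges

    # e.g. a node keeping a recent window plus one older slice it has room for:
    # serialize_ranges([BlockRange(290000, 296000), BlockRange(100000, 120000)])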
>>
>
>
>
> --
> Jeff Garzik
> Bitcoin core developer and open source evangelist
> BitPay, Inc. https://bitpay.com/
>