📅 Original date posted: 2015-06-01
📝 Original message:
This exact question came up on the Bitcoin Stack Exchange once. I gave an
answer here:
http://bitcoin.stackexchange.com/questions/37292/whats-the-purpose-of-a-maximum-block-size/37303#37303
On Mon, Jun 1, 2015 at 2:32 PM, Jim Phillips <jim at ergophobia.org> wrote:
> Ok, I understand at least some of the reasons that blocks have to be kept
> to a certain size. I get that blocks which are too big will be hard for
> relays to propagate. Miners will have more trouble uploading large
> blocks to the network once they've found a hash. We need block size
> constraints to create a fee economy for the miners.
>
> But these all sound to me like issues that affect some, but not others. So
> it seems to me like it ought to be a configurable setting. We've already
> witnessed with last week's stress test that most miners aren't even
> creating 1MB blocks but are still using the software default of 750k. If
> there are configurable limits, why does there have to be a hard limit?
> Can't miners just use the configurable limit to decide what size blocks
> they can afford to create and are thus willing to? They could just as
> easily use that to create a fee economy. If the miners with the most
> hashpower are not willing to mine blocks larger than 1 or 2 megs, they can
> slow down confirmation of the transactions that don't make the cut. It may
> take several blocks before a miner willing to include a particular
> transaction finds a block. This would actually force miners to compete with
> each other and find a block size naturally instead of having it forced on
> them by the protocol. Relays would be able to participate in that process
> by restricting the miners' ability to propagate large blocks. You know,
> like what happens in a FREE MARKET economy, without burdensome regulation
> which can be manipulated through politics? Isn't that what's really
> happening right now? Different political factions with different agendas
> are fighting over how best to regulate the Bitcoin protocol.
>
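The per-miner knob being described here already exists, for what it's worth:
Bitcoin Core's -blockmaxsize option (if I recall correctly, 750,000 bytes by
default, with the 1 MB consensus rule as the ceiling). A rough Python sketch
of the idea -- hypothetical names, not Bitcoin Core's actual selection code:

    from collections import namedtuple

    Tx = namedtuple("Tx", "fee size")        # fee in satoshis, size in bytes

    MAX_BLOCK_SIZE = 1000000                 # consensus hard limit, in bytes
    MY_SOFT_CAP = 750000                     # per-miner policy, e.g. -blockmaxsize

    def assemble_block(mempool, soft_cap=MY_SOFT_CAP):
        """Greedily take the highest fee-rate transactions until the cap is hit."""
        cap = min(soft_cap, MAX_BLOCK_SIZE)  # policy can never exceed consensus
        chosen, used = [], 0
        for tx in sorted(mempool, key=lambda t: t.fee / t.size, reverse=True):
            if used + tx.size <= cap:
                chosen.append(tx)
                used += tx.size
        return chosen

    # Example: with a 400 kB cap, only the better-paying of two 400 kB batches
    # fits in this block; the rest waits, with an incentive to bid up fees.
    mempool = [Tx(fee=10000, size=400000), Tx(fee=2000, size=400000)]
    print(len(assemble_block(mempool, soft_cap=400000)))   # -> 1

A miner wanting a tighter fee market lowers its cap; one wanting to clear the
backlog raises it, up to whatever the consensus rule allows.
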
> I know the limit was originally put in place to prevent spamming. But that
> was when we were mining with CPUs and just beginning to see the occasional
> GPU which could take control over the network and maliciously spam large
> blocks. But with ASIC mining now catching up to Moore's Law, that's not
> really an issue anymore. No single malicious entity can really just take
> over the network now without spending more money than it's worth -- and
> that's just going to get truer with time as hashpower continues to grow.
> And it's not like the hard limit really does anything anymore to prevent
> spamming. If a spammer wants to create thousands or millions of
> transactions, a hard limit on the block size isn't going to stop him.
> He'll just fill up the mempool or UTXO database instead of someone's block
> database. And block storage media is generally the cheapest storage. I
> mean blocks could be written to tape and be just as valid as if they were
> stored in DRAM. Combine that with pruning, and block storage costs are
> almost a non-issue for anyone who isn't running an archival node.
>
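To put rough numbers on the storage point (a back-of-the-envelope sketch,
assuming every block is full and ~2015 spinning-disk prices of a few cents
per GB; the figures are illustrative, not measurements):

    BLOCKS_PER_DAY = 24 * 6             # one block per ~10 minutes
    DISK_COST_PER_GB = 0.03             # rough 2015 price for commodity disk, USD

    def archival_growth_gb_per_year(block_size_mb):
        """Yearly growth of the raw block files for an archival (non-pruned) node."""
        return block_size_mb * BLOCKS_PER_DAY * 365 / 1024.0

    for size_mb in (1, 8, 20):
        gb = archival_growth_gb_per_year(size_mb)
        print(f"{size_mb:>2} MB blocks: ~{gb:,.0f} GB/yr, ~${gb * DISK_COST_PER_GB:,.2f}/yr in disk")

    # 1 MB  blocks -> ~51 GB/yr    (~$1.54/yr)
    # 8 MB  blocks -> ~411 GB/yr   (~$12/yr)
    # 20 MB blocks -> ~1,027 GB/yr (~$31/yr)
    # A pruned node keeps only a recent window of raw blocks plus the UTXO set,
    # so its disk use stays roughly flat no matter how long the history grows.

So the raw block files stay cheap even well past 1 MB; as noted above, it's
the mempool and UTXO set that a spammer actually inflates.
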
> And can't relay nodes just configure a limit on the size of blocks they
> will relay? Sure, they'd still need to download a big block occasionally,
> but that's not really that big a deal, and they're under no obligation to
> propagate it. Even if it's a 2GB block, it'll get downloaded eventually.
> It's only if it gets to the point where the average home connection is too
> slow to keep up with the transaction & block flow that there's any real
> issue, and that would happen regardless of how big the blocks are. I
> personally would much prefer to see hardware limits act as the bottleneck
> than to introduce an artificial bottleneck into the protocol that has to
> be adjusted regularly. The software and protocol are TECHNICALLY capable
> of scaling to handle the world's entire transaction set. The real issue
> with scaling to that size is the limitations of hardware, which are
> governed by Moore's Law. Why do we need arbitrary soft limits? Why can't
> we allow Bitcoin to grow naturally within the ever-increasing limits of
> our hardware? Is it because nobody will ever need more than 640k of RAM?
>
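On the bandwidth question, the arithmetic is easy to pin down. A quick sketch
of the sustained rate a given block size implies (it ignores protocol
overhead and the fact that most transactions arrive ahead of the block):

    SECONDS_PER_BLOCK = 600             # one block per ~10 minutes on average

    def sustained_mbit_per_s(block_size_bytes, peers_to_serve=0):
        """Average rate to receive every block, plus re-uploading it to peers."""
        return block_size_bytes * 8 * (1 + peers_to_serve) / SECONDS_PER_BLOCK / 1e6

    for label, size in (("1 MB", 1e6), ("8 MB", 8e6), ("2 GB", 2e9)):
        print(f"{label}: ~{sustained_mbit_per_s(size):.2f} Mbit/s down, "
              f"~{sustained_mbit_per_s(size, peers_to_serve=8):.1f} Mbit/s if relaying to 8 peers")

    # 1 MB: ~0.01 Mbit/s down, ~0.1 Mbit/s relaying   -- negligible on broadband
    # 8 MB: ~0.11 Mbit/s down, ~1.0 Mbit/s relaying
    # 2 GB: ~26.7 Mbit/s down, ~240.0 Mbit/s relaying -- past most 2015 home links
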
> Am I missing something here? Is there some big reason I'm overlooking for
> why there has to be a hard-coded limit on the block size that affects the
> entire network and creates ongoing issues in the future?
>
> --
>
> *James G. Phillips IV*
> <https://plus.google.com/u/0/113107039501292625391/posts>
>
> *"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
> -- David Ogilvy*
>
> *This message was created with 100% recycled electrons. Please think
> twice before printing.*
>
>