📅 Original date posted: 2015-05-25
📝 Original message:
On Mon, May 25, 2015 at 1:36 PM, Mike Hearn <mike at plan99.net> wrote:
> This meme about datacenter-sized nodes has to die. The Bitcoin wiki is down
> right now, but I showed years ago that you could keep up with VISA on a
> single well specced server with today's technology. Only people living in a
> dreamworld think that Bitcoin might actually have to match that level of
> transaction demand with today's hardware. As noted previously, "too many
> users" is simply not a problem Bitcoin has .... and may never have!
... And will certainly NEVER have if we can't solve the capacity problem
SOON.
In a former life, I was a capacity planner for Bank of America's mid-range
server group. We had one hard and fast rule. When you are typically
exceeding 75% of capacity on a given metric, it's time to expand capacity.
Period. You don't do silly things like adjusting the business model to
disincentivize use. Unless there's some flaw in the system and it's leaking
resources, if usage has increased to the point where you are at or near the
limits of capacity, you expand capacity. It's as simple as that, and I've
found that the same rule fits quite well in a number of other systems.
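To make that rule concrete, here's a minimal sketch in Python. Reading
"typically exceeding" as "more often than not" is my own assumption, and the
threshold is just the 75% figure from above:

    # A sketch of the 75% rule: a metric "typically" exceeds the threshold
    # when more than half of its recent samples are above it. (That
    # interpretation of "typically" is an assumption for illustration.)
    def needs_expansion(samples, capacity, threshold=0.75):
        over = sum(1 for s in samples if s / capacity > threshold)
        return over > len(samples) / 2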
In Bitcoin, we're not leaking resources. There's no flaw. The system is
performing as intended. Usage is increasing because it works so well, and
there is huge potential for future growth as we identify more uses and
attract more users. There might be a few technical things we can do to
reduce consumption, but the metric we're concerned with right now is how
many transactions we can fit in a block. We've broken through the 75%
marker and are regularly bumping up against the 100% limit.
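Plugging block sizes into that same check, a quick illustration (the
1,000,000-byte figure is the consensus block size limit; the sample sizes are
made up for the example, not measured chain data):

    # Apply the 75% rule from the sketch above to block space.
    MAX_BLOCK_SIZE = 1_000_000  # bytes, the consensus limit

    recent_block_sizes = [998_000, 750_000, 990_000, 640_000, 995_000]  # illustrative
    if needs_expansion(recent_block_sizes, MAX_BLOCK_SIZE):
        print("past the 75% marker -- time to expand capacity")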
It is time to stop debating this and take action to expand capacity. The
only questions that should remain are how much capacity to add and how soon
we can add it. Given that most existing computer systems and networks can
easily handle 20MB blocks every 10 minutes, and that doing so would increase
capacity 20-fold, I can't think of a single reason why we can't go
to 20MB as soon as humanly possible. And in a few years, when the average
block size is over 15MB, we bump it up again to as high as we can go then
without pushing typical computers or networks beyond their capacity. We can
worry about ways to slow down growth without affecting the usefulness of
Bitcoin as we get closer to the hard technical limits on our capacity.
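For the arithmetic behind those numbers, here's a rough sketch; the 250-byte
average transaction size is a common rule of thumb, not a measured figure:

    # Back-of-envelope throughput for 20MB blocks.
    block_bytes = 20 * 1_000_000
    avg_tx_bytes = 250            # rule-of-thumb average, an assumption
    block_interval_s = 600        # one block every ten minutes, on average

    tx_per_block = block_bytes // avg_tx_bytes        # 80,000 transactions
    tx_per_second = tx_per_block / block_interval_s   # ~133 tps, 20x today's ~6.7

    # And "bump it up again when blocks average 15MB" is just the same
    # 75% rule re-applied at the new limit:
    assert 15 / 20 == 0.75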
And you know what else? If miners need higher fees to accommodate the costs
of bigger blocks, they can configure their nodes to only mine transactions
with higher fees. Let the miners decide how to charge enough to pay for
their costs. We don't need to cripple the network just for them.
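As a sketch of what that miner-side knob could look like, here's an
illustrative fee floor in Python. This is not Bitcoin Core's actual
transaction selection code; the names and the fee-rate value are hypothetical:

    from collections import namedtuple

    Tx = namedtuple("Tx", ["txid", "fee", "size"])  # fee in BTC, size in bytes

    MIN_FEE_RATE = 0.0001  # BTC per kB; a hypothetical per-miner policy setting

    def select_transactions(mempool, max_block_bytes=20_000_000):
        # Take the best-paying transactions first; skip anything below the floor.
        chosen, used = [], 0
        for tx in sorted(mempool, key=lambda t: t.fee / t.size, reverse=True):
            if tx.fee / (tx.size / 1000) < MIN_FEE_RATE:
                break  # sorted by fee rate, so everything after pays even less
            if used + tx.size <= max_block_bytes:
                chosen.append(tx)
                used += tx.size
        return chosen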
--
*James G. Phillips IV*
<https://plus.google.com/u/0/113107039501292625391/posts>
*"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
-- David Ogilvy*
*This message was created with 100% recycled electrons. Please think twice
before printing.*