Gregory Maxwell [ARCHIVE] on Nostr:
📅 Original date posted: 2012-12-05
📝 Original message: On Tue, Dec 4, 2012 at 9:08 PM, Alan Reiner <etotheipi at gmail.com> wrote:
> Our divergence is on two points (personal opinions):
>
> (1) I don't think there is any real risk to the centralization of the
> network by promoting a SPV (purely-consuming) node to brand-new users.
> In my opinion (but I'm not as familiar with the networking as you), as
> long as all full nodes are full-validation, the bottleneck will be
> computation and bandwidth, long before a constant 10k nodes would be
> insufficient to support propagating data through the network.
Not so— a moderately fast multicore desktop machine can keep up with
the maximum possible validation rate of the Bitcoin network, and the
bandwidth has a long-term maximum rate of about 14 kbit/sec— though
you'll want at least ten times that for convergence stability and the
ability to feed multiple peers.
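For concreteness, that long-term figure follows directly from the protocol limits: at most one maximum-size block (1 MB at the time of writing) every ten minutes on average. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the ~14 kbit/sec long-term relay rate.
MAX_BLOCK_BYTES = 1_000_000   # consensus maximum block size at the time
TARGET_SPACING_S = 600        # one block per 10 minutes on average

bits_per_sec = MAX_BLOCK_BYTES * 8 / TARGET_SPACING_S
print(f"long-term maximum: {bits_per_sec / 1000:.1f} kbit/sec")
# ...and ten times that for convergence stability and feeding peers:
print(f"recommended floor: {bits_per_sec * 10 / 1000:.0f} kbit/sec")
# → long-term maximum: 13.3 kbit/sec
# → recommended floor: 133 kbit/sec
```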
Here are the worst blocks on testnet3 (which has some intentionally
constructed maximum-sized blocks), measured on an E3-1230
(with the new parallel validation code):
- Verify 2166 txins: 250.29ms (0.116ms/txin)
- Verify 3386 txins: 1454.25ms (0.429ms/txin)
- Verify 5801 txins: 575.46ms (0.099ms/txin)
- Verify 6314 txins: 625.05ms (0.099ms/txin)
Even the slowest one _validates_ at 400x realtime. (These measurements
are probably a bit noisy— but the point is that it's fast.)
(The connecting is fast too, but that's obvious with such a small database.)
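The 400x figure falls out of the worst measurement above: blocks arrive on average every 600 seconds, so validation only has to beat that rate.

```python
# Reproduce the "400x realtime" claim from the slowest block above.
TARGET_SPACING_S = 600.0      # average inter-block interval
worst_verify_s = 1.45425      # slowest block: 3386 txins in 1454.25 ms

speedup = TARGET_SPACING_S / worst_verify_s
print(f"validation runs at ~{speedup:.0f}x realtime")
# → validation runs at ~413x realtime
```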
Although I haven't tested leveldb+ultraprune with a really enormous
txout set, or generally with sustained maximum load— so there may be
other gaffes in the software that get exposed under sustained load, but
they'd all be correctable. Sounds like some interesting stuff to test
on a testnet fork that has the POW test disabled.
While syncing up a node that is behind can take a while— keep in mind
that you're expecting to sync up weeks of network work in hours. Even
'slow' is quite fast.
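Put another way, catching up is just replaying history at the validation speedup, so a node weeks behind resyncs in well under a day (a rough lower bound, reusing the ~400x figure from the measurements above; download bandwidth and I/O also matter in practice):

```python
# Rough catch-up estimate: replaying N weeks of blocks at a given
# validation speedup relative to realtime. This is a lower bound;
# network download and disk I/O are ignored.
def catch_up_hours(weeks_behind: float, speedup: float) -> float:
    seconds_behind = weeks_behind * 7 * 24 * 3600
    return seconds_behind / speedup / 3600

# e.g. two weeks behind at a conservative 400x realtime:
print(f"{catch_up_hours(2, 400):.2f} hours of pure validation work")
# → 0.84 hours of pure validation work
```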
> In fact,
> I was under the impression that "connectedness" was the real metric of
> concern (and resilience of that connectedness to large percentage of
> users disappearing suddenly). If that's true, above a certain number of
> nodes, the connectedness isn't really going to get any better (I know
> it's not really that simple, but I feel like it is up to 10x the current
> network size).
That's not generally a concern for me. There are a number of DoS attack
risks... but attacker-linear DoS attacks aren't generally avoidable,
and they don't persist.
One of the connectedness concerns I do have is that a sybil attacker
could spin up enormous numbers of nodes and then use them to partition
large miners. So, e.g., find BitTaco's node(s) and the nodes for
miners covering 25% of the hashpower and get them into a separate
partition from the rest of the network. Then they give double-spends to
that partition and use them to purchase an unlimited supply of digitally
delivered tacos— allowing their captured miners to build an ill-fated
fork— and drop the partition once the goods are delivered.
But there is no number of full nodes that removes this concern,
especially if you allow for attackers who have compromised ISPs.
It can be adequately addressed by a healthy darknet of private,
authenticated peerings between miners and other likely targets. I've
also thrown out some ideas on using merged-mined node IDs to make some
kinds of sybil attacks harder... but it'll be interesting to see how
the deployment of ASICs influences the concentration of hashpower— it
seems like there has already been a substantial move away from the
largest pools. Less hashpower consolidation makes attacks like this
less worrisome.
> (2) I think the current experience *is* really poor.
Yes, I said so specifically. But the fact that people are flapping
their lips here instead of testing the bitcoin-qt git master, which is
a 1-2 order of magnitude improvement, suggests that perhaps I'm wrong
about that. Certainly the dearth of people testing and making bug
reports suggests people don't actually care that much.
> You seem to
> suggest that the question for these new users is whether they will use
> full-node-or-lite-node, but I believe it will be a decision between
> lite-node-or-nothing-at-all (losing interest altogether).
No. The "question" that I'm concerned with is whether we promote lite
nodes as an equally good option— even for high-end systems— removing
the incentive for people to create, improve, and adopt more useful full
node software, and forever degrading the security of the system.
> Waiting a day
> for the full node to synchronize, and then run into issues like
> blkindex.dat corruption when their system crashes for some unrelated
> reason and they have to resync for another day... they'll be gone in a
> heartbeat.
The current software patches plus parallelism can sync on a fast
system with lucky network access (or a local copy of the data) in under
an hour.
This is no replacement for starting out as SPV, but neither are
handicapped client programs a replacement for making fully capable ones
perform acceptably.
> Users need to experience, as quickly and easily as possible, that they
> can move money across the world, without signing up for anything or
> paying any fees.
Making all the software painless for users is a great goal— and
one I share. I still maintain that it has nothing to do with
promoting less capable and less secure software to users.
Published at 2023-06-07 10:46:13