Anthony Towns [ARCHIVE] on Nostr:
📅 Original date posted:2022-10-30
📝 Original message:On Thu, Oct 27, 2022 at 09:29:47PM +0100, Antoine Riard via bitcoin-dev wrote:
> Let's take the contra.
(I don't think I know that phrase? Is it like "play devil's advocate"?)
> I would say the current post describes the state of Bitcoin Core and
> beyond policy
> rules with a high degree of exhaustivity and completeness, though it is
> what it is, mostly a description. While I think it ends with a set of
> recommendations
It was only intended as a description, not a recommendation for anything.
At this point, the only thing I think I could honestly recommend
that doesn't seem to come with massive downsides is for Core to
recommend and implement a particular mempool policy, and to only have
options that either make it feasible to scale that policy to different
hardware limitations, or that users can activate en masse if it
turns out people are doing crazy things in the mempool (eg, a new policy
turns out to be ill-conceived, and it's better to revert to a previous
policy; or a potential spam vector gets exploited at scale).
> What should be actually the design goals and
> principles of Core's transaction-relay propagation rules
> of which mempool acceptance rules are a subset?
I think the goals of mempool/relay policy are _really_ simple; namely:
* relay transactions from users to all potential miners, so that
non-mining nodes don't have to directly contact miners to announce
their tx, both for efficiency (your tx can appear in the next block
anyone mines, rather than just the next block you mine) and privacy
(so that miners don't know who a transaction belongs to, so that
users don't have to know who miners are, and so there's no/minimal
correlation between who proposed a tx and who mined the block it
appears in)
* having most of the data that makes up the next block pre-validated
and pre-distributed throughout the network, so that block validation
and relay is much more efficient
> By such design goals, I'm
> thinking either, a qualitative framework, like attacks game for a concrete
> application ("Can we prevent pinning against multi-party Coinjoin ?").
I don't think that even makes sense as a question at that level: you can
only ask questions like that if you already have known mempool policies
across the majority of nodes and miners. If you don't, you have to allow
for the possibility that 99% of hashrate is receiving private blacklists
from OFAC and that one of your coinjoin counterparties is on that list,
eg, and at that point, I don't think pinning is even conceivably solvable.
> I believe we would come up with a
> second-order observation. That we might not be able to satisfy every
> use-case with the standard set of policy rules. E.g, a contracting protocol
> could look for package size beyond the upper bound anti-DoS limit.
One reason that limit is in place is that the larger the tx is
compared to the block limit, the more likely you are to hit corner cases
where greedily filling a block with the highest feerate txs first
is significantly suboptimal. That might mean, eg, that there's 410kvB
of higher fee rate txs than your 600kvB large package, and that your
stuff gets delayed, because the miner isn't clever enough to realise
dropping the 10kvB is worthwhile. Or it might mean that your tx gets
delayed because the complicated analysis takes a minute to run and a
block was mined using the simpler algorithm first. Or it might mean that
some mining startup with clever proprietary software that can calculate
this stuff quickly makes substantially more profit than everyone else,
so they start dominating block generation, despite the fact that they're
censoring transactions due to OFAC rules.
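To make that concrete, here's a toy sketch (with made-up sizes and
feerates, not real mempool data) of how a greedy feerate-ordered block
builder can leave substantial fees on the table when one package is
large relative to the block limit:

```python
# Toy illustration: greedy feerate-ordered block filling vs a smarter
# choice, when a 600kvB package competes with 410kvB of higher-feerate txs.
BLOCK_LIMIT = 1_000_000  # vB

# (size_vB, feerate sat/vB): 41 small txs at 5 sat/vB (410kvB total),
# one 600kvB package at 4 sat/vB, and plenty of 1 sat/vB filler.
txs = [(10_000, 5)] * 41 + [(600_000, 4)] + [(10_000, 1)] * 100

def greedy_fill(txs, limit):
    """Take txs in descending feerate order, skipping any that don't fit."""
    total_size = total_fee = 0
    for size, rate in sorted(txs, key=lambda t: -t[1]):
        if total_size + size <= limit:
            total_size += size
            total_fee += size * rate
    return total_fee

# Greedy: takes all 410kvB at 5 sat/vB, can't fit the package (1010kvB),
# and tops up with 590kvB of 1 sat/vB filler.
greedy = greedy_fill(txs, BLOCK_LIMIT)

# Smarter: drop just 10kvB of the 5 sat/vB txs to make room for the package.
smarter = 400_000 * 5 + 600_000 * 4

print(greedy, smarter)  # 2640000 4400000 -- greedy misses ~40% of the fees
```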
> Or even the
> global resources offered by the network of full-nodes are not high enough
> to handle some application event.
Blocks are limited on average to 4MB-per-10-minutes (6.7kB/second),
and applications certainly shouldn't be designed to only work if
they can monopolise the entirety of the next few blocks. I don't think
it makes any sense to imagine application events in Bitcoin that exceed
the capacity of a random full node. And equally, even if you're talking
about some other blockchain with higher capacity, I don't really think
it makes sense to call it a "full" node if it can't actually cope with
the demands placed on it by any application that works on that network.
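For reference, the 6.7kB/second figure above is just the block weight
limit divided by the average block interval:

```python
# Back-of-envelope: average throughput implied by the consensus limits
# (4M weight units per block, roughly 4MB, one block per ~600 seconds).
block_weight_limit = 4_000_000  # weight units
block_interval_seconds = 600    # average
throughput = block_weight_limit / block_interval_seconds
print(round(throughput))  # 6667 -- ie ~6.7kB/s
```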
> E.g a Lightning Service Provider doing a
> liquidity maintenance round of all its counterparties, and as such
> force-closing and broadcasting more transactions than can be handled at the
> transaction-relay layer due to default MAX_PEER_TX_ANNOUNCEMENTS value.
MAX_PEER_TX_ANNOUNCEMENTS is 5000 txs, and it's per-peer. If you're an
LSP that's doing that much work, it seems likely that you'd at least
be running a long-lived listening node, so likely have 100+ peers, and
could conceivably simultaneously announce 500k txs distributed across
them, which at 130vB each (1-taproot-in, 2-p2wpkh out, which I think is
pretty minimal) adds up to 65 blocks worth of transactions. And then,
you could run more than one node, as well.
Your real limitation is likely that most nodes on the network
will only relay your txs onwards at an average rate of ~7/second
(INVENTORY_BROADCAST_PER_SECOND), so even 5000 txs will likely take over
700s to propagate anyway.
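Spelling out that arithmetic (the two constants are the values quoted
above; the 100-peer count and 130vB tx size are the assumptions from the
previous paragraph):

```python
# How much an LSP running one well-connected listening node could
# announce, and how fast the rest of the network would drain it.
MAX_PEER_TX_ANNOUNCEMENTS = 5000    # per-peer announcement limit, as above
INVENTORY_BROADCAST_PER_SECOND = 7  # ~average per-peer relay rate, as above
peers = 100                         # assumed long-lived listening node
tx_vbytes = 130                     # assumed: 1 taproot in, 2 p2wpkh out
block_vbytes = 1_000_000

announceable = MAX_PEER_TX_ANNOUNCEMENTS * peers        # 500,000 txs
blocks_worth = announceable * tx_vbytes / block_vbytes  # 65 blocks
drain_seconds = MAX_PEER_TX_ANNOUNCEMENTS / INVENTORY_BROADCAST_PER_SECOND

print(announceable, blocks_worth, round(drain_seconds))  # 500000 65.0 714
```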
> My personal take on those subjects, we might have to realize we're facing
> a heterogeneity of Bitcoin applications and use-cases [1].
Sure; but that's why you make your policy generic, rather than having
to introduce a new, different policy targeted at each new use case.
> And this sounds
> like a long-term upward trend, akin to the history of the Internet: mail
> clients, web browser, streaming applications, etc, all with different
> service-level requirements in terms of latency, jitters and bandwidth.
Back in the mid/late 90s, people argued that real-time communication
(like audio chat, let alone streaming video) wasn't suitable for IP,
but would require a different network like ATM where dedicated circuits
were established between the sender and recipient to avoid latency,
jitter and bandwidth competition. Turns out that separate networks
weren't optimal for that.
> To put it simply, some advanced Bitcoin
> applications might have to outgrow the "mempool policy rules" game,
I think if you stick to the fundamentals -- that relay/mempool is about
getting transactions to miners and widely preseeding the contents of
whatever the next block will be -- then it's pretty unlikely that any
Bitcoin application will outgrow the mempool policy game.
> I think this has been historically the case with
> some miners deciding to join FIBER, to improve their view of mined blocks.
FIBRE (it doesn't use the US spelling) is a speedup on top of compact
block relay -- you still get exactly the same data if you don't use it;
everything is just slightly faster if you do. Even better, if you
get a block via FIBRE, then you relay it on to your peers over regular
p2p, helping them get it faster too.
Doing something similar with mempool txs -- having some high bandwidth
overlay network where the edges then feed txs back into the main p2p
network at a slower rate that filters out spam or whatever -- would
probably likewise be a fine addition to bitcoin, provided it had the
same policy rules as regular bitcoin nodes employ for tx relay. If it
had different ones, it would become a significant centralisation risk: app
developers who make use of the different rules would need to comply with
the overlay network's ToS to avoid getting banned, and miners would need
to subscribe to the feed to avoid missing out on txs and thus fee income.
> What I'm expressing is a long-term perspective, and we might be too early
> in the evolutionary process that is Bitcoin Core development to abandon yet
> the "one-size-fits-all" policy rules conception that I understand from
> your post.
I don't think "one-size-fits-all" is a principle at all; I think
decentralisation/censorship-resistance, privacy, and efficiency are the
fundamental principles. As far as I can see, a one-size-fits-all approach
(or, more precisely, an approach where >90% of the network converges to
the same set of rules) is far better at achieving those principles than
a heterogeneous policy approach.
> After exposure and exploration of more Bitcoin use-cases and applications,
> and even among the different requirements among types of use-case nodes
> (e.g LN mobile vs LSP), I believe more heterogeneity in the policy rules
> usage makes more sense
I think when you say "more heterogeneity" what you're actually assuming
is that miners will implement a superset of all those policies, so that
if *any* node on the network accepts a tx X, *every* miner will also
accept a tx X, with the only exception being if there's some conflicting
tx Y that allows the miner to collect more fees.
But that's not really a heterogeneous policy: in that case all miners
are employing exactly the same policy.
In that scenario, I don't think you'll end up with nodes running
heterogeneous policies either: part of the point of having mempool
policies is to predict the next block, so if all miners really do have
a common policy, it makes sense for nodes to have the same policy. The
only potential difference is miners might be willing to dedicate more
resources, so might set some knobs related to memory/bandwidth/cpu
consumption differently.
I think what you're actually assuming is that this scenario will mean
that miners will quickly expand their shared policy to accept *any*
set of txs that are accepted by a small minority of relay nodes: after
all, if there are some txs out there that pay fees, why wouldn't miners
want to include them? That's what incentive compatible means, right? And
that's great from a protocol research point-of-view: it allows you to
handwave away people complaining that your idea is bad -- by assumption,
all you need to do is deploy it, and it immediately starts working,
without anyone else needing to adopt it.
I don't think that's actually a realistic assumption though: first,
updating miners' policy rules requires updated software to be tested
and deployed, so isn't trivial enough that it should be handwaved away;
second, as in the "big packages" example above, constructing an efficient
block becomes harder the more mempool rules you throw away, so even if
there are txs violating those rules that are offering extra fees, they
may not actually cover the extra costs to generate a block when you're
no longer able to rely on those rules to reduce the complexity of the
problem space.
Note also that "relay nodes will want to use the same policy as mining
nodes" goes both ways -- if that doesn't happen, and compact block
relay requires an extra round trip to reconstruct the block, miners'
blocks won't relay as quickly, and they'll have an increased orphan rate.
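As a toy model of that last point (timings are hypothetical, in
milliseconds): with compact blocks, a relay node whose mempool is
missing any of the block's txs has to do an extra getblocktxn/blocktxn
round trip before it can validate and relay the block onwards:

```python
# Toy model: compact block relay only avoids an extra round trip when the
# receiving node's mempool already contains every tx in the block.
def relay_delay_ms(missing_txs, base_ms=50, round_trip_ms=100):
    """If any block tx is missing from the mempool, the node must send
    getblocktxn and wait for blocktxn -- one extra round trip."""
    return base_ms + (round_trip_ms if missing_txs > 0 else 0)

# Relay node whose policy matches the miner's: reconstructs immediately.
print(relay_delay_ms(0))   # 50
# Relay node whose policy rejected 20 of the miner's txs: extra round trip,
# so the miner's block propagates slower and its orphan risk goes up.
print(relay_delay_ms(20))  # 150
```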
Cheers,
aj
Published at 2023-06-07 23:16:00