Joseph Poon [ARCHIVE] on Nostr
📅 Original date posted:2015-05-07
📝 Original message:Hi Matt,
I agree that starting a discussion on how to approach this problem is
necessary, and that it's difficult to take positions without details on
what is being discussed.
A simple hard 20-megabyte increase will likely create perverse
incentives; perhaps a method can exist with some safe transition. I
think, ultimately, the underlying tension in this discussion is the
relative power of miners. Any blocksize-increase transition will
increase the influence of miners, so it is about understanding the
tradeoffs of each possible approach.
On Thu, May 07, 2015 at 10:02:09PM +0000, Matt Corallo wrote:
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system. If we're just building Bitcoin
> to work in five years, great, but if we want it all to keep working as
> subsidy drops significantly, I'd like a better answer than "we'll deal
> with it when we get there" or "it will happen, all the predictions based
> on people's behavior today say so" (which are hopefully invalid thanks
> to the previous point). Ideally, I'd love to see some real free pressure
> already on the network starting to develop when we commit to hardforking
> in a year. Not just full blocks with some fees because wallets are
> including far greater fees than they really need to, but software which
> properly handles fees across the ecosystem, smart fee increases when
> transactions arent confirming (eg replace-by-fee, which could be limited
> to increase-in-fees-only for those worried about double-spends).
I think the long-term fee incentive structure needs to be significantly
more granular. We've all seen miners and pools take the path of least
resistance; often they just blindly do whatever the community tells them
to. While this status quo can change in the future, I think designing
sane defaults is a good path for any possible transition.
It seems especially reasonable to maintain fee pressure for normal
transactions during a hard-fork transition. It's possible to do so using
some kind of soft-cap structure. Building in a default soft-cap of 1
megabyte for some far future scheduled fork would seem like a sane thing
to do for bitcoin-core.
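
To make that concrete, here is a minimal sketch of what such a
miner-side default could look like (the fork height, limits, and
function names below are all hypothetical, not a proposal for
bitcoin-core):

    # Hypothetical sketch: a miner-policy (non-consensus) soft-cap that stays
    # at 1MB even after a scheduled hard fork raises the consensus limit.
    FORK_HEIGHT = 400000                  # hypothetical fork height
    CONSENSUS_MAX_BYTES_OLD = 1000000     # 1MB before the fork
    CONSENSUS_MAX_BYTES_NEW = 20000000    # e.g. a 20MB hard limit after it
    DEFAULT_SOFT_CAP_BYTES = 1000000      # default policy cap kept at 1MB

    def consensus_limit(height):
        """Hard consensus limit: what the network accepts as valid."""
        if height >= FORK_HEIGHT:
            return CONSENSUS_MAX_BYTES_NEW
        return CONSENSUS_MAX_BYTES_OLD

    def block_template_limit(height, operator_soft_cap=None):
        """Limit used when assembling a block: the smaller of the consensus
        limit and the soft-cap, unless the operator explicitly overrides it."""
        soft_cap = operator_soft_cap or DEFAULT_SOFT_CAP_BYTES
        return min(soft_cap, consensus_limit(height))

The point of such a default is only that raising the cap becomes a
deliberate operator choice rather than something that happens
automatically at the fork.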
It also seems viable to be far more aggressive. What's your (and the
community's) opinion on some kind of coinbase voting protocol for
soft-cap enforcement? It's possible to write messages into the coinbase
for an enforceable soft-cap that orphans any block which violates these
rules. It seems safest for the transition to have the first hardforked
block be above 1MB, with the blocks after it defaulting to an enforced
1MB cap. If miners agree to go above this, they must vote in their
coinbase to do so.
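
To sketch what I mean (the "/softcap:N/" marker, the 1000-block window,
and the 900-vote threshold are all made-up numbers for illustration):

    # Hypothetical sketch of coinbase voting for an enforced soft-cap: a block
    # larger than the default cap is orphaned unless enough recent coinbases
    # carried an explicit "raise the cap" vote.
    DEFAULT_SOFT_CAP_BYTES = 1000000
    VOTE_PREFIX = b"/softcap:"   # hypothetical marker, e.g. /softcap:2000000/
    VOTE_WINDOW = 1000           # look back over the last 1000 coinbases
    VOTE_THRESHOLD = 900         # votes needed to activate a higher cap

    def parse_vote(coinbase_scriptsig):
        """Return the cap (in bytes) a coinbase votes for, or None if no vote."""
        if VOTE_PREFIX not in coinbase_scriptsig:
            return None
        start = coinbase_scriptsig.index(VOTE_PREFIX) + len(VOTE_PREFIX)
        end = coinbase_scriptsig.index(b"/", start)
        return int(coinbase_scriptsig[start:end])

    def effective_soft_cap(recent_coinbases):
        """Raise the enforced cap only if enough recent blocks voted for it."""
        votes = [parse_vote(cb) for cb in recent_coinbases[-VOTE_WINDOW:]]
        voted = [v for v in votes if v is not None]
        if len(voted) >= VOTE_THRESHOLD:
            return min(voted)    # the cap every voter in the majority supports
        return DEFAULT_SOFT_CAP_BYTES

    def violates_soft_cap(block_size, recent_coinbases):
        """A block violating this would be orphaned by miners enforcing the rule."""
        return block_size > effective_soft_cap(recent_coinbases)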
There's a separate discussion about this starting on:
CAE-z3OXnjayLUeHBU0hdwU5pKrJ6fpj7YPtGBMQ7hKXG3Sj6hw at mail.gmail.com
I think defaulting to some kind of mechanism that reads the coinbase is
a good idea; left alone, miners may not do so. That way it's possible to
have your cake and eat it too: fee pressure will still exist, while
block sizes can increase (provided it's in the miners' greater interest
to do so).
The Lightning Network's security model may, in the long term, rely on a
multi-tier soft-cap, but I'm not sure. If 2nd-order systemic miner
incentives were not a concern, a system which has an enforced soft-cap
and permits breaching that soft-cap for some agreed-upon, much higher
fee would work best. LN works without this, but it seems more secure if
some kind of miner consensus rule is reached regarding prioritizing
2nd-layer consensus states.
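
As a toy illustration of the two-tier idea (block space above the base
cap reserved for transactions paying some much higher feerate, e.g.
urgent channel closes; the sizes and feerates are arbitrary
placeholders):

    # Hypothetical two-tier soft-cap used during block assembly.
    BASE_CAP_BYTES = 1000000       # anyone can use this space
    PREMIUM_CAP_BYTES = 2000000    # absolute ceiling including the premium tier
    PREMIUM_FEERATE = 500          # satoshis/byte required for the premium tier

    def select_transactions(mempool):
        """mempool: list of (size_bytes, feerate_sat_per_byte).
        Fill the base tier with the best-paying transactions, and the premium
        tier only with transactions at or above the premium feerate."""
        block, used = [], 0
        for size, feerate in sorted(mempool, key=lambda t: t[1], reverse=True):
            if used + size <= BASE_CAP_BYTES:
                block.append((size, feerate))
                used += size
            elif used + size <= PREMIUM_CAP_BYTES and feerate >= PREMIUM_FEERATE:
                block.append((size, feerate))
                used += size
        return block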
No matter how it's done, certain aspects of the security model of
something like Lightning are reliant upon block-space availability, so
that transactions can enter the blockchain in a timely manner (since
"deprecated" channel states become valid again after some agreed-upon
block-time).
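
As a back-of-the-envelope way to see the constraint (assuming the
transaction pays enough fee to sit near the front of the queue; the
numbers are illustrative):

    # Hypothetical check: will a timeout/penalty transaction confirm before
    # the old channel state becomes spendable again?
    def confirms_in_time(backlog_bytes, block_space_per_block, dispute_blocks):
        expected_wait_blocks = backlog_bytes / float(block_space_per_block)
        return expected_wait_blocks < dispute_blocks

    # e.g. a 10MB high-fee backlog, 1MB of usable space per block, and a
    # 144-block (~1 day) dispute window:
    # confirms_in_time(10000000, 1000000, 144) -> True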
I think pretty much everyone agrees that the 1MB block cap will
eventually be a problem. While people may disagree about when that will
be and how it'll play out, I think we're all in agreement that
discussing it is a good idea, especially when it comes to resolving
blocking concerns.
Starting a discussion on how a hypothetical blocksize increase would
occur, and on the necessary blocking/want-to-have features and
tradeoffs, seems to be a great way to approach this problem. The needs
of the Lightning Network may be best served by being able to prioritize
a large mass of timeout transactions at once (when a well-connected node
stops communicating).
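
For example, one entirely hypothetical policy would be to sort the
pending timeout transactions by how many blocks remain before the old
state becomes spendable, and bid fees more aggressively as each deadline
approaches:

    # Hypothetical prioritization when a hub goes silent and many channel
    # timeout transactions must confirm before their deadlines.
    def prioritize_timeouts(pending, current_height, base_feerate=20):
        """pending: list of dicts with 'txid' and 'deadline_height' (the
        height at which the counterparty's old state becomes spendable)."""
        prioritized = []
        for tx in sorted(pending, key=lambda t: t["deadline_height"]):
            blocks_left = tx["deadline_height"] - current_height
            # Bid harder the closer the deadline (never below the base feerate).
            feerate = base_feerate * max(1, 100 // max(blocks_left, 1))
            prioritized.append((tx["txid"], blocks_left, feerate))
        return prioritized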
--
Joseph Poon