2023-06-07 18:08:26

Damian Williamson [ARCHIVE] on Nostr:

📅 Original date posted: 2017-12-19
📝 Original message: Thank you for your constructive feedback. I now see that the proposal introduces a potential issue.


>Finally in terms of the broad goal, having block size based on the number of transactions is NOT something desirable in the first place, even if it did work. That’s effectively the same as an infinite block size since anyone anywhere can create transactions in the mempool at no cost.


Do you have any suggestions as to how the transaction bandwidth limit could be addressed? It will eventually become an issue if nothing is changed, regardless of how high fees go.


Regards,

Damian Williamson



________________________________
From: Mark Friedenbach <mark at friedenbach.org>
Sent: Tuesday, 19 December 2017 3:08 AM
To: Damian Williamson
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

Damian, you seem to be misunderstanding that either

(1) the strong form of your proposal requires validating the commitment to the mempool properties, in which case the mempool becomes consensus critical (an impossible requirement); or

(2) in the weak form, where the current block depends only on the commitment in the last block, it becomes a miner-selected field that they can freely parameterize, with no repercussions for setting values totally independent of the actual mempool.

If you want to make the block size dependent on the properties of the mempool in a consensus critical way, flex cap achieves this. If you want to make the contents or properties of the mempool known to well-connected nodes, weak blocks achieves that. But you can’t stick the mempool in consensus because it fundamentally is not something the nodes have consensus over. That’s a chicken-and-the-egg assumption.

Finally in terms of the broad goal, having block size based on the number of transactions is NOT something desirable in the first place, even if it did work. That’s effectively the same as an infinite block size since anyone anywhere can create transactions in the mempool at no cost.

On Dec 16, 2017, at 8:14 PM, Damian Williamson via bitcoin-dev <bitcoin-dev at lists.linuxfoundation.org> wrote:

I do not know why people make the leap that the proposal requires a consensus on the transaction pool. It does not.

It may be helpful to have the discussion from the previous thread linked here.
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015370.html

Where I speak of validating that a block conforms to the broadcast next block size, I do not propose validating the broadcast number itself, only that the next generated block is that size.

Regards,
Damian Williamson


________________________________
From: Damian Williamson <willtech at live.com.au>
Sent: Saturday, 16 December 2017 7:59 AM
To: Rhavar
Cc: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

There are really two separate problems to solve.


1. How does Bitcoin scale with fixed block size?
2. How do we ensure that all valid transactions are eventually included in the blockchain?

Those are the two issues that the proposal attempts to address, and it makes sense to resolve them together. Using the proposed system for variable block sizes would solve the first problem, but there would still be a large number of never-confirming transactions. I am not sure how to reliably solve the second problem at scale without first solving the first.

>* Every node has a (potentially) different mempool, you can't use it to decide consensus values like the max block size.

I do not suggest a consensus over the mempool. Depending on which node solves a block, the value for the next block size will be different. The consensus would be that blocks adhere to the next block size value transmitted with the current block. It is easy to verify that this rule is being adhered to once it is in place.
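
For illustration only, the check on a received block could be as simple as the following sketch (the names are mine, not part of any implementation):

    def block_size_conforms(block_tx_count, parent_broadcast_next_size):
        # The proposed rule: the new block contains the number of transactions
        # that its parent block broadcast as the next block size.
        return block_tx_count == parent_broadcast_next_size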

>* Increasing the entropy in a block to make it more unpredictable doesn't really make sense.

Not a necessary function, just an effect of using a probability-based distribution.

>* Bitcoin should be roughly incentive compatible. Your proposal explicitly asks miners to ignore their best interests and confirm transactions by "priority". What are you going to do if a "malicious" miner decides to go after their profits and order by what makes them the most money? Add "ordered by priority" as a consensus requirement? And even then, miners can still sort their mempool by fee and then order the top 1MB by priority.

I entirely agree with your sentiment that Bitcoin must be incentive compatible. It is necessary.

It is only in miners' immediate interest to make the most profitable block from the available transaction pool. As with so many other things, it is necessary to partially ignore short-term gain for long-term benefit. It is in miners' and everybody's long-term interest to have a reliable transaction service. A busy transaction service that confirms many transactions per hour will become more profitable as demand increases and more users are prepared to pay for priority. As it is, there is currently no way to fully scale because of the transaction bandwidth limit, and that is problematic. If all valid transactions must eventually confirm then there must be a way to resolve that problem.

Bitcoin deliberately removes traditional scaling by ensuring blocks take ten minutes on average to solve, an ingenious and incentive-compatible idea, but fixed block sizes leave us with a problem to solve when we want to scale.

>If you could find a good solution that would allow you to know if miners were following your rule or not (and thus ignore their blocks if they don't) then you wouldn't even need bitcoin in the first place.

I am confident that the math to verify blocks based on the proposal can be developed (and I think it will not be too complex for a mathematician with the relevant experience); however, I am nowhere near experienced enough with probability and statistical analysis to do it myself. Yes, if Bitcoin does not adopt it then it might make a great opportunity for an altcoin, but I am not at all interested in promoting altcoins.

If not the proposal that I have put forward, then, hopefully, someone can come up with a better solution. The important thing is that the issues are resolved.

Regards,
Damian Williamson


________________________________
From: Rhavar <rhavar at protonmail.com>
Sent: Saturday, 16 December 2017 3:38 AM
To: Damian Williamson
Cc: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

> I understand that there would be technical issues to resolve in implementation, but, are there no fundamental errors?

Unfortunately your proposal is really fundamentally broken, on a few levels. I think you might need to do a bit more research into how bitcoin works before coming up with such improvements =)

But just some quick notes:

* Every node has a (potentially) different mempool, you can't use it to decide consensus values like the max block size.

* Increasing the entropy in a block to make it more unpredictable doesn't really make sense.

* Bitcoin should be roughly incentive compatible. Your proposal explicitly asks miners to ignore their best interests and confirm transactions by "priority". What are you going to do if a "malicious" miner decides to go after their profits and order by what makes them the most money? Add "ordered by priority" as a consensus requirement? And even then, miners can still sort their mempool by fee and then order the top 1MB by priority.

If you could find a good solution that would allow you to know if miners were following your rule or not (and thus ignore their blocks if they don't) then you wouldn't even need bitcoin in the first place.




-Ryan


-------- Original Message --------
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks
Local Time: December 15, 2017 3:42 AM
UTC Time: December 15, 2017 9:42 AM
From: bitcoin-dev at lists.linuxfoundation.org
To: Bitcoin Protocol Discussion <bitcoin-dev at lists.linuxfoundation.org>



I should not take the lack of critical feedback on this revised proposal as a glowing endorsement. I understand that there would be technical issues to resolve in implementation, but are there no fundamental errors?

I suppose that if it is difficult to determine how long a transaction has been waiting in the pool, each node could simply keep track of when a transaction was first seen. This may have implications for a verify routine, however. For example, if a node was offline, how should it differentiate how long each transaction had been waiting? If a node was restarted daily, would it always think that all transactions had been waiting in the pool less than one day? If each node keeps the current transaction pool in a file and updates it as transactions are included in blocks and as new transactions appear in the pool, then that would go some way toward alleviating the issue, apart from entirely new nodes. There should be no reason the contents of a transaction pool file cannot be shared, without requiring agreement on the transaction pool between nodes, just as nodes transmit new transactions freely.
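
As an illustration only, a node could keep such first-seen times in a small local file that survives restarts; this sketch assumes a simple JSON file and is not part of any existing implementation:

    import json, os, time

    POOL_FILE = "txpool_first_seen.json"   # hypothetical local file

    def load_first_seen():
        # txid -> unix time when this node first saw the transaction
        if os.path.exists(POOL_FILE):
            with open(POOL_FILE) as f:
                return json.load(f)
        return {}

    def record_transaction(first_seen, txid):
        # Remember the first-seen time; persisting it means a daily restart
        # does not reset every transaction's waiting time to zero.
        first_seen.setdefault(txid, int(time.time()))
        with open(POOL_FILE, "w") as f:
            json.dump(first_seen, f)

    def forget_confirmed(first_seen, confirmed_txids):
        # Drop entries once their transactions are included in a block.
        for txid in confirmed_txids:
            first_seen.pop(txid, None)
        with open(POOL_FILE, "w") as f:
            json.dump(first_seen, f)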

It has been questioned why miners could not cheat. On the question of how many transactions to include in a block, I say it is a standoff: miners will conform to the proposal, not wanting to leave transactions with valid fees standing and not wanting to shrink the transaction pool. In any case, if miners shrink the transaction pool then I am not immediately concerned, since that provides a more efficient service. On the question of including transactions according to the proposal, I say that if it is possible to keep track of how long transactions have been waiting in the pool so that they can be included on a probability curve, then it is possible to verify that blocks conform to the proposal: since the input is a probability, the output should conform to a probability curve.


Would anyone with the necessary skill be willing to develop the math necessary for the proposal?

Regards,
Damian Williamson


________________________________

From: bitcoin-dev-bounces at lists.linuxfoundation.org <bitcoin-dev-bounces at lists.linuxfoundation.org> on behalf of Damian Williamson via bitcoin-dev <bitcoin-dev at lists.linuxfoundation.org>
Sent: Friday, 8 December 2017 8:01 AM
To: bitcoin-dev at lists.linuxfoundation.org
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks


Good afternoon,

The need for this proposal:

We all must learn to admit that transaction bandwidth is still lurking as a serious issue for the operation, reliability, safety, consumer acceptance, uptake and value of Bitcoin.

I recently sent a payment which was not urgent, so I chose the three-day target confirmation from the fee recommendation. That transaction has still not confirmed after now more than six days, and even waiting twice as long seems quite reasonable to me. That transaction is a valid transaction; it is not rubbish, junk or spam. Under the current model with transaction bandwidth limitation, the longer a transaction waits, the less likely it is ever to confirm, due to rising transaction numbers and being pushed back by transactions with rising fees.

I argue that no transactions are rubbish or junk; only some zero-fee transactions might be spam. Having an ever-increasing number of valid transactions that do not confirm, as more new transactions with higher fees are created, is the opposite of operating a robust, reliable transaction system.

Business cannot operate with a model where transactions may or may not confirm. Even a business choosing a modest fee has no guarantee that its valid transaction will not be shuffled down by new transactions into the realm of never confirming after it is created. Consumers also will not accept this model as Bitcoin expands. If Bitcoin cannot be a reliable payment system for confirmed transactions then consumers, by and large, will simply not accept the model once they understand it. Bitcoin will be a dirty payment system, and this will kill the value of Bitcoin.

Under the current system, a minority of transactions will eventually be the lucky few who have fees high enough to escape being pushed down the list.

Once there are more than x transactions (the transaction bandwidth limit) every ten minutes, only those choosing twenty-minute confirmation (2 blocks) will initially have at most a fifty percent chance of ever having their payment confirm. Presently, not even following the fee recommendations ensures that a sufficiently high fee is paid to guarantee transaction confirmation.

I also argue that the current auction model for limited transaction bandwidth is wrong, is not suitable for a reliable transaction system and is wrong for Bitcoin. All transactions must confirm in due time. Currently, Bitcoin is not a safe way to send payments.

I do not believe that consumers and businesses are against paying fees, even high fees. What is required is operational reliability.

This great issue needs to be resolved for the safety and reliability of Bitcoin. The time to resolve issues in commerce is before they become great big issues; the time to resolve this issue is now. We must have the foresight to identify and resolve problems before they trip us up. Simply doubling block sizes every so often is reactionary and is not a reliable permanent solution. I have written a BIP proposal for a technical solution but need your help to write it up to an acceptable standard as a full BIP.

I have formatted the following with Markdown, which is human readable, so I hope nobody minds. I have done as much with this proposal as I feel I am able so far, but I continue to take your feedback.

# BIP Proposal: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

## The problem:
Everybody wants value. Miners want to maximize revenue from fees (and, we presume, to minimize block size). Consumers need transaction reliability and, we presume, want low fees.

The current transaction bandwidth limit is a limiting factor for both. As the operational safety of transactions is limited, so is consumer confidence as consumers realize the issue, and, accordingly, uptake is limited. Fees are artificially inflated due to bandwidth limitations, while the system fails to provide a full confirmation service for all transactions.

Current fee recommendations provide no satisfaction for transaction reliability and, as Bitcoin scales, this will worsen.

Bitcoin must be a fully scalable and reliable service, providing full transaction confirmation for every valid transaction.

The possibility of sending a transaction with a fee too low to allow eventual confirmation should be removed from the protocol and also from the user interface.

## Solution summary:
Assign each transaction an individual priority each time before choosing transactions to include in the current block, the priority being a function of the fee paid (on a curve) and the time waiting in the transaction pool (also on a curve), out to n days (n = 60?). The transaction priority serves as the likelihood of a transaction being included in the current block, and determines the order in which transactions are tried for inclusion.

Use a target block size. Determine the target block size as: current transaction pool size × (1 / (144 × n days)) = number of transactions to be included in the current block. Broadcast the next target block size with the current block when it is solved, so that nodes know the next target block size for the block that they are building on.
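
As a minimal sketch of that arithmetic (illustrative only; the function and variable names are placeholders, not part of the proposal):

```python
def target_block_tx_count(pool_size: int, n_days: int = 60) -> int:
    """Number of transactions to aim for in the current block.

    pool_size: transactions currently waiting in the pool.
    n_days:    horizon within which every transaction should confirm.
    144 is the expected number of blocks per day (one every ten minutes).
    """
    return max(1, round(pool_size * (1 / (144 * n_days))))

# Example: a pool of 1,000,000 waiting transactions with n = 60 days
# gives a target of roughly 116 transactions for the current block.
print(target_block_tx_count(1_000_000))  # -> 116
```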

The curves used for the priority of transactions would have to be appropriate. Perhaps a mathematician with experience in probability can develop the right formulae; my thinking is a steep curve. The combined probabilities of all transactions should account for a sufficient number of inclusions that the target block size is met, although it may not always be. As a suggestion, consider including some zero-fee transactions as padding, highest BTC value first?

**Explanation of the operation of priority:**
> If transaction priority is, for example, a number between one (low) and one hundred (high), it can be understood directly as the percentage chance of a transaction being included in the block. Using probability or likelihood implies that there is some function of random: if random(100) < transaction priority, then the transaction is included.

> To break it down further, if both the fee-on-a-curve value and the time-waiting-on-a-curve value are each a number between one and one hundred, a rudimentary method may be simply to multiply those two numbers to find the priority number. For example, a middle-fee transaction waiting thirty days (if n = 60 days) may have a value of five for each part (yes, just five; the values are on a curve). When multiplied, that gives a priority value of twenty-five, or a twenty-five percent chance at that moment of being included in the block; it will likely be included in one of the next four blocks, getting more likely with each chance. If it is still not included, the value of time waiting will be higher, making for more probability. A very low fee transaction would have a value for the fee of one. It would not be until near sixty days that that particular low-fee transaction has a high likelihood of being included in a block.
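
A minimal sketch of that rudimentary method, assuming the two curve values have already been mapped onto the 1–100 range (the curve functions themselves, and all names here, are placeholders):

```python
import random

def priority(fee_curve_value: int, time_curve_value: int) -> int:
    """Rudimentary priority: product of the two 1-100 curve values,
    capped at 100 so it reads directly as a percentage."""
    return min(100, fee_curve_value * time_curve_value)

def included_this_block(fee_curve_value: int, time_curve_value: int) -> bool:
    """Include the transaction if a random draw in [0, 100) falls below
    its priority, i.e. a 'priority' percent chance per block."""
    return random.random() * 100 < priority(fee_curve_value, time_curve_value)

# The worked example above: a middle fee (value 5) that has waited thirty
# days (value 5) gives priority 25, a 25% chance in this block.
print(priority(5, 5))  # -> 25
```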

I am not concerned with low (or high) transaction fees; the primary reason for addressing the issue is to ensure transactional reliability and scalability while having each transaction confirm in due time.

## Pros:
* Maximizes transaction reliability.
* Fully scalable.
* Maximizes possibility for consumer and business uptake.
* Maximizes total fees paid per block without reducing reliability; because of reliability, confidence and overall uptake grow in time and, therefore, so does the number of transactions.
* Market determines fee paid for transaction priority.
* Fee recommendations work all the way out to 30 days or greater.
* Provides additional block entropy; greater security since there is less probability of predicting the next block.

## Cons:
* Could initially lower total transaction fees per block.
* Must first be programmed.

## Solution operation:
This is a simplistic view of the operation. The actual operation will need to be determined in a spec for the programmer.

1. Determine the target block size for the current block.
2. Assign a transaction priority to each transaction in the pool.
3. Select transactions to include in the current block, using probability, in transaction priority order until the target block size is met (see the sketch after this list).
4. Solve the block.
5. Broadcast the next target block size with the current block when it is solved.
6. The block is received.
7. The block verification process runs.
8. Accept or reject the block based on the verification result.
9. Repeat.
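
A rough sketch of steps 2, 3 and 5 above (illustrative only; the tuple layout and the two curve values are assumptions carried over from the priority example):

```python
import random

def select_by_priority(pool, target_tx_count):
    """Steps 2-3: try transactions in descending priority order, including
    each with probability equal to its priority, until the target is met.
    'pool' is assumed to be a list of (txid, fee_curve_value, time_curve_value)
    tuples, with both curve values already mapped onto 1-100."""
    ranked = sorted(pool, key=lambda t: t[1] * t[2], reverse=True)
    selected = []
    for txid, fee_val, time_val in ranked:
        if len(selected) >= target_tx_count:
            break
        if random.random() * 100 < min(100, fee_val * time_val):
            selected.append(txid)
    return selected

def next_target_tx_count(pool_size_after_block, n_days=60):
    """Step 5: the next target block size, broadcast with the solved block."""
    return max(1, round(pool_size_after_block / (144 * n_days)))
```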

## Closing comments:
It may be possible to verify that blocks conform to the proposal by showing that the priorities of the transactions included in a block statistically conform to the expected probability distribution, *if* the individual transaction priorities can be recreated. I am not that deep into the mathematics; however, it may also be possible to use a similar method based on the fee alone, checking that, statistically, blocks conform to a fee distribution. Any zero-fee transactions would have to be ignored. This solution needs a clever mathematician.
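
Purely as an illustration of the kind of check meant here (the bucketing and threshold are my assumptions, not worked-out statistics):

```python
import math

def block_statistically_conforms(pool_priorities, included_txids, z_max=4.0):
    """Group the pool's transactions by priority decile; for each group,
    compare the observed number included in the block with the expected
    number and its binomial standard deviation. Reject the block if any
    group deviates by more than z_max standard deviations.

    pool_priorities: dict of txid -> recreated priority (1-100) for every
                     transaction that was eligible for this block.
    included_txids:  set of txids actually included in the block.
    """
    buckets = {}
    for txid, pri in pool_priorities.items():
        buckets.setdefault(pri // 10, []).append((txid, pri))
    for txs in buckets.values():
        p = sum(pri for _, pri in txs) / (100.0 * len(txs))  # mean inclusion prob
        expected = p * len(txs)
        std = math.sqrt(len(txs) * p * (1 - p)) or 1.0
        observed = sum(1 for txid, _ in txs if txid in included_txids)
        if abs(observed - expected) > z_max * std:
            return False
    return True
```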

I implore, at the very least, that we use some method that validates full transaction reliability and enables scalability of block sizes. If not this proposal, an alternative.

Regards,
Damian Williamson


