Joseph Poon [ARCHIVE] on Nostr:
📅 Original date posted:2015-07-27
📝 Original message:
Hi Anthony,
On Sat, Jul 25, 2015 at 06:44:26PM +1000, Anthony Towns wrote:
> On Fri, Jul 24, 2015 at 04:24:49PM -0700, Joseph Poon wrote:
> > Ah sorry, that only solves the Commitment Transactions, not the HTLC
> > outputs. It's also not possible to use the pubkeys as identifiers;
> > as Rusty said, P2SH would be used.
> >
> > While it's possible to check only recent blocks before the
> > Commitment Transaction for the search space (e.g. 3 days worth),
> > since you know when the Commitment Transaction was broadcast, the
> > search space limitation sort of breaks down if you permit long-dated
> > HTLCs.
>
> I don't think it matters how long the HTLC was; maybe they're way old
> and all expired, but were payments to you. Say the current channel is:
>
> 12 -> Cheater 88 -> You
>
> and the old transaction that Cheater just pushed to the blockchain
> was:
>
> 55 -> Cheater
>  3 -> You
> 10 -> You & R1 | Cheater & Timeout1
> 20 -> You & R2 | Cheater & Timeout2
> 12 -> You & R3 | Cheater & Timeout3
>
> To get at least your 88 owed, you need all but the last output,
> so you need to be able to work out #R1 and #R2 and Timeout1 and
> Timeout2, no matter how long ago they were.
Yes, I agree, that is absolutely true. I was alluding to something
different (but didn't properly explain myself): if you only grind
through recent Commitments, you may miss HTLCs with very far-future
timeouts, and such long timeouts may be a necessary requirement for
some possible future use cases (e.g. recurring/pre-allocated billing).
> > For now, I think a reasonable stop-gap solution would be to have
> > some small storage of prior commitment transactions. For every
> > commitment, and each HTLC output, store the timeout and the original
> > Commitment Transaction height when the HTLC was first made.
>
> I don't think you want to multiply each HTLC output by every
> commitment it's stored in -- if the TIMEOUT is on the order of a day,
> and the channel is updated just once a second, that's an 86,400x
> blowup in storage, so almost 5 orders of magnitude.
>
> But if every time you see a new HTLC output (i.e., R4, Timeout4), you
> could store those values and use the nLockTime trick to store the
> height of your HTLC storage. Then you just have to search back down
> from R4 to find the other HTLCs in the txn, i.e., R3, R2 and R1, which
> is just a matter of pulling out the values R, Timeout, dropping them
> into payment script templates, and checking if they match.
Yes, that's a good point(!), especially when you're doing local storage.
If you're relying on OP_RETURN, though, you must include some more
contextual data. If you're willing to regenerate the revocation hash
every time, I guess the OP_RETURN can just be the timeout and H. For
local storage, you don't need to do it for every HTLC if you're willing
to search back on near-dated HTLCs, but long-dated HTLCs (say, greater
than a couple of days) could be included (a classic memory vs.
computation tradeoff). Agreed, the necessary data storage isn't *that
bad* for core nodes, and trivial for edge nodes that aren't providing
liquidity (ignoring backup concerns, of course).
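The template-matching search Anthony describes can be sketched roughly
as follows. This is a minimal illustration, not the actual Lightning
implementation: the script serialization and hash function here are
placeholders (real P2SH commits to RIPEMD160(SHA256(script)) over a
genuine Bitcoin script), but the matching technique is the same for any
deterministic template.

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def htlc_redeem_script(r_hash: bytes, timeout: int) -> bytes:
    # Placeholder serialization standing in for the real HTLC script
    # template ("You & R | Cheater & Timeout"); only determinism
    # matters for the matching technique.
    return b"HTLC:" + r_hash + timeout.to_bytes(4, "little")

def script_id(redeem_script: bytes) -> bytes:
    # Stand-in for the P2SH script hash committed in the txout
    # (real Bitcoin uses RIPEMD160(SHA256(script))).
    return sha256(redeem_script)

def find_htlc(output_script_id: bytes, stored_htlcs):
    """Given the script hash from an output of an old, broadcast
    Commitment Transaction, search stored (R, Timeout) pairs by
    regenerating each candidate redeem script and comparing hashes."""
    for r, timeout in stored_htlcs:
        if script_id(htlc_redeem_script(sha256(r), timeout)) == output_script_id:
            return r, timeout
    return None
```

Storing one (R, Timeout) pair per HTLC, rather than per commitment,
keeps this search linear in the number of HTLCs ever seen, which is the
deduplication described above.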
> BTW, 10 commitments per second (per channel) doesn't sound /that/ high
> volume :) E.g., pay-per-megabyte for an end user at 100Mb/s is already
> around that, at least at peak times.
Perhaps with a relatively distributed graph and core nodes having many
connections, it's possible that's the range. Either way, it should be
fine. If nLockTime gives you enough entropy to filter by a factor of
hundreds of millions, then even with 10 billion (or 100 billion)
commitments to search through it should be nearly instant. If you're
left with 1000 possible revocation hashes, just look at the first txout
(the non-HTLC payouts to Alice and Bob) and see which revocation fits.
Once you know the exact Commitment number, the rest of the outputs are
easy.
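The nLockTime filtering idea can be sketched as follows. The encoding
here is an assumption for illustration (hiding an obscured commitment
counter in the low bits of a past-timestamp locktime), not the exact
scheme discussed in this thread; `identify_commitment` and the
SHA256-based revocation check are likewise hypothetical names chosen
for the sketch.

```python
import hashlib

# nLockTime values >= 500,000,000 are interpreted as Unix timestamps;
# a timestamp in the past never delays the transaction, so its low
# bits are free to carry data. (The layout below is a hypothetical
# choice, not the scheme from the thread.)
TIMESTAMP_BASE = 500_000_000

def encode_locktime(commit_num: int, obscure: int) -> int:
    # XOR with a shared per-channel secret so outsiders can't read the
    # channel's commitment count off the blockchain.
    return TIMESTAMP_BASE + ((commit_num ^ obscure) & 0xFFFFFF)

def decode_locktime(nlocktime: int, obscure: int) -> int:
    return ((nlocktime - TIMESTAMP_BASE) & 0xFFFFFF) ^ obscure

def identify_commitment(nlocktime: int, obscure: int,
                        first_txout_hash: bytes, revocation_preimages):
    """Recover the commitment number from nLockTime, then confirm it by
    checking that the candidate revocation preimage hashes to the value
    committed in the first (non-HTLC) txout."""
    n = decode_locktime(nlocktime, obscure)
    candidate = revocation_preimages.get(n)
    if candidate is not None and hashlib.sha256(candidate).digest() == first_txout_hash:
        return n
    return None
```

Decoding nLockTime narrows billions of candidate commitments to one,
and the first-txout check confirms the guess before the remaining HTLC
outputs are reconstructed.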
--
Joseph Poon