Gregory Maxwell [ARCHIVE] on Nostr:
📅 Original date posted: 2015-01-23
📝 Original message: On Fri, Jan 23, 2015 at 5:40 PM, slush <slush at centrum.cz> wrote:
> Yes, the step you're missing is "and build the table". Dynamic memory
> allocation is something you want to avoid, as well as any artificial
> restrictions on the number of inputs or outputs. The current solution
> is slow, but there's really no limitation on tx size.
>
> Plus there are significant memory restrictions in the embedded world.
> TREZOR actually uses a pretty powerful (and expensive) MCU just because
> it needs to do such validations and calculate such hashes. With
> SIGHASH_WITHINPUTVALUE or similar we could cut hardware costs
> significantly.
I'm quite familiar with embedded development :), and indeed the TREZOR
MCU is what I would generally consider (over-)powered, which is why I
was somewhat surprised by the numbers; I'm certainly not expecting you
to perform dynamic allocation... it just wasn't clear to me where the
40 minutes came from, and I was trying to understand. Using a table to
avoid retransmitting reused input transactions is just an optimization
and can be done in constant memory (e.g. falling back to retransmission
once the table fills; rough sketch below).
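
To make that concrete, here's roughly the sort of constant-memory table
I mean (all names and sizes here are mine, not from any actual
firmware): a fixed array of prev-tx hashes already streamed, and when
it's full you just let the host resend, so correctness never depends on
the table.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define SEEN_SLOTS 16               /* fixed at build time; no malloc */

    struct seen_table {
        uint8_t txid[SEEN_SLOTS][32];   /* prev txs already streamed+verified */
        uint8_t used;
    };

    /* Returns true if this prevout's tx was already streamed, so the host
     * can skip resending it.  Otherwise records the txid if a slot is
     * free; when the table is full we simply return false and the host
     * retransmits -- memory stays constant, only bandwidth suffers. */
    static bool seen_check_or_insert(struct seen_table *t,
                                     const uint8_t txid[32])
    {
        for (uint8_t i = 0; i < t->used; i++)
            if (memcmp(t->txid[i], txid, 32) == 0)
                return true;
        if (t->used < SEEN_SLOTS)
            memcpy(t->txid[t->used++], txid, 32);
        return false;
    }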
So what I'm understanding now is that you stream the transaction along
with its input transactions interleaved, in order to reduce the memory
requirement to two midstates and a value accumulator, at the cost of
resending those input transactions... So for the worst-case transaction
(since you can't fit more than about 800 inputs at the maximum
transaction size), with each input spending from a 100kB input
transaction (one or more distinct ones; even a single one would be
resent for every input), you might send about 80MBytes of data
(800 x 100kB), which could take half an hour if hashing runs at 45kB/s
or slower?

(If so, okay, then there isn't another thing that I was missing.)
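
A sketch of the streaming state as I'm picturing it, assuming a
trezor-crypto-style incremental SHA-256 interface (sha256_Init /
sha256_Update / sha256_Final); the byte-level transaction parsing is
omitted and every name below is hypothetical:

    #include <stdint.h>
    #include <string.h>
    #include "sha2.h"  /* assumed trezor-crypto style incremental SHA-256 */

    /* The entire per-session state: two midstates plus a value
     * accumulator, constant regardless of tx size or input count. */
    struct stream_state {
        SHA256_CTX tx_to_sign;   /* midstate over the tx being signed */
        SHA256_CTX prev_tx;      /* midstate over the prevout tx replayed */
        uint64_t   in_value_sum; /* sum of values of verified inputs */
    };

    /* The host streams each referenced transaction in full; the device
     * only hashes the bytes (and, not shown, parses the referenced
     * output's value out of the stream as the bytes go by). */
    static void prev_tx_chunk(struct stream_state *s,
                              const uint8_t *p, size_t n)
    {
        sha256_Update(&s->prev_tx, p, n);
    }

    /* End of one replayed prevout tx: finish the double-SHA256, check it
     * against the txid the input claims to spend, then credit the value
     * that was parsed out of the streamed bytes themselves -- never a
     * value the host merely asserted.  Returns 0 on success. */
    static int prev_tx_done(struct stream_state *s,
                            const uint8_t expected_txid[32],
                            uint64_t parsed_output_value)
    {
        uint8_t d[32];
        SHA256_CTX h2;

        sha256_Final(&s->prev_tx, d);
        sha256_Init(&h2);
        sha256_Update(&h2, d, 32);
        sha256_Final(&h2, d);
        if (memcmp(d, expected_txid, 32) != 0)
            return -1;                    /* host sent the wrong tx */
        s->in_value_sum += parsed_output_value;
        sha256_Init(&s->prev_tx);         /* ready for the next prevout */
        return 0;
    }

    /* Worst case with this scheme: ~800 inputs fit in a 100kB tx, each
     * forcing a replay of up to a 100kB prev tx: 800 * 100kB = 80MB,
     * and 80MB / 45kB/s ~= 1800 seconds, i.e. half an hour. */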