Eugen Leitl [ARCHIVE] on Nostr:
📅 Original date posted:2014-07-04
📝 Original message:On Fri, Jul 04, 2014 at 06:53:47AM -0400, Alan Reiner wrote:
> Something similar could be applied to your idea. We use the hash of a
> prevBlockHash||nonce as the starting point for 1,000,000 lookup
> operations. The output of the previous lookup is used to determine
> which block and tx (perhaps which chunk of 32 bytes within that tx) is
> used for the next lookup operation. This means that in order to do the
> hashing, you need the entire blockchain available to you, even though
> you'll only be using a small fraction of it for each "hash". This might
> achieve what you're describing without actually requiring the full 20 GB
> of reading on every hash.
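
[Editor's sketch, not part of the original message.] The quoted scheme is concrete enough to illustrate in a few lines of Python. This is a minimal sketch under stated assumptions: the blockchain is modeled as one flat byte string rather than being indexed by block and transaction, SHA-256 is used as the hash (the post does not name one), and all identifiers are made up for illustration.

import hashlib

# Hypothetical sketch of the chained-lookup proof-of-work described above.
CHUNK = 32             # bytes fetched per lookup (the 32-byte chunks mentioned)
N_LOOKUPS = 1_000_000  # lookups per hash attempt, as in the quoted message

def chained_lookup_hash(prev_block_hash: bytes, nonce: int, blockchain: bytes) -> bytes:
    """H(prevBlockHash || nonce) seeds a chain of data-dependent lookups."""
    state = hashlib.sha256(prev_block_hash + nonce.to_bytes(8, "little")).digest()
    for _ in range(N_LOOKUPS):
        # The previous digest decides where the next chunk comes from, so the
        # access pattern is unpredictable until it is actually computed.
        offset = int.from_bytes(state[:8], "little") % (len(blockchain) - CHUNK)
        state = hashlib.sha256(state + blockchain[offset:offset + CHUNK]).digest()
    return state

A miner would iterate nonces until the final digest meets the target. Each attempt reads only about 32 MB in total, but because the offsets are not known in advance, the whole chain has to be kept available, which is the point of the proposal.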
Anything involving lots of unpredictable memory accesses to a large
chunk of fast memory is unASICable. That data vector could be derived
by the same means as a one-time pad, and loaded and locked into
memory after boot. If you make it large enough it won't profit from
embedded RAM bandwidth/speedup. The only way to speed up would be clustering,
which doesn't offer economies of scale.
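
[Editor's sketch, not part of the original message.] To make the "derived like a one-time pad, then locked into memory" idea concrete, here is a rough Python sketch under assumptions the post does not state: Linux/glibc for mlock(), SHAKE-256 as the seed expander, and a deliberately small vector so it runs on ordinary hardware; locking even this much memory may require raising RLIMIT_MEMLOCK.

import ctypes
import ctypes.util
import hashlib

# Assumptions (not from the post): Linux/glibc, SHAKE-256 as expander,
# and a small SIZE; the real data vector would be much larger.
SIZE = 64 * 1024 * 1024
seed = b"seed established at boot, one-time-pad style"

# Derive the whole data vector deterministically from the seed.
vector = hashlib.shake_256(seed).digest(SIZE)

# Pin the buffer so it cannot be paged out after boot.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
buf = (ctypes.c_char * SIZE).from_buffer_copy(vector)
if libc.mlock(buf, SIZE) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed")

The argument in the reply is that once the vector is too large to fit in on-die memory, the data-dependent accesses are bounded by commodity DRAM rather than by anything an ASIC can specialize, and clustering only multiplies machines without lowering the per-hash cost.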
Published at 2023-06-07 15:23:34
Event JSON
{
"id": "c9bdafac16d5c542e5f27836889f0eefd1551a08b5bf23d8c36d335d63e5f403",
"pubkey": "e7bcf4d0ea05abfd67a8230b26a8de70392ff0b8b26ddf0f858224059aff100b",
"created_at": 1686151414,
"kind": 1,
"tags": [
[
"e",
"a9e20ae1b6e168ee862b71e56f89b4ed7603371912b3a24e648d19074482f62c",
"",
"root"
],
[
"e",
"65bdd68aeeec5e02b83d9ab679a79468dd4b27cef95f33c86535f4025fef795e",
"",
"reply"
],
[
"p",
"86f42bcb76a431c128b596c36714ae73a42cae48706a9e5513d716043447f5ec"
]
],
"content": "📅 Original date posted:2014-07-04\n📝 Original message:On Fri, Jul 04, 2014 at 06:53:47AM -0400, Alan Reiner wrote:\n\n\u003e Something similar could be applied to your idea. We use the hash of a\n\u003e prevBlockHash||nonce as the starting point for 1,000,000 lookup\n\u003e operations. The output of the previous lookup is used to determine\n\u003e which block and tx (perhaps which chunk of 32 bytes within that tx) is\n\u003e used for the next lookup operation. This means that in order to do the\n\u003e hashing, you need the entire blockchain available to you, even though\n\u003e you'll only be using a small fraction of it for each \"hash\". This might\n\u003e achieve what you're describing without actually requiring the full 20 GB\n\u003e of reading on ever hash.\n\nAnything involving lots of unpredictable memory accesses to a large\nchunk of fast memory is unASICable. That data vector could be derived\nby the same means as an one time pad, and loaded and locked into\nmemory after boot. If you make it large enough it won't profit from\nembedded RAM bandwidth/speedup. The only way to speed up would be clustering,\nwhich doesn't offer economies of scale.",
"sig": "1c2bdcbc0d12df254b202f32dc9761ba7e2d34b702fa816bd4fe5693299004e711951a11b19e148159020c5c58758d2461688d5fd07e3ac995e6714ec3342ff3"
}