Alan Reiner [ARCHIVE] on Nostr:
📅 Original date posted:2014-07-04
📝 Original message:Just a thought on this -- I'm not saying this is a good idea or a bad
idea, because I have spent about zero time thinking about it, but
something did come to mind as I read this. Reading 20 GB of data for
every hash might be a bit excessive, and as the blockchain grows it
will become infeasible to continue. However, what comes to mind is the
ROMix algorithm defined by Colin Percival, which was the precursor to
scrypt. ROMix is actually what Armory uses for key stretching, because
it's far simpler than scrypt itself while maintaining the memory-hard
properties (the downside is that it's much less flexible in letting the
user trade off compute time against memory usage).
ROMix works by taking N sequential hashes and storing the results into a
single N*32-byte lookup table. So if N is 1,000,000, you compute
1,000,000 hashes and store the results into 32,000,000 sequential bytes
of RAM. Then you do 1,000,000 lookup operations on that table, using
the hash of the previous lookup result to determine the location of the
next lookup (within those 32,000,000 bytes). Assuming a strong hash
function, this means it's impossible to know in advance what needs to
be available in RAM to look up, and it's easiest if you simply hold all
32,000,000 bytes in RAM.
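For illustration, a rough Python sketch of that flow (the exact mixing
step in Percival's specification differs slightly; this is just the
shape of it):

    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def romix(seed: bytes, n: int = 1_000_000) -> bytes:
        # Phase 1: build the N*32-byte table from sequential hashes.
        table = []
        x = H(seed)
        for _ in range(n):
            table.append(x)
            x = H(x)
        # Phase 2: N data-dependent lookups; each index comes from the
        # previous result, so the whole table has to stay resident.
        for _ in range(n):
            i = int.from_bytes(x[:8], "big") % n
            x = H(x + table[i])
        return x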
Something similar could be applied to your idea. We use the hash of
prevBlockHash||nonce as the starting point for 1,000,000 lookup
operations. The output of the previous lookup is used to determine
which block and tx (perhaps which 32-byte chunk within that tx) is
used for the next lookup operation. This means that in order to do the
hashing, you need the entire blockchain available to you, even though
you'll only be using a small fraction of it for each "hash". This might
achieve what you're describing without actually requiring the full 20 GB
of reading on every hash.
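A hypothetical sketch of that in Python -- get_chunk() here is a
stand-in for pulling 32 bytes out of the locally stored chain, not an
existing API:

    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def chained_pow_hash(prev_block_hash: bytes, nonce: bytes,
                         num_blocks: int, get_chunk,
                         n: int = 1_000_000) -> bytes:
        # get_chunk(block_index, offset) is a hypothetical accessor
        # returning 32 bytes of the locally stored blockchain.
        x = H(prev_block_hash + nonce)
        for _ in range(n):
            # The previous result selects which block (and which
            # 32-byte chunk of it) feeds the next hash, so the full
            # chain must be on hand even though each attempt touches
            # only a sliver of it.
            block_index = int.from_bytes(x[:8], "big") % num_blocks
            offset = int.from_bytes(x[8:16], "big")
            x = H(x + get_chunk(block_index, offset))
        return x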
-Alan
On 07/04/2014 06:27 AM, Andy Parkins wrote:
> Hello,
>
> I had a thought after reading Mike Hearn's blog about it being impossible to
> have an ASIC-proof proof of work algorithm.
>
> Perhaps I'm being dim, but I thought I'd mention my thought anyway.
>
> It strikes me that he's right that it's impossible for any algorithm to exist
> that can't be implemented in an ASIC. However, that's only because it's
> trying to pick an algorithm that is CPU-bound. You could protect against ASIC
> mining (or rather, make it irrelevant that it was being used) by making the
> algorithm IO-bound rather than CPU-bound.
>
> For example, what if the proof-of-work hash for a block were no longer just
> "hash of block", which contains the hash of the parent block, but instead were
> hash of
>
> [NEW_BLOCK] [ALL_PREVIOUS_BLOCKS] [NEW_BLOCK]
>
> [ALL_PREVIOUS_BLOCKS] is now 20 GB (from memory) and growing. By prefixing and
> suffixing the new block, you have to feed every byte of the blockchain through
> the hashing engine (the prefix prevents you from caching the intermediate result).
> Whatever bus you're using to feed your high-speed hashing engine, the engine will
> always be faster than the bus -- hence you're now IO-bound, not CPU-bound, and
> any hashing engine will, effectively, be the same.
>
> I'm making the assumption that SHA-256 is not cacheable from the middle
> outwards, so the whole block-chain _has_ to be transferred for every hash.
>
> Apologies in advance if this is a stupid idea.
>
>
>
> Andy
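(For reference, the quoted construction amounts to roughly the
following Python sketch; iter_all_blocks() is assumed here, not an
existing API.)

    import hashlib

    def io_bound_pow_hash(new_block: bytes, iter_all_blocks) -> bytes:
        # iter_all_blocks() is a hypothetical generator yielding every
        # previous block's raw bytes in order.
        h = hashlib.sha256()
        h.update(new_block)                  # prefix: defeats caching a midstate
        for raw_block in iter_all_blocks():  # ~20 GB streamed for every attempt
            h.update(raw_block)
        h.update(new_block)                  # suffix: ties the digest to the new block
        return h.digest()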