Alan Reiner [ARCHIVE] on Nostr:
📅 Original date posted: 2014-07-04
📝 Original message: On 07/04/2014 07:15 AM, Andy Parkins wrote:
> On Friday 04 July 2014 06:53:47 Alan Reiner wrote:
>
>> ROMix works by taking N sequential hashes and storing the results into a
>> single N*32 byte lookup table. So if N is 1,000,000, you are going to
>> compute 1,000,000 hashes and store the results into 32,000,000 sequential
>> bytes of RAM. Then you are going to do 1,000,000 lookup operations on that
>> table, using the hash of the previous lookup result to determine the
>> location of the next lookup (within those 32,000,000 bytes). Assuming a
>> strong hash function, this means it's impossible to know in advance what
>> needs to be available in RAM to look up, and it's easiest if you simply
>> hold all 32,000,000 bytes in RAM.
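[Editor's note: a minimal Python sketch of the ROMix structure described in
the quote above. SHA-256 stands in for scrypt's actual BlockMix/Salsa20 core,
and all names here are illustrative, not scrypt's API:]

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def romix(seed: bytes, n: int = 1_000_000) -> bytes:
        # Phase 1: N sequential hashes fill an N-entry table
        # (N * 32 bytes of RAM: 32,000,000 bytes for N = 1,000,000).
        table = []
        h = seed
        for _ in range(n):
            h = sha256(h)
            table.append(h)
        # Phase 2: N data-dependent lookups. Each index comes from the
        # hash of the previous result, so the access pattern cannot be
        # predicted and the whole table must stay resident.
        x = sha256(h)
        for _ in range(n):
            idx = int.from_bytes(x[:8], "big") % n
            x = sha256(x + table[idx])
        return x

[The second loop is what forces the memory cost: because each index depends
on the previous hash output, the accesses are unpredictable, and trading
memory for recomputation of table entries costs time.]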
> My idea wasn't to make hashing memory-hungry; it was to make it IO-hungry. It
> wouldn't be too hard to make an ASIC with 32 MB of RAM, especially if it
> gained you a 1000x advantage over the other miners. It seems that sort of
> solution is exactly the one that Mike Hearn was warning against in his blog.
I think you misunderstood: with a ROMix-like algorithm, each hash
requires a different 32 MB of the blockchain. The reads are uniformly
distributed throughout the blockchain, and there is no way to predict
which 32 MB will be needed until you have actually executed it. If the
difficulty is high enough, your miner is likely to end up going through
the entire X GB blockchain while searching for a good hash, but other
nodes will only need to do 32 MB worth of disk accesses to verify your
answer (and it will be unknown which 32 MB until they perform the
1,000,000 hash+lookup operations on their own X GB blockchain).
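[Editor's note: a sketch of that blockchain-backed variant, under the same
caveats; the names and parameters are illustrative, and a real miner would
read the chunks from disk rather than hold the chain in one bytes object:]

    import hashlib

    CHUNK = 32        # bytes read per lookup
    N = 1_000_000     # lookups per attempt: N * CHUNK = 32,000,000 bytes

    def pow_hash(header: bytes, chain: bytes) -> bytes:
        # Each step hashes the running state together with a 32-byte
        # chunk of the blockchain whose offset depends on the previous
        # hash, so the 32 MB actually read cannot be known in advance.
        x = hashlib.sha256(header).digest()
        span = len(chain) - CHUNK
        for _ in range(N):
            offset = int.from_bytes(x[:8], "big") % span
            x = hashlib.sha256(x + chain[offset:offset + CHUNK]).digest()
        return x

    def verify(header: bytes, chain: bytes, target: int) -> bool:
        # Verifiers repeat the same N hash+lookup steps exactly once,
        # touching only the ~32 MB this particular header selects.
        return int.from_bytes(pow_hash(header, chain), "big") < target

[A miner grinding many candidate headers samples offsets all over the chain,
which pushes it toward keeping the full X GB on fast storage, while a
verifier pays only a single pass of N reads.]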
I think that strikes a good compromise: mining effectively requires
access to 100% of the blockchain, but verifying a block does not
require reading all 20 GB of it.
(Replace N=1,000,000, 32 MB and 20 GB with the appropriately calibrated
numbers in the future)
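[For scale, with the numbers as given: a verifier reads N * 32 bytes =
1,000,000 * 32 = 32,000,000 bytes (about 32 MB) against a roughly 20 GB
chain, i.e. around 1/600th of a full scan.]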