Andy Parkins [ARCHIVE] on Nostr:
📅 Original date posted: 2014-07-04
📝 Original message: On Friday 04 July 2014 06:53:47 Alan Reiner wrote:
> ROMix works by taking N sequential hashes and storing the results into a
> single N*32 byte lookup table. So if N is 1,000,000, you are going to
> compute 1,000,000 hashes and store the results into 32,000,000 sequential
> bytes of RAM. Then you are going to do 1,000,000 lookup operations on that
> table, using the hash of the previous lookup result to determine the
> location of the next lookup (within those 32,000,000 bytes). Assuming a
> strong hash function, this means it's impossible to know in advance what
> needs to be available in RAM to look up, and it's easiest if you simply
> hold all 32,000,000 bytes in RAM.
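
For concreteness, a minimal Python sketch of the ROMix shape described
above (SHA-256 stands in for scrypt's actual BlockMix; N is illustrative):

import hashlib

def romix(seed: bytes, n: int = 1_000_000) -> bytes:
    # Phase 1: n sequential hashes fill an n*32-byte table.
    table = []
    x = seed
    for _ in range(n):
        x = hashlib.sha256(x).digest()
        table.append(x)
    # Phase 2: n data-dependent lookups; each index is derived from
    # the previous result, so none can be predicted in advance and
    # the whole table effectively has to stay resident in RAM.
    for _ in range(n):
        i = int.from_bytes(x[:8], "big") % n
        x = hashlib.sha256(x + table[i]).digest()
    return x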
My idea wasn't to make hashing memory-hungry; it was to make it IO-hungry. It
wouldn't be too hard to make an ASIC with 32 MB of RAM, especially if it
gained you a 1000x advantage over the other miners. That sort of solution is
exactly the one Mike Hearn was warning against in his blog.
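
Roughly, the IO-hungry shape I have in mind (a sketch only; every name and
parameter here is illustrative, not a concrete proposal) is to force each
hash attempt to make unpredictable reads from a dataset far too large for
on-die memory:

import hashlib

CHUNK = 4096    # bytes read per probe -- illustrative
PROBES = 64     # disk reads per hash attempt -- illustrative

def io_bound_hash(header: bytes, dataset_path: str, dataset_size: int) -> bytes:
    h = hashlib.sha256(header).digest()
    with open(dataset_path, "rb") as f:
        for _ in range(PROBES):
            # Each offset depends on the running hash, so the reads
            # can't be predicted or prefetched; the miner pays for
            # IO bandwidth, not just for silicon.
            offset = int.from_bytes(h[:8], "big") % (dataset_size - CHUNK)
            f.seek(offset)
            h = hashlib.sha256(h + f.read(CHUNK)).digest()
    return h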
> you'll only be using a small fraction of it for each "hash". This might
> achieve what you're describing without actually requiring the full 20 GB
> of reading on every hash.
But we want that read. Remember that the actual hash rate isn't important;
what matters is how hard each hash is to reproduce. If we make it 1000x
harder to do one hash for everybody, we're still just as secure. The
difficulty adjustment algorithm ensures blocks arrive every ten minutes on
average, regardless of hash rate. So we can make hashing harder either by
picking a harder algorithm -- scrypt or Blowfish, say -- or just by upping
the size of the data that needs hashing. The advantage of upping the size of
the input is that, unlike an algorithm change, you can't build a better ASIC
to reduce it.
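
To put numbers on the retargeting argument (the 600-second and 2016-block
figures are Bitcoin's real parameters; the uniform 1000x slowdown is a
hypothetical):

TARGET_SPACING = 600        # seconds per block
RETARGET_BLOCKS = 2016      # blocks per difficulty period

def retarget(old_difficulty: float, actual_timespan: float) -> float:
    # new = old * expected / actual.  Real Bitcoin clamps each
    # adjustment to a factor of 4, so a big shift takes several
    # periods, but the end state is the same.
    expected = TARGET_SPACING * RETARGET_BLOCKS
    return old_difficulty * expected / actual_timespan

# If a harder hash makes every miner 1000x slower, the first period
# takes ~1000x longer; difficulty then falls ~1000x and the ten-minute
# spacing -- and the relative security -- is restored.
print(retarget(1.0, 1000 * TARGET_SPACING * RETARGET_BLOCKS))  # 0.001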
Andy
--
Dr Andy Parkins
andyparkins at gmail.com