waxwing on Nostr
Re: the 55MB file generic/custom: it comes from parsing the taproot-only utxo set at a particular block height (you can see that in the filename). It's a list of curve points, each of which is basically P + vJ, where P is the taproot pubkey of the utxo and v is the value of the utxo in satoshis. It's also filtered by a minimum value to keep it from being too large. It should really be half that size (and that's one of the largest I've seen so far), because I never bothered to change the encoding from hex to binary.
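(To make that concrete, here's a minimal Python sketch of computing one keyset entry C = P + vJ, using the pure-Python ecdsa package. Everything here is illustrative, not the tool's code: in particular, J must in reality be a NUMS point whose discrete log with respect to G is unknown, and the hash*G derivation below does NOT have that property; it's demo-only. Presumably the vJ term is what lets the later range proof talk about the value without revealing which utxo is involved.)

    # Sketch of one keyset entry C = P + v*J; illustrative only.
    from hashlib import sha256
    from ecdsa import SECP256k1

    G = SECP256k1.generator
    n = SECP256k1.order

    # WARNING: a real J must be a NUMS point (discrete log w.r.t. G unknown).
    # Deriving it as hash*G, as here, reveals that discrete log; demo only.
    J = (int.from_bytes(sha256(b"demo-second-generator").digest(), "big") % n) * G

    # A made-up taproot output: pubkey P and value v in satoshis.
    P = (int.from_bytes(sha256(b"some-utxo-key").digest(), "big") % n) * G
    v = 1_500_000

    C = P + v * J  # the point that goes into the keyset file
    print(hex(C.x()), hex(C.y()))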
The processing takes time: yes, this pre-processing step, which reads in and modifies the curve points, is currently more friction than we'd like. That one took 4 minutes for a 440K-key set. For some applications it might be fine to do this setup once and then run a daemon in the background, so that you can keep verifying instantly, but for this one, maybe not. As for farming it out to others, I'm not sure there's a way to do that that's more interesting than having the "other" do the whole verification. Maybe? Also, it *might* be possible to make the proof larger (e.g. 1MB instead of 10kB) so that the verifier only needs the tree root, but that's a significantly different algorithm and I'm not sure if/how it would work.
The range: the range here was from a chosen minimum of 1 million sats to a maximum of 1 million + 2^18. The minimum is entirely the prover's choice. The range is a power of 2 because ZKPs of range are most efficiently done as "prove that the number has the following bit representation", hence a power of 2. My feeling is that k + 2^n is flexible *enough*, but with some extra machinery you could make it more flexible (you can also change the base from 2 to 3 or whatever, but I don't think that helps a ton).
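(As a toy illustration of the bit-representation idea, and not the actual proof system: proving v lies in [k, k + 2^n) amounts to exhibiting n bits b_i with v = k + sum(b_i * 2^i); the ZKP then proves each b_i is 0 or 1 without revealing them.)

    # Toy arithmetic behind a [k, k + 2**n) range statement; the real ZKP
    # commits to the bits and proves each is binary, here we just decompose.
    k = 1_000_000  # prover-chosen minimum
    n = 18         # width 2**18

    def bit_decompose(v, k, n):
        d = v - k
        assert 0 <= d < 2**n, "value outside the provable range"
        bits = [(d >> i) & 1 for i in range(n)]
        # The statement the proof encodes: v = k + sum(b_i * 2**i), b_i in {0, 1}.
        assert k + sum(b << i for i, b in enumerate(bits)) == v
        return bits

    print(bit_decompose(1_150_000, k, n))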
Window for cheating: as above, it's a utxo snapshot at a certain block, and it isn't valid for any other time, so the frequency of checks can't be that fast. Now that I think about it, that is a big drawback here if you want real-time checking.
I'm not sure what your "custom to the prover" sentence was really asking. As per the above, the keyset/snapshot is generic for everyone; it's a snapshot at a block height. The prover and verifier have to agree on the block height and the minimum value filter, so that they have the same keyset file. If you try to verify a proof against a different keyset, it just doesn't verify, even if the utxo is shared between the two conflicting keysets, because the keyset is committed to as a tree, like a merkle tree, and the roots won't match.
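(A toy hash-based Merkle tree, not the tool's actual tree construction, is enough to see why: keysets that differ in even one leaf produce different roots, so a membership proof built against one root fails against the other, shared utxo or not.)

    # Toy Merkle tree: two keysets sharing a leaf still have different roots.
    from hashlib import sha256

    def merkle_root(leaves):
        level = [sha256(x).digest() for x in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate last node on odd levels
            level = [sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    shared = b"utxo-point-in-both-snapshots"
    keyset_a = [shared, b"point-a1", b"point-a2"]  # e.g. height H1, min value m1
    keyset_b = [shared, b"point-b1"]               # different height or filter

    # A proof against keyset_a's root cannot verify against keyset_b's root.
    print(merkle_root(keyset_a).hex() != merkle_root(keyset_b).hex())  # True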
Published at 2024-10-05 14:37:56