Karl Johan Alm [ARCHIVE] on Nostr:
📅 Original date posted: 2017-06-04
📝 Original message: On Sat, Jun 3, 2017 at 2:55 AM, Alex Akselrod via bitcoin-dev
<bitcoin-dev at lists.linuxfoundation.org> wrote:
> Without a soft fork, this is the only way for light clients to verify that
> peers aren't lying to them. Clients can request headers (just hashes of the
> filters and the previous headers, creating a chain) and look for conflicts
> between peers. If a conflict is found at a certain block, the client can
> download the block, generate a filter, calculate the header by hashing
> together the previous header and the generated filter, and ban any peers
> that don't match. A full node could prune old filters if desired and
> recalculate them as necessary, as long as it keeps the filter header chain
> info; really old filters are unlikely to be requested by correctly written
> software, but you can't guarantee every client will follow best practices
> either.
Ahh, so you actually make a separate digest chain with prev hashes and
everything. Once/if committed digests are soft-forked in, it seems a
bit of overkill, but maybe it's worth it. (I was always assuming committed
digests in the coinbase would come after people started using this, and
that people could just ask a couple of random peers for the digest
hash and check that everyone gave the same answer as the hash of the
downloaded digest...)
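
To make that concrete, here is a minimal sketch of the header link being
described: each filter header commits to the current filter's hash and the
previous header. The double-SHA256 and the concatenation order are my
assumptions for illustration, not something fixed by the messages above
(requires OpenSSL; build with -lcrypto):

/* Minimal sketch of the filter header chain. */
#include <openssl/sha.h>
#include <stddef.h>
#include <string.h>

static void double_sha256(const unsigned char *data, size_t len,
                          unsigned char out[SHA256_DIGEST_LENGTH])
{
    unsigned char tmp[SHA256_DIGEST_LENGTH];
    SHA256(data, len, tmp);
    SHA256(tmp, sizeof(tmp), out);
}

/* header_n = H(filter_hash_n || header_(n-1)); a client keeping only
 * these 32-byte headers can verify any filter it downloads later. */
void filter_header(const unsigned char filter_hash[32],
                   const unsigned char prev_header[32],
                   unsigned char out[32])
{
    unsigned char buf[64];
    memcpy(buf, filter_hash, 32);
    memcpy(buf + 32, prev_header, 32);
    double_sha256(buf, sizeof(buf), out);
}

On a conflict, the client recomputes filter_header() from the block it
fetched and bans whichever peer's header chain doesn't reproduce it.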
> The simulations are based on completely random data within given parameters.
I noticed an increase in false-positive (FP) hits when using real data
sampled from actual scriptPubKeys and the like; address reuse and other
oddities skew the results. See "lies.h" in the GitHub repo for the
experiments, and the initial part of main in chainsim.c, where wallets
pick up random outputs from the chain.
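
As a toy illustration of that effect (mine, not code from the repo): probe
a one-hash filter with fresh random scripts versus a small, heavily reused
set, and the reused probes make FP hits cluster rather than average out.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define FILTER_BITS (1u << 20)   /* 1 Mbit toy filter */

static unsigned char filter[FILTER_BITS / 8];

/* FNV-1a, a stand-in hash for the sketch */
static uint32_t fnv1a(const void *data, size_t len) {
    const unsigned char *p = data;
    uint32_t h = 2166136261u;
    while (len--) { h ^= *p++; h *= 16777619u; }
    return h;
}

static void filter_add(const void *item, size_t len) {
    uint32_t b = fnv1a(item, len) % FILTER_BITS;
    filter[b >> 3] |= (unsigned char)(1u << (b & 7));
}

static int filter_match(const void *item, size_t len) {
    uint32_t b = fnv1a(item, len) % FILTER_BITS;
    return (filter[b >> 3] >> (b & 7)) & 1;
}

int main(void) {
    /* "Wallet": 10,000 watched scripts inserted into the filter. */
    for (uint32_t i = 0; i < 10000; i++)
        filter_add(&i, sizeof(i));

    /* Probe with scripts NOT in the wallet. The reused population has
     * only 100 distinct values, so its FP hits correlate: each reused
     * script either always matches or never does, instead of averaging
     * out the way fully random simulations suggest. */
    uint32_t fp_uniform = 0, fp_reused = 0;
    for (uint32_t i = 0; i < 1000000; i++) {
        uint32_t u = 10000 + (uint32_t)rand();          /* fresh script  */
        uint32_t r = 10000 + (uint32_t)(rand() % 100);  /* reused script */
        fp_uniform += (uint32_t)filter_match(&u, sizeof(u));
        fp_reused  += (uint32_t)filter_match(&r, sizeof(r));
    }
    printf("FP hits, uniform: %u  reused: %u\n", fp_uniform, fp_reused);
    return 0;
}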
> I will definitely try to reproduce my experiments with Golomb-Coded
> sets and see what I come up with. It seems like you've got a little
> less than half the size of my digests for 1-block digests but I
> haven't tried making digests for all blocks (and lots of early blocks
> are empty).
>
>
> Filters for empty blocks only take a few bytes and sometimes zero when the
> coinbase output is a burn that doesn't push any data (example will be in the
> test vectors that I'll have ready shortly).
I created digests for all blocks up to block #469805 and ended up with
5.8 GB, which is 1.1 GB less than what you have, but they may be worse
performance-wise on false-positive rates and the like.
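
For reference, since sizes are being compared: a Golomb-coded set hashes
its N items into [0, N * 2^P), sorts them, and writes the successive deltas
as Golomb-Rice codes. A minimal encoder sketch (names and parameters are
mine, not from either implementation):

#include <stddef.h>
#include <stdint.h>

/* Simple MSB-first bit writer over a caller-provided, zeroed buffer. */
typedef struct { unsigned char *buf; size_t bitpos; } bitwriter;

static void put_bit(bitwriter *w, int b) {
    if (b) w->buf[w->bitpos >> 3] |= (unsigned char)(0x80u >> (w->bitpos & 7));
    w->bitpos++;
}

static void put_bits(bitwriter *w, uint64_t v, int n) {
    for (int i = n - 1; i >= 0; i--)
        put_bit(w, (int)((v >> i) & 1));
}

/* Golomb-Rice code for x with parameter P: the quotient x >> P in unary
 * (that many 1 bits followed by a 0), then the low P bits of x verbatim. */
static void rice_encode(bitwriter *w, uint64_t x, int P) {
    for (uint64_t q = x >> P; q > 0; q--)
        put_bit(w, 1);
    put_bit(w, 0);
    put_bits(w, x & ((1ULL << P) - 1), P);
}

/* Encode a GCS: items must already be hashed and sorted; the deltas are
 * close to geometrically distributed, which is exactly what Rice codes
 * compress well. Returns the encoded size in bits. */
static size_t gcs_encode(bitwriter *w, const uint64_t *sorted, size_t n, int P) {
    uint64_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        rice_encode(w, sorted[i] - prev, P);
        prev = sorted[i];
    }
    return w->bitpos;
}

Each delta then costs roughly P + 2 bits on average, which is why the
total size is so sensitive to the chosen false-positive parameter.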
> How fast are these to create? Would it make sense to provide digests
> on demand in some cases, rather than keeping them around indefinitely?
>
>
> They're pretty fast and can be pruned if desired, as mentioned above, as
> long as the header chain is kept.
For comparison, creating the digests above (469,805 of them) took roughly
30 minutes on my end, but that's with the kstats format, so it would
probably take longer on an actual node (I should get around to profiling
that...).