Andreas Petersson [ARCHIVE] on Nostr:
📅 Original date posted:2012-07-23
📝 Original message: Some concerns regarding Bloom filters. I talked with Stefan Thomas at
the Hackathon Berlin about this.
I tried to follow the discussion closely, but I have not taken a look at
the code yet (is there already an implementation?), so please correct me
if I got something wrong.
The way the Bloom filters are planned now, this requires a complicated
setup: basically, the client will ask the server to replay the whole
blockchain, but filtered.
This is not optimal, for the following reasons:
It will require the server to do a full scan of its data and only
filter out the non-matching entries.
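
For illustration, a minimal sketch of what that filtered replay amounts to
on the server side. The types and the matches() check are stand-ins invented
for this sketch, not the actual protocol or reference-client code:

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the node's data model; not the real classes.
interface Filter { boolean matches(Tx tx); }
interface Tx     { }
interface Block  { List<Tx> getTransactions(); }

class FilteredReplay {
    // Full scan: every block and transaction is visited; the filter only
    // decides what gets sent back to the client, it does not narrow the scan.
    static List<Tx> replay(List<Block> chain, Filter filter) {
        List<Tx> result = new ArrayList<Tx>();
        for (Block block : chain) {
            for (Tx tx : block.getTransactions()) {
                if (filter.matches(tx)) {
                    result.add(tx);
                }
            }
        }
        return result;
    }
}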
Really lightweight clients (like Bitcoincard), clients with shared
private keys (Electrum-style), or brainwallets will quite often ask
"supernodes" the following question: given my public keys/addresses,
what is the list of unspent outputs? I think it would make sense to
include such a command, instead of or in addition to filterload/filterinit.
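
A rough sketch of what such a request/response pair could look like; every
name here is made up for illustration, nothing like this exists in the
protocol today:

import java.util.List;

// Purely hypothetical message pair for a "give me my unspent outputs"
// command; field names are invented.
class GetUnspentRequest {
    List<byte[]> addressHashes;   // hash160 of each address the client owns
}

class GetUnspentResponse {
    static class Utxo {
        byte[] txId;              // transaction containing the output
        int    outputIndex;       // index of the output in that transaction
        long   valueSatoshis;     // amount, so thin clients can sum a balance
        byte[] scriptPubKey;      // needed to spend the output later
    }
    List<Utxo> unspentOutputs;
}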
And perhaps more severe: as far as I understand classic Bloom filters,
the server has no way of indexing its data for the expected requests.
There is simply no data structure (or maybe it has to be invented) that
allows the use of an index for queries by Bloom filters of *varying
length* with a generic hashing method.
I'm not sure what a really efficient data structure for this kind of
query would be, but I think it should be possible to find a good compromise
between query flexibility, server load, and client privacy.
One possible scheme looks like this:
The client takes the list of addresses he is interested in. He hashes each
of them into a fixed-length bit array (a Bloom filter) of, for example,
64 KiB, and combines them with | (bitwise OR), so each address adds more 1 bits.
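
A minimal sketch of that client-side construction, using java.util.BitSet
and SHA-256 as a stand-in for whatever hash functions would actually be
agreed on (the filter size and the number of hash functions are arbitrary
choices for the sketch):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.BitSet;

class ClientFilterBuilder {
    static final int FILTER_BITS = 64 * 1024 * 8;   // 64 KiB, value from the mail
    static final int HASH_FUNCS  = 4;               // arbitrary for this sketch

    // OR the bit pattern of every address into one fixed-length filter.
    static BitSet build(Iterable<String> addresses) throws Exception {
        BitSet filter = new BitSet(FILTER_BITS);
        for (String addr : addresses) {
            filter.or(bitsForAddress(addr));
        }
        return filter;
    }

    // Derive HASH_FUNCS bit positions per address; SHA-256 over
    // (index || address) is just a placeholder hashing scheme.
    static BitSet bitsForAddress(String addr) throws Exception {
        BitSet bits = new BitSet(FILTER_BITS);
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        for (int i = 0; i < HASH_FUNCS; i++) {
            sha.reset();
            sha.update((byte) i);
            byte[] d = sha.digest(addr.getBytes(StandardCharsets.UTF_8));
            int pos = (((d[0] & 0xff) << 16) | ((d[1] & 0xff) << 8) | (d[2] & 0xff)) % FILTER_BITS;
            bits.set(pos);
        }
        return bits;
    }
}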
The server maintains a binary tree data structure of unspent outputs,
arranged by the Bloom filter bits.
To build the tree, the server would need to calculate the 64 KiB bit
pattern for each address and arrange them in a binary tree. That way it
can easily traverse the tree for a given Bloom query.
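
To make that concrete, here is one hypothetical way such a tree could be
traversed: one filter bit per level, and a 1-branch only has to be followed
where the query filter also has a 1, so non-matching subtrees are pruned.
This is a sketch of the idea, not an existing data structure in any client:

import java.util.BitSet;
import java.util.List;

// One node per filter bit position; leaves carry the unspent outputs of
// the address whose bit pattern led there. Entirely hypothetical layout.
class FilterTreeNode {
    FilterTreeNode zeroChild;    // addresses whose bit at this depth is 0
    FilterTreeNode oneChild;     // addresses whose bit at this depth is 1
    List<Object>   utxosAtLeaf;  // only set on leaf nodes

    // Collect every stored entry whose address bits are all covered by the
    // query: where the query bit is 0, addresses with a 1 bit cannot match,
    // so the one-branch is pruned.
    void collect(BitSet query, int depth, List<Object> out) {
        if (utxosAtLeaf != null) {
            out.addAll(utxosAtLeaf);
            return;
        }
        if (zeroChild != null) {
            zeroChild.collect(query, depth + 1, out);   // 0 bit always allowed
        }
        if (oneChild != null && query.get(depth)) {
            oneChild.collect(query, depth + 1, out);    // 1 bit only if query has it
        }
    }
}

Building the tree would mean inserting each unspent output under the bit
pattern of its address, one level per bit; the sketch only shows the query side.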
If a client wishes to query more broadly, he can calculate the 64 KiB Bloom
filter and afterwards fill up the last 50% (or 95%) of the bits with 1.
The trailing 1 bits don't even need to be transmitted to the server when
the client is querying. Of course, a more privacy-concerned client could
also fill up random bits with 1 instead, which would not change much in
practice.
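
Again as a sketch, the broadening and the truncated transmission with the
64 KiB figure from above (the fractions and helper names are made up):

import java.util.BitSet;

class QueryBroadening {
    static final int FILTER_BITS = 64 * 1024 * 8;

    // Fill the trailing fraction of the filter with 1 bits to widen the
    // query (and hide which addresses the client really cares about).
    static BitSet broaden(BitSet filter, double trailingOnesFraction) {
        BitSet widened = (BitSet) filter.clone();
        int firstOne = (int) (FILTER_BITS * (1.0 - trailingOnesFraction));
        widened.set(firstOne, FILTER_BITS);   // e.g. last 50% or 95% all ones
        return widened;
    }

    // Only the leading part before the run of trailing ones has to be sent;
    // the server can reconstruct the rest as "all 1".
    static byte[] truncateForWire(BitSet widened, double trailingOnesFraction) {
        int bitsToSend = (int) (FILTER_BITS * (1.0 - trailingOnesFraction));
        return widened.get(0, bitsToSend).toByteArray();
    }
}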
The value of 64 KiB is just plucked out of thin air.
According to my experimentation using BloomFilter from Guava, currently
even 8 KiB would be sufficient to have a 3% false positive rate for the
40,000 active addresses we have right now.
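
For reference, the kind of experiment I mean looks roughly like this with
Guava's BloomFilter; the address sources are placeholders:

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

class FilterSizeExperiment {
    public static void main(String[] args) {
        // Ask Guava for a filter dimensioned for ~40,000 entries at a 3%
        // false positive rate, then load it with the active addresses.
        BloomFilter<byte[]> filter =
                BloomFilter.create(Funnels.byteArrayFunnel(), 40000, 0.03);

        for (byte[] addressHash : loadActiveAddressHashes()) {   // placeholder source
            filter.put(addressHash);
        }

        // Probe with addresses that were NOT inserted to measure the
        // observed false positive rate.
        int falsePositives = 0, probes = 100000;
        for (byte[] probe : randomAddressHashes(probes)) {       // placeholder source
            if (filter.mightContain(probe)) {
                falsePositives++;
            }
        }
        System.out.println("observed fp rate: " + (double) falsePositives / probes);
    }

    // Placeholders so the sketch stays short; real code would read these
    // from the chain / generate random hash160 values.
    static Iterable<byte[]> loadActiveAddressHashes() { return java.util.Collections.emptyList(); }
    static Iterable<byte[]> randomAddressHashes(int n) { return java.util.Collections.emptyList(); }
}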
Someone more familiar with hashing should please give an opinion on whether
cutting a Bloom filter in half has any bad consequences.
Andreas