Doug Hoyte on Nostr
In strfry there's currently no way to do read-time filtering, but I'm going to put some thought into how to do this flexibly (see my sibling reply). Your ruleset description will be quite helpful as a solid use-case - thanks!
Also, your new relay design sounds very cool - I'm looking forward to seeing it.
About using a parallel mmap'ed file to store the event data, I have some experience with this approach and frankly I would not do that except as a last resort.
The biggest downside IMO is that you won't be guaranteeing transactional consistency between the event data and the indices. For instance, what if an update to the sled DB succeeds, but writing the event to the mmap fails? This could happen when you're in low-diskspace conditions, or maybe there's an application bug or server failure and you crash at a bad time. In this case, you'd have a corrupted DB, because there would be pointers into invalid offsets in the mmap. Similarly, deleting an event and recovering the space will become much harder without transactional consistency.
OTOH, one really great thing you can do is directly `writev()` from the mmap to the socket (note `pwritev()` won't work on a socket - sockets aren't seekable, so it fails with ESPIPE). I have had success with this in the past, because the data doesn't even need to be copied to userspace and userspace page-tables don't need updating. In fact, with `sendfile()` and good enough network card drivers, it can actually be copied directly from the kernel's page cache to the network card's memory.
Although possible, I don't actually do this in strfry for a bunch of reasons. First of all, nostr events are usually too small to really make this worthwhile. Also, I don't want to deal with the complexity of long-running transactions in the case of a slow-reading client. Lastly, this usually isn't possible anyway because of websocket compression and strfry's feature where events can be stored zstd compressed on disk.
Anyway, on balance I think it's preferable to store the event data in the same DB as the indices. With LMDB you get pretty much all the benefits (and downsides) of a mmap() anyway. I don't know about sled, I haven't looked into that before.
About serialisation, I quickly looked at speedy and from what I can tell it seems like it's pretty simple and concatenates the struct's fields directly in-order (?). That may work pretty well in some cases, but let me elaborate a bit on my choice of flatbuffers.
flatbuffers have an offsets table (like a "vtable" if you're familiar with the C++ terminology). This lets you directly access a single field from the struct without having to touch the rest of the struct. For instance, if you wanted to extract a single string from a large object, you look up its offset in the vtable and directly access that memory. This has the advantage that none of the rest of the memory of the struct needs to be touched. If you're using a mmap like LMDB, then some of the record may not even need to be paged in.
Typically you will never fully "deserialise" the record - that doesn't even really make sense in flatbuffers because there is no deserialised format (or rather, the encoded format is the same as the "deserialised" one). This means that getting some information out of a flatbuffer does not require any heap memory allocations. Additionally, small fixed-size fields are typically stored together (even if there is some long string "in between" them), so loading one will often as a side-effect cause the others to be paged in/loaded as well, and live in the same CPU cache line.
I may be wrong, but speedy looks more like protobufs where you have to traverse and decode the entire struct, and allocate memory for each field before you can access a single field in the record. In the case where you wanted to read just a couple fields (pretty common for a DB record), this could be pretty wasteful. Again, the data required for indexing nostr events is fairly small so this may not be a big deal -- my choice of default is from experience with much larger records.
One more thing: flatbuffers is quite interesting (and I think unique?) in that it allows you to separate the error checking from the access/deserialisation. For example, when you deserialise you typically assume this input is untrusted, so you'll be doing bounds-checking, field validation, etc. However, in the case where the records are purely created and consumed internally, these checks are unnecessary. This is one reason flatbuffers is so effective for the use-case of database records.
Sorry for this big wall of text!
Published at 2023-05-07 15:45:51