2024-10-25 12:12:49

girinovey on Nostr:

About 800 MB of memory after 20 hours:

- 7 ms on average for each full scan
- lookups by ID take 14 microseconds
- lookups by kind and author (the most common query) take 18 microseconds

(used simple maps for the indexes)
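The map-based indexes above can be sketched roughly as follows. This is a minimal illustration of the idea, not the actual slicestore code: the `Event` struct, the composite kind+author key, and all names here are my own stand-ins (the real relay would use the full event type from go-nostr).

```go
package main

import "fmt"

// Event is a minimal stand-in for a Nostr event (assumption: the
// real store works on the full go-nostr event type).
type Event struct {
	ID     string
	Kind   int
	Author string
}

// MemStore sketches "simple maps for the indexes": one map for O(1)
// lookups by ID, one keyed by kind+author for the most common query.
type MemStore struct {
	byID         map[string]*Event
	byKindAuthor map[string][]*Event
}

func NewMemStore() *MemStore {
	return &MemStore{
		byID:         make(map[string]*Event),
		byKindAuthor: make(map[string][]*Event),
	}
}

// kindAuthorKey builds a composite index key; the "kind:author"
// encoding is an illustrative choice, not taken from the repo.
func kindAuthorKey(kind int, author string) string {
	return fmt.Sprintf("%d:%s", kind, author)
}

func (s *MemStore) Save(ev *Event) {
	s.byID[ev.ID] = ev
	k := kindAuthorKey(ev.Kind, ev.Author)
	s.byKindAuthor[k] = append(s.byKindAuthor[k], ev)
}

func (s *MemStore) ByID(id string) (*Event, bool) {
	ev, ok := s.byID[id]
	return ev, ok
}

func (s *MemStore) ByKindAuthor(kind int, author string) []*Event {
	return s.byKindAuthor[kindAuthorKey(kind, author)]
}

func main() {
	s := NewMemStore()
	s.Save(&Event{ID: "abc", Kind: 1, Author: "npub1..."})
	ev, ok := s.ByID("abc")
	fmt.Println(ok, ev.Kind) // true 1
}
```

Both lookups are single map accesses, which is consistent with the microsecond-level numbers above; the full scan is the only path that has to walk every event.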

Load on the Raspberry Pi is not even close to what it reaches when using Badger or LMDB (about 0.2, with Albyhub responsible for most of the CPU usage, not the relay).

It just doesn't seem able to hold more than a few days of data; it's growing too fast. I'll keep you posted.

(Data is saved to disk in JSON format every 3 minutes; each save takes 2.5 seconds.)

It seems running everything in memory is really fast and consumes fewer resources, but it needs lots of RAM :(

Code: https://github.com/girino/mem-relay (still full of legacy code all around, since I copied it from another project; the only important part is the "slicestore" folder)

#nostr #relay #khatru #memonly #slicestore

Author Public Key
npub18lav8fkgt8424rxamvk8qq4xuy9n8mltjtgztv2w44hc5tt9vets0hcfsz