MachuPikacchu on Nostr:
When dealing with large amounts of data there are always tradeoffs. We don't really have enough to go on yet. How much data? Gigabytes, terabytes, petabytes, or more? What kind of uptime do you need? Are your LLMs reading it on the fly, or do they need to query across the whole dataset at any given time?
There's no silver bullet. No single database or system is best at every use case.
Published at 2025-05-13 15:44:24
Event JSON
{
  "id": "5d5486fee42fd7cda9f5730f41144205775bba3b73fab57f0cb8e4cdfefd0e9f",
  "pubkey": "1e908fbc1d131c17a87f32069f53f64f45c75f91a2f6d43f8aa6410974da5562",
  "created_at": 1747151064,
  "kind": 1,
  "tags": [
    [
      "e",
      "0a085a3daa63818f5b14d7bb2e15c50e1b8b48f1793a0658250f64b8b40fe1bf",
      "",
      "root"
    ],
    [
      "p",
      "b9e76546ba06456ed301d9e52bc49fa48e70a6bf2282be7a1ae72947612023dc"
    ]
  ],
  "content": "When dealing with large amounts of data there's always tradeoffs. We don't really have enough to go on yet. How much data? Gigabytes, Terabytes, Petabytes or more? What kind of uptime do you need? Are your LLMs reading it on the fly or do they need to query across the whole dataset at any given time?\n\nThere's no silver bullet. No single database or system is best at every use case.",
  "sig": "7b275ef39e401157f27de211b8d547efc49ebec799175ce4b7dbe02a3478cf95699a92f968623087cad2222337a439bd497c6f09dde5b5a09ba52103ef12c6d0"
}
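For readers unfamiliar with the raw event format above: per NIP-01, the "id" field is the SHA-256 hash of a canonical JSON serialization of the event's fields, which is how clients and relays check integrity. The following is a minimal sketch (not part of the post; the file name and function name are illustrative) of how that id can be recomputed and compared against the event shown above.

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01 canonical serialization: [0, pubkey, created_at, kind, tags, content],
    # encoded as JSON with no extra whitespace, then hashed with SHA-256.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"], event["tags"], event["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage (assuming the event JSON above is saved as event.json):
# with open("event.json") as f:
#     event = json.load(f)
# assert nostr_event_id(event) == event["id"]

Verifying the "sig" field additionally requires a Schnorr signature check over that id with the author's pubkey, which is outside the scope of this sketch.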