2025-05-24 07:45:08

Daniel Wigton on Nostr:

Oh good grief. 😂 Then yeah, it is all explained by your RAM speed; you aren't going to get faster on that machine.

You can run a Mixture of Experts (MoE) model, like Llama 4, to get the number of active parameters down, but it is still going to be slow, and quality will be worse than a dense model like Llama 3.3.

Fast memory is everything for AI workloads.
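As a rough sketch of why memory bandwidth dominates: generating each token means streaming every active weight out of RAM once, so tokens/sec is roughly bandwidth divided by the size of the active weights. The numbers in the sketch below (bandwidth, quantization, active-parameter counts) are illustrative assumptions, not benchmarks of any particular machine or model.

```python
# Back-of-envelope estimate for memory-bandwidth-bound token generation.
# All numbers are illustrative assumptions, not measurements.

def tokens_per_second(active_params_billion: float,
                      bytes_per_param: float,
                      mem_bandwidth_gb_s: float) -> float:
    """Each generated token streams every active weight from RAM once,
    so throughput is roughly bandwidth / active model size."""
    active_bytes_gb = active_params_billion * bytes_per_param
    return mem_bandwidth_gb_s / active_bytes_gb

# Dense ~70B model, 4-bit quantized (~0.5 bytes/param), ~50 GB/s DDR4:
print(tokens_per_second(70, 0.5, 50))   # ~1.4 tok/s

# MoE with ~17B active parameters on the same machine:
print(tokens_per_second(17, 0.5, 50))   # ~5.9 tok/s
```

The MoE comes out faster per token because fewer weights are touched per step, but on slow RAM both stay far below what fast VRAM or high-bandwidth unified memory would give.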
Author public key: npub1w4jkwspqn9svwnlrw0nfg0u2yx4cj6yfmp53ya4xp7r24k7gly4qaq30zp