2024-11-24 18:22:53
in reply to

jb55 on Nostr: mainly llama atm, but been playing with others. I want to try qwen ...

I noticed llama.cpp supports ROCm (AMDGPU) now! I can sample the 8B-parameter Llama 3 model with my 8 GB VRAM graphics card! It's fast! Local AI FTW.
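(For rough context, a back-of-envelope sketch of why an 8B-parameter model fits in 8 GB of VRAM: it only works because the weights are quantized. The bits-per-parameter figures below are approximations I'm assuming for common quant levels, not numbers from the note, and the math ignores KV cache and runtime overhead.)

```python
# Rough VRAM math for model weights: params * bits-per-param / 8 bits-per-byte.
# Quant levels (fp16, ~8-bit, ~4.5-bit) are assumed approximations.
def model_size_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight size in GB (ignores KV cache and overhead)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(f"fp16  : {model_size_gb(8, 16):.1f} GB")   # far too big for 8 GB VRAM
print(f"~8-bit: {model_size_gb(8, 8):.1f} GB")    # borderline
print(f"~4-bit: {model_size_gb(8, 4.5):.1f} GB")  # fits, with room left for context
```

So a roughly 4-bit quant of an 8B model needs about 4.5 GB for weights, which is why it runs comfortably on an 8 GB card.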

Author Public Key
npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s