Jamie on Nostr
2024-12-22 12:25:28

I'd previously been using the #LocalAI Vulkan container for #LLM inference. Recently I got #ROCm working on my RX 5600 XT thanks to #Debian, and it's 2x to 4x faster than the Vulkan container. Cool. Today I compiled the #llamacpp Vulkan server from source, and inference runs at the same speed as ROCm. I wonder what's going on here...
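One way to pin down comparisons like this is to benchmark each build against the same model and prompt through llama.cpp's HTTP server, which reports its own per-request timings. Below is a minimal Python sketch under some assumptions: a llama.cpp server built with the backend under test is listening on localhost:8080, and the /completion endpoint returns a "timings" object with a per-second decode rate (as the llama.cpp server does); the URL, prompt, and run count are arbitrary choices for illustration.

import json
import urllib.request

# Assumed address of a running llama.cpp server (llama-server).
SERVER = "http://localhost:8080/completion"

def tokens_per_second(prompt: str, n_predict: int = 128) -> float:
    """Request a completion and return the server-reported decode speed."""
    payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    req = urllib.request.Request(
        SERVER, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The server's response includes per-request timings; we read the
    # tokens-generated-per-second figure for the decode phase.
    return body["timings"]["predicted_per_second"]

if __name__ == "__main__":
    # Average a few runs to smooth out warm-up and scheduling noise.
    speeds = [tokens_per_second("Explain Vulkan in one paragraph.") for _ in range(3)]
    print(f"decode speed: {sum(speeds) / len(speeds):.1f} tok/s over {len(speeds)} runs")

Running the same script against the ROCm build and the Vulkan build makes "same speed as ROCm" directly measurable, since the numbers come from the server's own timers rather than wall-clock guessing.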
Author public key: npub1cfn99j3q66cnjfnqdgeen9uz6sf2knpmdr7mmgf3y3wrextaypcqml4a3n