Henry Saputra on Nostr: Understanding how LLM inference works with llama.cpp
Published at 2024-11-29 06:41:27
Event JSON
{
"id": "90f098ad453290e0d2614cf4e8f4901872c282a22a50a0407338e6b9ebe0b332",
"pubkey": "113ba2d5aa88e97df8be825240ab525ca052f7bc6bb8eb05d62a87bfcbd38f2d",
"created_at": 1732862487,
"kind": 1,
"tags": [
[
"proxy",
"https://sigmoid.social/users/Kingwulf/statuses/113564875951188423",
"activitypub"
]
],
"content": "Understanding how LLM inference works with llama.cpp\n\nhttps://www.omrimallis.com/posts/understanding-how-llm-inference-works-with-llama-cpp",
"sig": "b1a3f78fec8e711114647a65504def5fbcc489b6e85aecb602464d071126a2facd363cfac5538c8820ffd061e94dc324a073c3e327edfe0f2e808bccd07acbc8"
}
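The `id` field in the event above is not arbitrary: under Nostr's NIP-01, it is the SHA-256 hash of a canonical JSON serialization of `[0, pubkey, created_at, kind, tags, content]` with no extra whitespace. A minimal sketch that recomputes it from the fields shown above (signature verification over secp256k1 is omitted; this assumes the dump is a faithful copy of the relayed event):

```python
import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01: serialize [0, pubkey, created_at, kind, tags, content]
    # as compact UTF-8 JSON (no whitespace), then SHA-256 it.
    payload = [
        0,
        event["pubkey"],
        event["created_at"],
        event["kind"],
        event["tags"],
        event["content"],
    ]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Fields copied from the event JSON above.
event = {
    "pubkey": "113ba2d5aa88e97df8be825240ab525ca052f7bc6bb8eb05d62a87bfcbd38f2d",
    "created_at": 1732862487,
    "kind": 1,
    "tags": [
        [
            "proxy",
            "https://sigmoid.social/users/Kingwulf/statuses/113564875951188423",
            "activitypub",
        ]
    ],
    "content": "Understanding how LLM inference works with llama.cpp\n\nhttps://www.omrimallis.com/posts/understanding-how-llm-inference-works-with-llama-cpp",
}

print(nostr_event_id(event))
```

If the dump is faithful, the printed digest should reproduce the event's `id` field.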