Event JSON
{
  "id": "fc36206921682d01426ab6c834fcd52131025587b3a0d36560b4fb776f5db894",
  "pubkey": "6140478c9ae12f1d0b540e7c57806649327a91b040b07f7ba3dedc357cab0da5",
  "created_at": 1719496373,
  "kind": 1,
  "tags": [
    ["e", "b6a6d416b2c49e4bbc2e8da383ac1b9395c5ff87ed9b11f9ebe0f0f77b2fa323", "", "root"],
    ["e", "81dcee18b49282be24f7cf45c5f28800616d02e9e43d38022ba7270f953daa64", "", "reply"],
    ["p", "7d50ca8c167a10723c24afa9fe9ac7170dc4d5c000d7fcd470c5aecf7ff3aeb7"],
    ["r", "https://www.markhneedham.com/blog/2023/10/18/ollama-hugging-face-gguf-models/"],
    ["r", "https://huggingface.co/mradermacher/dolphin-2.9.2-mixtral-8x22b-GGUF"]
  ],
  "content": "Ah but it does! Once you download the gguf file from Hugging Face, you can use ollama’s create command, passing in a Modelfile that specifies the path to the gguf. Then you can use ollama run to start up the model.\n\nIt’s kinda annoying but there are instructions online: https://www.markhneedham.com/blog/2023/10/18/ollama-hugging-face-gguf-models/\n\nI used this technique to run mradermacher/dolphin-2.9.2-mixtral-8x22b-GGUF: https://huggingface.co/mradermacher/dolphin-2.9.2-mixtral-8x22b-GGUF",
  "sig": "219f3d2de6c7b7847c696a65818314b703a47f0a7ecfc07747463bc8ac8af145b735e3b1243c218827468d6e5e04284da4c1bf38cdcdd8d9fbbf6b17b92b3763"
}
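
The workflow described in the note's content can be sketched as follows. This is a minimal illustration, not a verbatim recipe from the event: it assumes Ollama is installed locally, and the gguf filename and local model name below are placeholder assumptions (use whichever quantized file you actually downloaded from the linked Hugging Face repo).

```
# 1. Write a Modelfile whose FROM line points at the downloaded gguf.
#    (Filename is an illustrative assumption, not from the event.)
cat > Modelfile <<'EOF'
FROM ./dolphin-2.9.2-mixtral-8x22b.Q4_K_M.gguf
EOF

# 2. Register the model with Ollama under a local name of your choosing:
ollama create dolphin-mixtral-8x22b -f Modelfile

# 3. Start an interactive session with the model:
ollama run dolphin-mixtral-8x22b
```

`ollama create` imports the weights referenced by the Modelfile into Ollama's local model store, after which `ollama run` works the same as for any model pulled from the Ollama registry.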