2024-04-22 06:26:05
gooGof on Nostr:

You can use Ollama to run inference with some playground models. It works on both CPU and GPU.
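For context, here is a minimal sketch of querying a locally pulled model through Ollama's official Python client (`pip install ollama`); the model name is just an example, and it assumes the Ollama server is already running:

```python
import ollama

# Ask a locally hosted model a question; "llama3" is a placeholder
# for whatever model you have pulled (e.g. `ollama pull llama3`).
response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "user", "content": "Explain LoRA fine-tuning in one sentence."},
    ],
)
print(response["message"]["content"])
```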

Python is required.

Learn how to create the dataset (in ChatML format or completion format, for example).
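As a rough illustration, one common way to store chat-style training data is JSONL with a list of role-tagged messages per line. ChatML itself wraps each turn in <|im_start|>/<|im_end|> tokens, but most toolchains can render that template from a messages list; the exact schema your trainer expects may differ, so treat the field names here as illustrative:

```python
import json

# One conversation per line; field names are a common convention,
# not a fixed standard.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is Nostr?"},
            {"role": "assistant", "content": "Nostr is an open protocol for decentralized social networking."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```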

For fine-tuning, if you have NVIDIA GPU(s), try using Unsloth.
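A rough sketch of what that looks like, following the usual Unsloth + trl recipe; the base model, LoRA hyperparameters, and dataset path are all placeholders, and the SFTTrainer arguments vary somewhat across trl versions:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load a 4-bit quantized base model (placeholder name).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumes a JSONL dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```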

As for me, I'm using my MacBook with the MLX framework (built for Apple Silicon) to fine-tune and run inference on my LLMs in one place. It's very easy to start with self-hosted solutions.
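For example, with the mlx-lm package (`pip install mlx-lm`) inference takes a few lines; the model name below is just one of the pre-converted models from the mlx-community Hugging Face org, and the same package also ships LoRA fine-tuning tooling:

```python
from mlx_lm import load, generate

# Load a pre-converted, quantized model (placeholder name).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

text = generate(
    model,
    tokenizer,
    prompt="Explain why local fine-tuning is useful.",
    max_tokens=200,
)
print(text)
```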
Author Public Key
npub1uuy6vl9tr8jru4l0qanmuzgs7qkk8kn46xs84fk3kjgvypcwnekqwue0l3