Argos Open Tech on Nostr: Mistral-finetune
Mistral-finetune
https://github.com/mistralai/mistral-finetune

> mistral-finetune is a light-weight codebase that enables memory-efficient and performant finetuning of Mistral's models. It is based on LoRA, a training paradigm where most weights are frozen and only 1-2% additional weights in the form of low-rank matrix perturbations are trained.
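The low-rank perturbation idea quoted above can be sketched in a few lines. This is a minimal, illustrative example, not the mistral-finetune implementation: the layer dimensions, rank `r`, and scaling `alpha` are hypothetical, and a single dense layer stands in for a full transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8   # hypothetical layer size and LoRA rank
alpha = 16                     # common LoRA scaling hyperparameter (assumed value)

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors. B starts at zero so the perturbation
# B @ A is initially zero and training begins from the pretrained model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    """Frozen path plus scaled low-rank perturbation: (W + (alpha/r) * B A) x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B initialized to zero, the LoRA path contributes nothing yet.
assert np.allclose(lora_forward(x), W @ x)

# Only A and B are trained: r * (d_in + d_out) parameters instead of d_in * d_out.
trainable, total = A.size + B.size, W.size
print(f"trainable fraction: {trainable / total:.2%}")
```

For this toy layer the trainable fraction is a few percent; across a full model, with LoRA applied only to selected projections, the overall share of trained weights drops to the 1-2% range the post mentions.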
Published at 2024-05-26 17:46:21

Event JSON
{
  "id": "32e86c36742ee37942d088955d3a127c05e985a14be2735c32e6b2eac08f4d5a",
  "pubkey": "dea57e7482b6897bc1224774eb264bf5b2cafc2e60631be688a5a588e1156720",
  "created_at": 1716745581,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://fosstodon.org/users/argosopentech/statuses/112508638433197873",
      "activitypub"
    ]
  ],
  "content": "Mistral-finetune\n\nhttps://github.com/mistralai/mistral-finetune\n\n\u003e mistral-finetune is a light-weight codebase that enables memory-efficient and performant finetuning of Mistral's models. It is based on LoRA, a training paradigm where most weights are frozen and only 1-2% additional weights in the form of low-rank matrix perturbations are trained.",
  "sig": "b99cdc8de14e6f0ad2c5fa99dc95d9e5d2cb99c8672662c55356011c4945dc12cf95850ce19ffc816fb93b5fdb386d3f04aad19149e8216422acf4925aa6725f"
}