2023-05-06 16:56:07

Brian Armstrong 🛡️ / @brian_armstrong (RSS Feed) on Nostr:

**RT @karpathy:**

Oops, haven't tweeted much recently; I'm mostly watching with interest as the open source LLM ecosystem shows early signs of a Cambrian explosion. Roughly speaking, the story as of now:

1. Pretraining LLM base models remains very expensive. Think: supercomputer + months (see the back-of-envelope numbers after this list).

2. But finetuning LLMs is turning out to be very cheap and effective thanks to recent PEFT (parameter-efficient finetuning) techniques that work surprisingly well, e.g. LoRA / LLaMA-Adapter, and other awesome work, e.g. low precision as in the bitsandbytes library (see the LoRA sketch after this list). Think: a few GPUs + a day, even for very large models.

3. Therefore, the Cambrian explosion, which requires wide reach and a lot of experimentation, is quite tractable due to (2), but only conditioned on (1) being available.

4. The de facto OG release of (1) was Facebook's, sorry, Meta's LLaMA release: a very well executed, high quality series of models from 7B all the way to 65B, trained nice and long, correctly ignoring the "Chinchilla trap" (i.e. training well past the compute-optimal token budget to get a stronger model at a given size). But the LLaMA weights are research-only, locked down behind request forms, and have also awkwardly leaked all over the place... it's a bit messy.

5. In the absence of an available and permissive (1), (2) cannot fully proceed. So there are a number of efforts on (1) under the banner "LLaMA but actually open", with e.g. current models from @togethercompute (https://nitter.moomoo.me/togethercompute) and @MosaicML (https://nitter.moomoo.me/MosaicML) ~matching the performance of the smallest (7B) LLaMA model, and @AiEleuther (https://nitter.moomoo.me/AiEleuther) and @StabilityAI (https://nitter.moomoo.me/StabilityAI) nearby.
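
To put rough numbers on points (1) and (4), here is a back-of-envelope sketch using the common C ≈ 6 · N · D FLOPs approximation for training compute and the roughly 20-tokens-per-parameter Chinchilla heuristic. The GPU throughput, utilization figure, and token counts are illustrative assumptions, not figures from the thread.

```python
# Back-of-envelope pretraining cost for a LLaMA-65B-scale run,
# using the common approximation: compute ~= 6 * params * tokens (FLOPs).
params = 65e9           # 65B parameters
tokens = 1.4e12         # ~1.4T training tokens (assumed, per the LLaMA paper)
flops = 6 * params * tokens                  # ~= 5.5e23 FLOPs

gpu_peak = 312e12       # A100 peak dense bf16 throughput (FLOP/s)
utilization = 0.4       # assumed fraction of peak actually sustained
gpu_hours = flops / (gpu_peak * utilization) / 3600
print(f"~{gpu_hours:,.0f} A100-hours")       # on the order of a million GPU-hours

# The "Chinchilla trap": compute-optimal training uses roughly ~20 tokens per
# parameter, but LLaMA deliberately trains far past that, trading extra
# training compute for a stronger model at a fixed (small) size.
chinchilla_tokens_7b = 20 * 7e9              # ~= 1.4e11 tokens for a 7B model
actual_tokens_7b = 1e12                      # LLaMA-7B was trained on ~1T tokens
print(f"7B trained on ~{actual_tokens_7b / chinchilla_tokens_7b:.0f}x the Chinchilla-optimal budget")
```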
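
And for point (2), a minimal sketch (assuming the Hugging Face transformers, peft, and bitsandbytes libraries) of what cheap LoRA finetuning on top of an 8-bit base model typically looks like; the checkpoint id and hyperparameters are placeholders, not anything specified in the thread.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "huggyllama/llama-7b"  # placeholder checkpoint id; use whatever weights you have access to

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bitsandbytes low-precision loading
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # older peft versions: prepare_model_for_int8_training

# LoRA: train small low-rank adapter matrices on the attention projections,
# keeping the base model weights frozen.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
```

From here the wrapped model drops into a standard training loop or the transformers Trainer, and typically only the small adapter weights need to be saved and shared, which is what makes the "few GPUs + a day" framing plausible.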

For now, things are moving along (e.g. see the ~10 chat-finetuned models released in the last week, and projects like llama.cpp and friends), but a bit awkwardly due to the LLaMA weights being open, but not really, but still. And most interestingly, a lot of questions of intuition remain to be resolved, especially around how well finetuned models work in practice, even at smaller scales.

https://nitter.moomoo.me/karpathy/status/1654892810590650376#m
Author Public Key: npub1jj9575693w997wqncd3daxp8alpxs9uamu9k03xtrymtaxxfzfkst6h3zc