Stu on Nostr: As predicted here in March… Open Source AI catches up with GPT-4 for writing code. ...
As predicted here in March… Open Source AI catches up with GPT-4 for writing code. 😎
The model is called CodeLlama-34B-Python.
It achieved 69.5% pass@1 on HumanEval, while GPT-4 achieved 67%.
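For context, pass@1 here is the standard unbiased estimator introduced with the HumanEval benchmark: generate n samples per problem, count the c that pass the unit tests, and estimate the probability that a budget of k samples contains at least one pass. A minimal sketch (function name and the example n/c values are illustrative, not from the post):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n = samples generated per problem,
    c = samples that passed the tests, k = sample budget.
    Returns 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer failures than the budget: at least one sample must pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the simple pass fraction c / n.
```

The benchmark score is then the mean of this estimate across all 164 HumanEval problems.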
NB: at 34 billion parameters, the model is significantly smaller than GPT-4, approximately 1/10th its size.
Again, I confidently predict that many small, open-source specialist models will destroy monolithic general models when it comes to creating something akin to AGI.
The large forest of super narrow specialists will win. It won’t even be close. Monoliths are a dead end.
https://www.phind.com/blog/code-llama-beats-gpt4

Published at 2023-08-26 15:28:23