Aeon.Cypher on Nostr
#Generative #AI is not going to make #AGI, ever.
The performance of LLMs scales with the log of their parameter count, while their resource use is proportional to that count.
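A quick back-of-the-envelope sketch of what that scaling relationship implies (the numbers below are illustrative, not from the post): if a score grows like log N while compute and energy grow like N, every fixed gain in score multiplies the cost.

import math

# Illustrative only: toy "score" that grows with log(N) and a cost that grows with N.
def score(n_params: float) -> float:
    return math.log10(n_params)

def relative_cost(n_params: float, baseline: float = 1e9) -> float:
    return n_params / baseline

for n in (1e9, 1e10, 1e11, 1e12):  # 1B -> 1T parameters
    print(f"{n:.0e} params: score {score(n):.0f}, cost {relative_cost(n):.0f}x")
# Each +1 in the toy score costs 10x the compute of the previous row.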
The human brain is roughly equivalent to a 60-quintillion-parameter model running at 80 Hz, and it draws about as much power as a single lightbulb.
GPT-5 is likely a 1-trillion-parameter model, and it requires massive amounts of energy.
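Taking the post's own figures at face value (both are rough estimates; the ~20 W brain-power number is a common estimate, not from the post), the gap works out as follows:

brain_params = 60e18   # "60 quintillion parameter model" (post's figure)
gpt5_params = 1e12     # "likely a 1 trillion parameter model" (post's figure)
brain_watts = 20       # roughly a lightbulb's worth of power (common estimate)

print(f"parameter gap: {brain_params / gpt5_params:.0e}x")  # ~6e+07x
# If resource use really is proportional to parameter count, closing that gap
# multiplies today's already large energy bill by a factor of roughly 60 million.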
LLMs are amazing, but they're nowhere near approaching AGI.
https://www.wired.com/story/how-quickly-do-large-language-models-learn-unexpected-skills/

Published at: 2024-03-24 17:40:04

Event JSON
{
  "id": "792816b635751ee8d869b2b4c5827d31d97bb217fd70b8ae1efe18a1eb2b974c",
  "pubkey": "dd519421c4fb836676312abd4e0bdac773e6040d7929baf0b42eb4e126f7ce6a",
  "created_at": 1711302004,
  "kind": 1,
  "tags": [
    ["t", "generative"],
    ["t", "ai"],
    ["t", "agi"],
    ["proxy", "https://lgbtqia.space/users/AeonCypher/statuses/112151888153033225", "activitypub"]
  ],
  "content": "#Generative #AI is not going to make #AGI, ever.\n\nThe performance of LLMs is the log of their parameter size. The resource use is proportional to their parameter size. \n\nThe human brain is approximately equal to a 60 quintilion parameter model running at 80hz. It consumes the energy of a single lightbulb.\n\nGPT-5 is likely a 1 trillion parameter model, and requires massive energy.\n\nLLMs are amazing, but they're nowhere near approaching AGI.\n\nhttps://www.wired.com/story/how-quickly-do-large-language-models-learn-unexpected-skills/",
  "sig": "7d34baab2497093b31c682dcb82ab59ff92632fb8252063e75db4faaa4d69ffbfe65849ddfe40542794808b638caa7aa4e05b1c5e6d95fc8f5981935a6de6066"
}
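For context on the fields above (a sketch assuming the standard NIP-01 rules, not anything specific to this relay or viewer): the id is the SHA-256 hash of the serialized array [0, pubkey, created_at, kind, tags, content], and sig is a BIP-340 Schnorr signature over that id by pubkey.

import hashlib
import json

# Minimal sketch of NIP-01 id derivation; compute_event_id applied to the
# event above should reproduce its "id" field.
def compute_event_id(event: dict) -> str:
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()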