Petar Petrov on Nostr
I did a small experiment with #chatgpt and #llama as part of trying to understand general relativity and time dilation a bit better. Not a math guru myself, but Llama 3.1 8B did very poorly in terms of math. 4o just blows it away. Llama did however correct itself once I fed it 4o's results, so that was quite positive. Trusting numbers provided by LLMs is still very tricky.
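For context, the arithmetic in question is presumably the standard time dilation relation from special relativity (the post does not include the actual prompts or numbers, so the velocity below is chosen purely for illustration). A minimal Python sketch of that calculation shows how figures produced by an LLM can be double-checked independently:

    import math

    C = 299_792_458.0  # speed of light in m/s

    def lorentz_gamma(v: float) -> float:
        """Lorentz factor for a clock moving at relative velocity v (m/s)."""
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    def dilated_time(proper_time_s: float, v: float) -> float:
        """Elapsed time a stationary observer measures for proper_time_s on the moving clock."""
        return proper_time_s * lorentz_gamma(v)

    # Illustrative numbers only (not taken from the post): at 0.8c,
    # 1 second of proper time corresponds to about 1.667 seconds.
    print(dilated_time(1.0, 0.8 * C))

At 0.8c the Lorentz factor is 1/sqrt(1 - 0.64) = 1/0.6 ≈ 1.667, which is exactly the sort of number worth recomputing by hand rather than trusting a model's output.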
Published at 2024-10-05 08:52:19
Event JSON
{
  "id": "2bac9958d80923e30c90e2cb6ae0c722956b68236526beb22dbaf1c866b89cc9",
  "pubkey": "5ff0be72cf6c9e2e65b81a2401168a334d6599dc5785a035239060e00f0cedf0",
  "created_at": 1728118339,
  "kind": 1,
  "tags": [
    [
      "t",
      "chatgpt"
    ],
    [
      "t",
      "llama"
    ],
    [
      "proxy",
      "https://social.tchncs.de/users/petarov/statuses/113253963490635392",
      "activitypub"
    ]
  ],
  "content": "I did a small experiment with #chatgpt and #llama as part of trying to understand general relativity and time dilation a bit better. Not a math guru myself, but Llama 3.1 8B did very poorly in terms of math. 4o just blows it away. Llama did however correct itself once I fed it 4o's results, so that was quite positive. Trusting numbers provided by LLMs is still very tricky.",
  "sig": "ecf243b9d6a98cb9d2d7b5735ebdd68da13d452b61434b3f9e837e3150f2657d20fc56ad76fb2e8246ec289a49862adbf22e09b8a1dff0955b723beba2fac679"
}