2023-12-06 17:39:15
in reply to

Matt Lavender on Nostr: npub1u2hl9…4z4d7

this may be true, but unfortunately the inference cost of the models at this scale is completely unsustainable for that sort of use, or anything equivalent.

It takes 128 GPUs per instance to run GPT-4.

The cost per conversation on GPT-3 was 38c.

GPT-4 is roughly 3 times as expensive to run, or about $1.14 PER CONVERSATION.

At those rates, routine casual usage is just not viable, even assuming we could give every child 128 dedicated GPUs, which we definitely cannot.
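The arithmetic above can be checked with a quick sketch. Note that the 38c figure and the 3x multiplier are the post's own estimates, not official OpenAI numbers, and the 10-conversations-per-day rate below is a purely hypothetical usage assumption:

```python
# Back-of-envelope check of the post's cost figures.
gpt3_cost_per_conversation = 0.38  # USD, the post's claimed GPT-3 cost
gpt4_multiplier = 3                # the post's claimed relative cost of GPT-4

gpt4_cost = gpt3_cost_per_conversation * gpt4_multiplier
print(f"${gpt4_cost:.2f} per conversation")  # → $1.14 per conversation

# Scaled to routine casual use: hypothetically 10 conversations/day
conversations_per_day = 10
yearly = gpt4_cost * conversations_per_day * 365
print(f"${yearly:,.2f} per user per year")  # → $4,161.00 per user per year
```

Even under these rough assumptions, the per-user yearly cost lands in the thousands of dollars, which is the post's point about sustainability.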
Author Public Key
npub1tj44u0uwzm5anxputagdnxz7qjsf6ggaws4leqla6f7hqq0a30ns7xdks9