BourbonicPlague on Nostr:
Yes, but the scale of the hardware that the original models are trained on means the actual training of the original models is still very centralized and vulnerable to centralized control in the near term.
Time will help as Moore’s law continues to work on the compute that matters for AI training. But it’s important that those chips don’t get too centralized in only a single vendor’s hands. That’s why I’m hoping AMD gets their shit together and other players join the race. Apple doesn’t seem to be interested in that fight right now, but we’ll see if that changes.
Published at 2023-07-12 22:13:25

Event JSON
{
  "id": "9ee5c1a6f2bf67944916733e221abc2c98809d4ae4c22e73b464166be0036f87",
  "pubkey": "104a9e01bfa9fd7d89920636bf25bb28f1fa5ee4a12201fad462fb79c9b5b2e9",
  "created_at": 1689200005,
  "kind": 1,
  "tags": [
    ["e", "e1937a566f14f1a6d92e50eec91847aa7d46b1626fb4052fd2c10e096617fbf4", ""],
    ["e", "0d7f1b07c6354bfa14361aa1c75560968b7793623d4f87be64428889c6cee578"],
    ["p", "f0ff87e7796ba86fc84b4807b25a5dee206d724c6f61aa8853975a39deeeff58"]
  ],
  "content": "Yes, but the scale of the hardware that the original models are trained on means the actual training of the original models is still very centralized and vulnerable to centralized control in the near term. \n\nTime will help as Moore’s law continues to work on the compute that matters for AI training. But it’s important that those chips don’t get too centralized in only a single vendor’s hands. That’s why I’m hoping AMD gets their shit together and other players join the race. Apple doesn’t seem to be interested in that fight right now, but we’ll see if that changes.",
  "sig": "1a0127177bb87e0e3a8e3cffad61b819ebd50866d2bfbddef0f7933919abedf5a3e62899e0271041ab63b71b59e045c5942b0b27c138e2976af2406fa04816a2"
}
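For context on how the `id` field above relates to the rest of the event: under Nostr's NIP-01, an event id is the SHA-256 hash of the compact JSON serialization of the array `[0, pubkey, created_at, kind, tags, content]`, and the `sig` is a Schnorr signature over that id. Below is a minimal sketch of the id computation in Python; the example event passed in at the bottom is hypothetical, not a recomputation of the event shown above.

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01: the SHA-256 of the JSON
    serialization [0, pubkey, created_at, kind, tags, content],
    with no extra whitespace and non-ASCII characters left unescaped."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),   # compact form: no spaces after , or :
        ensure_ascii=False,      # keep UTF-8 characters as-is
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Hypothetical example event (pubkey/tag values borrowed for shape only):
event_id = nostr_event_id(
    pubkey="104a9e01bfa9fd7d89920636bf25bb28f1fa5ee4a12201fad462fb79c9b5b2e9",
    created_at=1689200005,
    kind=1,
    tags=[["p", "f0ff87e7796ba86fc84b4807b25a5dee206d724c6f61aa8853975a39deeeff58"]],
    content="hello nostr",
)
print(event_id)  # a 64-character lowercase hex digest
```

Verifying a received event therefore means recomputing this hash from the other fields, checking it matches `id`, and then checking `sig` against `id` with the `pubkey`.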