Jeff Jarvis on Nostr
How LLMs will destroy the web & themselves:
"Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear."
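The quoted finding can be illustrated with a toy simulation (an assumption-laden sketch, not the paper's GPT experiments): model a "language model" as an empirical token distribution refit on its own samples each generation. Tokens, the Zipf-style weights, and the corpus size below are all invented for illustration; the key mechanism is that a tail token which draws zero samples in one generation can never reappear.

```python
import random
from collections import Counter

random.seed(42)

VOCAB_SIZE = 100   # hypothetical vocabulary
CORPUS_SIZE = 200  # hypothetical per-generation corpus

# Zipf-like "original content distribution": token i has weight 1/(i+1),
# so high-index tokens form the rare tail.
tokens = list(range(VOCAB_SIZE))
weights = [1.0 / (i + 1) for i in tokens]

corpus = random.choices(tokens, weights=weights, k=CORPUS_SIZE)

support_sizes = []  # distinct tokens surviving at each generation
for generation in range(30):
    counts = Counter(corpus)
    support = sorted(counts)
    support_sizes.append(len(support))
    # "Train" = fit the empirical distribution of the current corpus;
    # "generate" = sample the next corpus from that fit. A token that
    # draws zero samples is gone for good, so the support only shrinks.
    corpus = random.choices(
        support, weights=[counts[t] for t in support], k=CORPUS_SIZE
    )

print(support_sizes[0], "->", support_sizes[-1])
```

By construction the support is non-increasing across generations: each corpus is sampled only from the previous corpus's surviving tokens, which is the "irreversible" part of the defect the abstract describes.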
https://www.nature.com/articles/s41586-024-07566-y

Published at
2024-07-25 13:22:36

Event JSON
{
  "id": "2cd2856f8427aaff56670bba2326f2fa160715a3e3330a2943c06a22f72ccac8",
  "pubkey": "cc2f8deaa910ec55998b77a6fcae16d8494e02b69488bbfac9e68b5e2f96c178",
  "created_at": 1721913756,
  "kind": 1,
  "tags": [
    ["proxy", "https://mastodon.social/@jeffjarvis/112847339953672479", "web"],
    ["proxy", "https://mastodon.social/users/jeffjarvis/statuses/112847339953672479", "activitypub"],
    ["L", "pink.momostr"],
    ["l", "pink.momostr.activitypub:https://mastodon.social/users/jeffjarvis/statuses/112847339953672479", "pink.momostr"],
    ["-"]
  ],
  "content": "How LLMs will destroy the web \u0026 themselves:\n\"Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear.\"\nhttps://www.nature.com/articles/s41586-024-07566-y",
  "sig": "b47774532593af8bc2b47266cca4fd23d0b183f9655d9c66083270f04bb6f684d936cc2c6afd0d823d53b900b8693bf4168b665423f931a79f1fa681d0bbcef9"
}
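The event's "id" field is not arbitrary: under Nostr's NIP-01, it is the SHA-256 of a canonical serialization of the other fields. The sketch below, assuming the event follows NIP-01 and that Python's default JSON escaping matches the NIP-01 rules for these ASCII-only values, recomputes that digest from the fields shown above.

```python
import hashlib
import json


def nostr_event_id(ev):
    # NIP-01: id = SHA-256 of the UTF-8 JSON serialization of
    # [0, pubkey, created_at, kind, tags, content], no extra whitespace.
    payload = [0, ev["pubkey"], ev["created_at"], ev["kind"],
               ev["tags"], ev["content"]]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()


# Field values copied from the event JSON above ("\u0026" decodes to "&").
event = {
    "pubkey": "cc2f8deaa910ec55998b77a6fcae16d8494e02b69488bbfac9e68b5e2f96c178",
    "created_at": 1721913756,
    "kind": 1,
    "tags": [
        ["proxy", "https://mastodon.social/@jeffjarvis/112847339953672479", "web"],
        ["proxy", "https://mastodon.social/users/jeffjarvis/statuses/112847339953672479", "activitypub"],
        ["L", "pink.momostr"],
        ["l", "pink.momostr.activitypub:https://mastodon.social/users/jeffjarvis/statuses/112847339953672479", "pink.momostr"],
        ["-"],
    ],
    "content": 'How LLMs will destroy the web & themselves:\n"Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear."\nhttps://www.nature.com/articles/s41586-024-07566-y',
}

event_id = nostr_event_id(event)
# For a valid event this 64-character hex digest should match the "id" above.
print(event_id)
```

The "sig" field is then a Schnorr signature over that id by the key in "pubkey", which is what lets any relay or client verify the event without trusting the server that delivered it.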