“…indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs)”
npub1gw4guamk89rqavrn0qf7nr7qsuj234dupqd00m782zymyhk6nwyqe4pu63
#ai
https://www.nature.com/articles/s41586-024-07566-y
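The tail-loss mechanism the quote describes can be illustrated with a toy simulation (not the paper's code, and a deliberately simplified setting): repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Estimation noise compounds across generations, and the fitted standard deviation drifts toward zero, so the distribution's tails vanish. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20            # samples per generation (small n -> faster collapse)
generations = 500

# "Generation 0" model: the true distribution N(0, 1)
mu, sigma = 0.0, 1.0
history = [sigma]
for g in range(generations):
    data = rng.normal(mu, sigma, size=n)  # sample from the current model
    mu, sigma = data.mean(), data.std()   # refit on the model's own output
    history.append(sigma)

print(f"std after {generations} generations: {history[-1]:.4f}")
```

With each refit, the estimated variance is perturbed by sampling noise whose expected effect is downward in log-scale, so the spread collapses even though every individual fit is unbiased in mean; this is the one-dimensional analogue of the GMM collapse the paper reports.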