What happens when AI eats its own tail?
The big generative AI models (ChatGPT, Midjourney, Stable Diffusion, etc.) are trained on massive piles of scraped “public” data.
(Btw: “Paging the Copyright Cops.” I mean, wtf!)
A couple of new studies show that when AI models start training on their own AI-generated output, quality degrades with each generation (researchers call this “model collapse”) until the results are just craptastic gobbledygook.
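You can see the basic mechanism in a toy simulation (this is my own minimal sketch, not the actual setup from those studies): treat a Gaussian fit as the “model,” then repeatedly retrain it on samples drawn from the previous generation. Sampling noise compounds, and the distribution’s spread collapses toward nothing, the statistical version of gobbledygook.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a Gaussian fit to data. Each generation is trained only
# on samples drawn from the previous generation's model, so sampling
# noise compounds and the distribution's diversity collapses.
n_samples = 20        # small training set per generation (arbitrary choice)
n_generations = 500

data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # "real" data
stds = []
for _ in range(n_generations):
    mu, sigma = data.mean(), data.std()        # "train" this generation
    stds.append(sigma)
    data = rng.normal(mu, sigma, n_samples)    # next gen trains on its output

print(f"gen 0 spread: {stds[0]:.3f}, gen {n_generations} spread: {stds[-1]:.2e}")
```

The spread shrinks by orders of magnitude over the generations: each refit loses a little of the original distribution’s tails, and with no fresh real data coming in, there is nothing to restore them.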