Experts warn that AI-generated content may pose a threat to the AI technology that produced it.
In a recent paper on how generative AI tools like ChatGPT are trained, a team of AI researchers from institutions including the University of Oxford and the University of Cambridge found that the large language models behind the technology may increasingly be trained on other AI-generated content as it spreads across the internet, a phenomenon the researchers termed "model collapse." As the models are trained on more of this "synthetic data" instead of the human-made content that makes their responses distinctive, the researchers argue, generative AI tools may return lower-quality outputs to user queries.
Other AI researchers have coined their own terms for the training loop. In a paper released in July, researchers from Stanford and Rice universities called the phenomenon "Model Autophagy Disorder," arguing that the "self-consuming" loop of AI training itself on content generated by other AI could leave generative tools "doomed" to see the "quality" and "diversity" of the images and text they produce falter. Jathan Sadowski, a senior fellow at the Emerging Technologies Research Lab in Australia who researches AI, called the phenomenon "Habsburg AI," arguing that AI systems heavily trained on the outputs of other generative AI tools can produce "inbred mutant" responses with "exaggerated, grotesque features."
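Neither paper's experiments are reproduced here, but the feedback loop both describe can be illustrated with a toy simulation. In the hypothetical Python sketch below (the vocabulary size, sample size, and generation count are illustrative assumptions, not figures from either study), each generation's "model" is simply the empirical distribution of the synthetic corpus the previous model generated:

```python
import random
from collections import Counter

# Toy sketch of a "self-consuming" training loop: each generation is
# retrained only on a finite sample generated by the previous model.
random.seed(42)

VOCAB = list(range(100))                        # 100 distinct "tokens"
probs = {tok: 1 / len(VOCAB) for tok in VOCAB}  # generation 0: uniform, diverse
SAMPLE_SIZE = 200                               # synthetic corpus per generation

for gen in range(1, 16):
    # "Generate": draw a synthetic corpus from the current model.
    corpus = random.choices(list(probs), weights=probs.values(), k=SAMPLE_SIZE)
    # "Retrain": the next model is the empirical distribution of that corpus.
    counts = Counter(corpus)
    probs = {tok: n / SAMPLE_SIZE for tok, n in counts.items()}
    print(f"generation {gen:2d}: {len(probs)} of {len(VOCAB)} tokens survive")
```

Because a token that misses one generation's sample gets probability zero and can never be drawn again, the surviving vocabulary only shrinks over generations, a crude analogue of the loss of rare, "tail" content that the researchers warn about.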