The author argues that LLMs churn out fast, generic answers by remixing low-quality source material, and that vibe-coding with these models seeds codebases with brittle, repetitive code. The remedy: require source attribution and auditable inference, both to separate originals from forgeries and to reshape how models are trained and deployed.
Requiring source attribution from LLMs would force auditable forward passes, traceable training pipelines, and new model architectures.
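As an illustration only, not the author's design, the sketch below shows what attribution-carrying inference could look like at toy scale: a hypothetical `attributed_generate` function answers a query from a small corpus, returns the passages it drew on, and emits an audit record that hashes those sources so a third party can check the provenance claim. The names (`AttributedAnswer`, `attributed_generate`) and the naive keyword-overlap retrieval are assumptions made for the example, not anything a production LLM actually does.

```python
from dataclasses import dataclass, field
import hashlib
import json
import time

# Hypothetical record tying a generated answer back to the passages it drew on.
@dataclass
class AttributedAnswer:
    text: str
    sources: list          # (document_id, passage, overlap score) triples
    audit_log: dict = field(default_factory=dict)

def attributed_generate(query: str, corpus: dict) -> AttributedAnswer:
    """Toy 'auditable inference': rank corpus passages by keyword overlap,
    compose an answer from the top passages, and record which ones were used."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        ((doc_id, passage, len(query_terms & set(passage.lower().split())))
         for doc_id, passage in corpus.items()),
        key=lambda item: item[2],
        reverse=True,
    )
    top = [item for item in ranked[:2] if item[2] > 0]
    answer = " ".join(passage for _, passage, _ in top) or "No supported answer."
    audit = {
        "query": query,
        "timestamp": time.time(),
        # Hashing the exact source IDs lets an auditor verify the provenance claim.
        "source_digest": hashlib.sha256(
            json.dumps([doc_id for doc_id, _, _ in top]).encode()
        ).hexdigest(),
    }
    return AttributedAnswer(text=answer, sources=top, audit_log=audit)

if __name__ == "__main__":
    corpus = {
        "doc-1": "Vibe-coding copies patterns without understanding their context.",
        "doc-2": "Source attribution ties each claim to the material it came from.",
    }
    result = attributed_generate("What does source attribution tie a claim to?", corpus)
    print(result.text)
    print(json.dumps(result.audit_log, indent=2))
```

Scaling anything like this to a real model is exactly where the article's harder demands come in: the forward pass itself would have to expose which training or retrieval sources shaped an output, and the training pipeline would have to preserve that lineage end to end.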










