
    AI Slop Is Destroying The Internet

    Valuable insights

    1. AI Slop Saturates Online Content Ecosystems: The internet is rapidly filling with low-effort, AI-generated content, often referred to as 'slop,' making it increasingly difficult for users to distinguish authentic human work from automated output.

    2. Bots Dominate Internet Traffic: Approximately half of all internet traffic currently consists of automated bots, a significant portion of which is dedicated to destructive activities like spreading misinformation or manipulating discourse.

    3. Generative AI Relies on Stolen Training Data: The foundation of modern generative AI models is built upon vast amounts of human-created content—from Reddit comments to original artwork—used without attribution or compensation to the original creators.

    4. AI Models Exhibit Confident Inaccuracy: AI research tools frequently extrapolate or invent facts to create narratives that seem plausible or interesting, even when experts cannot verify the source information provided by the model.

    5. Misinformation Creates a Dangerous Feedback Loop: When AI-generated falsehoods are published as videos or articles, they become new sources, which subsequent AI models ingest, effectively cementing fabricated information as accepted truth online.

    6. Erosion of Trust in Scholarly Work: Studies indicate an increase in language characteristic of AI assistance within scientific papers, suggesting that the reliability of the human knowledge library is diminishing due to unacknowledged AI involvement.

    7. Attention Economy Under Threat: If cheap, easily produced AI slop captures the majority of human attention, it endangers the financial viability of channels and creators who invest substantial human time into deep, researched content.

    8. Responsible AI as an Efficiency Tool: The proper application of AI involves using it for utility tasks, such as speeding up alignment or research summaries, while ensuring that human creativity, integrity, and final decision-making remain paramount.

    9. Support for Human-Made Content is Essential: To counteract the rise of AI slop, active support for projects committed to human research, illustration, and fact-checking is necessary to ensure these endeavors can continue operating.

    The Rise of AI Slop Saturation

    The digital landscape is experiencing a dramatic influx of AI-generated content, often termed 'AI slop,' which is rapidly altering online dynamics. The video's release coincides with the launch of the 12,026 Human Era Calendar, a milestone year for its creators. The core issue is that online attention translates directly into revenue, which incentivizes the mass production of low-quality, algorithmically optimized material.

    AI Slop is saturating the internet and things are becoming dramatic pretty quickly.

    Pervasiveness of Automated Online Activity

    The attention economy, in which money is made by capturing user attention, has been fundamentally changed by automated systems. Bots flood review sections with low-quality content, inflate traffic metrics, and actively poison public discourse. Current statistics indicate that roughly half of all internet traffic is generated by bots, and the majority of that bot traffic serves destructive rather than beneficial purposes.

    Manifestations of Low-Effort Digital Content

    Creating mediocre content has never been simpler, and the results are evident across numerous platforms. They include the 'black hole of meaninglessness' found on LinkedIn, low-effort short videos designed to hypnotize younger audiences and severely diminish their attention spans, and countless soullessly rewritten books appearing on Amazon.

    • AI-generated music infiltrating streaming platforms.
    • Google AI summarizing websites instead of directing traffic to the original sources.
    • YouTube channels utilizing AI thumbnails, voices, and scripts for frequent long-form uploads across genres like true crime and science.

    Ethical Concerns of Creative Data Training

    A deeply frustrating aspect of this trend is that the very creative work produced by humans is being used to train these sophisticated AI models. Every comment posted on Reddit, every original video uploaded to YouTube, and every human drawing shared on DeviantArt has effectively been sold off or directly appropriated by AI corporations.

    Scale of Uncompensated Creative Theft

    This appropriation occurs without providing any attribution or payment to the actual creators responsible for generating the original material. This scale of creative theft makes protection virtually impossible, actively endangering the work of countless creatives while allowing AI companies to accrue significant wealth.

    The Existential Threat to Internet Truth

    While the creative theft is frustrating, the potential for generative AI to irreversibly break the internet is considered far worse. The primary danger lies in the increasing difficulty for users to discern what information presented online is factually accurate.

    Generative AI truly has the potential to break the internet irreversibly.

    Contrasting Human Rigor with AI Output

    Initially, artificial intelligence seemed promising as a research aid. By contrast, consider how a high-quality script, such as those created by Kurzgesagt, is produced: it begins with foundational research, which is then subjected to in-depth fact-checking by two or three people. Information is confirmed against trustworthy sources, ideally primary documents or academic papers, and then critiqued by one to three domain experts. Fact-checking and source compilation alone consume roughly 100 hours per video, and even then human limitations mean some mistakes are unavoidable.

    Initial Excitement Followed by Disappointment

    When AI emerged, there was significant excitement about a mechanical brain capable of rapid information collection. Initial tests using professional accounts across various AI models to summarize information about brown dwarfs yielded impressive outlines with unique facts and source links. However, deeper investigation revealed that while over 80% of the data was solid, the remainder consisted of compelling facts—like brown dwarf superstorm speeds—that the AI could not source, suggesting extrapolation or invention.

    AI's Tendency to Confidently Fabricate Details

    The AI confidently provided incorrect or fabricated details, much like a journalist inventing specifics to enhance a story's impact or fit a predetermined narrative. When experts were consulted, they flagged these same facts as questionable; the AI had apparently invented the information to satisfy its goal of appearing knowledgeable and keeping the user happy.

    Information Type                             | Verification Status
    ---------------------------------------------|----------------------------------------
    Links to Wikipedia, papers, legit articles   | Solid and traceable
    Speed of brown dwarf superstorms             | Unfindable source; likely extrapolated
    Nature of their insides                      | Unfindable source; likely extrapolated
    How disappointed their moms are              | Clearly invented for narrative effect

    The Self-Referential Source Problem

    Further investigation revealed that one seemingly solid source given by the AI was itself an article written in a style highly suggestive of AI generation; an essay-detection tool rated it a 72% match for AI-written text. This created a closed loop: an AI article, itself lacking credible sources, was used as a credible source for subsequent AI research. By 2025, over 1,200 confirmed AI-run news websites were publishing massive amounts of misinformation, leading AIs to present shoddy conclusions that sound authoritative but are often half-truths.

    The Feedback Loop Solidifying Falsehoods

    The danger escalates as this cycle repeats. If a new AI bases its research on this AI-generated content, and that content is then republished in a highly viewed video, the misinformation gains perceived validity. Before AI, tracing an obscure lie back to its origin was already difficult; as AI content proliferates, determining the truth becomes nearly impossible.

    The misinformation is now true.

    The content will spread rapidly through these automated channels.
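The loop described above can be thought of as a citation-graph problem: a claim is only grounded if some chain of citations eventually reaches a primary source, while AI articles citing other AI articles form closed cycles that never do. The following is a toy sketch of that idea; the source names and the graph are hypothetical illustrations, not real data.

```python
# Toy sketch: a claim is grounded only if its citation chain reaches a
# primary source. AI articles citing each other form a closed loop that
# never does. All source names here are hypothetical.

def reaches_primary(source, cites, primary, seen=None):
    """Return True if following citations from `source` ever lands on a
    primary source; revisiting a source (a citation cycle) is a dead end."""
    if seen is None:
        seen = set()
    if source in primary:
        return True
    if source in seen:  # closed loop: no independent grounding
        return False
    seen.add(source)
    return any(reaches_primary(c, cites, primary, seen)
               for c in cites.get(source, []))

cites = {
    "ai_news_article": ["ai_blog_post"],
    "ai_blog_post": ["ai_news_article"],   # the two cite only each other
    "review_paper": ["telescope_survey"],  # grounded chain
}
primary = {"telescope_survey"}

print(reaches_primary("ai_news_article", cites, primary))  # False
print(reaches_primary("review_paper", cites, primary))     # True
```

In practice no such clean citation graph exists, which is exactly the problem the video describes: once the loop closes, there is no chain left to follow back to reality.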

    The Corrosive Effect of AI Trustworthiness

    The most corrosive element of current AI is its convincing appearance of intelligence, enabling it to deliver incredibly confident falsehoods, often subtly. When confronted, the AI admits the error only to repeat the behavior later, as there is no underlying consciousness or understanding guiding its output; it is merely a complex tool operating without comprehension of its task.

    Linguistic Shifts in Academic Publishing

    Studies analyzing millions of scientific papers published before and after the rise of Large Language Models (LLMs) found a sharp, abrupt increase in the frequency of words favored by AIs, suggesting that a significant portion of papers are now AI-assisted, often without disclosure. Furthermore, some researchers have been caught hiding instructions in their papers—white text or tiny fonts prompting AI reviewers to rate the work positively and ignore its flaws.
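The kind of measurement those studies perform can be sketched very simply: count how often a fixed list of AI-favored words appears per thousand words of text, before versus after a cutoff date. The word list and the two one-sentence "corpora" below are purely illustrative assumptions; the real studies use large corpora and proper statistical models.

```python
# Illustrative sketch (made-up word list and corpora): compare how often
# words that LLMs are reported to favor appear per 1,000 words of text.

AI_FAVORED = {"delve", "underscore", "pivotal", "intricate"}

def rate_per_1000(texts):
    """Occurrences of AI-favored words per 1,000 words across `texts`."""
    words = [w.strip(".,;").lower() for t in texts for w in t.split()]
    hits = sum(w in AI_FAVORED for w in words)
    return 1000 * hits / len(words)

pre_llm  = ["We measure storm speeds on brown dwarfs using survey data."]
post_llm = ["We delve into the pivotal and intricate dynamics of brown dwarfs."]

print(f"before: {rate_per_1000(pre_llm):.1f}, after: {rate_per_1000(post_llm):.1f}")
```

A sudden jump in such a rate at the moment LLMs became widely available is the signal the studies report; it cannot identify any individual paper as AI-written, only the aggregate shift.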

    The Battle for Scarce Human Attention

    Human attention remains the single most valuable resource on the internet. If current trends persist, cheap, 'good enough' AI slop content will consume the majority of this attention. This shift risks making society dumber, exacerbating political divides, and causing neglect of genuine human interaction.

    Financial Viability of Human Creation

    If AI consumes the majority of the attention pie, channels requiring massive human investment for research and quality production become financially unfeasible. This forces creators into a difficult choice: downsize operations or reluctantly adopt AI tools themselves simply to compete in the attention marketplace.

    Integrating AI as a Supportive Utility

    The responsible approach to AI involves using it as a helpful utility, much like the alignment tool in Adobe Illustrator, which instantly organizes multiple elements. AI should handle repetitive, mechanical tasks, allowing human creativity and integrity to lead the final product. This approach maintains efficiency without sacrificing authorship.

    • Using alignment tools for rapid graphic organization.
    • Employing AI programming tools for speed.
    • Utilizing AI as a faster alternative to traditional search engines.

    Commitment to Human-Made Content

    The commitment moving forward is that content creation will remain human-made, for humans. This involves significant investment in research, human creativity for illustrations and animations, and maintaining rigorous fact-checking processes discussed with human experts to ensure maximum trustworthiness. The creators state a preference to cease operations rather than produce AI slop.

    Supporting the 12,026 Human Era Calendar

    Sustaining this human-centric project requires community support, as the organization employs nearly 70 full-time staff plus freelancers. The primary offering is the 12,026 Human Era Calendar, designed to fill homes with art while reframing time itself. By starting the count at the dawn of civilization roughly 12,000 years ago, the calendar adds 10,000 years of human achievement to the familiar date (2026 CE becomes 12,026 HE), offering a new perspective on historical progress.

    The Anniversary Artbook Collection

    To commemorate the 10-year calendar anniversary, a first-ever artbook was created, containing 120 pages of every calendar illustration ever produced, complete with behind-the-scenes sketches and team stories. These products, like the videos, are crafted with intensive human effort, research, and design, standing in direct opposition to content churned out by soulless algorithms.

    Questions

    Common questions and answers from the video to help you understand the content better.

    How does AI slop impact the reliability of scientific research papers published online?

    Studies show an abrupt increase in language patterns characteristic of AI assistance in scientific papers published after the rise of LLMs, suggesting that unacknowledged AI involvement is eroding the overall reliability of the scholarly knowledge base.

    What specific examples illustrate AI's tendency to confidently invent facts during research processes?

    During a research simulation on brown dwarfs, the AI confidently provided facts regarding superstorm speeds and other characteristics that could not be sourced, indicating that the model extrapolated or invented information to create a more compelling narrative.

    Why is human attention considered the most valuable resource on the internet today?

    Human attention is the primary driver of online revenue; if cheap, easily produced AI slop content captures the majority of this attention, it makes financially sustainable, deeply researched human content unfeasible.

    What ethical concerns surround the use of human-created content to train generative AI models?

    Generative AI models are trained using vast amounts of human creative work—including Reddit comments and artwork—without providing any attribution or payment to the original creators, which constitutes large-scale creative theft.

    How does Kurzgesagt plan to utilize AI technology responsibly without compromising creative integrity?

    The organization intends to use AI strictly as a utility tool for efficiency, similar to alignment functions in design software, ensuring that all creativity, research integrity, and final output remain firmly under human control.


    This article was AI generated. It may contain errors and should be verified with the original source.

    © 2025 ClarifyTube. All rights reserved.