YouTube creators are understandably wary about the changes generative AI could bring to their careers. Technological advancements powered by artificial intelligence are threatening to steal creators’ work, fill the internet with deepfakes, and make it more difficult to tell reality from fiction.
Those concerns have put creators on edge, and some have now noticed unauthorized changes to their videos: after short-form uploads go live, YouTube is using machine learning technology to retouch them.
As far back as June 2025, keen-eyed YouTube viewers have spotted Shorts clips that looked like they had been subjected to AI-generated edits. Telltale details, such as face and hair shapes, appeared too clean and neat to come from real people. Observers questioned whether YouTube was applying AI to videos after the fact.
Earlier in August, those queries started getting louder. Rhett Shull, a musician with about 750,000 YouTube subscribers, shared a video discussing retouched Shorts. The altered channels included his own, that of fellow musician Rick Beato, and several other notable hubs like Vlogbrothers and Rhett & Link. “If I’m scrolling on Shorts and this came up, I would think this is an AI-generated short,” Shull said, “that Rhett & Link are using AI to generate content for Good Mythical Morning.”
To be clear, it is normal for YouTube to rescale videos after they’ve been uploaded. To play videos in different resolutions, for example, YouTube will add its own encoding and compression, balancing factors like video quality and load times in the process.
In statements shared in response to Shull’s video, YouTube reps characterized the contentious video changes as routine enhancements. On X, creator liaison Rene Ritchie described the edits as an experiment powered by “traditional machine learning technology” that aims to “unblur, denoise, and improve clarity in videos” while serving YouTube’s mission to “provide the best video quality and experience possible.” In a statement provided to The Atlantic, YouTube spokeswoman Allison Toh stressed that “these enhancements are not done with generative AI.”
Those responses make it clear that YouTube is drawing a line between what it calls “traditional machine learning technology” and what we know as generative AI. Ritchie clarified that idea in a follow-up tweet, writing that “GenAI typically refers to technologies like transformers and large language models, which are relatively new.”
Some technologists aren’t buying that explanation. “I think using the term ‘machine learning’ is an attempt to obscure the fact that they used AI because of concerns surrounding the technology,” University of Pittsburgh Chair of Disinformation Studies Samuel Woolley told the BBC. “Machine learning is in fact a subfield of artificial intelligence.”
Ultimately, the semantic argument is beside the point. Shull argued that the enhancements look like AI to viewers, and that alone makes the changes impactful. He sees artificial-looking content as something that “makes you question the reality of what you’re seeing,” which can affect the relationship between creators and their fans.
“The trust of my audience is the most important thing that I have as a YouTube creator,” Shull said. “Replacing or enhancing my work without my consent or knowledge with some kind of AI upscaling system not only erodes that trust with the audience, but it erodes my trust with the platform of YouTube.”
Can YouTube win back the trust of creators who are skeptical of the platform’s use of machine learning technology? For a long time, communication with creators was one of YouTube’s biggest weaknesses. Channels like Creator Insider have addressed that problem, but the latest kerfuffle shows there is still a disconnect between YouTube and many of its creators. As AI slop takes over YouTube Shorts, humans want their original work to stand out. So if YouTube wants to tinker with those uploads, it might want to give its users a heads-up first.