YouTube has permanently banned two channels – Screen Culture and KH Studio – for repeatedly violating its policies against spam and misleading content. The channels were shut down after using AI to generate fake movie trailers that tricked viewers into believing they were real, according to reports from Deadline and CNET.
The Channels’ Reach and Violations
The two channels, operating out of India and Georgia, had amassed a combined 2 million subscribers and over 1 billion views before being terminated. YouTube first suspended ad monetization on both accounts earlier this year, when the misleading trailers were initially detected. YouTube spokesperson Jack Malon stated that the channels were readmitted to the YouTube Partner Program after making corrections, but they quickly resumed their deceptive practices.
“These channels made necessary corrections to be readmitted into the YouTube Partner Program. However, once monetizing again, they reverted to clear violations of our spam and misleading metadata policies, and as a result, they have been terminated from the platform.” – Jack Malon, YouTube Spokesperson
Why This Matters
This action underscores the growing challenge of AI-generated disinformation on major platforms. While AI tools can be used creatively, bad actors are increasingly leveraging them to create believable fake content. The incident raises questions about YouTube's ability to consistently detect and remove such material before it reaches a large audience – a concern that will only grow as AI-generated videos become more realistic and harder to distinguish from authentic content.
The rapid spread of fake trailers demonstrates how easily viewers can be misled, potentially damaging trust in legitimate film promotion. YouTube’s response signals a willingness to enforce its policies, but the incident also highlights the need for more proactive detection methods and faster enforcement against deceptive AI-generated content.
The removal of these channels serves as a warning to others attempting to exploit AI for fraudulent purposes on the platform.