As part of YouTube’s broader push to be transparent about content that could confuse or mislead viewers, creators of realistic-looking videos must disclose when they used artificial intelligence, starting Monday.
When a user uploads a video to the site, a checklist asks whether the content alters footage of a real place or event, depicts a realistic-looking scene that didn’t actually happen, or makes a real person appear to say or do something they didn’t.
The disclosure is intended to help prevent viewers from being misled by synthetic content, as new consumer-facing generative AI tools make it quick and easy to create convincing text, images, video, and audio that are often hard to distinguish from the real thing. Online safety experts have warned that the spread of AI-generated content could mislead and confuse users, particularly in the run-up to elections in 2024 in the US and other countries.
YouTube creators will be required to note when their videos contain AI-generated or otherwise manipulated content that appears realistic, and those who repeatedly omit the notice may face penalties.
The platform said the update, part of a broader rollout of new AI policies, would arrive in the fall.
When a creator reports that a video includes AI-generated content, YouTube will add a label in the description stating that the video contains “altered or synthetic content” and that the “sound or visuals were significantly edited or digitally generated.” Videos on “sensitive” topics, such as politics, will display the label more prominently on the screen.
As the company said last year, content produced with YouTube’s own generative AI tools, which launched in September, will also be clearly labeled.
YouTube will only require creators to tag realistic AI-generated video that could lead viewers to believe it is real.
Creators won’t have to disclose AI-generated content that is clearly unrealistic or “inconsequential,” such as animations or adjustments to lighting or color. Nor, the company said, will they have to “disclose if generative AI was used for productivity, like generating scripts, content ideas, or automated captions.”
Creators who repeatedly fail to apply the new label to synthetic content that requires disclosure risk penalties such as having their videos removed or being suspended from YouTube’s Partner Program, through which they earn money from their work.