YouTube owns up to ‘sinister’ child problem
Is YouTube doing enough to regulate itself? The website has announced new measures to remove inappropriate content, particularly to do with children. Yet some want it to go further.
Mickey Mouse lies on the road in a pool of blood. Naked toddlers splash about in the bath. A disobedient child is confronted by a man brandishing a belt.
YouTube has a problem with children’s videos. In recent weeks, media reports have highlighted inappropriate content that features or appeals to kids. Ads for the likes of Mars and Adidas have appeared alongside these videos, leading those brands to withdraw their advertising on the site.
Some videos, aimed at children, depict familiar cartoon characters in violent or sinister circumstances. Some show real children in situations that draw the attention of paedophiles. Many of these live-action clips are uploaded innocently, yet attract lewd comments and links to child pornography sites. Others seem to be deliberately exploitative.
Feeling the pressure, YouTube has announced a new action plan. It currently relies on volunteers and artificial intelligence to flag dodgy content, which reviewers then check and remove if necessary. On Monday, the website said that it would extend its AI system and expand its team of reviewers to 10,000.
Social media’s role in spreading harmful content has come under great scrutiny this year. In March, 250 brands boycotted YouTube after their ads were displayed next to extremist content. The website set up its AI monitoring technology in response. As a result, it claims to have removed over 150,000 videos containing “violent extremism” since June.
Yet many think YouTube should be doing more. Critics point out that the website still employs no one to seek out bad content — only to review what has been flagged. Last month, one of its “trusted flaggers” told The Times that only three unpaid volunteers are in charge of finding child-inappropriate content.
Governments around the world have pushed YouTube to regulate itself more effectively. The UK, USA and EU have threatened to take action if the website does not. Its new measures are a step in the right direction. Should we applaud?
Think of the children
Yes, say some. Remember: every minute, 400 hours of video are uploaded to YouTube. The website has to police all that — without infringing free speech by removing legitimate content. That is a huge challenge, and YouTube cannot be perfect. But it is listening to the public’s concerns, which is what matters.
Think again, reply others. YouTube is dragging its feet over this issue. Although it is owned by Google, one of the world’s richest companies, it does not even pay its “flaggers”. It simply makes token changes whenever journalists and advertisers kick up a fuss — that is, when its revenue is threatened. It could do so much more.
- Should children be allowed to use YouTube?
- Does YouTube do more harm or good?
- Write a short story titled: “The Day YouTube Died”.
- YouTube wants ideas for how it could tackle harmful content. In groups, come up with a new and exciting proposal, drawing on your experiences of the website.
Some People Say...
“The internet is the first thing humanity has built that humanity doesn’t understand.” Eric Schmidt
What do you think?
Q & A
- What do we know?
- YouTube claims to have over a billion users, who watch a billion hours of video every day. Channels with more than 10,000 views (which comply with YouTube’s guidelines) can choose to have ads displayed alongside their videos. By and large, ads are assigned to videos automatically, not by humans. YouTube takes 45% of the revenue; the video’s creator gets the rest.
- What do we not know?
- How much difference YouTube’s measures will make. The website has promised to hire more reviewers, use its AI “more widely”, share more information on how its guidelines are enforced, and “apply stricter criteria” to how ads are matched with videos. These pledges are quite vague, and leave some questions unanswered, such as whether flaggers will continue to work on a voluntary basis.
- Aimed at children
- Some of these disturbing parodies of cartoons were shown on YouTube Kids, an app that is supposed to contain only child-friendly content.
- Deliberately exploitative
- For example, one viral video shows three young girls tied up with a rope.
- Artificial intelligence
- YouTube uses “machine learning” to spot harmful content. This means that the more content its AI system sifts through, the better it gets at spotting patterns, and the more effectively it does its job. YouTube says that over 75% of videos taken down for violent extremism were flagged by the AI system, not humans.
- Another example is the role of social media – particularly Facebook – in spreading inflammatory political posts apparently paid for by the Russian government.
- Trusted flaggers
- Anyone can flag a video, but users who are especially good at spotting harmful content are labelled “trusted flaggers” and given special privileges.
- The UK
- In May, the Commons home affairs committee proposed fines of “tens of millions of pounds” for social media companies that fail to remove harmful content swiftly.