YouTube owns up to ‘sinister’ child problem

Mickey Mouse lies on the road in a pool of blood. Naked toddlers splash about in the bath. A disobedient child is confronted by a man brandishing a belt. YouTube has a problem with children's videos.

In recent weeks, media reports have highlighted inappropriate content that features or appeals to kids. Ads for the likes of Mars and Adidas have appeared alongside these videos, leading those brands to withdraw their advertising from the site.

Some videos, aimed at children, depict familiar cartoon characters in violent or sinister circumstances. Some show real children in situations that draw the attention of paedophiles. Many of these live-action clips are uploaded innocently, yet attract lewd comments and links to child pornography sites. Others seem to be deliberately exploitative.

Feeling the pressure, YouTube has announced a new action plan. It currently relies on volunteers and artificial intelligence to flag dodgy content, which reviewers then check and remove if necessary. On Monday, the website said that it would extend its AI system and expand its team of reviewers to 10,000.

Social media's role in spreading harmful content has come under great scrutiny this year. In March, 250 brands boycotted YouTube after their ads were displayed next to extremist content. The website set up its AI monitoring technology in response. As a result, it claims to have removed over 150,000 videos containing "violent extremism" since June.

Yet many think YouTube should be doing more. Critics point out that the website still employs no one to seek out bad content, only to review what has been flagged. Last month, one of its "trusted flaggers" told The Times that only three unpaid volunteers are in charge of finding child-inappropriate content.

Governments around the world have pushed YouTube to regulate itself more effectively. The UK, USA and EU have threatened to take action if the website does not. Its new measures are a step in the right direction. Should we applaud?

Yes, say some. Remember: every minute, 400 hours of video are uploaded to YouTube. The website has to police all that without infringing free speech by removing legitimate content. That is a huge challenge, and YouTube cannot be perfect. But it is listening to the public's concerns, which is what matters.

Think again, reply others. YouTube is dragging its feet over this issue. Although it is owned by Google, one of the world's richest companies, it does not even pay its "flaggers". It simply makes token changes whenever journalists and advertisers kick up a fuss, ie when its revenue is threatened. It could do so much more.

Q & A

What do we know?
YouTube claims to have over a billion users, who watch a billion hours of video every day. Channels with more than 10,000 views (which comply with YouTube's guidelines) can choose to have ads displayed alongside their videos. By and large, ads are assigned to videos automatically, not by humans. YouTube takes 45% of the revenue; the video's creator gets the rest.

What do we not know?
How much difference YouTube's measures will make. The website has promised to hire more reviewers, use its AI "more widely", share more information on how its guidelines are enforced, and "apply stricter criteria" to how ads are matched with videos.
These pledges are quite vague and leave some questions unanswered, such as whether flaggers will continue to work on a voluntary basis.

Glossary
Artificial Intelligence - Artificial intelligence, or “AI,” is the ability for a computer to think and learn. With AI, computers can perform tasks that are typically done by people, including processing language, problem-solving, and learning.