If the system flags a logged-in viewer as being under 18, YouTube will impose the controls and restrictions the site already uses to prevent minors from watching videos and engaging in other behavior deemed inappropriate for that age.
The safeguards include reminders to take a break from the screen, privacy warnings and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn't show ads tailored to individual tastes if a viewer is under 18.
If the system mistakenly identifies a viewer as a minor, the error can be corrected by showing YouTube a government-issued identification card, a credit card or a selfie.
“YouTube was one of the first platforms to offer experiences designed specifically for young people, and we’re proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,” James Beser, the video service's director of product management, wrote in a blog post about the age-verification system.
People will still be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content unless age is verified.
Political pressure on websites to do a better job of verifying ages and shielding children from inappropriate content has been building since late June, when the U.S. Supreme Court upheld a Texas law aimed at preventing minors from watching pornography online.
While some services, such as YouTube, have been stepping up their efforts to verify users' ages, others have contended that the responsibility should primarily fall upon the two main smartphone app stores run by Apple and Google — a position that those two technology powerhouses have resisted.
Some digital rights groups, such as the Electronic Frontier Foundation and the Center for Democracy & Technology, have raised concerns that age verification could infringe on personal privacy and violate First Amendment protections on free speech.