I was listening to a left-leaning podcast that had on a technology expert who didn't actually work for YouTube, but who claimed this is how YouTube monitors for hate speech.
Basically the idea is that every video automatically gets machine-generated subtitles through speech recognition (you can see this by hitting the CC button at the bottom of a YouTube video and clicking "Auto-Generate Subtitles" if the video doesn't already have uploaded subtitles), and the same thing happens to live streams. Uploaded videos whose subtitle transcripts include "controversial" words like COVID, Insurrection, Boogaloo, and Roseanne get flagged for an actual human to watch and approve, but in the meantime the video is automatically allowed onto YouTube. However, if a video that hasn't been approved yet gets mass-reported, it's automatically taken down pending an investigation.
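To make the claimed flow concrete, here's a rough Python sketch of that upload pipeline as described: scan the auto-generated transcript for watched words, flag matches for human review while leaving the video up, and pull the video automatically if a not-yet-approved one gets mass-reported. Everything in it (the word list, the report threshold, all the names) is made up for illustration; it's obviously not YouTube's actual code.

```python
# Purely hypothetical sketch of the upload flow as the podcast guest described it --
# not YouTube's actual system. The word list, the report threshold, and every name
# here (CONTROVERSIAL_WORDS, REPORT_THRESHOLD, VideoStatus, UploadedVideo) are made up.
from dataclasses import dataclass
from enum import Enum

CONTROVERSIAL_WORDS = {"covid", "insurrection", "boogaloo", "roseanne"}
REPORT_THRESHOLD = 100  # assumed cutoff for "mass reporting"

class VideoStatus(Enum):
    LIVE = "live"                # publicly visible, nothing flagged
    PENDING_REVIEW = "pending"   # still visible, but queued for a human reviewer
    TAKEN_DOWN = "taken_down"    # removed pending an investigation

@dataclass
class UploadedVideo:
    video_id: str
    transcript: str              # the auto-generated subtitle text
    reports: int = 0
    status: VideoStatus = VideoStatus.LIVE

def process_upload(video: UploadedVideo) -> None:
    """Flag the video for human review if its transcript contains a watched word,
    but leave it publicly visible in the meantime (per the claim above)."""
    words = set(video.transcript.lower().split())
    if words & CONTROVERSIAL_WORDS:
        video.status = VideoStatus.PENDING_REVIEW

def handle_report(video: UploadedVideo) -> None:
    """Mass reports on a not-yet-approved video trigger an automatic takedown."""
    video.reports += 1
    if (video.status is VideoStatus.PENDING_REVIEW
            and video.reports >= REPORT_THRESHOLD):
        video.status = VideoStatus.TAKEN_DOWN
```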
Similarly, streams are monitored for the same words, but since it's live, if controversial words are said in rapid succession the stream is automatically taken down until a human can review it on the fly and decide whether it should be approved and put back up or permanently taken off the air.
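And here's the stream version of the same idea, again just a guess at what "rapid succession" could mean in practice (a sliding window over the live captions). The window length and hit count are numbers I picked for the sketch, not anything from the podcast or from YouTube.

```python
# Same caveat as above: a made-up sketch of the live-stream rule as described.
# The "rapid succession" numbers (3 hits within 30 seconds) are assumptions.
from collections import deque
from typing import Optional
import time

CONTROVERSIAL_WORDS = {"covid", "insurrection", "boogaloo", "roseanne"}
WINDOW_SECONDS = 30   # assumed sliding window for "rapid succession"
MAX_HITS = 3          # assumed number of flagged words that trips the takedown

class StreamMonitor:
    def __init__(self) -> None:
        self.hits = deque()   # timestamps of flagged words heard in the live captions
        self.on_air = True

    def on_caption_word(self, word: str, now: Optional[float] = None) -> None:
        """Feed each word from the live auto-captions through the rule."""
        now = time.time() if now is None else now
        if word.lower().strip(".,!?") in CONTROVERSIAL_WORDS:
            self.hits.append(now)
        # drop hits that have aged out of the sliding window
        while self.hits and now - self.hits[0] > WINDOW_SECONDS:
            self.hits.popleft()
        if self.on_air and len(self.hits) >= MAX_HITS:
            self.on_air = False   # pulled automatically until a human reviews it
```

The only real difference from the upload case, going by the description, is that the default flips: uploads stay up until a human looks, streams come down until a human looks.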