Banned words on Twitch in 2026: what AutoMod blocks and how to set your own list
April 30, 2026
Updated April 30, 2026
Quick note — Twitch never publishes a master list of banned words. The platform combines a private slur filter (built into AutoMod and not user-editable), four configurable filter categories you tune yourself, and a Blocked Terms list every channel maintains by hand. Knowing where each layer lives is the difference between a clean chat and a 24-hour suspension. This 2026 guide walks through every layer, the rules Twitch enforces site-wide, and the moderation tools that shipped between 2021 and 2025.
List of Banned Words and Expressions
Twitch refuses to publish a master list. As Dexerto put it in its April 2024 explainer: "Although there is no official list of words that have been banned on Twitch, several categories of offensive words have been banned on the platform." In practice you have three layers to manage. The built-in slur filter sits inside AutoMod; Twitch maintains it, and you cannot see or edit it. The four AutoMod categories are tunable. Your channel-level Blocked Terms list is yours alone.
The attributes Twitch enforces against site-wide under the Hateful Conduct Policy number thirteen. The platform names them as race, ethnicity, color, caste, national origin, immigration status, religion, sex, gender, gender identity, sexual orientation, disability, serious medical condition and veteran status. Worth flagging: a slur or coded attack tied to any of those will trigger a strike on the streamer, on the chatter, or on both, depending on context.
- Hate-speech slurs targeting protected attributes (race, ethnicity, religion, gender identity, sexual orientation, disability, veteran status). The slur dictionary is private and updated by Twitch.
- Identity insults outside slurs but used as harassment: "r-word", "a-word", coded variants. AutoMod's evasive-language detection catches the common spellings.
- Three explicitly enforced terms used as insults: "simp", "incel" and "virgin". Twitch ruled in February 2021 these are negative and unacceptable, and channels regularly hand out timeouts when they appear.
- Sexual harassment phrases, unsolicited remarks about a streamer's body, or suggestions of non-consensual scenarios, captured by the Sexual Content category at Level 2 and above.
- Threats of violence or self-harm, including 'kys' and similar shorthand, captured by the Hostility category and reportable to Twitch directly.
- Doxxing material: real names, phone numbers, addresses, work locations posted without consent. AutoMod flags patterns that look like phone numbers; the rest is a manual mod call.
- Hate-symbol references in text, including the Confederate flag wording and known hate-group acronyms. Twitch broadened its hate-symbol enforcement in the 2020 Hateful Conduct update.
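The phone-number flagging mentioned under doxxing material can be approximated with a simple regex. This is a hedged illustration of the idea, not Twitch's actual detector, whose rules are private and certainly broader:

```python
import re

# Hypothetical approximation of a phone-number-like pattern check.
# Matches 10-11 digit runs with common separators (spaces, dots,
# dashes, parentheses, an optional country prefix).
PHONE_LIKE = re.compile(
    r"(?:\+?\d{1,2}[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}"
)

def looks_like_phone_number(message: str) -> bool:
    """Return True if the message contains a phone-number-shaped token."""
    return PHONE_LIKE.search(message) is not None
```

Everything that does not fit a pattern like this (names, employers, addresses) stays a manual mod call, exactly as the list above says.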
On November 15, 2024 Twitch added "Zionist" to the list when used as an attack on a person or group based on their background or religion. Use of the word in political discussion (supportive or critical) is allowed; use as a slur is not. That update is a useful illustration: enforcement scope changes by quarter, so keep an eye on the Twitch Safety blog rather than third-party word lists.
If you want a starting point for your channel's own Blocked Terms list, Twitch's UserVoice forum and StreamScheme's 2026 audit recommend seeding 30 to 60 entries built from real moderation events on similar channels. We cover the practical workflow further in the StreamRise guide on managing harassment in chat.
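Seeding 30 to 60 entries by hand is tedious, so the step can be scripted against the Helix moderation API's Add Blocked Term endpoint. A minimal sketch, assuming you hold a user token with the `moderator:manage:blocked_terms` scope — it only builds the request descriptions; the IDs and the sending layer are placeholders:

```python
# Hedged sketch: batch-seed a channel's Blocked Terms list. Each entry
# becomes one POST to the Helix "Add Blocked Term" endpoint; terms are
# normalized and de-duplicated before any request is built.
HELIX_URL = "https://api.twitch.tv/helix/moderation/blocked_terms"

def build_seed_requests(broadcaster_id: str, moderator_id: str,
                        terms: list[str]) -> list[dict]:
    seen: set[str] = set()
    requests_out = []
    for term in terms:
        term = term.strip().lower()
        if not term or term in seen:      # skip blanks and duplicates
            continue
        seen.add(term)
        requests_out.append({
            "method": "POST",
            "url": HELIX_URL,
            "params": {"broadcaster_id": broadcaster_id,
                       "moderator_id": moderator_id},
            "json": {"text": term},       # the term goes in the body
        })
    return requests_out
```

Feed it the list you compiled from real moderation events, then replay the output through your HTTP client of choice with the proper auth headers.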
Context Selection and Ban Formulation
Twitch enforces against impact, not intent. The 2020 hateful-conduct policy update made this explicit: a streamer can be sanctioned for letting a slur sit in chat for two minutes even if no one in the room "meant" it. Saying a word in jest, quoting a song lyric, and reading a Discord message aloud on stream are all scored the same as a direct attack. Account actions land in three buckets, and they stack across a 90-day window.
- Warning. A first offense for low-severity terms in your own chat, with no prior history. The strike sits on the account but the channel stays live.
- Suspension. 1-day, 3-day, 7-day or 30-day timeouts on the broadcasting account. This is the most common outcome for repeated AutoMod-eligible content the streamer left in chat.
- Indefinite suspension. Used for the worst categories: severe slurs, doxxing, repeated harassment of a specific viewer, or coordinated raid behavior. Reinstatement requires an appeal through the Twitch help portal and is rarely granted on the first request.
Three rules cut your exposure. First, set AutoMod to Level 2 minimum on day one and only loosen specific categories where your community needs slack. Second, keep your follower-only delay at 10 minutes for the first 90 days of a new channel; the most common ban-evasion accounts are under one hour old. Third, never re-quote a slur to "address" it on stream. The mod-view replay shows what happened, so you do not need to repeat the word.
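The second rule, the 10-minute follower-only delay, can be applied programmatically through the Helix Update Chat Settings endpoint. A hedged sketch, assuming a token with the `moderator:manage:chat_settings` scope; it only builds the request, and the IDs are placeholders:

```python
# Hedged sketch: express the 10-minute follower-only delay as a Helix
# "Update Chat Settings" call (PATCH /helix/chat/settings).
# follower_mode_duration is in minutes.
def follower_mode_request(broadcaster_id: str, moderator_id: str,
                          minutes: int = 10) -> dict:
    if not 0 <= minutes <= 129600:        # documented Helix range
        raise ValueError("follower_mode_duration out of range")
    return {
        "method": "PATCH",
        "url": "https://api.twitch.tv/helix/chat/settings",
        "params": {"broadcaster_id": broadcaster_id,
                   "moderator_id": moderator_id},
        "json": {"follower_mode": True,
                 "follower_mode_duration": minutes},
    }
```

After the first 90 days you can loosen the delay by sending the same call with a smaller value, or `follower_mode: False` to turn it off.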
If chat puts you in an awkward spot, say a slur scrolls by or a viewer brings up a banned topic, the safest move is to delete the message, time out the user, and move on without naming what they said. The Mod View timeline keeps the receipt. We walk through that interface in our Mod View guide.
Extended List of Prohibited Behavior
Words are only one layer. Twitch's Community Guidelines list a wider set of behaviors that pull strikes regardless of which specific word was used. The four AutoMod categories (Discrimination, Sexual Content, Hostility, Profanity) map onto the policy at the message-by-message level. Each category has its own slider.
- Coordinated harassment: telling chat to brigade another streamer's chat, hate-raid coordination, sharing dox links. The 2021 hate-raid lawsuits made it clear this can become a legal matter, not only a Twitch one.
- Discussion that praises designated terror groups, hate organizations, or known mass-violence perpetrators. Twitch's Banned Hate Group list is updated continuously and covers groups by name.
- Promotion of self-harm or suicide. Mention in a support context is allowed; encouragement is not. The line is whether the message could plausibly push a viewer toward acting.
- Sexual content outside the appropriate label: nudity, sexually explicit text in chat, simulating sex acts on cam. Twitch's content classification labels and 18+ tag exist for this; using them is the compliant way to present adult content.
- Sharing private information of others (doxxing): home address, employer, school, real-life identity tied to a Twitch handle without consent.
- Impersonation: claiming to be another streamer, a Twitch employee, or a public figure for the purpose of misleading viewers.
Each of the four AutoMod categories is a slider from 0 to 4. Twitch describes Level 0 as "only commonly blocked terms", Level 1 as removing hate speech, Level 2 as adding sexually explicit and abusive language, Level 3 as adding more identity language, and Level 4 as the strictest filter, adding profanity and mild trash talk. Most streamers run a hybrid: Discrimination at 4, Sexual Content at 3, Hostility at 2 or 3, and Profanity at 1 or 2 depending on the audience.
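The hybrid setup above can be kept as a small config with validation so a typo never silently sets a category to an out-of-range level. A minimal local sketch — the category names follow the dashboard labels, and this models the settings rather than calling any Twitch API:

```python
# Recommended hybrid defaults from the text; override per channel.
RECOMMENDED = {
    "Discrimination": 4,
    "Sexual Content": 3,
    "Hostility": 3,
    "Profanity": 2,
}

def validate_levels(overrides: dict) -> dict:
    """Merge channel overrides onto the defaults, rejecting bad input."""
    for category, level in overrides.items():
        if category not in RECOMMENDED:
            raise KeyError(f"unknown AutoMod category: {category}")
        if not 0 <= level <= 4:
            raise ValueError(f"{category}: level must be 0-4, got {level}")
    return {**RECOMMENDED, **overrides}   # fill unspecified categories
```

A family-friendly channel might pass `{"Profanity": 4}`; an adult-audience channel `{"Profanity": 1}`.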
Suspicious User Detection layers on top. The machine-learning model flags accounts that look like ban-dodgers, splits them into "likely" and "possible", and quietly hides messages from the likely group while letting moderators see the possible group as flagged. As MCV/Develop reported on launch, the goal is to catch evaders "powered by a machine learning model that takes a number of signals into account" rather than chase fresh handles by hand.
Content Prohibited for Streaming
Word filters do not save you when the screen itself shows banned content. Twitch reviews video, audio and on-screen text together. A muted stream that displays slur graffiti on a game wall still counts as a violation. The on-screen content side is the most common source of "I have no idea why I got suspended" appeals, because the streamer was watching chat and missed the visual.
- Real-world violence presented as entertainment: gore footage, war footage shared without journalistic framing, accident videos shared for shock value. The label exists for legitimate reaction streams; using it correctly avoids strikes.
- Pornographic material or close-cropped genitalia, even briefly. AI-generated nudity counts the same as filmed nudity under the 2024 deepfake-content updates.
- Games rated AO (Adults Only) by the ESRB are flat-out banned on Twitch regardless of label settings. Games rated M with explicit nudity are allowed only with the Mature Content classification label set on the stream.
- Sharing copyrighted music as the stream's audio without a license. Twitch's Soundtrack tool, royalty-free libraries, and DMCA muting were rolled out specifically to address this; the rules tightened materially in the 2020-2022 DMCA wave.
- Glorification of designated dangerous individuals, terror groups or mass-violence perpetrators, including in costume or username form.
- Live commission of crime: the stream itself documenting a crime in progress. This is rare but always indefinite-suspension territory.
Mature content classification labels are the safety valve. If your stream contains profanity, sexual themes, drug use, gambling, or violence above the standard threshold, set the label before you go live. The label gates your stream behind a viewer warning rather than acting as a free pass; it pairs with AutoMod, it does not replace it.
Age Restrictions
Twitch's terms set 13 as the minimum age to hold an account. Users 13 to 17 must stream under the supervision of a parent or legal guardian and the supervisor's name must be on the account. This is enforced by report rather than by ID check. Twitch acts when a viewer flags the channel and the account holder cannot demonstrate compliance.
The age threshold matters for chat moderation in two ways. First, an underage broadcaster running adult-leaning content (profanity, sexual humor, alcohol) draws faster suspensions because two policy lines are crossed at once. Second, the Mature Content classification label cannot be used to greenlight content from an underage account; the platform expects the broadcaster themselves to fit the rating.
If you stream with chat AutoMod off and a minor sends a message containing a slur, you carry the moderation burden. The minor's account may also receive a strike, but Twitch holds the channel accountable for letting it sit. Run the Profanity slider at Level 2 minimum on any channel that knowingly draws viewers under 18, and pair it with phone-verified chat to slow down throwaway accounts.
Account security is a related risk. A compromised account that streams banned content is still your problem to clean up after recovery. Two-factor authentication is the cheapest insurance available; we cover the setup in the StreamRise 2FA guide.
Copyright Violations
DMCA strikes aren't chat bans, but they often arrive in the same week. A copyright takedown can pull a VOD, mute a clip, or rack up enough strikes to indefinitely suspend the account. The 2020 wave caught thousands of streamers, and the 2022-2024 follow-ups tightened the screws on background music in particular.
- Background music from copyrighted sources played over a live stream. Use Twitch's Soundtrack tool, Pretzel Rocks, Streambeats, or any explicitly licensed library. Spotify and Apple Music are not licensed for streaming.
- Watch-along streams of films, TV episodes or sports broadcasts. Even if you watch quietly, the audience is consuming licensed content, and the rights-holder treats this as redistribution.
- Speedruns and game playthroughs are usually fine because publishers grant streaming rights, but specific titles, cutscenes, and licensed soundtracks within games can trigger mutes. Konami's Metal Gear and Square Enix's Final Fantasy series have historically been hot zones.
- Reaction streams to YouTube videos. The 2023 reaction-content wave pulled dozens of high-profile bans. Quote sparingly and add original commentary that takes up most of the screen time.
- Re-uploading other streamers' clips to your channel without permission. Even a few seconds of someone else's broadcast can pull a strike if the original creator reports it.
The practical countermeasure is a routine: mute non-licensed music sources before going live, run a music-detection plugin on the host PC, and review VOD audio at 1.5× speed within 48 hours of the broadcast. If a song slipped through, hit the manual mute button on the VOD before the rights-holder bot finds it.
Other Reasons for Blocking
A handful of policies sit outside the obvious slur-and-music categories but pull bans regularly. They tend to surface when streamers experiment with new genres or grow into a larger audience and start getting reported by viewers from other communities.
- Gambling-site sponsorships from providers not on Twitch's licensed list. Twitch banned slot, roulette, dice and casino streams from unlicensed operators in October 2022 and that ban remains active.
- Promotion of pirated content: cracked games, leaked code, illicit streaming sites. Even a verbal recommendation in chat is enough to draw a report.
- Selling fraudulent goods or services on stream: ticket scalping, fake event passes, counterfeit merchandise. The same applies to investment scams promoted to viewers.
- Wearing clothing that violates the attire policy: fully transparent tops, exposed undergarments framed sexually, or context where the channel becomes a sexual-content stream by appearance.
- Fake follow / view inflation that uses bot networks. Real viewer-promotion services that deliver real residential IPs operate on a different surface; Twitch's enforcement targets the obvious bot patterns. We are transparent about how StreamRise handles this in our service pages.
- Account sharing: handing your password to a co-streamer without using shared-channel permissions. The 2024 update tightened account-sharing enforcement for streamers in monetization programs.
Twitch lawsuit data illustrates the tail risk. The 2021 hate-raid suit alleged that a single account holder was "linked to 3,000 bot accounts involved in hate raids". Platform enforcement can move from a chat suspension to a federal lawsuit when the volume is large enough.
Frequent Use of Profanity
Profanity sits in its own AutoMod category and Twitch separates "general" profanity from "sexual" profanity. The Profanity slider runs from 0 (everything through) to 4 (very strict). Most adult-audience channels park it at 1 or 2 and rely on the Mature Content label to set viewer expectations. The point of the label is consent: viewers see the warning and choose to enter.
Twitch's 2025 AutoMod updates included a Testing tool that lets you paste a sample message and see how your current settings would handle it. As the GamingCareers newsletter described it: "This lets you input words or phrases and see exactly how your current AutoMod settings would handle them, whether they'd be allowed, held for review, or blocked entirely." Use it after every settings tweak; the difference between Level 2 and Level 3 on Hostility is bigger than the description suggests.
- Set Discrimination to 4 on every channel. The cost of false positives is a single permitted-term entry; the cost of a false negative is a strike.
- Set Sexual Content to 3 unless the channel is labeled adult; then 2.
- Set Hostility to 2 or 3. Too low and trash talk lingers; too high and competitive chat feels sanitized.
- Set Profanity to match your real tone. If you swear casually on stream, your chat will too; running Profanity at 4 here makes you look hypocritical to viewers.
- Add wildcards to your Blocked Terms list for evasion patterns. Twitch's docs confirm wildcards work "at the start or end of terms". For example "hate*" catches "hateful" and "haters". Full regex is not supported in the built-in list; route to Nightbot for that.
- Mark sensitive blocked terms as private so moderators do not see them. Public terms remain editable by your mod team; private terms are owner-only.
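The wildcard behavior described in the list above, with `*` meaningful only at the start or end of a term per Twitch's docs, can be sketched in a few lines. A hedged illustration of the matching logic, not Twitch's implementation:

```python
import re

# Twitch-style Blocked Terms wildcards: a '*' at the start or end of a
# term loosens the word boundary on that side; everything else is literal.
def term_to_regex(term: str) -> re.Pattern:
    leading = term.startswith("*")
    trailing = term.endswith("*")
    core = re.escape(term.strip("*"))
    pattern = ("" if leading else r"\b") + core + ("" if trailing else r"\b")
    return re.compile(pattern, re.IGNORECASE)

def is_blocked(message: str, terms: list[str]) -> bool:
    """True if any blocked term (with optional wildcards) matches."""
    return any(term_to_regex(t).search(message) for t in terms)
```

This also shows why full regex is unnecessary for most lists: prefix and suffix wildcards cover the common evasion patterns, and anything fancier belongs in Nightbot.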
If AutoMod is too strict, the Permitted Terms list is the safety valve. Each time a moderator approves a held message, the offending phrase joins Permitted Terms temporarily: first for an hour, then a day, then a week, then permanently. That graduated trust loop means your filter learns the channel's vocabulary without you white-listing every emote by hand. We dig deeper into chat command basics in the StreamRise chat-commands reference, and we map out the moderator-permission tree in the managing-roles guide.
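The graduated trust loop above, where each approval of the same held phrase extends its Permitted Terms window one step further, follows a simple ladder. A sketch using illustrative labels that mirror the hour/day/week/permanent sequence from the text:

```python
# The escalation ladder: each moderator approval of the same phrase
# moves it one rung up, capping at "permanent".
ESCALATION = ["1 hour", "1 day", "1 week", "permanent"]

def next_permit_window(approval_count: int) -> str:
    """Window granted after the Nth moderator approval (1-indexed)."""
    if approval_count < 1:
        raise ValueError("approval_count starts at 1")
    index = min(approval_count - 1, len(ESCALATION) - 1)
    return ESCALATION[index]
```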
Run the AutoMod Testing tool weekly, audit the Blocked Terms list quarterly, and treat any Hateful Conduct policy update from Twitch as a trigger to review your settings within 48 hours. A clean chat reduces the streamer's mental load and is the difference between a 4-hour broadcast and an unscheduled appeal.
FAQ: banned words and AutoMod on Twitch in 2026
Does Twitch publish a full list of banned words?
No. Twitch keeps the slur dictionary private, both inside AutoMod and in the site-wide Hateful Conduct enforcement layer. What is public are the four AutoMod filter categories (Discrimination, Sexual Content, Hostility, Profanity) and their 0-to-4 strictness sliders. For specific words, you build your own Blocked Terms list per channel and rely on AutoMod for everything else.
Why are "simp", "incel" and "virgin" treated as banned words?
Twitch ruled in February 2021 that these three terms used as insults violate the Hateful Conduct policy. They sit in AutoMod's identity-attack category at every level above 0, and channel mods often add them to Blocked Terms separately. Even in a non-attack context ("I'm not a virgin gamer"), most channels still hand out a timeout because mods tune for safety.
How do the AutoMod levels work?
Twitch lets you tune four categories independently from 0 to 4. Level 0 lets nearly everything through, Level 1 catches hate speech, Level 2 adds sexually explicit and abusive language, Level 3 adds more identity language and sex words, and Level 4 adds profanity and mild trash talk. Most streamers run a hybrid (strict on Discrimination, looser on Profanity) rather than one global level.
Is there a limit on how many Blocked Terms a channel can have?
Twitch does not publish a hard cap, and the practical limit on the dashboard is large enough that no public guide treats it as a constraint. Multiple sources describe it as effectively unlimited for normal channel use. What matters more is keeping the list audited; large unmaintained lists generate false positives and burn moderator attention.
Do wildcards or regex work in the Blocked Terms list?
Wildcards work using the asterisk at the start or end of a term. "hate*" catches "hateful" and "haters"; "*someurl.com*" catches that domain in any URL form. Full regex is not supported inside Twitch's native Blocked Terms list. If you need regex, route through Nightbot's blacklist filter, which supports it natively and runs alongside AutoMod.
Where is the AutoMod Testing tool and what does it do?
The AutoMod Testing tool launched as part of Twitch's 2025 moderation updates and lives inside the Creator Dashboard moderation page. You paste a sample message and the tool tells you whether your current settings would let it through, hold it for review, or block it outright. Run it after every settings tweak; Level 2 and Level 3 behave more differently than the labels imply.
Can moderators see private blocked terms?
No. When you mark a blocked term as private, only the channel owner can see and edit it. Public terms remain editable by your moderator team, a useful split for sensitive examples like personal nicknames or stalker handles you do not want shared with mods.
How does Suspicious User Detection interact with AutoMod?
Suspicious User Detection is a machine-learning layer that flags accounts likely to be ban evaders. "Likely" suspects have their messages hidden from chat by default; "possible" suspects are flagged but visible. The two systems compound: a flagged user typing a borderline term gets caught faster than a regular viewer would. The combination is the most effective hate-raid defense outside Shield Mode.
