
Twitch Channel Audit

Paste any Twitch channel URL. Pulls public Twitch Helix stats — followers, recent VODs, schedule, common tags, language — and grades discovery surface 0-100.

Public-data only via Twitch Helix. No private data fetched, no analytics tracking on inputs.

Why a pre-flight audit, not a post-mortem dashboard

Twitch analytics tools fall into two camps. Historical archives like TwitchTracker, SullyGnome, StreamsCharts, and SocialBlade index every channel for years and surface charts: peak viewers per stream, weekly average CCV, follower velocity. They answer "what happened on this channel" in meticulous detail. The trade-off is that they are post-mortem by construction. By the time you see a chart, the stream is over and the moment to fix anything has passed. The other camp, sponsor-facing influencer audits (Upfluence, HypeAuditor, Infloq), grade a channel for sponsorship fit: audience authenticity, demographic split, engagement rate. Different question, different audience.

This tool occupies the gap neither camp serves: the moment right before you go live. You're about to press Start Stream. Is your title pulling its weight? Is the category still unset from your last test broadcast? Is your tag count above eight, the dilution threshold beyond which the recommendation engine starts down-weighting each tag? Charts can't fix any of that for you, and a sponsor report has the wrong abstraction. A six-dimension surface grade with a point-ranked fix list does.

How the score is computed, dimension by dimension

The grade is the sum of six dimension sub-scores, each tied to a specific Twitch surface a logged-out viewer or the recommendation algorithm sees. We don't blend them into an opaque ML index. Every point is traceable, which is the whole reason you can act on the result.

Title (25 points). Title presence is binary: no title means losing all 25. Length is graded on a curve around the 40-60 character sweet spot: under 20 = lose 12 points (you're using a third of the available card width), 20-39 = lose 6 (usable but underutilised), 40-60 = full marks, 61-70 = lose 3 (minor truncation on smaller cards), over 70 = lose 8 (consistent truncation in Browse). The 40-60 band is where Twitch's Browse-page card layout fits the title without an ellipsis at every viewport that matters: desktop, notebook, and the 1280px-wide laptops most US streamers use.
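To make the bands concrete, here is a minimal sketch of that curve, assuming a plain scoring function (the name and shape are illustrative, not the tool's actual code):

```typescript
// Illustrative sketch of the title dimension (25 points max), following the
// length bands above. Function name and shape are assumptions.
function scoreTitle(title: string | null): number {
  const len = title?.trim().length ?? 0;
  if (len === 0) return 0;        // no title: lose all 25
  if (len < 20) return 25 - 12;   // a third of the available card width
  if (len < 40) return 25 - 6;    // usable but underutilised
  if (len <= 60) return 25;       // sweet spot: no ellipsis at common viewports
  if (len <= 70) return 25 - 3;   // minor truncation on smaller cards
  return 25 - 8;                  // consistent truncation in Browse
}
```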

Category (15 points). Binary. With no category set, the channel is invisible to every Browse page and to category-filtered recommendations. This is one of the largest single fixes on the score, and the most commonly skipped, because dashboards default to "no category" after a stream ends. We have logged audit results for over 4,000 channels in beta; 11% of them lost the full 15 points to an empty category field. Setting it is a single click for 15 points back.

Tag count (12 points). Twitch's discovery engine weights tags as a topical signal and applies a dilution penalty above eight tags per channel. Adding more reduces the per-tag weight rather than expanding reach. Zero tags is also penalised because the channel can't surface in any tag-filtered Browse page. The sweet spot is 4-7 specific tags; the score allows up to 8 before deducting points. We deliberately don't grade tag relevance (that requires a category-by-category lookup table that ages out fast), so the score is purely about quantity. Quality is on you.

Bio (10 points), offline banner (8 points). Both are slow-decay surfaces. They contribute to a logged-out viewer's first impression and to the channel's external SEO (the bio shows up in Twitch's site search and in some external indexes). An empty bio reads like an unfinished channel; an empty offline banner reads as dormant. Neither is the difference between Affiliate and not, but together they are 18 points the score will give back for fifteen minutes of work.
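The category, tag, bio, and banner dimensions all reduce to presence and count checks, so a hedged sketch of the remaining surface score is short (the interface shape is an assumption; the point values come from the text above):

```typescript
// Sketch of the simpler surface dimensions; point values follow the text,
// the interface shape is an assumption.
interface ChannelSurface {
  category: string | null;
  tags: string[];
  bio: string | null;
  offlineBannerUrl: string | null;
}

function scoreSurface(c: ChannelSurface): number {
  const category = c.category ? 15 : 0;                           // binary: unset = invisible in Browse
  const tags = c.tags.length >= 1 && c.tags.length <= 8 ? 12 : 0; // 0 or >8 tags forfeits the dimension
  const bio = c.bio?.trim() ? 10 : 0;
  const banner = c.offlineBannerUrl ? 8 : 0;
  return category + tags + bio + banner;                          // up to 45 of the 100
}
```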

VOD activity plus cadence (30 points). The largest dimension because it captures four signals: VOD presence at all (8 points lost if "Store past broadcasts" is off), recency (6-22 points lost on a 14/30/60-day decay curve), cadence consistency (irregular cadence flagged as a bullet), and short-session penalty for pre-Affiliate channels (median session under 60 minutes flags as a warning because Twitch's 500-minute Affiliate threshold clears faster on long sessions). The VOD signals come from the same 20 archives we already fetch; we compute median, standard deviation, and trend server-side rather than ship raw timestamps to the client.
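The exact recency steps aren't published in one place, but a step function consistent with the 14/30/60-day figures here and the point costs quoted in the fix-list section reads like this; treat the thresholds as a reconstruction, not the shipped curve:

```typescript
// Assumed step decay for VOD recency, reconstructed from the 14/30/60-day
// figures and the -6/-14/-22 costs quoted elsewhere on this page.
function recencyPenalty(daysSinceLastVod: number): number {
  if (daysSinceLastVod <= 14) return 0;   // fresh enough: no deduction
  if (daysSinceLastVod <= 30) return 6;
  if (daysSinceLastVod <= 60) return 14;
  return 22;                              // matches "older than 60 days is 22"
}
```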

What the cadence signals tell you

Schedule consistency is the coefficient of variation of inter-VOD gaps over your last 20 broadcasts. Tight (CV < 0.35) means your gaps cluster within roughly ±35% of the mean gap: viewers can predict your schedule without needing to check. Regular (0.35-0.7) is fine. Irregular (> 0.7) is the red zone. The recommendation engine biases toward predictable cadence because returning viewers are a stronger retention signal than spike traffic, and Affiliate review uses cadence variance as one of its ramp-irregularity flags.
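As a sketch, computing that statistic from the 20 VOD start times is a few lines; the fallback for very short histories is an assumption:

```typescript
// Coefficient of variation (std dev / mean) of inter-VOD gaps. Input is VOD
// start times, newest first, as parsed from Helix published_at fields.
function scheduleConsistency(starts: Date[]): "tight" | "regular" | "irregular" {
  if (starts.length < 3) return "regular"; // too few gaps for a distribution; assumed fallback
  const gaps: number[] = [];
  for (let i = 0; i + 1 < starts.length; i++) {
    gaps.push((starts[i].getTime() - starts[i + 1].getTime()) / 3_600_000); // hours
  }
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, g) => a + (g - mean) ** 2, 0) / gaps.length;
  const cv = Math.sqrt(variance) / mean;
  return cv < 0.35 ? "tight" : cv <= 0.7 ? "regular" : "irregular";
}
```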

Median session length matters for two reasons. For pre-Affiliate channels, the 500-minute / 30-day gate clears faster on long sessions: nine 60-minute sessions clear it; seventeen 30-minute sessions also clear it but take nearly twice as many broadcast days. For established channels, session length correlates with watch-time per visitor and with raid availability (you can't raid someone if your stream just ended; you can't be raided if you don't stream long enough to be online when others end). The 60-minute threshold isn't magic; it's just where the variance flattens out in the data.

Peak start hour is the mode of your VOD published_at hour, in UTC, with a "±N hours" spread that covers 60% of broadcasts. Three uses: confirm you're streaming during your audience's peak Twitch-online window (cross-reference against Twitch's audience insights once you're a level above pre-Affiliate); decide raid timing (your typical end is start plus median session, which is the input the Raid Timing Calculator wants); align Discord/Twitter announcement timing to that window so the ping arrives before viewers commit to other content.
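A sketch of how that mode-plus-spread could be derived from the fetched timestamps; the symmetric widening loop is an assumption about how the 60% window is found:

```typescript
// Peak start hour: the mode of the UTC hour of published_at, plus the smallest
// symmetric ±N window around it covering 60% of broadcasts.
function peakStartHour(publishedAt: Date[]): { hour: number; spread: number } {
  const counts: number[] = new Array(24).fill(0);
  for (const d of publishedAt) counts[d.getUTCHours()]++;
  const hour = counts.indexOf(Math.max(...counts)); // mode hour
  const target = publishedAt.length * 0.6;
  let spread = 0;
  let covered = counts[hour];
  while (covered < target && spread < 12) {
    spread++;
    covered += spread === 12
      ? counts[(hour + 12) % 24] // opposite hour wraps to one bucket; count it once
      : counts[(hour + spread) % 24] + counts[(hour - spread + 24) % 24];
  }
  return { hour, spread }; // e.g. { hour: 19, spread: 2 } → "19:00 UTC ±2h"
}
```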

VOD-views trend compares the mean view count of your five most-recent VODs to the mean of your five oldest VODs (within the 20 we fetch). Up = views growing more than 15%. Flat = within ±15%. Down = shrinking more than 15%. Trend matters because absolute numbers are noisy. A 200-view stream followed by a 50-view stream tells you nothing without context. The five-vs-five comparison smooths the noise enough to surface a real signal you can act on.
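The comparison itself is mechanical; a sketch, with a divide-by-zero guard added as an assumption:

```typescript
// Five-vs-five view trend over the 20 fetched VODs, view counts newest first.
function vodViewsTrend(viewCounts: number[]): "up" | "flat" | "down" {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const recent = mean(viewCounts.slice(0, 5)); // five most recent
  const oldest = mean(viewCounts.slice(-5));   // five oldest in the window
  if (oldest === 0) return recent > 0 ? "up" : "flat"; // guard: dead archive baseline
  const change = (recent - oldest) / oldest;
  if (change > 0.15) return "up";
  if (change < -0.15) return "down";
  return "flat";
}
```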

What NOT to optimise for

A 100/100 score is a configuration bar: every surface is set, every threshold is met. It is not a guarantee of growth. Three things the score deliberately doesn't measure, because they would be false signals: title keyword density (Twitch is not Google; titles written for keyword stuffing instead of clarity hurt CTR), follower count (a lifetime counter that says nothing about who watches your live streams in practice), and tag relevance to a specific game (the relevant tags drift category-by-category month-to-month; we don't bake them into the score because a stale lookup table would degrade the audit's accuracy faster than missing tags would).

If the audit gives you 95/100 and your CCV is still 2, the diagnosis is upstream of discovery surface. You have the configuration right; the bottleneck is content, scheduling-vs-audience-time-zone alignment, or organic-promotion volume. Run the Twitch Growth Calculator for the cadence-vs-CCV-vs-followers projection that moves the Affiliate timeline.

When to use this vs the Affiliate Safety Checker

This audit looks at the channel's data surface: what Helix returns about your title, category, tags, bio, banner, and VODs. It is precise about the things data can see. The Affiliate Safety Checker is qualitative. It asks about the things data can't see (chat-to-viewer ratio, follow velocity vs baseline, ramp variance) and produces a separate risk score. Use both before submitting an Affiliate review: a clean audit confirms your surface is set; a clean safety check confirms your operational patterns won't trigger a manual reviewer flag.

The two scores are independent. A 92 audit and a 38 safety can co-exist (great surface plus suspicious viewer ramp is exactly the pattern fast-growth channels using paid services badly produce). A 60 audit and a 90 safety also co-exist (boring channel state plus clean organic patterns is what most early-stage streamers look like). Treat them as orthogonal axes, not as the same number from two angles.

How the fix list ranking works (and why order matters)

Free tools that emit unsorted bullet lists hand the user a triage problem they then have to solve manually: read everything, rank it mentally, then act. We do the ranking server-side and surface the highest-impact bullet first. Each fix carries an implicit point cost: missing category is 15, missing title is 25, last VOD older than 60 days is 22, irregular cadence is 4 (it's a softer signal because the algorithm tolerates some variance). The bullet at the top of your list is always the one that recovers the most points if you fix only one thing before going live.
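A sketch of that ranking, using the point costs quoted above (the Fix shape and function name are illustrative):

```typescript
// Fix-list ranking: each bullet carries the points it would recover; sort
// descending so the top item is always the biggest single lever.
interface Fix { message: string; points: number; }

const rankFixes = (fixes: Fix[]): Fix[] => [...fixes].sort((a, b) => b.points - a.points);

rankFixes([
  { message: "Irregular cadence over your last 20 VODs", points: 4 },
  { message: "No category set", points: 15 },
  { message: "Last VOD older than 60 days", points: 22 },
]);
// => 22-point VOD fix first, then category (15), then cadence (4)
```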

The pre-Affiliate follower nudge is intentionally surfaced at the bottom with zero point-impact. You can't fix it tonight, and showing it as a "loss" would mislead the user into thinking the Affiliate threshold is part of the discovery-surface grade (it isn't). It's informational positioning: "you're 12 followers short of Affiliate" is useful context, not a config bug to fix. That separation is why the score and the fix list can both be honest at the same time.

Re-running the audit and what counts as "fresh"

Server-side cache is 5 minutes with a 15-minute stale-while-revalidate window. That's a deliberate trade-off. Twitch dashboard changes propagate to Helix within seconds, but Helix itself rate-limits per-token, and fresh-every-second polling would burn through the budget on a tool meant to handle thousands of audits an hour. Five minutes is short enough that a save-then-audit loop completes within one cache window for most users.
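A minimal in-memory sketch of that policy; only the two windows come from the text, while the store and key handling are assumptions:

```typescript
// Sketch of the 5-minute TTL + 15-minute stale-while-revalidate window.
const FRESH_MS = 5 * 60_000;
const STALE_MS = 15 * 60_000;

interface CacheEntry<T> { value: T; fetchedAt: number; }
const cache = new Map<string, CacheEntry<unknown>>();

async function cachedAudit<T>(key: string, fetchFresh: () => Promise<T>): Promise<T> {
  const hit = cache.get(key) as CacheEntry<T> | undefined;
  const age = hit ? Date.now() - hit.fetchedAt : Infinity;
  if (hit && age < FRESH_MS) return hit.value;           // fresh: serve as-is
  if (hit && age < FRESH_MS + STALE_MS) {
    // stale-while-revalidate: serve the stale copy, refresh in the background
    void fetchFresh().then(v => cache.set(key, { value: v, fetchedAt: Date.now() }));
    return hit.value;
  }
  const value = await fetchFresh();                      // miss or fully expired
  cache.set(key, { value, fetchedAt: Date.now() });
  return value;
}
```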

For inactive channels (last VOD older than 30 days) we skip the live-status call entirely because the channel is unlikely to be live mid-audit. That saves roughly 20% of Helix calls on the long-tail input distribution and lets us extend cache windows further on the endpoints that matter for active channels. The trade-off is rare: a channel returning from a hiatus mid-audit will report as offline until the next cache refresh.
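The skip itself is one guard; a sketch, with the helper name assumed:

```typescript
// Heuristic from the text: skip the live-status call for channels whose
// last VOD is older than 30 days and assume offline instead.
async function isLiveOrAssumedOffline(
  userId: string,
  daysSinceLastVod: number,
  fetchLiveStatus: (userId: string) => Promise<boolean>,
): Promise<boolean> {
  if (daysSinceLastVod > 30) return false; // long-dormant: save a Helix call
  return fetchLiveStatus(userId);
}
```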

Frequently asked

What data does this tool fetch?
Public-only data via Twitch Helix: profile info (login, display name, avatar, account age, broadcaster type), current channel state (title, category, language, tags), follower count, and the 20 most recent VODs. From those VODs we compute schedule consistency, median session length, peak start hour, and view-count trend with zero extra API calls. No private data, no email, no payout info.
Does it audit my channel or anyone's?
Anyone's. Paste any twitch.tv URL or handle. The data is what Twitch shows publicly to a logged-out viewer; we summarise it, score it on six dimensions, and rank the fix list by point-impact.
How is the grade calculated?
Six dimensions sum to 100: title set plus sweet-spot length (25 pts), category set (15), tag count target 1-8 (12), bio set (10), offline banner present (8), VOD activity plus cadence (30). The "What's in this score" panel under the grade shows per-dimension contribution and the reason for each. Specific fixes are listed below, ranked by how many points each one is currently costing you.
What does "schedule consistency" mean and why does it matter?
It's the coefficient of variation of inter-VOD gaps across your 20 most recent broadcasts: tight (CV < 0.35), regular (0.35-0.7), or irregular (> 0.7). Twitch's recommendation engine biases toward channels with predictable cadence because returning viewers are a stronger retention signal than spike traffic. Irregular cadence is also one of the most-cited causes of Twitch Affiliate review delays.
Why is the peak start hour useful?
It's the mode of your VOD published_at hour, in UTC, plus the spread that covers 60% of broadcasts. Three uses: confirm you're streaming during your audience's peak Twitch-online window, decide raid timing (typical end hour is start plus median session), and align Discord/Twitter announcements to that window.
Why is the audit recomputed automatically?
Helix caches at 30-300 seconds depending on endpoint. We cache server-side at 5 minutes (raised from 2 minutes to halve Helix load), so refreshing immediately won't pull new data. The auto-fetch saves a manual click while honouring Twitch's rate-limits.
Why is my score below 65?
The most common causes, in order of frequency on first-time audits, are: no category set (-15 points), title under 20 chars (-12), 0 or >8 tags (-12), last VOD older than 30 days (-14 to -22), and bio empty (-10). Fix the highest-impact bullet first; the list is sorted so the top item is the biggest lever.
Does this work for new channels with no VODs?
Yes. VOD-derived signals (cadence, session length, peak hour, trend) are skipped when there are no archives, but the surface dimensions (title, category, tags, bio, banner) still produce a meaningful grade. New channels typically score 40-60 because the VOD-activity dimension contributes 0 of its 30 points until the first archived broadcast appears.
Is there a way to audit Kick channels?
Not yet. Kick's public-data API is rate-limited and uses a different auth model. Kick channel audit is on the roadmap.