The agency screening problem in one sentence
A bad creator pick costs you a client, not a campaign. Every other workflow decision follows from that.
If you internalize that framing, the screening process becomes defensive — you need documented decisions, isolated client workspaces, and a repeatable scoring rubric that does not depend on who is reviewing that day.
Step 1: Set the brand context per client, not per person
Most agency vetting processes fail because the context lives with the account manager. When they leave, go on vacation, or switch clients, the next reviewer has to rebuild the context from scratch.
The fix is to write the brand context down, store it with the client, and apply it automatically to every creator review. At minimum, capture:
- Preferred topic clusters (e.g., "home fitness", "outdoor cooking")
- Blocked keywords and categories (e.g., competitors, restricted categories)
- Audience direction (age, geography, interests you want represented)
- Tone and voice expectations
- Risk tolerance and brand-safety posture
- Creators the client has worked with before, successfully or unsuccessfully
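Stored as structured data rather than tribal knowledge, a brand-context record might look like the following. This is a minimal sketch: the field names and shape are illustrative assumptions, not Morthn's actual schema.

```python
# Hypothetical per-client brand-context record. Field names are
# illustrative assumptions, not Morthn's actual context format.
from dataclasses import dataclass, field

@dataclass
class BrandContext:
    client: str
    preferred_topics: list = field(default_factory=list)
    blocked_keywords: list = field(default_factory=list)
    audience: dict = field(default_factory=dict)
    tone: str = ""
    risk_tolerance: str = "moderate"   # e.g. "strict", "moderate", "permissive"
    history: dict = field(default_factory=dict)  # past creators: "worked" / "failed"

# Example client context, applied to every review for this client.
acme = BrandContext(
    client="Acme Fitness",
    preferred_topics=["home fitness", "outdoor cooking"],
    blocked_keywords=["competitor-x", "gambling"],
    audience={"age": "25-44", "geo": "US/CA", "interests": "health, cooking"},
    tone="energetic but evidence-led",
    risk_tolerance="strict",
    history={"worked": ["@fitcoach_anna"], "failed": ["@dealhunter99"]},
)
```

Because the record lives with the client rather than with a reviewer, any account manager can load it and apply the same constraints on day one.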
Step 2: Isolate workspaces per client
Running two client brands in the same scanner workspace is a data-governance problem waiting to happen. Shortlists get mixed. Context cross-contaminates. A creator flagged as unsafe for one client gets inadvertently recommended to another.
Morthn's Agency plan includes 1000 scans per month and up to 3 isolated sub-account workspaces — one per client, brand, or campaign pod. Shortlists, scans, and settings stay separated.
Step 3: Standardize the scoring rubric
A rubric is what turns a review into a repeatable process. Every reviewer scores the same signals, records the same evidence, and arrives at a comparable recommendation.
The rubric we recommend has five categories scored 1-5 with written reasoning for each: audience quality, engagement authenticity, risk, growth trend, and brand fit. Add category-specific weights if needed, but start with equal weights.
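With equal weights, the overall score reduces to a weighted average over the five categories. The sketch below assumes that weighting scheme; the category names come from the rubric above.

```python
# Rubric score: five categories, each scored 1-5 with written reasoning.
# Equal weights by default; pass a weights dict to adjust per category.
CATEGORIES = ["audience_quality", "engagement_authenticity", "risk",
              "growth_trend", "brand_fit"]

def rubric_score(scores, weights=None):
    """Weighted average of category scores; defaults to equal weights."""
    weights = weights or {c: 1.0 for c in CATEGORIES}
    total_weight = sum(weights[c] for c in CATEGORIES)
    return sum(scores[c] * weights[c] for c in CATEGORIES) / total_weight

review = {"audience_quality": 4, "engagement_authenticity": 5, "risk": 3,
          "growth_trend": 4, "brand_fit": 4}
print(rubric_score(review))  # 4.0
```

Starting with equal weights keeps reviewers comparable; once quarterly audits show which categories actually predict outcomes, the weights dict is the one place to adjust.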
Step 4: Format reports for client delivery, not internal consumption
The report that lands in the client's inbox is the output. Every workflow decision should optimize for that final deliverable.
That means: executive summary at the top, score and recommendation front and center, evidence appended, and formatting that can be pasted into a deck or shared as a link. If you need to copy-paste stats into a slide before sending, the workflow is not optimized for client delivery.
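A delivery-ready report can be assembled so the summary and verdict lead and the evidence trails. The function below is an illustrative sketch of that ordering, not Morthn's actual report format.

```python
# Hypothetical report builder: summary and recommendation first,
# evidence appended, output pasteable into a deck or shared as text.
def client_report(creator, score, recommendation, summary, evidence):
    """Render a client-facing review with the verdict front and center."""
    lines = [
        f"Creator review: {creator}",
        f"Recommendation: {recommendation} (score {score:.1f}/5)",
        "",
        "Executive summary",
        summary,
        "",
        "Evidence",
    ]
    lines += [f"- {item}" for item in evidence]
    return "\n".join(lines)

report = client_report(
    creator="@fitcoach_anna",
    score=4.0,
    recommendation="Approve",
    summary="Strong audience match and authentic engagement; minor risk flags noted.",
    evidence=[
        "Engagement rate 4.2% vs 1.8% niche median",
        "No brand-safety flags in the last 90 days",
    ],
)
```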
Step 5: Audit the wins and losses quarterly
The screening process itself needs a review loop. Every quarter, pull the creators you recommended and compare against campaign outcomes. Which signals predicted the wins? Which missed on the losses?
Use that to update the rubric, adjust the brand context files, and retrain new reviewers faster. A screening process that does not get reviewed quietly becomes a screening process that drifts.
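One way to run the quarterly comparison: average each rubric category separately for wins and losses, and look at the gap. A large gap means the signal carried predictive weight; a gap near zero means it did not. The records below are hypothetical.

```python
# Quarterly audit sketch: which rubric categories separated wins from losses?
# Review records and scores are hypothetical example data.
from statistics import mean

reviews = [
    {"creator": "@a", "won": True,  "scores": {"audience_quality": 4, "risk": 4, "brand_fit": 5}},
    {"creator": "@b", "won": True,  "scores": {"audience_quality": 5, "risk": 3, "brand_fit": 4}},
    {"creator": "@c", "won": False, "scores": {"audience_quality": 3, "risk": 4, "brand_fit": 2}},
    {"creator": "@d", "won": False, "scores": {"audience_quality": 2, "risk": 3, "brand_fit": 3}},
]

def category_gaps(reviews):
    """Mean score per category for wins minus losses; bigger gap = stronger signal."""
    gaps = {}
    for c in reviews[0]["scores"]:
        wins = mean(r["scores"][c] for r in reviews if r["won"])
        losses = mean(r["scores"][c] for r in reviews if not r["won"])
        gaps[c] = wins - losses
    return gaps

print(category_gaps(reviews))
```

In this toy data, audience quality and brand fit separated wins from losses while risk did not, which would argue for up-weighting those two categories in the rubric.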
ROI math: what this workflow is worth
At 300 creators a month and roughly 18 minutes per review, manual vetting eats 90 hours. At an agency blended rate of $70/hour, that is $6,300/month in billable time. The Agency plan at $2,250/month cuts review time by roughly 60%, saving about 54 hours, or $3,780 a month.
Those savings cover the plan with roughly $1,500 a month to spare, before you factor in better picks, stronger client retention, and the documented-decision coverage that protects you when a campaign does not land.
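The arithmetic, using the figures quoted above (300 creators, $70/hour blended rate, $2,250/month plan, roughly 60% time reduction):

```python
# ROI sketch using the figures quoted in the article.
creators_per_month = 300
minutes_per_review = 18        # 300 reviews x 18 min = 90 hours
blended_rate = 70              # dollars per hour
plan_cost = 2250               # dollars per month
time_reduction = 0.60          # roughly 60% less review time

manual_hours = creators_per_month * minutes_per_review / 60   # 90 hours
manual_cost = manual_hours * blended_rate                     # $6,300/month
monthly_savings = manual_cost * time_reduction                # ~$3,780/month
net = monthly_savings - plan_cost                             # ~$1,530 after the plan
print(manual_cost, monthly_savings, net)
```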
Built for agency volume and client delivery
Isolated workspaces, shareable reports, and a documented decision for every creator. See the agency workflow in action.
See Morthn for agencies