Why pre-spend screening matters more than post-campaign analytics
Post-campaign analytics tell you how money already spent performed. Pre-spend screening tells you whether to spend at all. For brands, the difference is thousands of dollars per bad pick; for agencies, it can be an entire client relationship.
Surface metrics like follower count and average likes are the first things teams look at and the last things that predict outcomes. A creator with 300,000 followers and a 3% engagement rate can still deliver worse ROI than a 40,000-follower creator with authentic engagement and a matching audience.
The five signals that actually predict creator ROI
Every creator you consider should be scored against these five signals before you move toward negotiation:
- Audience quality — follower geography, account-age distribution, and inactive or bot concentration
- Engagement authenticity — like-to-comment ratios, comment substance, pod patterns, engagement spikes benchmarked against size-matched peers
- Growth trend — follower curve modeled against expected growth rate to surface sudden spikes, mass unfollows, and suspicious acquisition patterns
- Risk signals — brand-safety history, sponsored-content density, and topic mix alignment with your brand
- Brand fit — topic overlap, tone match, and audience direction against the context you care about
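The five signals above can be combined into a single pre-spend score. The sketch below is illustrative only: the weights, thresholds, and field names are assumptions for this example, not Morthn's actual scoring model. Each signal is normalized to a 0-1 value (higher is better) and rolled into a weighted composite with a coarse go / review / pass call.

```python
from dataclasses import dataclass

# Illustrative weights -- tune these to your own campaign priorities.
WEIGHTS = {
    "audience_quality": 0.25,
    "engagement_authenticity": 0.25,
    "growth_trend": 0.15,
    "risk_signals": 0.15,
    "brand_fit": 0.20,
}

@dataclass
class CreatorSignals:
    """Each signal normalized to 0-1, higher is better."""
    audience_quality: float
    engagement_authenticity: float
    growth_trend: float
    risk_signals: float   # 1.0 means no risk flags found
    brand_fit: float

def composite_score(s: CreatorSignals) -> float:
    """Weighted average of the five pre-spend signals."""
    return sum(w * getattr(s, name) for name, w in WEIGHTS.items())

def recommendation(score: float) -> str:
    """Map the composite to a coarse decision (thresholds are illustrative)."""
    if score >= 0.75:
        return "go"
    if score >= 0.55:
        return "review"
    return "pass"
```

A 40,000-follower creator with strong, authentic signals can outscore a 300,000-follower account this way, which is exactly the point of scoring signals rather than follower counts.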
A five-minute manual vetting checklist
If you do not have a screening tool yet, run this checklist for every creator before a contract is drafted. Five minutes per creator is the target. If any step takes longer than that, you are over-indexing on research and under-indexing on the decision.
- Pull the last 12 pieces of content and read the comment sections end to end
- Spot-check 20 followers for account age, post history, and geography
- Run the handle through at least one fraud-detection tool
- Check brand safety by searching their handle plus controversial keywords
- Score the creator against your brand context using a written 1-5 rubric
- Document the decision and the evidence that led to it
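The checklist above, including the final documentation step, can be enforced in code. This is a minimal sketch, assuming hypothetical step names and a simple JSON file as the retrievable record: the point is that a decision cannot be recorded until every step has evidence attached.

```python
from dataclasses import dataclass, field
import json

# The checklist steps above, as machine-checkable keys (names are illustrative).
CHECKLIST = (
    "content_and_comments_review",
    "follower_spot_check",
    "fraud_tool_scan",
    "brand_safety_search",
    "rubric_score",
)

@dataclass
class VettingRecord:
    handle: str
    steps: dict = field(default_factory=dict)  # step name -> evidence note
    decision: str = ""                         # "go" or "pass", set at the end

    def log_step(self, step: str, evidence: str) -> None:
        """Attach the evidence showing a checklist step was actually done."""
        if step not in CHECKLIST:
            raise ValueError(f"unknown step: {step}")
        self.steps[step] = evidence

    def finalize(self, decision: str) -> str:
        """Refuse to record a decision until every step has evidence."""
        missing = [s for s in CHECKLIST if s not in self.steps]
        if missing:
            raise ValueError(f"undocumented steps: {missing}")
        self.decision = decision
        return json.dumps(self.__dict__, indent=2)  # the retrievable file
```

Writing the record this way means an underperforming pick six months later comes with its original evidence attached, not a reconstruction from memory.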
Why documented decisions matter as much as the decision itself
In-house teams need a record for budget accountability. Agencies need a record for client accountability. A creator pick without documented reasoning is a liability when the campaign underperforms or the creator has an incident.
Every scan inside Morthn produces a documented rationale — score, signal breakdown, risk flags, and recommendation. The same discipline applies manually: write down the reasoning, attach the evidence, and make the file retrievable.
When to automate the process
At low volume, a spreadsheet and an afternoon of research are fine. At 50 creators a month, manual vetting eats real time. At 1,000 creators a month, a typical agency volume, manual research hits 90 hours and six thousand dollars in billable time before you get to shortlist and compare.
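The arithmetic behind those figures is easy to check. The per-creator time and billable rate below are assumptions reverse-engineered from the stated totals, not published numbers:

```python
creators_per_month = 1000
minutes_per_creator = 5.4   # assumed; close to the five-minute checklist target
billable_rate = 66.7        # USD per hour, assumed

hours = creators_per_month * minutes_per_creator / 60
cost = hours * billable_rate

print(f"{hours:.0f} hours, about ${cost:,.0f} per month")
```

Even shaving a minute off the per-creator time barely moves the total at this volume, which is why the next step is automation rather than a faster checklist.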
Automating the scan step saves 60% of that time and makes the process repeatable across team members. That is where Morthn fits: the scan-to-report step runs in seconds and the output is the same every time regardless of who is reviewing.
Run the workflow on a real creator
Paste a handle and see the score, signals, and recommendation Morthn returns. Free preview, no credit card.
Try a free scan