Cold email check

The check before you press send.

Paste your sequence. 500 ICP buyers tell you whether they'd open, reply, ignore, or mark as spam. Per email. The best subject line among your variants. The most common objection. In 60 seconds.

No card. No signup. Three free checks in your first minute.

How the check runs

Four steps. 60 seconds end to end.

/ 01

Paste

Your full sequence. Up to 8 emails. Up to 5 subject-line variants per email.

/ 02

Pick an audience

The ICP you're sending to. B2B SaaS dev-tool buyer, marketing-led SaaS, etc.

/ 03

Run

500 ICP buyers per email decide: open, reply, ignore, mark as spam. Per variant.

/ 04

Read the report

Predicted open + reply rate, spam-trigger phrases, best subject line, top objection.

What you get back

A real report, not a vibe check.

Verdict, sentiment distribution, verbatim quotes from 500 simulated ICP buyers, the most common objection, friction points, recommendations. One artifact you can paste into your team Slack and act on tomorrow.

Cold email check · sample
Run #04219 · 500 buyers
Positive 58% · Neutral 18% · Negative 24%
Top objection: “I don't see a price anywhere, feels like enterprise sales.”
Full report

What founders catch

Three real examples of what the check found.

Subject line read as automated

Before

Subject: 'Quick question about {{company}}'

After

41% marked it as spam. Rewritten with a specific reference. Reply rate 3×.

Mail-merge syntax visible to humans.

Email 2 was the killer

Before

Sequence open rate fine; reply rate collapsed at email 2.

After

Email 2 read as a 'pushy follow-up.' Rewritten around a useful link. Conversion now carries through to email 4.

Bridge email had no value.

CTA was a 15-min meeting

Before

Closing CTA: 'Worth a 15-min call this week?'

After

ICP read 'sales call.' Switched to 'I'll send a 90-second Loom, want it?' Replies +44%.

Meeting ask too heavy.

Common questions

Things SaaS founders ask before running a check.

Is this just GPT?

No. Every check runs through nine independent corrections, including a multi-model ensemble across independent frontier model families, calibration against historical ground truth, revealed-preference weighting, and distribution-shape matching. One model wrapped in a persona prompt is one model's opinion. We give you 500.

How accurate is this?

87% median accuracy across calibrated SaaS clusters, audited monthly. Every cluster is dated, sourced, and visible on the validation page. If a cluster drifts below 80%, we pause it automatically.

What audiences are available?

Pre-built clusters for B2B SaaS buyers (SMB and mid-market), indie hackers, dev-tool buyers, marketing-led SaaS, sales-led SaaS, PLG users, agency owners, and API-first buyers. New clusters land monthly. See the audiences page for status and accuracy.

Does it work for B2C?

Today the calibrated SaaS clusters are the focus. The same engine powers our enterprise customers' B2C work; see the enterprise page for that. If you're a SaaS founder targeting consumers, the indie-hacker and product-led clusters are the closest fit while we calibrate B2C-specific SaaS audiences.

Test your cold email sequence free.

No card. No sales call. Three live reactions in your first minute.