We publish our accuracy.
Ask our competitors to do the same.
Every cluster Prism serves is audited against a real, public, dated ground-truth dataset. Accuracy is the mean absolute percentage agreement between Prism's calibrated predictions and the reference dataset. Clusters whose audit falls below 80% are paused automatically.
87% median accuracy across 15 live audits · 0 SaaS-specific audits completed · 5 SaaS audits in progress (target Q2 2026) · 15 cross-domain clusters live.
The same cluster, audited again, against fresh ground truth.
If our living agents are working, accuracy should hold steady or improve as the world shifts. If they were static, accuracy would decay between calibration cycles. Three illustrative clusters, each with its full audit history.
Eco-anxious parents (+3pt)
- 2025-06-15: 88%
- 2025-09-12: 89%
- 2025-12-08: 90%
- 2026-03-12: 91%

Gen Z urban Americans (+1pt)
- 2025-05-30: 86%
- 2025-08-22: 87%
- 2025-11-18: 87%
- 2026-02-27: 87%

US wine consumers (-8pt)
- 2025-05-04: 86%
- 2025-08-12: 84%
- 2025-11-09: 81%
- 2026-02-01: 78%
Calibration on SaaS audiences
These are the audits that matter for SaaS founders: Prism's predicted reactions compared against observed real-world data on the audiences our customers actually sell to.
Each row links to the methodology, the ground-truth dataset, the sample size, and the audit date. Raw audit CSVs are available on request to any customer, journalist, or academic at audits@prism.so.
Cross-domain calibration
Prism's calibration engine is validated against ground truth across demographics beyond SaaS. These clusters are not your audience; they are proof that the calibration approach generalizes.
Cluster audit at or above 85% accuracy against its ground-truth dataset: fully live.
Cluster audit between 80% and 85%: usable, flagged, and being re-calibrated.
Cluster audit below 80%: automatically suspended, with customers notified.
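The three status tiers above reduce to a simple threshold rule. As a minimal sketch (the names `AuditStatus` and `classify` are illustrative, not Prism's actual code):

```python
from enum import Enum

class AuditStatus(Enum):
    LIVE = "live"            # >= 85%: fully live
    FLAGGED = "flagged"      # 80-85%: usable, flagged, re-calibrating
    SUSPENDED = "suspended"  # < 80%: suspended, customers notified

def classify(accuracy: float) -> AuditStatus:
    """Map an audit accuracy score (0-100) to a cluster status."""
    if accuracy >= 85.0:
        return AuditStatus.LIVE
    if accuracy >= 80.0:
        return AuditStatus.FLAGGED
    return AuditStatus.SUSPENDED
```

For example, a cluster auditing at 78% classifies as suspended, matching the automatic-pause rule described above.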
Each cluster is audited against a named, dated, public dataset whose sample frame overlaps the cluster definition. The audit re-runs the dataset's core questions as Prism stimuli and compares calibrated Prism output to the dataset's published distribution. Accuracy is the mean absolute agreement across the dataset's primary metrics (typically sentiment, purchase intent, brand recall, and issue importance).
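The metric described above can be sketched as follows. This assumes each per-metric agreement is 100 minus the absolute gap in percentage points between predicted and observed values; the function name and metric keys are illustrative, not Prism's actual implementation:

```python
def audit_accuracy(predicted: dict[str, float], observed: dict[str, float]) -> float:
    """Mean absolute agreement across the primary metrics shared by
    both distributions, where each metric's agreement is 100 minus
    the absolute predicted-vs-observed gap in percentage points."""
    metrics = predicted.keys() & observed.keys()
    if not metrics:
        raise ValueError("no shared metrics to audit")
    return sum(100.0 - abs(predicted[m] - observed[m]) for m in metrics) / len(metrics)

# Example: gaps of 2pt and 4pt give agreements of 98 and 96,
# so the audit score is their mean, 97.0.
score = audit_accuracy(
    {"sentiment": 62.0, "purchase_intent": 41.0},
    {"sentiment": 60.0, "purchase_intent": 45.0},
)
```

Under this sketch, a score of 97.0 would sit well above the 85% fully-live threshold.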
Ground-truth datasets are listed above. Where a dataset is behind a paywall, we cite the nearest published bulletin. The raw audit CSVs are available on request to any customer, journalist, or academic under the same access policy we apply to our own team.