19 April 2026 · 11 min read · Sander van Waes

Why most synthetic research is just GPT in a costume, and how to tell the difference

A pointed methodology argument for the open-benchmarks position. Aggressive but defensible. Names names. Backed by audit data.

Draft outline; Sander to expand. This is the highest-stakes piece on the blog, so the published version needs your voice and your willingness to name competitors.

Sections planned:

  • The architecture every "synthetic research" product secretly shares: one model, persona prompts, a dashboard. (Both this and the ensemble alternative are sketched in code after this list.)
  • Why that architecture fails on accuracy when measured against real ground truth, with citations.
  • The five tells that a synthetic-feedback platform is just GPT in a costume.
  • The five tells that it isn't (multi-model ensemble, calibration layer, public validation, revealed-preference weighting, audit cadence).
  • What competitors won't publish, and why that omission is the answer.
  • The open-benchmarks call: post your accuracy numbers, dated and sourced, or stop quoting 92%. (A minimal record format is also sketched after this list.)
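
To make the contrast between the first and fourth bullets concrete, here is a minimal Python sketch of both architectures. Everything in it is hypothetical: `ask_model` is a stub standing in for a provider API call, and the `CALIBRATION` weights are invented placeholders for the audit-derived, revealed-preference weights the piece argues vendors should publish. It illustrates the shape of each design, not Prism's implementation or any competitor's.

```python
import hashlib

PERSONA = "You are a 34-year-old SaaS buyer comparing pricing pages."

def ask_model(model: str, prompt: str) -> float:
    """Stand-in for a real LLM API call. Returns a purchase-intent
    score in [0, 1]; a deterministic hash keeps the sketch offline."""
    digest = hashlib.sha256(f"{model}:{prompt}".encode()).digest()
    return digest[0] / 255

# Architecture 1, "GPT in a costume": one model, one persona prompt,
# and the raw score goes straight to the dashboard.
def costume_score(page_copy: str) -> float:
    return ask_model("single-model", f"{PERSONA}\nRate this page 0-1:\n{page_copy}")

# Architecture 2: several models vote, and each vote is weighted by how
# well that model predicted real conversions in past, dated audits.
# These weights are invented for illustration.
CALIBRATION = {"model-a": 0.9, "model-b": 0.6, "model-c": 0.3}

def calibrated_score(page_copy: str) -> float:
    prompt = f"{PERSONA}\nRate this page 0-1:\n{page_copy}"
    votes = sum(w * ask_model(m, prompt) for m, w in CALIBRATION.items())
    return votes / sum(CALIBRATION.values())

if __name__ == "__main__":
    copy = "Ship faster. $49/mo. Cancel anytime."
    print(f"single model: {costume_score(copy):.2f}")
    print(f"calibrated:   {calibrated_score(copy):.2f}")
```

The arithmetic is trivial on purpose; the difference that matters is where `CALIBRATION` comes from. In the second design the weights are only defensible if they trace back to dated audits against real behaviour, which leads directly to the last bullet. Equally hypothetical, here is the minimum record the open-benchmarks call asks a vendor to publish: a dated, sourced claim a third party could check.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkEntry:
    """One published accuracy claim. Field names and example values
    are illustrative placeholders, not real results."""
    date: str             # when the audit ran, e.g. "2026-03-01"
    metric: str           # what was measured, e.g. "directional accuracy"
    value: float          # the headline number being claimed
    n: int                # sample size behind the number
    ground_truth: str     # revealed-behaviour source, e.g. "A/B test conversions"
    methodology_url: str  # public writeup a third party can verify
```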

Tone reference: the existing /manifesto piece. Pointed but specific. We'll name vendors only where their public marketing makes claims we can demonstrably contradict with our own audit data. We'll be wrong about some things; that's fine. The piece publishes with an "open to revision" note and a credit-the-criticism policy.
