How Nudgent measures conversion health

Last updated: 30 April 2026

Two audiences see every page. Human visitors decide based on what they perceive; AI agents browsing on behalf of humans decide based on what they can parse, verify, and represent. Nudgent scores both. The result is fourteen dimensions in total — seven for visitors, seven for agents — anchored in decades of research, and calibrated against a continuously growing corpus of audited pages.

The 7 dimensions for visitors

Each dimension answers a single diagnostic question. A page can score 0–10 on each (10 = excellent, 0 = broken). The overall visitor score is the sum of the seven, scaled to 100.

  • Cognitive Clarity: Can the brain process this without strain?
  • Decision Clarity: Is the next action obvious?
  • Trust Signal: Does this feel legitimate and safe?
  • Motivation Strength: Is the value clear before the ask?
  • Comfort Level: Are anxieties and risks addressed?
  • Flow Coherence: Do steps connect logically?
  • Identity Match: Can the visitor see themselves in this?

The 7 dimensions for AI agents

Same scale, parallel structure: each dimension answers one diagnostic question, scored 0–10. The overall agent score is the sum of the seven, scaled to 100. Both scores ship side by side in every report. They are not combined — the dual view is the point.

  • Extractability: Can the agent parse the page into structured understanding?
  • Semantic Structure: Is the information architecture machine-navigable?
  • Verifiability: Can the agent confirm or weight the credibility of claims?
  • Completeness: Does the page provide everything the agent needs for its task?
  • Actionability: Can the agent perform intended actions on this page?
  • Machine Readability: Is the explicit metadata complete and accurate?
  • Representability: Can the agent faithfully convey this page to its human?
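
To make the arithmetic concrete, here is a minimal sketch in Python. The dimension names come from the lists above; the function and the sample numbers are illustrative, not Nudgent's internals.

    # Minimal sketch of the scoring arithmetic described above.
    # Dimension names come from the lists; everything else is illustrative.
    VISITOR_DIMENSIONS = [
        "Cognitive Clarity", "Decision Clarity", "Trust Signal",
        "Motivation Strength", "Comfort Level", "Flow Coherence",
        "Identity Match",
    ]
    AGENT_DIMENSIONS = [
        "Extractability", "Semantic Structure", "Verifiability",
        "Completeness", "Actionability", "Machine Readability",
        "Representability",
    ]

    def plane_score(dims: dict[str, int]) -> float:
        """Sum seven 0-10 dimension scores and scale to 100."""
        assert len(dims) == 7 and all(0 <= v <= 10 for v in dims.values())
        return sum(dims.values()) / 70 * 100  # 7 dimensions x 10 points

    # The two planes ship side by side in the report, never averaged.
    visitor = plane_score({d: 7 for d in VISITOR_DIMENSIONS})  # 70.0
    agent = plane_score({d: 4 for d in AGENT_DIMENSIONS})      # 40.0
    print(f"visitor {visitor:.0f}/100, agent {agent:.0f}/100")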

The research behind it

The visitor framework consolidates over a hundred named principles from cognitive psychology, behavioural economics, and decision science. The names you would recognise are part of it — Cialdini on social proof and commitment, Kahneman on loss aversion and the peak-end rule, Fogg on the behaviour model B = M × A × P, Sweller on cognitive load — but naming them is the easy part.

The agent framework stands on a different set of shoulders: the established web standards an AI parser actually uses, paired with comprehension theory from information retrieval and natural language processing. Schema.org and WCAG for what an agent can read; OpenGraph and the broader semantic-web stack for explicit metadata; lossy-summarisation theory for what an agent loses when it represents a page back to its human.
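
As a concrete illustration of that explicit-metadata layer, the toy probe below pulls JSON-LD (Schema.org) blocks and OpenGraph tags out of raw HTML using only the Python standard library. It shows the kind of signal an agent parser reads; it is not Nudgent's parser.

    # Toy probe for explicit metadata: JSON-LD blocks and OpenGraph tags.
    # Illustrative only; not Nudgent's parser.
    import json
    from html.parser import HTMLParser

    class MetadataProbe(HTMLParser):
        def __init__(self):
            super().__init__()
            self.og_tags = {}      # property -> content
            self.json_ld = []      # parsed Schema.org payloads
            self._in_json_ld = False

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            prop = a.get("property") or ""
            if tag == "meta" and prop.startswith("og:"):
                self.og_tags[prop] = a.get("content") or ""
            if tag == "script" and a.get("type") == "application/ld+json":
                self._in_json_ld = True

        def handle_data(self, data):
            if self._in_json_ld:
                try:
                    self.json_ld.append(json.loads(data))
                except json.JSONDecodeError:
                    pass  # malformed JSON-LD is itself a finding

        def handle_endtag(self, tag):
            if tag == "script":
                self._in_json_ld = False

    probe = MetadataProbe()
    probe.feed('<meta property="og:title" content="Pricing">'
               '<script type="application/ld+json">'
               '{"@type": "Product", "name": "Pro plan"}</script>')
    print(probe.og_tags)  # {'og:title': 'Pricing'}
    print(probe.json_ld)  # [{'@type': 'Product', 'name': 'Pro plan'}]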

The work that compounds is the part you don’t see. On both planes, every principle is scored for relevance to the page type in front of it, ranked by likely impact, cross-referenced against principles that contradict or compound it, weighted by branch (industry × segment × buyer type), reviewed against benchmark outcomes, and adjusted as the corpus grows. A finding doesn’t cite a principle because it sounds plausible. It cites a principle because the framework has tested it against the conditions on your page.
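
In caricature, that selection pipeline reads something like the sketch below. Every field name, weight, and number is invented for illustration; only the stages (relevance to page type, impact ranking, conflict checks, branch weighting) come from the description above.

    # Caricature of the principle-selection pipeline. All names and
    # numbers are invented; only the pipeline stages are from the text.
    from dataclasses import dataclass, field

    @dataclass
    class Principle:
        name: str
        relevance: dict          # page type -> relevance, 0..1
        impact: float            # expected lift if applied, 0..1
        conflicts_with: set = field(default_factory=set)

    def rank_principles(principles, page_type, branch_weight):
        # 1. Score relevance to this page type; drop the irrelevant.
        # 2. Weight the rest by impact and by branch.
        ranked = sorted(
            ((p, p.relevance.get(page_type, 0.0) * p.impact
                 * branch_weight.get(p.name, 1.0))
             for p in principles if p.relevance.get(page_type, 0.0) > 0),
            key=lambda pair: pair[1], reverse=True,
        )
        # 3. Suppress principles that contradict a higher-ranked one.
        kept, names = [], set()
        for p, score in ranked:
            if not (p.conflicts_with & names):
                kept.append((p.name, round(score, 2)))
                names.add(p.name)
        return kept

    catalogue = [
        Principle("social proof", {"pricing": 0.9}, impact=0.6),
        Principle("scarcity", {"pricing": 0.7}, impact=0.5,
                  conflicts_with={"social proof"}),
    ]
    print(rank_principles(catalogue, "pricing", {"scarcity": 1.2}))
    # [('social proof', 0.54)]  (scarcity loses the conflict check)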

Your audit ships with the principles relevant to your page on both planes, the conditions under which each applies, and the friction each one explains. If you want a citation for any specific finding, ask and we will pull the source.

How we calibrate

Research alone does not tell you what a healthy score looks like for a pricing page in B2B SaaS versus a checkout page in DTC ecommerce. Benchmarks come from running the framework across many pages, on both planes.

Every audit Nudgent runs adds calibration data on both sides. For the visitor plane: the predicted potential lift, the actual outcome when fixes are implemented, the effectiveness of each recommendation type. For the agent plane: how completely an agent could parse the page, how much information was lost in a generated summary, which structural fixes produced the largest comprehension delta. As of this writing, the corpus spans over 1,000 audited pages across landing, signup, pricing, onboarding, dashboards, checkout, and upgrade prompts. Benchmarks on each plane recompute when a branch crosses a data-sufficiency threshold.

Two things follow in practice. First, scores compare your page to similar pages in your branch, not to the global average — on both planes. Second, the framework gets sharper with every audit run, on every page. The corpus is the moat — the part a stronger model can’t replicate without running the same volume of audits.
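
In code terms, branch-relative benchmarking might look like the sketch below. The sample-size threshold, the helper names, and the data are assumptions for illustration, not Nudgent's actual parameters.

    # Sketch of branch-relative benchmarking: recompute quartiles once a
    # branch has enough audits, then place a page within its branch.
    # Threshold and data are assumptions, not Nudgent's parameters.
    from statistics import quantiles

    MIN_BRANCH_SAMPLES = 30  # assumed data-sufficiency threshold

    def branch_benchmark(scores):
        """Recompute benchmarks when a branch crosses the threshold."""
        if len(scores) < MIN_BRANCH_SAMPLES:
            return None  # not enough audits in this branch yet
        q1, median, q3 = quantiles(scores, n=4)
        return {"q1": q1, "median": median, "q3": q3}

    def percentile_in_branch(score, scores):
        """Compare a page to its branch peers, not the global average."""
        return 100 * sum(s <= score for s in scores) / len(scores)

    # branch = industry x segment x buyer type, e.g. B2B SaaS / SMB / buyer
    branch = [52, 61, 58, 70, 45, 66, 73, 59, 62, 68] * 3  # 30 audits
    print(branch_benchmark(branch))          # quartiles for this branch
    print(percentile_in_branch(64, branch))  # 60.0: peers at or below 64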

What we do not measure

Honest scope is part of the methodology. The following are not part of an audit:

  • Live traffic, sessions, or behind-the-login flows. Nudgent reads the public rendering of a page.
  • Brand strategy or positioning beyond what is on the page itself.
  • Pricing strategy in absolute terms. We score whether your pricing page reads clearly; we do not tell you what to charge.
  • SEO ranking, ad performance, or attribution. Other tools cover these well.
  • Visual taste. Aesthetics are scored to the extent they influence perception of clarity and trust, not as an independent design grade.
  • Bot mitigation. The agent plane notes when CAPTCHA or rate limits block agent access, but the framework recommends API or OAuth alternatives rather than removing legitimate bot defences.

Questions

If you want the source paper for any principle cited in your report, or you want to see how a recommendation traces back to research, email hello@nudgent.com and we will pull the citation.

This methodology evolves with the calibration corpus. We will update this page when dimension definitions change or new research enters the framework.