Understanding Your Audit Report
A guide to the three report views, score interpretation, issue severity, and the evidence layer.
The audit report is organized into three primary tabs and one evidence tab. Each tab presents the same underlying data through a different lens. The default view is the mechanism-level report; you can switch to the editorial format via the view toggle in the report header.
The Three Report Views
The primary view displays mechanism-level scores for all 15 capabilities, grouped by function. Each capability shows its score (0-100), a brief explanation of the result, and the key signals that drove the score. The overall grade and score appear at the top, alongside desktop and mobile screenshots of the audited page. This tab answers: how do AI systems perceive this brand across every measurable dimension?
The SEO view covers traditional search engine optimization signals: technical SEO (HTTPS, speed, robots.txt, sitemap, headers, TTFB) and on-page SEO (title tag, meta description, headings, images, internal linking, Open Graph, JSON-LD). These two dimensions together account for 14% of the overall score. The SEO tab is useful for identifying baseline issues that affect both traditional search and AI visibility. A site that blocks AI crawlers or has no structured data will score poorly across multiple mechanism capabilities.
The third view is the AI-specific analysis layer. It includes the E-E-A-T assessment (Experience, Expertise, Authoritativeness, Trust), the content depth evaluation, the conversion walkthrough from an AI-referred visitor's perspective, the persona narrative (ideal vs. current), and the two-pass AIO simulation. The AIO simulation shows what an AI Overview for your category would look like, whether your brand would be cited, and the specific gaps preventing citation.
Score Ranges
Every dimension and the overall audit produce a score from 0 to 100. The ranges are calibrated against the current state of AI visibility across industries. A score of 70 does not mean “average”; it means the brand is performing well relative to the mechanisms AI systems use for citation decisions.
Issue Severity Levels
Each audit generates a list of specific issues found across all modules. Issues are tagged with one of three severity levels, a category (which module flagged it), and a recommended fix.
Issues that directly prevent AI systems from citing the brand. Examples: site blocks AI crawlers entirely, no entity found in any knowledge graph, brand name tokenizes into 5+ tokens. These should be fixed first.
Issues that reduce AI visibility but do not completely block it. Examples: JSON-LD schema is present but incomplete, retrieval pool coverage is below 40%, E-E-A-T scores are weak in one dimension. These affect the overall score and should be prioritized after critical issues.
Observations that provide context but do not necessarily require action. Examples: token probability confidence interval is wide (suggesting inconsistent model behavior), competitor has a specific advantage in one capability, hallucination rate is low but nonzero.
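The issue structure described above can be modeled as a small data type. This is an illustrative sketch, not the product's actual schema: the field names, the severity labels, and the `prioritize` helper are all assumptions based on the descriptions in this section.

```typescript
// Hypothetical shape of an audit issue. Severity labels are
// illustrative names for the three levels described above.
type Severity = "critical" | "warning" | "info";

interface AuditIssue {
  severity: Severity;
  category: string; // which module flagged the issue
  description: string;
  recommendedFix: string;
}

// Rank map so that "fix critical issues first" becomes a sort order.
const severityRank: Record<Severity, number> = {
  critical: 0,
  warning: 1,
  info: 2,
};

// Return a copy of the issue list sorted most-severe-first.
function prioritize(issues: AuditIssue[]): AuditIssue[] {
  return [...issues].sort(
    (a, b) => severityRank[a.severity] - severityRank[b.severity]
  );
}
```

Sorting by severity alone is a first pass; the counterfactual predictions described below refine this ordering within a severity level.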
Counterfactual Predictions
Many issues include a counterfactual prediction: “if you fixed this issue, your score would increase by X points.” These predictions come from a LightGBM model (the counterfactual sidecar) trained on audit data. The model takes the current feature vector, simulates the fix, and predicts the resulting score change. This helps you prioritize: fix the issues with the highest predicted impact first.
The counterfactual model is currently trained on a small corpus (n=29 audits). Predictions will become more accurate as the training set grows. When the sidecar is unavailable, the system falls back to a heuristic estimate based on the issue severity and the weight of the affected dimension.
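The heuristic fallback can be sketched as a simple product of severity and dimension weight. The multipliers below are illustrative assumptions, not the product's actual constants; the real fallback may weight these differently.

```typescript
// Sketch of the heuristic fallback used when the LightGBM sidecar is
// unavailable. Severity factors are assumed values for illustration.
type Severity = "critical" | "warning" | "info";

const severityFactor: Record<Severity, number> = {
  critical: 1.0,
  warning: 0.5,
  info: 0.1,
};

// dimensionWeight: the affected dimension's share of the overall
// score (e.g. 0.14 for the two SEO dimensions combined).
function estimateScoreLift(severity: Severity, dimensionWeight: number): number {
  // Scale against the 100-point overall score and round to whole points.
  return Math.round(100 * dimensionWeight * severityFactor[severity]);
}
```

Under these assumed factors, a critical issue in a 14%-weight dimension would be estimated at 14 points, and a warning in the same dimension at 7.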
The Receipts Tab
The fourth tab on the report page is Receipts. This is the evidence layer. The report is not a black box; every conclusion is traceable to specific evidence.
Receipts exposes 15 sections of raw audit data:
Every prompt sent to every LLM is visible. Every response received is visible. Every citation URL, every classifier output, every crawl signal, every screenshot, every waterfall trace entry. Provider-filterable probe lists use brand-specific colors so long lists stay scannable. The full audit JSON is downloadable with a single click.
The evidence data loads via loadAuditEvidence(auditId), which fires 13 parallel database queries. The system is fault-tolerant: if a table has not been migrated yet (possible during version transitions), the query returns null rather than an empty array, so the UI can distinguish “not deployed” from “no data.”
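The null-versus-empty-array distinction can be sketched as follows. The table names and the `queryTable` helper are hypothetical; only the `loadAuditEvidence(auditId)` entry point and the null-fallback behavior come from the description above.

```typescript
// Hypothetical query helper: returns null when the table does not
// exist yet (not migrated), and an array (possibly empty) otherwise.
async function queryTable(
  table: string,
  auditId: string
): Promise<unknown[] | null> {
  try {
    // A real implementation would query the database for rows matching
    // auditId; this stub returns an empty result set ("no data").
    return [];
  } catch {
    // A not-yet-migrated table throws; return null so the UI can
    // distinguish "not deployed" (null) from "no data" ([]).
    return null;
  }
}

async function loadAuditEvidence(
  auditId: string
): Promise<Record<string, unknown[] | null>> {
  // Illustrative subset of the evidence tables; the real system fans
  // out to 13 of them.
  const tables = ["probes", "citations", "crawl_signals"];
  // Fire all queries in parallel; a missing table nulls only its own slot.
  const results = await Promise.all(tables.map((t) => queryTable(t, auditId)));
  return Object.fromEntries(tables.map((t, i) => [t, results[i]]));
}
```

Keeping null and [] distinct at the data layer means the UI never has to guess whether an empty section reflects a schema gap or a genuinely empty result.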
The Editorial View
An alternative presentation of the same data is available via the editorial view (accessible at /results/editorial or through the view switcher in the report header). The editorial view presents the audit as a seven-chapter document with a magazine-style layout: executive summary, E-E-A-T analysis, perception and sentiment, AI answer simulation, competitive prism, sub-score breakdown, and recommendations. The data is identical; only the presentation differs.