Accessibility scan results show a snapshot of the automatically detectable issues on a page. They flag missing alt text, color contrast problems, empty buttons, form fields without labels, and similar code-level patterns. Treat the report as a starting point: each finding represents a real issue worth reviewing, but the absence of findings does not mean the page is accessible. Automated scans flag only approximately 25% of accessibility issues, so the score reflects what software can detect, not WCAG conformance. A minimal sketch of running such a scan follows the table below.
| Report Element | What It Means |
|---|---|
| Score or grade | A measure of detected issues, not a measure of WCAG conformance |
| Issue count | The number of automatically detectable issues found at scan time |
| Severity labels | Tool-assigned ratings that suggest priority but require human review |
| WCAG references | Mapping of each issue to a specific success criterion |
| Clean scan | No automatically detectable issues found, not a conformance claim |
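For orientation, here is a minimal sketch of producing such a report with Playwright and axe-core. Both are real libraries, but the URL is a placeholder and the exact output shape depends on the tool you run:

```typescript
import { chromium } from "playwright";
import { AxeBuilder } from "@axe-core/playwright";

async function scanPage(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Run axe-core's WCAG A/AA rules against the rendered DOM.
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa"])
    .analyze();

  // Each violation carries a rule id, a tool-assigned impact rating,
  // and the selectors of the affected elements.
  for (const v of results.violations) {
    console.log(v.id, v.impact, v.nodes.length);
  }

  await browser.close();
}

scanPage("https://example.com").catch(console.error);
```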

## What does a scan score actually measure?
A scan score reflects the count and weighting of issues a tool detected against its rule set. It does not measure how usable a page is for someone using a screen reader, switch device, or magnification software.
Two pages can earn the same score and have very different real-world experiences. A page with a clean scan can still be unusable if focus order is broken, ARIA is misapplied, or interactive components are not operable by keyboard.
Read the score as a code-level health signal. Treat it like a linter for accessibility: useful, fast, and incomplete.
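Scoring formulas are vendor-specific, so the sketch below is purely hypothetical: the impact weights and the 100-point baseline are invented for illustration, not any tool's actual algorithm.

```typescript
// Hypothetical scoring sketch. Weights and baseline are illustrative only.
type Impact = "minor" | "moderate" | "serious" | "critical";

interface Detected {
  ruleId: string;
  impact: Impact;
  nodeCount: number; // elements that matched the rule
}

const IMPACT_WEIGHT: Record<Impact, number> = {
  minor: 1,
  moderate: 3,
  serious: 7,
  critical: 10,
};

// Deduct weighted points per detected issue from a 100-point baseline.
function scanScore(findings: Detected[]): number {
  const penalty = findings.reduce(
    (sum, f) => sum + IMPACT_WEIGHT[f.impact] * f.nodeCount,
    0
  );
  return Math.max(0, 100 - penalty);
}
```

Notice what never enters the calculation: focus order, keyboard operability, or correct ARIA usage. That is the gap behind two identical scores with very different real-world experiences.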
## How to read individual issues
Each issue in a scan report typically includes a description, the WCAG success criterion it maps to, the element selector, and a suggested fix. Start with the WCAG reference. That tells you whether the issue is a Level A or Level AA item and what the criterion requires.
Next, look at the element. Confirm the issue exists in the rendered page, not in a template fragment that has since changed. Scans operate on a moment in time, and dynamic content can produce false positives or stale findings.
Finally, read the suggested fix as guidance, not gospel. The right fix depends on the component's role and how the page uses it.
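As a rough model of that workflow, the shape and helpers below are illustrative; field names vary from tool to tool:

```typescript
// Illustrative shape of one scan finding; exact fields vary by tool.
interface ScanFinding {
  description: string;
  wcagCriterion: string; // e.g. "1.1.1"
  wcagLevel: "A" | "AA";
  selector: string;      // CSS selector for the flagged element
  suggestedFix: string;
}

// Step 1: order by WCAG level, Level A items first.
function triageOrder(findings: ScanFinding[]): ScanFinding[] {
  return [...findings].sort((a, b) => a.wcagLevel.localeCompare(b.wcagLevel));
}

// Step 2: confirm the flagged element still exists in the rendered page,
// since dynamic content can leave stale findings behind.
// (Runs in the browser/page context.)
function stillPresent(finding: ScanFinding): boolean {
  return document.querySelector(finding.selector) !== null;
}
```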
## Why issue counts can mislead
A high issue count often reflects a single problem repeated across a template. One missing label on a header search field can produce hundreds of instances across a site. Fixing the template once clears them all.
A low count can mislead in the opposite direction. A page with three findings might have serious keyboard traps or unlabeled custom widgets that the scanner cannot detect at all.
Group issues by rule and by template before drawing conclusions about effort or risk.
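A grouping pass like the hypothetical one below makes that template-level repetition visible; the input shape is assumed, mirroring the finding sketch above:

```typescript
// Group findings by rule id so template-level repetition stands out.
function groupByRule(
  findings: Array<{ ruleId: string; selector: string }>
): Array<[string, number]> {
  const counts = new Map<string, number>();
  for (const f of findings) {
    counts.set(f.ruleId, (counts.get(f.ruleId) ?? 0) + 1);
  }
  // Largest groups first: one rule with hundreds of instances usually
  // means one shared component, not hundreds of separate problems.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```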
## What scans cannot tell you
Scans cannot evaluate meaning. They cannot tell you whether alt text is accurate, whether a heading structure conveys the right outline, or whether a video caption matches the audio. They cannot determine whether a custom dropdown is operable by keyboard or whether focus moves logically through a modal.
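As a small illustration (the handler name is hypothetical), the click-only control below typically produces no scan finding, yet a keyboard user can neither reach nor activate it:

```typescript
declare function submitForm(): void; // hypothetical form handler

// A click-only control: most scanners have nothing to flag here,
// but it has no tab stop, no role, and no Enter/Space handling.
const fakeButton = document.createElement("div");
fakeButton.textContent = "Submit";
fakeButton.addEventListener("click", () => submitForm());

// The operable equivalent: a native <button> is focusable by default
// and activates on Enter and Space without extra code.
const realButton = document.createElement("button");
realButton.textContent = "Submit";
realButton.addEventListener("click", () => submitForm());
```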
That gap is why scans cannot determine WCAG conformance. Conformance requires human evaluation across every applicable success criterion, and only a manual accessibility audit provides that.
## Turning scan results into action
Sort findings by WCAG criterion and by template location. Patterns make remediation faster: fix the template, verify across instances, and move on.
Use severity ratings as a starting cue, then apply your own risk-factor or user-impact prioritization formula. A low-severity contrast issue on a checkout button matters more than a high-severity issue on a rarely visited admin page.
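As one hedged example of such a formula, with weights and traffic scaling invented for illustration rather than drawn from any standard:

```typescript
// Hypothetical user-impact formula: tool severity adjusted by real
// traffic and conversion-path placement. All values are illustrative.
const SEVERITY_WEIGHT: Record<string, number> = {
  minor: 1,
  moderate: 2,
  serious: 3,
  critical: 4,
};

function priorityScore(
  impact: keyof typeof SEVERITY_WEIGHT,
  monthlyPageViews: number,
  onConversionPath: boolean
): number {
  // Log scale keeps one high-traffic page from drowning out the rest.
  const traffic = Math.log10(monthlyPageViews + 1);
  const pathBoost = onConversionPath ? 2 : 1;
  return SEVERITY_WEIGHT[impact] * traffic * pathBoost;
}

// The checkout example, worked: a minor contrast issue on a busy
// checkout page outranks a critical issue on a quiet admin page.
priorityScore("minor", 500_000, true);  // ≈ 11.4
priorityScore("critical", 200, false);  // ≈ 9.2
```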
Document what was fixed, what was deferred, and why. Scan platforms that track issues over time make this easier than spreadsheets, especially when remediation spans multiple sprints.
## How Accessibility Tracker Platform presents scan data
Accessibility Tracker Platform shows scan findings alongside audit data, so teams can separate what software detected from what an auditor identified. The platform groups issues by WCAG criterion, tracks status across remediation cycles, and keeps a clean record of progress.
That separation matters. Scans support ongoing monitoring between audits. Audits map the full picture. Reading both in context gives leadership a clearer view than a single score ever could.
## Frequently Asked Questions
### Should I report a clean scan as WCAG conformance?
No. A clean scan means no automatically detectable issues were found. It does not verify conformance with WCAG 2.1 AA or WCAG 2.2 AA. Conformance claims require a manual audit.
### How often should I scan?
Conduct scans on a recurring schedule that matches your release cadence. Weekly works for active sites. Monthly is reasonable for stable ones. Scan after every significant deploy.
### Are severity labels from a scanner reliable for prioritization?
They are a useful first pass. Severity labels reflect the tool's logic, not your product's risk profile. Cross-reference with traffic data, conversion paths, and known user populations before locking a priority order.
### Can scan results replace an audit report?
No. Scans detect approximately 25% of issues, while audit reports cover the full set of applicable success criteria through human evaluation. The two are separate activities and serve different purposes.
### What is the fastest way to clear a long scan report?
Group by rule and by template. Most repeated issues come from shared components. Fix the source, push the change, and re-scan to confirm the count drops across the site.
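A before-and-after comparison like the sketch below, reusing the per-rule counting idea from the grouping sketch above, confirms the drop; the input shape is assumed:

```typescript
// Compare per-rule counts from two scan runs to verify a template fix.
function countByRule(findings: Array<{ ruleId: string }>): Map<string, number> {
  const counts = new Map<string, number>();
  for (const f of findings) {
    counts.set(f.ruleId, (counts.get(f.ruleId) ?? 0) + 1);
  }
  return counts;
}

function reportDrop(
  before: Array<{ ruleId: string }>,
  after: Array<{ ruleId: string }>
): void {
  const afterCounts = countByRule(after);
  for (const [ruleId, n] of countByRule(before)) {
    console.log(`${ruleId}: ${n} -> ${afterCounts.get(ruleId) ?? 0}`);
  }
}
```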
Scan results are a tool, not a verdict. Read them as code-level signal, fix what they identify, and rely on audit data for the conformance picture.
Contact Accessibility Tracker to track scan and audit data in one place.

