How to Compare Your Last 2 Audit Results

Compare your last 2 accessibility audit report results to measure progress, spot recurring issues, and track WCAG conformance over time.

Place your two most recent accessibility audit reports side by side and compare them at three levels: total issue count, issue severity, and which WCAG criteria appear in both. A meaningful comparison goes beyond whether the number went up or down. It reveals whether your remediation work is closing gaps or whether new development is introducing the same types of issues.

Organizations that conduct regular audits against WCAG 2.1 AA or WCAG 2.2 AA accumulate data worth analyzing. Two reports, separated by a remediation cycle, tell a story about your accessibility trajectory. The question is how to read that story clearly.

Comparing Two Audit Reports: What to Evaluate

Total issue count: Whether your overall conformance posture improved, held steady, or regressed.

Recurring issues: Which accessibility issues persist across audits, signaling a systemic gap in your development process.

New issues introduced: Whether recent design or code changes created new WCAG nonconformance.

Severity distribution: Whether high-impact issues decreased even if total count stayed flat.

WCAG criteria affected: Which specific success criteria appear in both reports and which were resolved.

Why Comparing Two Audit Reports Matters

A single audit report is a snapshot. Two reports, taken together, form a trend line. That trend line is what tells your team whether accessibility work is making measurable progress toward WCAG conformance.

Without comparison, you are left guessing. Did remediation actually resolve what the first audit identified? Did new features introduce fresh accessibility issues? These questions have concrete answers when you compare the two documents.

For organizations tracking ADA compliance or preparing documentation like an ACR (Accessibility Conformance Report), showing measurable improvement between audits strengthens your position. It is evidence of a genuine commitment to accessibility, not a one-time effort.

What Should You Compare First?

Start with the total number of issues in each report. This is the broadest indicator. If your first audit identified 47 issues and your second identifies 22, your remediation cycle cut the count by more than half. That is progress.

But the total alone is misleading without context. If 15 of those 22 issues are brand new, only 7 of the original 47 remain: your team resolved 40 issues but also introduced 15 fresh ones. The number dropped, yet new development is still creating nonconformance, so the underlying process did not improve.

After total count, look at which specific WCAG success criteria appear in both reports. These are the recurring issues, and they are the most important data point in your comparison. Recurring issues suggest that either remediation was incomplete or that your content and code workflows are recreating the same types of nonconformance.
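As a minimal sketch, recurring issues can be surfaced by intersecting the sets of WCAG success criteria cited in each report. The criterion values below are illustrative, not taken from any real audit:

```python
# Hypothetical WCAG success criteria cited in two consecutive audit reports.
first_audit = {"1.1.1", "1.4.3", "2.4.7", "3.3.2", "4.1.2"}
second_audit = {"1.4.3", "2.4.7", "1.3.1"}

# Criteria that appear in both reports are the recurring issues.
recurring = first_audit & second_audit
print(sorted(recurring))
```

Set intersection keeps the comparison order-independent and makes it trivial to extend to resolved (first minus second) and new (second minus first) criteria.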

How to Categorize Issues Across Reports

Organize every issue from both reports into three categories:

Resolved: Issues present in the first report that do not appear in the second.

Recurring: Issues present in both reports, mapped to the same WCAG criteria.

New: Issues present only in the second report.

This three-category framework turns raw audit data into an actionable picture. Resolved issues confirm that remediation worked. Recurring issues tell you where to invest in training or process changes. New issues point to gaps in your development workflow that need attention before the next evaluation cycle.
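The three-category framework above can be sketched with set arithmetic. The issue keys here, (WCAG criterion, page) tuples, are an assumed format; use whatever uniquely identifies an issue in your own reports:

```python
def categorize(first, second):
    """Split issues from two audits into resolved, recurring, and new.

    `first` and `second` are sets of issue keys, e.g. (criterion, page)
    tuples -- the key format is an assumption, not a fixed standard.
    """
    return {
        "resolved": first - second,   # in the first report only
        "recurring": first & second,  # in both reports
        "new": second - first,        # in the second report only
    }

first = {("1.4.3", "/home"), ("4.1.2", "/checkout")}
second = {("1.4.3", "/home"), ("2.4.7", "/search")}
result = categorize(first, second)
```

How you define the key matters: keying on criterion alone groups issues by type, while keying on (criterion, page) tracks individual instances.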

Accessibility Tracker Platform makes this categorization faster by housing audit data in a structured format. When both audit reports live in the same platform, the comparison becomes a matter of filtering rather than manually cross-referencing spreadsheets.

Severity Shifts Are as Important as Count Changes

Two reports with the same total issue count can represent very different accessibility states. If the first report had 30 issues with 12 rated critical, and the second has 30 issues with only 3 rated critical, user experience improved substantially even though the count held flat.

When comparing severity, pay attention to whether high-impact issues moved down in priority or disappeared entirely. Audits that include severity context for each identified issue give teams a direct basis for this kind of comparison.

A shift from critical issues toward lower-severity ones often reflects that your team prioritized correctly during remediation. It also shows decision-makers that accessibility investment is producing real results, which matters when building a case for continued budget allocation.
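A quick way to make this severity shift visible is to tally the severity ratings from each report side by side. The counts below are invented to mirror the 30-issue example above:

```python
from collections import Counter

# Hypothetical severity ratings for every issue in two consecutive audits.
first_audit = ["critical"] * 12 + ["moderate"] * 10 + ["minor"] * 8
second_audit = ["critical"] * 3 + ["moderate"] * 15 + ["minor"] * 12

first_dist = Counter(first_audit)
second_dist = Counter(second_audit)

# Same total count, very different severity profile.
print(sum(first_dist.values()), sum(second_dist.values()))
print("critical:", first_dist["critical"], "->", second_dist["critical"])
```

Both audits total 30 issues, but the critical count dropping from 12 to 3 is the change worth reporting to stakeholders.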

Tracking Progress with a Structured Platform

Comparing two reports manually is doable. Comparing three, four, or more becomes unwieldy without a structured system. The Accessibility Tracker Platform stores audit results in a centralized location, making it possible to track conformance progress across multiple evaluation cycles without rebuilding a comparison spreadsheet each time.

The platform also applies Risk Factor and User Impact prioritization formulas to your audit data, which means severity comparisons are consistent between reports. When both audits are scored with the same criteria, the comparison is reliable.

For teams managing multiple digital assets, whether web apps, mobile apps, or websites, centralized tracking is the difference between ad hoc comparison and systematic progress monitoring.

Common Patterns When Comparing Audit Results

Certain patterns appear frequently when organizations compare consecutive audit reports:

Color contrast issues recur: Design teams often address flagged instances without updating the underlying style guide, leading to the same WCAG criteria reappearing on new pages.

Form labels resolve, then return: Developers fix unlabeled forms, but new forms ship without the same attention.

Keyboard navigation improves globally: Because keyboard accessibility tends to be addressed at the component level, fixes often carry across the entire digital asset.

Image alt text issues decrease but never hit zero: New content with missing or uninformative alt text is one of the most common sources of new issues between audits.

Recognizing these patterns helps teams anticipate where the next audit is likely to identify issues, which is the point of comparison in the first place.

What If the Second Audit Has More Issues?

An increase does not always mean regression. If your second audit covered more pages or screens, or if it evaluated against WCAG 2.2 AA instead of 2.1 AA, the scope expanded. More scope means more surface area for the auditor to identify issues.

Before drawing conclusions from an increased count, confirm that both audits used the same standard, covered the same scope, and evaluated similar functionality. An apples-to-apples comparison requires matching parameters.
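One way to sketch that parameter check is a small helper that lists mismatches before any counts are compared. The two-field schema (standard plus a list of in-scope pages) is an assumption; real audits may track more dimensions, such as assistive technologies tested:

```python
def comparability_issues(audit_a, audit_b):
    """Return reasons two audits are not an apples-to-apples comparison.

    Each audit is a dict with 'standard' and 'scope' keys (assumed schema).
    An empty list means the counts can be compared directly.
    """
    mismatches = []
    if audit_a["standard"] != audit_b["standard"]:
        mismatches.append("different WCAG standard")
    if set(audit_a["scope"]) != set(audit_b["scope"]):
        mismatches.append("different pages/screens in scope")
    return mismatches

a = {"standard": "WCAG 2.1 AA", "scope": ["/home", "/checkout"]}
b = {"standard": "WCAG 2.2 AA", "scope": ["/home", "/checkout", "/search"]}
print(comparability_issues(a, b))
```

If the helper returns any mismatches, note them alongside the comparison rather than treating a raw count increase as regression.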

If scope and standard are identical and the count still went up, the increase is meaningful. It likely indicates that new development outpaced remediation, which is a workflow issue worth addressing before the next cycle.

How often should you compare accessibility audit results?

Every time you receive a new audit report. Comparison is most valuable immediately after delivery, while your team still has context on what was remediated and what changed in the product since the last evaluation.

Can automated scans replace a comparison of two audit reports?

No. Automated scans flag only approximately 25% of issues, so a scan-to-scan comparison would miss the majority of accessibility issues that a manual audit identifies. Scans can supplement monitoring between audits, but they cannot serve as the basis for a meaningful conformance comparison.

Do you need an accessibility platform to compare audit reports?

Not strictly. You can compare two reports using a spreadsheet. But as the number of evaluation cycles grows, a platform like Accessibility Tracker makes the comparison faster and more consistent by storing structured data in one place.

Consistent comparison between audit reports is how accessibility programs move from reactive to proactive. Each comparison sharpens your team's understanding of where issues originate and whether your process is getting better over time.

Contact Accessibility Tracker to centralize your audit data and track conformance progress across evaluation cycles.

Kris Rivenburgh

Founder of Accessible.org

Ready to Track Your Accessibility Progress?

Upload your audit and start tracking, fixing, and validating all in one place.

Get Started Now