Accessibility monitoring between audits keeps your WCAG conformance intact while your product changes. An annual audit gives you a snapshot. Monitoring fills in the months between snapshots so new issues don't accumulate unchecked.
Without monitoring, a single code deployment can introduce dozens of accessibility issues that persist for months before anyone notices. The cost of remediating a year's worth of accumulated issues is significantly higher than catching them as they appear.
| Consideration | Details |
|---|---|
| Purpose | Detect new accessibility issues introduced by code changes, content updates, or third-party integrations between annual audits |
| Scan Coverage | Automated scans detect approximately 25% of issues; monitoring continuously catches regressions within that detectable range |
| Recommended Frequency | Weekly or after each major deployment, whichever comes first |
| What Monitoring Does Not Replace | A manual accessibility audit, which is the only way to determine WCAG conformance |
| Platform Option | Accessibility Tracker Platform includes built-in scan and monitoring alongside audit-based project tracking |

## Why the Gap Between Audits Matters
Most organizations conduct an accessibility audit once a year, sometimes less often. A lot changes in twelve months. New features ship. Content management systems receive updates. Marketing teams publish landing pages on tight deadlines without accessibility review.
Each of those changes can break conformance that the previous audit confirmed. And because no one is watching, those issues compound. By the time the next audit comes around, the remediation list is long and the cost is steep.
Monitoring is the connective tissue between audits. It does not replace evaluation by an auditor, but it catches the detectable regressions early enough to fix them in the same sprint they were introduced.
## What Does Accessibility Monitoring Actually Do?
Accessibility monitoring runs automated scans on a schedule. Those scans evaluate your pages against WCAG 2.1 AA or WCAG 2.2 AA criteria that can be checked programmatically. When a scan detects a new issue, it flags it.
Scans only flag approximately 25% of issues. That's a known constraint. But the 25% that scans cover includes highly visible regressions: missing alt text, broken form labels, color contrast violations, missing landmark regions, and empty links. These are the types of issues that get introduced constantly through routine content and code updates.
Monitoring catches those regressions before they sit in production for months.
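To make that detectable range concrete, here is a minimal, illustrative sketch of the kind of programmatic checks a scanner runs, using only Python's standard `html.parser`. This is not a real scanning engine; production tools such as axe-core apply hundreds of rules, and the two checks below (missing alt text, links with no accessible name) are chosen purely to show why these issues are machine-detectable.

```python
from html.parser import HTMLParser

class BasicA11yChecker(HTMLParser):
    """Toy example of programmatic accessibility checks.

    Flags only two rule violations: images without an alt attribute,
    and links with neither text content nor an aria-label.
    """

    def __init__(self):
        super().__init__()
        self.issues = []
        self._in_link = False
        self._link_has_name = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "a":
            self._in_link = True
            # An aria-label counts as an accessible name.
            self._link_has_name = bool(attrs.get("aria-label"))

    def handle_data(self, data):
        if self._in_link and data.strip():
            self._link_has_name = True  # visible link text found

    def handle_endtag(self, tag):
        if tag == "a":
            if not self._link_has_name:
                self.issues.append("link with no accessible name")
            self._in_link = False

checker = BasicA11yChecker()
checker.feed('<img src="hero.png"><a href="/x"></a><a href="/y">Home</a>')
print(checker.issues)
# ['img missing alt attribute', 'link with no accessible name']
```

Checks like these succeed because the violation is visible in the markup alone; the other roughly 75% of issues (focus order, announcement logic, cognitive load) are not, which is why they stay in the auditor's domain.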
## How Often Should You Conduct Monitoring Scans?
Weekly scans are a good baseline. If your team deploys frequently, scan after each significant release. The goal is to keep the feedback loop tight enough that developers remember the code they wrote when an issue surfaces.
A scan that runs three days after deployment is actionable. A scan result delivered six months later is archaeology.
Accessibility Tracker Platform supports scheduled monitoring, so scans run automatically without anyone remembering to trigger them. That consistency matters more than frequency. A weekly scan that actually runs is better than a daily scan someone forgets to set up.
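The "weekly or after each major deployment, whichever comes first" rule is simple enough to encode directly. The sketch below is a hypothetical helper, not part of any platform's API; it assumes your deployment tooling can tell you the timestamp of the last release.

```python
from datetime import datetime, timedelta, timezone

def scan_is_due(last_scan, last_deploy=None, now=None,
                interval=timedelta(days=7)):
    """Hypothetical cadence check: scan weekly, or after any
    deployment newer than the last scan, whichever comes first."""
    now = now or datetime.now(timezone.utc)
    if last_deploy is not None and last_deploy > last_scan:
        return True  # a release shipped since the last scan
    return now - last_scan >= interval  # weekly baseline

# Example: last scan three days ago, but a deploy happened since.
t0 = datetime(2024, 5, 1, tzinfo=timezone.utc)
print(scan_is_due(last_scan=t0,
                  last_deploy=t0 + timedelta(days=2),
                  now=t0 + timedelta(days=3)))  # True
```

Running a check like this on a daily timer gives you the "whichever comes first" behavior without anyone having to remember the schedule.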
## Setting Up Your Monitoring Workflow
A monitoring workflow has three parts: what you scan, when you scan, and what happens when an issue appears.
**Scope:** Start with your highest-traffic pages and critical user flows: login, registration, checkout, dashboards, and primary navigation paths. These pages change most often and carry the most risk if they regress.
**Schedule:** Configure weekly scans at minimum. Add deployment-triggered scans if your CI/CD pipeline supports it. The Accessibility Tracker Platform allows you to set recurring scan schedules for each project.
**Response:** When a scan identifies a new issue, route it to the developer or team responsible for that area. Track the issue through resolution. If you let flagged issues pile up without assignment, monitoring becomes noise instead of signal.
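The response step can be as simple as a prefix map from site areas to owning teams. Everything here is illustrative: the team names, the paths, and the `route_issue` helper are invented for the sketch, not drawn from any real tool.

```python
# Hypothetical ownership map: the most specific path prefix wins.
OWNERS = {
    "/checkout": "payments-team",
    "/login": "identity-team",
    "/": "web-platform-team",  # catch-all for everything else
}

def route_issue(issue):
    """Assign a flagged issue to the team owning the longest matching
    path prefix; anything unmatched falls to a shared triage queue."""
    for prefix, team in sorted(OWNERS.items(), key=lambda kv: -len(kv[0])):
        if issue["page"].startswith(prefix):
            return team
    return "a11y-triage"

print(route_issue({"rule": "color-contrast", "page": "/checkout/payment"}))
# payments-team
```

The point is less the code than the discipline: every flagged issue lands in a named queue, so nothing accumulates unowned.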
## Monitoring and Audits Are Separate Activities
This distinction is critical. Monitoring through automated scans and a manual accessibility audit are completely separate activities. Scans cannot determine WCAG conformance. A manual accessibility audit conducted by an auditor is the only way to determine conformance.
Monitoring is surveillance. An audit is diagnosis. You need both, but they serve different purposes and should never be conflated.
Fully manual audits conducted by experienced auditors evaluate every success criterion. Monitoring between those audits protects the work already done and reduces the scope of remediation when the next audit arrives.
## How Accessibility Tracker Platform Supports Monitoring
The Accessibility Tracker Platform was built to connect audit results with ongoing monitoring in a single workspace. After an audit, issues get imported into the platform for tracking and remediation. Monitoring scans then run on schedule to catch new regressions.
Because audit data and scan data live in the same environment, teams can see both historical conformance status and current scan results side by side. This makes it clear whether a flagged issue is a known item from the audit or a new regression introduced after remediation.
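One way to picture that side-by-side view: if each issue is identified by a (rule, page) pair, a plain set comparison separates new regressions from known audit items. The identifiers below are illustrative and do not reflect the platform's actual data model.

```python
# Issues recorded during the last manual audit, as (rule, page) pairs.
audit_issues = {
    ("color-contrast", "/pricing"),
    ("missing-alt", "/home"),
}

# Issues flagged by the most recent monitoring scan.
scan_results = {
    ("missing-alt", "/home"),   # still open from the audit
    ("empty-link", "/blog"),    # introduced after the audit
}

new_regressions = scan_results - audit_issues  # needs fresh triage
known_items = scan_results & audit_issues      # already being tracked
# Audit items no longer flagged -- only meaningful for the subset of
# issues a scan can detect at all.
resolved_detectable = audit_issues - scan_results

print(sorted(new_regressions))  # [('empty-link', '/blog')]
```

This is exactly the distinction the combined workspace makes visible: a flagged issue is either a known item from the audit or a regression introduced after remediation, and the two get handled differently.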
The platform also generates progress reports, which give leadership visibility into conformance status without requiring them to interpret raw scan output.
## What Monitoring Will Not Catch
Automated scans miss approximately 75% of WCAG criteria. Issues related to keyboard navigation logic, screen reader announcement order, cognitive accessibility, complex interactive patterns, and meaningful content alternatives all require human evaluation.
That is why monitoring does not replace audits. It complements them. A team that monitors between audits will have fewer surprises at audit time, but the audit will still identify issues that no scan can detect.
Planning for an annual audit while running continuous monitoring is the standard approach for organizations serious about maintaining WCAG 2.1 AA or WCAG 2.2 AA conformance.
## Building a Monitoring Habit
The organizations that get the most value from monitoring are the ones that treat scan results like any other quality signal. When a scan flags a contrast issue on a new landing page, that gets triaged and assigned the same way a broken layout would.
Accessibility monitoring only works when someone is paying attention to the results. Automated scans that run in the background with no review process are technically monitoring but practically decorative.
Assign ownership. Review results weekly. Track remediation. That's the difference between monitoring as a practice and monitoring as a checkbox.
## Frequently Asked Questions

### Can monitoring replace an annual audit?
No. Monitoring through scans only flags approximately 25% of accessibility issues. A manual accessibility audit is the only way to determine WCAG conformance. Monitoring catches regressions between audits but cannot substitute for a full evaluation by an auditor.
### How many pages should I include in monitoring?
Start with your most critical pages: those with the highest traffic and the most user interaction. Expand from there based on your team's capacity to review and act on results. Monitoring every page is ideal if your platform supports it, but a focused scope with active follow-through is more effective than broad coverage with no response process.
### Does the Accessibility Tracker Platform connect audit results with scan monitoring?
Yes. The platform allows you to import audit report data and conduct scheduled scans in the same workspace. This means your team can distinguish between known audit issues and newly introduced regressions, keeping remediation organized and current.
Monitoring between audits is a practice, not a product feature. The tools make it easier, but the discipline of reviewing results, assigning issues, and tracking fixes is what keeps your digital assets conformant over time.
Contact Accessibility Tracker to set up monitoring alongside your accessibility project management workflow.

