The Most Common False Positives in Accessibility Scans

False positives in accessibility scans waste hours of dev time. Here are the most common ones flagged by automated checkers and why they happen.

False positives in accessibility scans are flagged issues that aren't actually issues. Automated scanners apply pattern matching to code, and when a pattern looks problematic on the surface but is correct in context, the scanner reports it anyway. The most common false positives include color contrast errors on decorative text, missing alt attributes on truly decorative images, ARIA role conflicts on intentionally hidden elements, link purpose flags on icon links with proper accessible names, and form label warnings on fields with valid aria-labelledby references.

Scans flag only about 25% of issues, and a meaningful portion of what they do flag turns out to be inaccurate once a human reviews it. Knowing which warnings to trust and which to verify saves significant remediation time.

Common False Positives in Automated Accessibility Scans

| Flagged Issue | Why It's Often a False Positive |
| --- | --- |
| Color contrast errors | Scanner misreads overlaid text, gradients, or images behind content |
| Missing alt text | Decorative images with alt="" are correct but sometimes flagged |
| Empty buttons or links | Accessible name provided via aria-label or aria-labelledby |
| Missing form labels | Label association exists through aria-labelledby, not a label element |
| ARIA role conflicts | Elements hidden from assistive tech don't need full ARIA structure |
| Heading order warnings | Visually styled headings vs. semantic heading levels are conflated |

Why automated scans produce false positives

Scanners look at code, not context. They apply rules against the DOM and report anything that matches a known issue pattern. But accessibility often depends on how a page actually renders, what assistive technology announces, and whether a user can complete a task. None of that is visible to a script.

A scanner sees a button with no inner text and flags it as empty. It doesn't see the aria-label that gives the button a clear accessible name. That's the gap between code analysis and real accessibility.
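A minimal sketch of that gap (the icon class name is hypothetical):

```html
<!-- What a DOM check sees: a button with no inner text -->
<!-- What a screen reader announces: "Search, button" -->
<button type="button" aria-label="Search">
  <span class="icon-magnifier" aria-hidden="true"></span>
</button>
```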

Color contrast: the most flagged false positive

Color contrast warnings top the list. Scanners struggle with text overlaid on images, gradients, video backgrounds, or semi-transparent elements. The scanner picks a color value at one point in the rendered output and compares it to a single foreground value. If the actual contrast varies across the element, the result is often wrong.

Text inside SVGs, text rendered through canvas, and text with text-shadow effects also produce contrast warnings that don't reflect what a user actually sees. A human auditor confirms or dismisses these in seconds. A scan cannot.
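A simplified illustration of markup that commonly trips contrast checkers; the colors and inline styles are placeholders, not a recommendation:

```html
<!-- Text over a gradient: contrast varies across the element, so a
     single sampled background color misrepresents what users see -->
<div style="background: linear-gradient(to right, #111, #eee); padding: 1rem;">
  <span style="color: #fff; text-shadow: 0 1px 2px rgba(0, 0, 0, 0.6);">
    Hero headline over a gradient
  </span>
</div>
```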

Missing alt text on decorative images

An image with alt="" is correctly marked as decorative. Screen readers skip it. That's the intended behavior. But some scanners flag any image without descriptive alt text, treating the empty alt as a warning rather than a valid pattern.

The result: developers waste time adding alt text to images that should remain decorative, sometimes degrading the screen reader experience by adding noise where silence was correct.
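Both of the following are valid decorative patterns, and either can trip a naive rule (file names are placeholders):

```html
<!-- Correct: empty alt tells screen readers to skip the image -->
<img src="divider.png" alt="">

<!-- Also correct: role="presentation" removes it from the accessibility tree -->
<img src="flourish.svg" alt="" role="presentation">
```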

Empty buttons and links with accessible names

Icon buttons are a constant source of false positives. A button containing only an SVG icon looks empty to a scanner. If that button has aria-label="Close menu" or wraps the icon in a span with visually hidden text, the accessible name is fine. Screen reader users hear "Close menu, button."
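Both valid patterns, sketched with placeholder markup; the visually-hidden class is assumed to exist in the project's CSS:

```html
<!-- Pattern 1: aria-label supplies the accessible name directly -->
<button type="button" aria-label="Close menu">
  <svg aria-hidden="true" focusable="false" viewBox="0 0 24 24">
    <path d="M6 6 L18 18 M18 6 L6 18" stroke="currentColor" stroke-width="2"/>
  </svg>
</button>

<!-- Pattern 2: visually hidden text supplies the accessible name -->
<button type="button">
  <svg aria-hidden="true" focusable="false" viewBox="0 0 24 24">
    <path d="M6 6 L18 18 M18 6 L6 18" stroke="currentColor" stroke-width="2"/>
  </svg>
  <span class="visually-hidden">Close menu</span>
</button>
```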

Some scanners catch the aria-label. Others don't, depending on how the rule is written. Inconsistent detection across tools is why scan results vary so much from one product to another.

Form labels associated through ARIA

Form fields can be labeled in several valid ways: a label element with a for attribute, an aria-label, an aria-labelledby reference, or a title attribute as a fallback. Scanners that only check for label elements miss the other valid patterns and report a label as missing when one exists, just associated differently.
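A sketch of four valid labeling patterns; a rule that only pairs label elements with ids misses the last three:

```html
<!-- 1. Classic label element -->
<label for="email">Email</label>
<input id="email" type="email">

<!-- 2. aria-label on the field itself -->
<input type="search" aria-label="Search products">

<!-- 3. aria-labelledby pointing at existing visible text -->
<h2 id="billing">Billing address</h2>
<input type="text" aria-labelledby="billing">

<!-- 4. title attribute as a fallback -->
<input type="text" title="Middle initial">
```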

How do you separate real issues from false positives?

You evaluate the flagged code manually. Open the page, inspect the element, and verify what assistive technology actually announces. If the accessible name is present, the contrast is correct in context, or the element is hidden from assistive tech intentionally, the scan was wrong.
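For example, a scanner may flag the contents of an element that was removed from the accessibility tree on purpose; a hypothetical sketch:

```html
<!-- Hidden from assistive tech intentionally: a duplicate menu kept in
     the DOM for animation. A scanner may flag its contents for missing
     roles or labels, but nothing here is announced to users. -->
<div class="menu-clone" aria-hidden="true">
  <a href="/settings" tabindex="-1">Settings</a>
</div>
```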

This is why an accessibility audit conducted by a human is the only way to determine WCAG conformance. The audit identifies real issues and clears the noise. Scans are useful as a starting signal, not a verdict. Accessibility Tracker Platform pairs scan data with audit data so teams can see which flagged items are confirmed issues and which are noise.

What false positives cost teams

The cost shows up in developer hours. A team running a scan, seeing 400 issues, and assigning all of them to engineers will burn weeks chasing problems that don't exist. Worse, real issues get buried under the noise, and their fixes miss the next release.

Teams that filter scan output through an auditor's review move faster and fix the right things. The signal-to-noise ratio improves the moment a human verifies what the scanner reports.

Are false positives a sign the scanner is bad?

Not necessarily. Every scanner produces them because pattern matching has limits. The better question is whether your workflow accounts for them. A scanner that flags 100 items and helps you identify the 30 real ones is doing its job. The issue is treating scan output as a final report.

Can AI reduce false positives in accessibility scans?

AI can improve pattern recognition and help triage results, but it cannot replace human evaluation against WCAG criteria.

Should I ignore scan results entirely?

No. Scans surface obvious issues quickly and help with ongoing monitoring between audits. Use them as a tripwire for regressions, not as a conformance report. Pair scan data with audit findings and you get a clearer picture of where your product actually stands.

False positives are a permanent feature of automated scanning. The goal isn't to eliminate them. It's to build a workflow that recognizes them, dismisses them quickly, and keeps the focus on real accessibility issues.

Contact the team to see how Accessibility Tracker maps scan results against audit data.

Kris Rivenburgh

Founder of Accessible.org


Ready to Track Your Accessibility Progress?

Upload your audit and start tracking, fixing, and validating all in one place.

Get Started Now