How to Make Issue Severity Ratings Precise

Sharper severity ratings turn audit data into action. Here's how to make issue severity ratings precise inside Accessibility Tracker Platform.

Severity ratings only work when they map to real user impact and clear remediation priority. To make issue severity ratings precise, anchor each rating to defined criteria, apply the same logic across every issue, and pair the rating with prioritization formulas that reflect risk and user experience. Without this structure, severity becomes a guess. With it, severity becomes the fastest path from audit report to fix order.

Accessibility Tracker Platform applies severity within a structured model so audit findings translate directly into a sequenced remediation plan.

Precision Severity Ratings at a Glance
Defined Levels: Each severity tier has written criteria tied to user impact, not auditor instinct.

User Impact Anchor: Severity reflects how the issue affects real assistive technology users.

Consistency Across Issues: The same WCAG nonconformance receives the same severity every time.

Prioritization Pairing: Severity feeds Risk Factor or User Impact prioritization formulas inside the platform.

Reviewable Logic: Ratings can be audited, questioned, and adjusted with clear reasoning.

Why Severity Ratings Drift Without Structure

Severity ratings drift when auditors apply personal judgment without a written rubric. One auditor calls a missing form label critical. Another calls it moderate. The same issue, two ratings, no defensible logic.

That drift compounds across a 200-issue audit report. Remediation teams lose confidence in the order. Leadership questions the data. The audit becomes harder to act on, not easier.

Precision starts with a written definition for every severity level and the discipline to apply it the same way every time.

What Should Severity Actually Measure?

Severity should measure user impact first. A keyboard trap that blocks a screen reader user from completing checkout is severe. A decorative image missing an empty alt attribute is not, even though both are WCAG nonconformances.

Three factors anchor a precise rating:

Blocking vs. degrading: Does the issue prevent task completion or make it harder?

Affected population: Does it affect screen reader users, keyboard-only users, low vision users, or all of the above?

Frequency on the asset: Does the issue appear on a checkout page or a single archived blog post?

When all three factors point the same direction, the rating is clear. When they conflict, the written rubric resolves it.
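The three anchoring factors can be sketched as a small decision function. This is an illustrative sketch only: the field names, tie-break order, and thresholds below are assumptions, not platform behavior or the written rubric itself.

```python
from dataclasses import dataclass

@dataclass
class IssueContext:
    blocks_task: bool        # blocking vs. merely degrading
    groups_affected: int     # screen reader, keyboard-only, low vision, ...
    on_critical_path: bool   # checkout page vs. archived blog post

def anchor_severity(ctx: IssueContext) -> str:
    """Resolve a rating when the three factors are weighed together."""
    if ctx.blocks_task and ctx.on_critical_path:
        return "Critical"
    if ctx.blocks_task or (ctx.groups_affected >= 2 and ctx.on_critical_path):
        return "High"
    if ctx.groups_affected >= 1 and ctx.on_critical_path:
        return "Medium"
    return "Low"

# A keyboard trap in checkout: blocking, on the critical path.
print(anchor_severity(IssueContext(True, 1, True)))
```

In practice the written rubric, not code, resolves conflicts between the factors; the point of expressing it this way is that every branch is explicit and reviewable.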

Defining Severity Levels with Written Criteria

A working severity model uses four tiers with concrete definitions:

Critical: Blocks task completion for one or more disability groups. Examples include keyboard traps, missing form labels on required fields, and inaccessible primary navigation.

High: Significantly degrades the experience without fully blocking it. Examples include low contrast on key interactive elements and missing headings on long content pages.

Medium: Creates friction but workarounds exist. Examples include redundant link text and missing landmarks on secondary pages.

Low: Minor nonconformance with limited user impact. Examples include decorative images missing empty alt attributes and minor heading hierarchy issues on low-traffic pages.

Written criteria turn severity from opinion into a repeatable decision.
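One way to keep written criteria repeatable is to store the rubric as data rather than prose scattered across documents, so every auditor reads the same definitions. The structure below is a sketch; the wording paraphrases the four tiers defined above.

```python
# The four-tier rubric as a lookup table. Tier wording paraphrases the
# definitions above; the structure itself is an illustrative assumption.
SEVERITY_RUBRIC = {
    "Critical": "Blocks task completion for one or more disability groups.",
    "High": "Significantly degrades the experience without fully blocking it.",
    "Medium": "Creates friction, but workarounds exist.",
    "Low": "Minor nonconformance with limited user impact.",
}

def describe(tier: str) -> str:
    """Return the written criterion an auditor must cite for a rating."""
    if tier not in SEVERITY_RUBRIC:
        raise ValueError(f"Unknown severity tier: {tier}")
    return f"{tier}: {SEVERITY_RUBRIC[tier]}"
```

Rejecting unknown tiers at assignment time is what prevents ad hoc levels like "Medium-High" from creeping into a report.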

Pairing Severity with Prioritization Formulas

Severity alone does not produce a remediation order. Two critical issues are not equal if one appears on the homepage and the other on a rarely visited page.

Inside Accessibility Tracker Platform, severity feeds Risk Factor or User Impact prioritization formulas. Risk Factor weighs legal exposure, page traffic, and severity together. User Impact weighs the assistive technology population affected and frequency of the issue across the asset.

The formula generates the actual fix order. Severity is one input, not the entire decision.
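The idea that severity is one input among several can be sketched numerically. The platform's actual Risk Factor and User Impact formulas are not published here; the weights and scaling below are illustrative assumptions that only mirror the inputs named above (legal exposure, traffic, affected population, frequency).

```python
# Illustrative weightings only; not the platform's actual formulas.
SEVERITY_WEIGHT = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}

def risk_factor(severity: str, monthly_traffic: int, legal_exposure: float) -> float:
    """Legal exposure (0-1) and page traffic scale the severity weight."""
    return SEVERITY_WEIGHT[severity] * legal_exposure * (monthly_traffic ** 0.5)

def user_impact(severity: str, groups_affected: int, occurrences: int) -> float:
    """Affected AT population and frequency across the asset scale severity."""
    return SEVERITY_WEIGHT[severity] * groups_affected * occurrences

# Two Critical issues sort differently once traffic enters the formula:
homepage = risk_factor("Critical", 100_000, 0.9)
archive = risk_factor("Critical", 40, 0.9)
print(homepage > archive)
```

The takeaway is structural, not the specific numbers: once traffic or population multiplies the severity weight, equal severities stop producing equal priorities.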

How Does Accessibility Tracker Keep Ratings Consistent?

The platform stores every issue with its WCAG criterion, severity rating, page location, and remediation guidance in one structured record. That structure reinforces consistency in two ways.

First, the same WCAG nonconformance carries the same baseline severity across the project. If 1.4.3 contrast issues are rated High once, they are rated High every time unless context shifts the user impact.

Second, ratings are visible, sortable, and reviewable. A team lead can scan all Critical issues in seconds and verify the logic holds. When a rating looks off, it gets flagged and corrected before remediation starts.

Reviewing and Adjusting Ratings After the Audit

Severity is not frozen at delivery. As remediation progresses, context can shift. A medium issue on a page scheduled for redesign may drop to low. A high issue on a newly launched checkout flow may rise to critical.

Precise severity ratings stay precise because they are reviewed against current product reality, not locked to the date of the audit. The platform supports that ongoing adjustment without losing the original rating history.
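Keeping the original rating while recording adjustments can be modeled as an append-only history on the issue record. This is a minimal sketch; the field names and history format are hypothetical, not the platform's schema.

```python
from dataclasses import dataclass, field

@dataclass
class SeverityRecord:
    original: str
    # Append-only list of (new_rating, reason) adjustments.
    history: list[tuple[str, str]] = field(default_factory=list)

    def adjust(self, new_rating: str, reason: str) -> None:
        """Record an adjustment without overwriting the original rating."""
        self.history.append((new_rating, reason))

    @property
    def current(self) -> str:
        return self.history[-1][0] if self.history else self.original

rec = SeverityRecord("Medium")
rec.adjust("Low", "page scheduled for redesign")
print(rec.original, rec.current)  # original rating survives the adjustment
```

Because nothing is overwritten, a reviewer can always reconstruct why a rating changed and when.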

Frequently Asked Questions

Can severity ratings replace prioritization formulas?

No. Severity is one input. Prioritization formulas combine severity with risk, traffic, and user population to produce the actual fix order. Using severity alone leads to remediation plans that miss high-traffic, lower-severity issues that affect more users in practice.

Who should assign severity ratings on an audit report?

The auditor who identifies the issue assigns the initial severity using the written rubric. A senior reviewer or project lead validates ratings before the report is delivered. Inside Accessibility Tracker, the rating travels with the issue record so any reviewer can check the logic.

How often should severity definitions be revisited?

Review the rubric annually or after a major WCAG version shift. The criteria themselves should stay stable so ratings remain comparable across audits. Frequent definition changes break consistency and make trend analysis impossible.

Do automated scans produce precise severity ratings?

No. Scans only flag approximately 25% of issues and cannot evaluate user impact in context. Precise severity requires a manual accessibility audit in which an auditor evaluates each issue against real assistive technology behavior and the written rubric.

Severity ratings earn their precision through written criteria, consistent application, and pairing with prioritization formulas that reflect real user impact and project risk.

To see how Accessibility Tracker structures severity and prioritization across a full audit, contact the Accessibility Tracker team.

Kris Rivenburgh

Founder of Accessible.org

Ready to Track Your Accessibility Progress?

Upload your audit and start tracking, fixing, and validating all in one place.

Get Started Now