Accessibility Testing Tools for Developers
Accessibility testing is one of those things most teams know they should do but consistently deprioritize until a lawsuit threat or a compliance audit lands on their desk. That's a shame, because the tooling has gotten genuinely good. Automated checks can catch roughly 30-57% of accessibility issues (more on that range below), and the effort of integrating a11y testing into your workflow is minimal compared to the cost of retrofitting an inaccessible application.
This guide covers the tools that actually work, how to integrate them into CI, and where automated testing falls short.
The Landscape: What Automated Testing Can and Can't Do
Before diving into tools, set your expectations correctly. Automated accessibility testing can reliably catch:
- Missing alt text on images
- Missing form labels
- Insufficient color contrast ratios
- Invalid ARIA attributes and roles
- Missing document language
- Heading hierarchy violations
- Missing skip navigation links
- Duplicate IDs
What it cannot catch:
- Whether alt text is actually meaningful -- it only checks that alt text exists (see the example below)
- Keyboard navigation flow making logical sense
- Screen reader experience quality
- Whether focus management works correctly in dynamic content
- Cognitive accessibility issues
- Whether custom components behave as expected with assistive technology
The commonly cited figure is that automated tools catch about 30-57% of WCAG issues, depending on the study. That means automated testing is necessary but not sufficient. You still need manual testing. But automated testing catches the low-hanging fruit consistently and prevents regressions.
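To make the first limitation concrete: the snippet below (the filename is a hypothetical example) passes the automated "images must have alt text" rule, yet its alt text tells a screen reader user nothing. Only a human can catch that.
<!-- Passes the automated alt-text check, but the alt text is meaningless -->
<img src="q3-revenue-chart.png" alt="image">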
axe-core: The Industry Standard Engine
axe-core by Deque Systems is the accessibility testing engine that powers most other tools. It's open source, has over 100 rules, and is designed to produce zero false positives -- if axe reports an issue, it's almost certainly a real issue.
Using axe-core Directly
npm install --save-dev @axe-core/cli
npx axe https://your-site.com
But the CLI is the least interesting way to use axe. The real power is in the integrations.
axe-core with Playwright
This is the setup I recommend for most teams. Playwright gives you browser automation, and @axe-core/playwright gives you accessibility assertions.
npm install --save-dev @axe-core/playwright @playwright/test
// tests/a11y.spec.ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test.describe('Accessibility', () => {
  test('home page has no a11y violations', async ({ page }) => {
    await page.goto('/');
    const results = await new AxeBuilder({ page }).analyze();
    expect(results.violations).toEqual([]);
  });

  test('login form has no a11y violations', async ({ page }) => {
    await page.goto('/login');
    const results = await new AxeBuilder({ page })
      .include('#login-form') // Scope to specific element
      .analyze();
    expect(results.violations).toEqual([]);
  });

  test('dashboard meets WCAG AA', async ({ page }) => {
    await page.goto('/dashboard');
    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
      .analyze();
    expect(results.violations).toEqual([]);
  });
});
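One practical tweak: expect(results.violations).toEqual([]) produces a noisy diff when it fails. Playwright's expect accepts a custom message as a second argument, so a small formatting helper makes failures readable. A sketch -- formatViolations is our own name, not part of @axe-core/playwright:
// a11y-helpers.ts -- a sketch; the AxeResults type ships with axe-core
import type { AxeResults } from 'axe-core';

export function formatViolations(results: AxeResults): string {
  return results.violations
    .map(v => `${v.impact}: ${v.id} - ${v.help} (${v.nodes.length} nodes)\n  ${v.helpUrl}`)
    .join('\n');
}

// Usage inside a test:
// expect(results.violations, formatViolations(results)).toEqual([]);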
axe-core with Cypress
npm install --save-dev cypress-axe axe-core
// cypress/support/e2e.js
import 'cypress-axe';

// cypress/e2e/a11y.cy.js
describe('Accessibility', () => {
  it('home page is accessible', () => {
    cy.visit('/');
    cy.injectAxe();
    cy.checkA11y();
  });

  it('modal dialog is accessible when open', () => {
    cy.visit('/');
    cy.get('[data-testid="open-modal"]').click();
    cy.injectAxe();
    cy.checkA11y('#modal-container', {
      rules: {
        'color-contrast': { enabled: true },
        'region': { enabled: false } // Disable specific rules if needed
      }
    });
  });
});
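cypress-axe also takes a violation callback and a skipFailures flag, which is useful during an audit phase when you want visibility without red builds. A sketch -- it assumes you have registered a "log" task in your cypress.config via setupNodeEvents:
// A sketch; assumes a "log" task exists, e.g.:
// on('task', { log(message) { console.log(message); return null; } })
function terminalLog(violations) {
  cy.task('log', `${violations.length} accessibility violation(s) detected`);
  violations.forEach(({ id, impact, description, nodes }) => {
    cy.task('log', `${impact}: ${id} - ${description} (${nodes.length} nodes)`);
  });
}

it('logs violations without failing the run', () => {
  cy.visit('/');
  cy.injectAxe();
  // Args: context, options, violation callback, skipFailures
  cy.checkA11y(null, null, terminalLog, true);
});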
axe-core with Jest (jsdom)
jest-axe brings axe to component-level testing with Jest and jsdom. It tests the rendered DOM without a real browser, which is fast but misses CSS-dependent issues like color contrast.
npm install --save-dev jest-axe
// Button.test.tsx
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
// ...plus the component under test, e.g. your own Button

expect.extend(toHaveNoViolations);

test('Button component is accessible', async () => {
  const { container } = render(
    <Button onClick={() => {}}>Submit</Button>
  );
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
Caveat: jsdom-based testing cannot evaluate color contrast, focus indicators, or anything that requires a real rendering engine. It's useful for catching missing ARIA attributes and structural issues, but pair it with browser-based tests for full coverage.
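If jsdom-limited rules generate noise, jest-axe's configureAxe lets you disable them in one place rather than per test. A minimal sketch:
import { configureAxe } from 'jest-axe';

// Disable rules jsdom can't evaluate; browser-based tests cover these instead
const axe = configureAxe({
  rules: {
    'color-contrast': { enabled: false },
  },
});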
Pa11y: The CLI-First Alternative
Pa11y is an opinionated accessibility testing tool that wraps HTML_CodeSniffer (and optionally axe-core). It shines as a CLI tool and in CI pipelines where you want quick, scriptable checks.
npm install -g pa11y pa11y-ci
Basic Usage
# Test a single URL
pa11y https://your-site.com
# Specify WCAG standard
pa11y --standard WCAG2AA https://your-site.com
# Output as JSON for processing
pa11y --reporter json https://your-site.com
# Test with specific viewport
pa11y --width 375 --height 812 https://your-site.com
Pa11y CI for Multi-Page Testing
Pa11y CI is where this tool gets interesting. Define a config file with all the URLs you want to test, and run them in CI.
// .pa11yci.json
{
  "defaults": {
    "standard": "WCAG2AA",
    "timeout": 10000,
    "wait": 1000,
    "chromeLaunchConfig": {
      "args": ["--no-sandbox"]
    }
  },
  "urls": [
    "http://localhost:3000/",
    "http://localhost:3000/about",
    "http://localhost:3000/login",
    {
      "url": "http://localhost:3000/dashboard",
      "actions": [
        "set field #username to testuser",
        "set field #password to testpass",
        "click element #login-btn",
        "wait for url to be http://localhost:3000/dashboard"
      ]
    }
  ]
}
pa11y-ci --config .pa11yci.json
Pa11y vs axe-core
Pa11y uses HTML_CodeSniffer by default, which catches some things axe doesn't and vice versa. You can configure Pa11y to use axe as its runner:
{
  "defaults": {
    "runners": ["axe"]
  }
}
My recommendation: use both runners if you can. If you have to pick one, axe-core has a larger rule set and broader industry adoption.
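Since the runners option is an array, running both engines in a single pass is one config line:
{
  "defaults": {
    "runners": ["axe", "htmlcs"]
  }
}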
Lighthouse Accessibility Audits
Lighthouse is built into Chrome DevTools and also available as a CLI tool. Its accessibility score runs axe-core under the hood, but it packages the results into a 0-100 score that's easy to track over time.
npm install -g lighthouse
# Run accessibility audit only
lighthouse https://your-site.com --only-categories=accessibility --output=json --output-path=./a11y-report.json
# Run in CI with Chrome headless
lighthouse https://your-site.com \
  --only-categories=accessibility \
  --chrome-flags="--headless --no-sandbox" \
  --output=json \
  --output-path=./a11y-report.json
Lighthouse CI
For tracking accessibility scores over time, Lighthouse CI lets you set budgets and fail builds when the score drops.
// lighthouserc.json
{
  "ci": {
    "collect": {
      "startServerCommand": "npm start",
      "url": ["http://localhost:3000/", "http://localhost:3000/about"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:accessibility": ["error", { "minScore": 0.9 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
npm install -g @lhci/cli
lhci autorun --config=lighthouserc.json
When to use Lighthouse vs raw axe-core: Lighthouse is great for high-level tracking and non-technical stakeholders who understand scores. For detailed testing and custom rules, use axe-core directly.
WAVE: The Browser Extension
WAVE by WebAIM is a browser extension that visually overlays accessibility issues on your page. It's not scriptable and doesn't fit into CI, but it's the best tool for manual auditing.
Install the WAVE extension for Chrome or Firefox, navigate to any page, and click the extension icon. It highlights:
- Missing alt text (red icons on images)
- Form label issues
- Heading structure problems
- ARIA issues
- Contrast failures
- Structural elements (landmarks, headings, links)
WAVE also has an API if you need programmatic access, but it's paid ($100/month for 100K credits). For most teams, the browser extension for manual review plus axe-core for automation is the better combination.
Integrating a11y Testing Into CI
Here's a practical CI setup using GitHub Actions that combines axe-core with Playwright.
# .github/workflows/a11y.yml
name: Accessibility Tests

on:
  pull_request:
    branches: [main]

jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npx playwright install --with-deps chromium
      - name: Build and start app
        run: |
          npm run build
          npm run preview &
          npx wait-on http://localhost:4173
      - name: Run accessibility tests
        run: npx playwright test tests/a11y/
      - name: Upload a11y report
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: a11y-report
          path: test-results/
Strategy: Gradual Adoption
If you're adding a11y testing to an existing project, you'll probably have hundreds of violations on day one. Don't try to fix them all at once. Here's a practical adoption strategy:
Phase 1: Audit and baseline. Run axe-core on your key pages. Document the current violation count. Don't fail the build yet.
// tests/a11y-audit.spec.ts
import { test } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('audit home page a11y (baseline)', async ({ page }) => {
  await page.goto('/');
  const results = await new AxeBuilder({ page }).analyze();
  console.log(`Violations: ${results.violations.length}`);
  // Don't assert yet -- just measure
  results.violations.forEach(v => {
    console.log(`${v.impact}: ${v.id} - ${v.description} (${v.nodes.length} instances)`);
  });
});
Phase 2: Prevent new violations. Start by failing the build only on critical and serious issues, and fix those first.
const results = await new AxeBuilder({ page })
  .options({ resultTypes: ['violations'] })
  .analyze();

const criticalViolations = results.violations.filter(
  v => v.impact === 'critical' || v.impact === 'serious'
);
expect(criticalViolations).toEqual([]);
Phase 3: Ratchet down. Progressively enable more rules and lower severity thresholds. Add new pages to your test suite. Target zero violations on new features from the start.
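One way to enforce the ratchet is a baseline of known violation counts that is only allowed to shrink. A sketch -- the paths and counts here are hypothetical placeholders for your own audit numbers:
// tests/a11y-ratchet.spec.ts -- a sketch with hypothetical baseline numbers
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Current known violation counts per page; lower these as issues get fixed
const BASELINE: Record<string, number> = {
  '/': 12,
  '/about': 4,
};

for (const [path, maxViolations] of Object.entries(BASELINE)) {
  test(`a11y ratchet: ${path}`, async ({ page }) => {
    await page.goto(path);
    const results = await new AxeBuilder({ page }).analyze();
    // Fail only if the page regressed past its recorded baseline
    expect(results.violations.length).toBeLessThanOrEqual(maxViolations);
  });
}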
Phase 4: Full enforcement. All pages tested, all severities caught, build fails on any violation.
WCAG Compliance Levels
WCAG (Web Content Accessibility Guidelines) defines three conformance levels:
- Level A -- Basic accessibility. If you fail these, your site is actively hostile to assistive technology users. Examples: images have alt text, pages have titles, form inputs have labels.
- Level AA -- The standard target for most organizations and the level required by most legal frameworks (ADA, Section 508, EN 301 549). Adds color contrast requirements (4.5:1 for normal text), resizable text, consistent navigation.
- Level AAA -- Enhanced accessibility. Very difficult to achieve site-wide. Includes things like sign language interpretation for video and 7:1 contrast ratios.
Target Level AA. It's the legal standard, it's achievable, and it covers the issues that affect the most users. Level AAA is aspirational but impractical as a site-wide requirement for most projects.
In axe-core, filter by tags:
const results = await new AxeBuilder({ page })
  .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
  .analyze();
The 10 Most Common Issues (and How to Fix Them)
Based on the WebAIM Million annual report, here are the issues you'll see most:
1. Low contrast text
/* Bad */
.text { color: #999; background: #fff; } /* 2.85:1 -- AA needs 4.5:1 for normal text */

/* Good -- comfortably meets AA */
.text { color: #595959; background: #fff; } /* 7:1 ratio */
2. Missing alt text
<!-- Bad -->
<img src="hero.jpg">
<!-- Good -->
<img src="hero.jpg" alt="Team collaborating around a whiteboard">
<!-- Decorative image -- use empty alt -->
<img src="divider.svg" alt="">
3. Missing form labels
<!-- Bad -->
<input type="email" placeholder="Email">
<!-- Good -->
<label for="email">Email address</label>
<input type="email" id="email" placeholder="[email protected]">
4. Empty links
<!-- Bad -->
<a href="/profile"><i class="icon-user"></i></a>
<!-- Good -->
<a href="/profile" aria-label="User profile"><i class="icon-user"></i></a>
5. Missing document language
<!-- Bad -->
<html>
<!-- Good -->
<html lang="en">
6. Empty buttons
<!-- Bad -->
<button><svg>...</svg></button>
<!-- Good -->
<button aria-label="Close dialog"><svg>...</svg></button>
7. Missing heading structure
Don't skip heading levels. Go from h1 to h2 to h3, not h1 to h3.
8. No skip navigation link
<body>
  <a href="#main-content" class="skip-link">Skip to main content</a>
  <nav>...</nav>
  <main id="main-content">...</main>
</body>
9. Missing landmark regions
<header>...</header>
<nav>...</nav>
<main>...</main>
<aside>...</aside>
<footer>...</footer>
10. Keyboard traps
Test with Tab, Shift+Tab, Enter, Escape. Modals must trap focus inside them, and Escape must close them and return focus to the trigger.
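Focus behavior is one of the few "manual" checks that automates reasonably well. A Playwright sketch -- the #open-modal and #modal selectors are hypothetical stand-ins for your own markup:
import { test, expect } from '@playwright/test';

test('modal traps focus and Escape restores it', async ({ page }) => {
  await page.goto('/');
  await page.click('#open-modal');
  await expect(page.locator('#modal')).toBeVisible();

  // Tab repeatedly; focus should never escape the modal
  for (let i = 0; i < 10; i++) {
    await page.keyboard.press('Tab');
    const inModal = await page.evaluate(
      () => !!document.activeElement?.closest('#modal')
    );
    expect(inModal).toBe(true);
  }

  // Escape closes the modal and returns focus to the trigger
  await page.keyboard.press('Escape');
  await expect(page.locator('#modal')).toBeHidden();
  await expect(page.locator('#open-modal')).toBeFocused();
});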
Manual Testing Checklist
No automated tool replaces these manual checks. Do these on every major feature:
- Keyboard navigation: Tab through the entire page. Can you reach and operate every interactive element? Is the focus order logical? Can you see where focus is?
- Screen reader testing: Use VoiceOver (macOS), NVDA (Windows, free), or Orca (Linux). Navigate the page. Does it make sense?
- Zoom testing: Zoom to 200% and 400%. Does content reflow? Is anything cut off or overlapping?
- Motion preferences: Enable "prefers-reduced-motion" in your OS. Do animations respect it? (A partially automated version of this check is sketched after this list.)
- High contrast mode: Enable Windows High Contrast Mode or forced-colors. Is everything still visible?
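A couple of these can be partially automated. Playwright can emulate prefers-reduced-motion, so you can at least assert that your CSS reacts to it. A sketch -- the .hero-banner selector and the near-zero-duration convention are assumptions about your stylesheet:
import { test, expect } from '@playwright/test';

test('animations respect prefers-reduced-motion', async ({ page }) => {
  await page.emulateMedia({ reducedMotion: 'reduce' });
  await page.goto('/');
  // Computed durations serialize in seconds; expect effectively zero
  const duration = await page
    .locator('.hero-banner')
    .evaluate(el => parseFloat(getComputedStyle(el).animationDuration));
  expect(duration).toBeLessThan(0.05);
});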
Tool Comparison
| Tool | Best For | CI Integration | Cost |
|---|---|---|---|
| axe-core | Programmatic testing, browser integration | Excellent (Playwright, Cypress, Jest) | Free (open source) |
| Pa11y | CLI-first workflows, multi-URL testing | Excellent (Pa11y CI) | Free (open source) |
| Lighthouse | Scoring, tracking over time, stakeholder reports | Good (LHCI) | Free (open source) |
| WAVE | Manual visual auditing | None (browser extension) | Free (extension), Paid (API) |
| Deque axe DevTools | Manual + guided testing in browser | Via axe-core | Free (basic), Paid (pro) |
The Bottom Line
Start with axe-core integrated into your existing test framework -- if you use Playwright, add @axe-core/playwright. If you use Cypress, add cypress-axe. That's the lowest friction path and gives you the highest-quality automated results.
Add Pa11y CI if you want a quick multi-URL smoke test without writing test code. Use Lighthouse CI if you need to track scores over time and report to stakeholders.
But don't stop at automated testing. Budget time for keyboard testing and screen reader testing on every major feature. The automated tools catch the easy stuff. The hard stuff -- the confusing tab orders, the screen reader experiences that technically work but make no sense, the interactions that break with voice control -- those require a human.
For a new project, set up axe-core in CI from day one with zero-tolerance on violations. For an existing project, audit first, fix critical issues, then ratchet down. Either way, the tooling is mature enough that there's no good excuse not to have automated accessibility testing in your pipeline.