Accessibility Automation With AI
Frontend
Automated a11y catches 30–40% of issues. You catch the rest. Don't ship without both.
UX Eng
AI flags missing alt text. You decide if the description actually helps. Semantics need humans.
TL;DR
- AI and automated tools (axe, Lighthouse, Pa11y) catch structural a11y issues: missing alt, bad contrast, invalid ARIA.
- They miss: semantic correctness, focus order, screen reader flow, "does this make sense to a blind user?"
- Use AI to scale checks. Use yourself to validate that the fix actually works.
Automated accessibility testing has been around for years. AI is making it smarter—and sometimes overconfident. Knowing what to automate and what to verify yourself is the skill now.
What Automation Catches
- Missing or empty alt text. Easy to detect. AI can suggest alt text; you decide if it's useful.
- Color contrast. The WCAG formula is deterministic, so tools nail this (sketched after this list).
- Invalid ARIA. `aria-label` without a role, orphaned `aria-describedby`. Structural errors.
- Keyboard traps. Some tools simulate tab order. They find obvious blocks.
- Missing form labels. Automated checks are reliable here.
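For reference, the contrast check tools run is just this arithmetic. A minimal TypeScript sketch, assuming simple `#rrggbb` inputs (function names are illustrative, not a library API):

```ts
// WCAG 2.x contrast ratio -- the deterministic check automated tools run.
function relativeLuminance(hex: string): number {
  // Parse "#rrggbb" into three 0..1 channels (simplified: no shorthand/alpha).
  const channels = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16) / 255);
  // Linearize each sRGB channel, then weight per the WCAG formula.
  const [r, g, b] = channels.map((c) =>
    c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4
  );
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(foreground: string, background: string): number {
  const [lighter, darker] = [relativeLuminance(foreground), relativeLuminance(background)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// 4.5:1 is the WCAG AA threshold for normal-size text (3:1 for large text).
console.log(contrastRatio('#767676', '#ffffff') >= 4.5); // true (~4.54:1)
```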
What Automation Misses
- Meaningful alt text. "Image of chart" vs. "Bar chart showing Q3 revenue up 12% year-over-year." Tools see presence, not quality.
- Focus management. Modals, dropdowns, dynamic content. Correct focus order is logic, not markup (see the sketch after this list).
- Screen reader flow. Does the announced order make sense? Does "click here" mean anything without context?
- Cognitive accessibility. Clarity, consistency, error recovery. No tool measures this well.
- Context-dependent fixes. Sometimes the right fix is "redesign the component," not "add an aria-label."
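To make the focus-management point concrete, here's a rough sketch of what no scanner verifies for you: moving focus into a modal on open, trapping Tab inside it, and returning focus on close. The selector list and function names are assumptions for illustration, not a library API:

```ts
// Minimal focus management for a modal dialog (illustrative, framework-free).
const FOCUSABLE =
  'a[href], button:not([disabled]), input:not([disabled]), [tabindex]:not([tabindex="-1"])';

let previouslyFocused: HTMLElement | null = null;

function openModal(modal: HTMLElement): void {
  previouslyFocused = document.activeElement as HTMLElement | null;
  modal.hidden = false;
  // Move focus into the dialog -- axe can't tell you whether this happened.
  modal.querySelector<HTMLElement>(FOCUSABLE)?.focus();
  modal.addEventListener('keydown', trapFocus);
}

function closeModal(modal: HTMLElement): void {
  modal.removeEventListener('keydown', trapFocus);
  modal.hidden = true;
  // Return focus to the trigger -- the step generated fixes most often omit.
  previouslyFocused?.focus();
}

function trapFocus(event: KeyboardEvent): void {
  if (event.key !== 'Tab') return;
  const modal = event.currentTarget as HTMLElement;
  const focusable = Array.from(modal.querySelectorAll<HTMLElement>(FOCUSABLE));
  if (focusable.length === 0) return;
  const first = focusable[0];
  const last = focusable[focusable.length - 1];

  // Wrap Tab / Shift+Tab at the edges so focus never escapes the dialog.
  if (event.shiftKey && document.activeElement === first) {
    event.preventDefault();
    last.focus();
  } else if (!event.shiftKey && document.activeElement === last) {
    event.preventDefault();
    first.focus();
  }
}
```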
AI-Specific Pitfalls
AI can suggest a11y fixes. It often:
- Adds `role="button"` to divs that should be `<button>`
- Proposes generic alt text that doesn't describe the image
- Over-ARIA-ifies: `aria-label`, `aria-describedby`, `aria-hidden` everywhere, creating noise
- Misses that the real fix is semantic HTML
Rule: Treat AI a11y suggestions as hypotheses. Verify with a screen reader or a11y expert when it matters.
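The first pitfall above is worth seeing side by side. A TSX illustration (component and prop names are made up for this sketch):

```tsx
import React from 'react';

// AI-style patch: keeps the div, bolts on ARIA. Still not keyboard-operable
// without extra tabIndex and key handlers, and it adds attributes to maintain.
export function SaveDivButton({ onSave }: { onSave: () => void }) {
  return (
    <div role="button" aria-label="Save" onClick={onSave}>
      Save
    </div>
  );
}

// Semantic fix: a real <button> gets focus, Enter/Space activation, and the
// correct role for free -- no ARIA needed.
export function SaveButton({ onSave }: { onSave: () => void }) {
  return (
    <button type="button" onClick={onSave}>
      Save
    </button>
  );
}
```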
The Hybrid Workflow
- Automate: Run axe/Lighthouse/Pa11y in CI. Fix what they flag (sample CI test after this list).
- AI-assist: Use AI to propose fixes for common issues. Review every suggestion.
- Manual spot-check: For critical flows (checkout, onboarding, key features), test with keyboard and screen reader yourself.
- Document patterns: Build a component library with a11y baked in. AI generates from that; fewer fixes needed.
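A sketch of the "Automate" step, assuming a Playwright test suite with the @axe-core/playwright package; the route and tag list are placeholders to adapt to your app:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('checkout page has no detectable a11y violations', async ({ page }) => {
  await page.goto('/checkout'); // hypothetical critical-flow route

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // scope the scan to WCAG A/AA rules
    .analyze();

  // Fail the build on anything axe can detect; manual keyboard and
  // screen reader checks still cover what it can't.
  expect(results.violations).toEqual([]);
});
```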
The common failure mode: ship the feature, run axe in QA, fix 20 issues, and still get the bug report "Screen reader announces this wrong." Manual testing happens too late.
Quick Check
AI suggested adding `aria-label` to a div that acts as a button. What's the correct fix?
Do This Next
- Run an a11y audit on one page or component. Compare axe results to a 5-minute manual keyboard/screen reader check. Note what automation missed.
- Ask AI to fix one a11y issue (e.g., missing alt, bad contrast). Review the fix. Did it actually improve things, or did it add noise?