Safety-Critical Systems and AI
5 min read
Embedded
DO-178C, ISO 26262, IEC 62304: these standards require human authorship and traceability. AI can assist; it can't own.
TL;DR
- Safety-critical code (avionics, automotive, medical) requires certification. Certification requires traceability, human authorship, and evidence.
- AI-generated code is hard to certify. No clear lineage. No human who "owns" the decision. Regulatory bodies are cautious.
- AI can assist: documentation, test ideas, static analysis. It cannot be the author of safety-critical paths. You are.
If your firmware controls brakes, flight surfaces, or medical devices, the rules are different. DO-178C (avionics), ISO 26262 (automotive), IEC 62304 (medical). These standards require traceable requirements, verified implementation, and human accountability. Unowned AI-generated code breaks that chain.
Why Certification Bodies Care
- Traceability. Every line of safety-critical code should trace to a requirement. Who wrote it? Why? AI has no "author" in that sense. (A traceability header sketch follows this list.)
- Determinism. Same input, same output. AI generation isn't reproducible, and its output can hide subtle non-determinism. Hard to rule out, hard to prove.
- Review and evidence. Certification requires evidence of process. Human review of AI output? Possible. But the burden of proof increases.
- Liability. When something fails, who's responsible? "The AI wrote it" isn't a defensible answer in court.
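To make the traceability point concrete, here is a minimal sketch of the kind of annotated function a reviewer can audit. The requirement ID, names, and comment format are hypothetical, not taken from any of the standards; the point is that the code maps to a requirement and to a named human.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical threshold taken from the (made-up) requirement SRS-BRK-012. */
#define PEDAL_THRESHOLD_COUNTS  200u

/*
 * Requirement : SRS-BRK-012  (hypothetical requirement ID)
 * Author      : J. Doe       (human author of record)
 * Review      : RR-2041      (hypothetical review record)
 * AI usage    : none - safety-critical path
 */
static bool brake_demand_active(uint16_t pedal_position)
{
    /* Deterministic: same input, same output, no hidden state. */
    return pedal_position >= PEDAL_THRESHOLD_COUNTS;
}
```

An auditor can walk from the function to the requirement to a named human. AI output has no equivalent chain until a human adopts authorship of it.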
What the Standards Say (As of 2026)
- DO-178C: No explicit AI ban. But code-generation tools require tool qualification (covered by the DO-330 supplement). Qualifying an LLM is nontrivial.
- ISO 26262: Similar. Part 8 assigns tool confidence levels; AI as a code-generating "tool" would need qualification accordingly. Work in progress.
- IEC 62304: Risk-based. Software safety classes A through C; class C demands the most rigorous process. AI-generated code adds verification load.
Regulation is evolving. Conservative approach: AI assists, humans author and verify safety-critical paths.
The Practical Split
AI can do:
- Non-safety code. Logging, diagnostics, UI.
- Documentation. Requirements, test plans. Human-reviewed.
- Test case generation. Ideas for coverage. You select and implement.
- Static analysis suggestions. Fixes for MISRA violations. You approve. (A before/after sketch follows this list.)
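As an example of the static-analysis bullet, a hedged before/after sketch of the kind of fix an assistant might propose: a switch with no default clause, which MISRA C:2012 flags (Rule 16.4, every switch shall have a default label). The module and names are made up; the human reviewer still decides whether the fallback behaviour is right.

```c
#include <stdint.h>

typedef enum { MODE_IDLE, MODE_RUN, MODE_FAULT } sys_mode_t;

/* Before: non-safety LED driver, flagged by static analysis because the
 * switch has no default label (MISRA C:2012 Rule 16.4). */
static uint8_t led_pattern_before(sys_mode_t mode)
{
    uint8_t pattern = 0u;
    switch (mode) {
    case MODE_IDLE:  pattern = 0x01u; break;
    case MODE_RUN:   pattern = 0x03u; break;
    case MODE_FAULT: pattern = 0x0Fu; break;
    }
    return pattern;
}

/* After: AI-suggested fix, human-reviewed. Unexpected values now route to
 * the fault pattern instead of silently returning zero. */
static uint8_t led_pattern(sys_mode_t mode)
{
    uint8_t pattern;
    switch (mode) {
    case MODE_IDLE:  pattern = 0x01u; break;
    case MODE_RUN:   pattern = 0x03u; break;
    case MODE_FAULT: pattern = 0x0Fu; break;
    default:         pattern = 0x0Fu; break;
    }
    return pattern;
}
```

The fix itself is trivial; the judgment about what an unexpected mode should do is not, and that judgment stays with you.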
AI cannot own:
- Safety-critical control loops. Brakes, steering, life support.
- Interrupt handlers with timing guarantees.
- Redundant or fault-tolerant logic. You design; you implement. (A voter sketch follows this list.)
- Certification evidence. Traceability matrices, review records. Human-generated.
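To show what the fault-tolerant bullet means in practice, a minimal human-designed sketch: a 2-out-of-3 voter over redundant sensor channels. The tolerance, names, and fail-safe behaviour are illustrative design choices that a human has to make, defend at audit, and own.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical agreement tolerance in raw ADC counts. */
#define AGREE_TOL_COUNTS  2u

static bool channels_agree(uint16_t a, uint16_t b)
{
    uint16_t diff = (a > b) ? (a - b) : (b - a);
    return diff <= AGREE_TOL_COUNTS;
}

/* 2-out-of-3 majority vote. Writes the voted value and returns true if at
 * least two channels agree; returns false so the caller can enter its safe
 * state otherwise. */
static bool vote_2oo3(uint16_t s1, uint16_t s2, uint16_t s3, uint16_t *voted)
{
    bool ok = true;

    if (channels_agree(s1, s2))      { *voted = s1; }
    else if (channels_agree(s1, s3)) { *voted = s1; }
    else if (channels_agree(s2, s3)) { *voted = s2; }
    else                             { ok = false; }  /* no majority: fail safe */

    return ok;
}
```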
How to Use AI in Safety Contexts
- Segregate. Safety-critical in one module. Non-critical in another. AI touches only non-critical.
- Review everything. If AI generated it, treat it as vendor code. Full review, no exceptions.
- Document. "This module was AI-assisted. Human author: X. Review: Y. Traceability: Requirement Z." (A file-header sketch follows this list.)
- Stay conservative. When in doubt, human-write. Certification cost of AI adoption may exceed the benefit.
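One way to implement the "Document" bullet: a file-level header in the non-critical partition that records AI involvement the same way you would record vendor code. The field names and IDs are a hypothetical format, not something the standards prescribe.

```c
/*
 * File         : diag_log.c          (non-safety partition)
 * Safety class : non-critical        (hypothetical classification)
 * AI assisted  : yes - first draft generated, then reviewed line by line
 * Human author of record : J. Doe
 * Review record          : RR-2107        (hypothetical)
 * Traceability           : SRS-DIAG-004   (hypothetical requirement ID)
 *
 * Treated as vendor code: full review before merge, no exceptions.
 */
```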
Quick Check
What must remain human when AI assists with more of your firmware?
Do This Next
- Map your firmware to safety vs. non-safety. What can AI assist? What must be human-only? Document the boundary. (A manifest sketch follows this list.)
- If you're in a certified domain, check your org's or auditor's stance on AI-assisted code. Get it in writing. Don't assume.
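If you want the boundary to live in the repo rather than in a wiki, one hedged option is a single header that lists each module's safety class and AI policy, which reviews and CI checks can reference. Module names, classes, and the macro format below are all hypothetical.

```c
/* safety_boundary.h - single source of truth for the AI-assistance boundary. */

enum ai_policy { AI_NOT_PERMITTED, AI_ASSIST_WITH_FULL_REVIEW };

/* X-macro table: module, safety class (your standard's vocabulary, e.g.
 * ISO 26262 ASIL levels), AI policy. Entries are hypothetical examples. */
#define MODULE_TABLE(X)                                        \
    X(brake_ctrl, "ASIL D", AI_NOT_PERMITTED)                  \
    X(motor_pwm,  "ASIL C", AI_NOT_PERMITTED)                  \
    X(diag_log,   "QM",     AI_ASSIST_WITH_FULL_REVIEW)        \
    X(cli_shell,  "QM",     AI_ASSIST_WITH_FULL_REVIEW)
```

A review checklist or CI script can then flag any change that touches an AI_NOT_PERMITTED module without a human author of record.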