Security Scanning With AI

5 min read

  • Appsec: AI finds the OWASP Top 10. It misses business-logic flaws and "would an attacker abuse this flow?" You own threat modeling.
  • Pentest: AI automates the script-kiddie stuff. Creative exploitation and social engineering? Still human.
  • Security Arch: AI suggests controls. It doesn't know your risk appetite or what the board will tolerate. You do.


TL;DR

  • AI is excellent at finding known vulnerability patterns: SQL injection, XSS, hardcoded secrets, outdated deps.
  • AI misses business-logic flaws, novel attack vectors, and "does this fit our threat model?"
  • Use AI for breadth. You provide depth and context.

Security tooling has had "AI" in the name for a while. Some of it is marketing. Some of it is genuinely useful. Here's what actually works.

What AI Finds Well

Known Vulnerability Patterns

  • OWASP Top 10: SQL injection, XSS, CSRF, insecure deserialization. AI has seen these in training. It flags them.
  • Hardcoded secrets: API keys, passwords in config. Pattern matching. AI nails it (a minimal sketch follows this list).
  • Outdated dependencies: CVEs in your lockfile. AI and ML-backed tools correlate them fast. Good for prioritization.
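
To make "pattern matching" concrete, here's a minimal sketch of the kind of check a secret scanner runs. The patterns and the walk over the repo are illustrative, not any particular tool's rules:

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship hundreds of tuned rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[=:]\s*['\"][^'\"]{16,}['\"]"),
    "private_key_header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) hits for one file."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

for path in Path(".").rglob("*"):
    if not path.is_file():
        continue
    for lineno, name in scan_file(path):
        print(f"{path}:{lineno}: possible {name}")
```

Real tools add entropy checks and provider-specific rules on top, but the takeaway stands: this class of bug is mechanical to find.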

Static Analysis at Scale

  • SAST (Static Application Security Testing): AI can augment rules. "This looks like a path traversal." "This input flows to an exec call." A toy version of that check appears after this list.
  • Secret scanning: Regex + ML for accidental commits. Works well.
  • Config review: Kubernetes manifests, IAM policies. "This role has admin." AI spots over-permissioned config.
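
"This input flows to an exec call" sounds abstract, so here's a toy version of that SAST check: walk the AST and flag dangerous sinks called with anything that isn't a constant. Real tools do full dataflow analysis; this is just the shape of the idea, in plain Python:

```python
import ast
import sys

# Calls we treat as dangerous sinks for this toy check.
DANGEROUS_CALLS = {"system", "eval", "exec", "popen"}

def call_name(node: ast.Call) -> str:
    """Extract the bare function name from a call node."""
    func = node.func
    if isinstance(func, ast.Attribute):
        return func.attr   # e.g. os.system -> "system"
    if isinstance(func, ast.Name):
        return func.id     # e.g. eval -> "eval"
    return ""

tree = ast.parse(open(sys.argv[1]).read())
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and call_name(node) in DANGEROUS_CALLS:
        # Constant arguments are usually safe; anything dynamic gets flagged.
        if any(not isinstance(arg, ast.Constant) for arg in node.args):
            print(f"line {node.lineno}: dynamic argument reaches {call_name(node)}()")
```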

What AI Misses

Business-Logic Flaws

  • "If I refund and reorder, do I get double credit?" — That's product logic. AI doesn't know your business rules. (A concrete example follows this list.)
  • "Can I escalate my own ticket to priority?" — Authorization logic. AI sees code; it doesn't see "this shouldn't be possible."
  • "What if someone abuses the invite flow to spam?" — Abuse cases. Human creativity.
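
Here's the refund scenario as code. This is a hypothetical store-credit service, written to show why scanners pass it: no injection, no secret, nothing pattern-shaped. The bug is that nothing says an order can only be refunded once:

```python
credits: dict[str, float] = {}   # user_id -> store credit
orders = {
    "order-1": {"user": "alice", "total": 50.0, "status": "delivered"},
}

def refund(order_id: str) -> None:
    order = orders[order_id]
    credits[order["user"]] = credits.get(order["user"], 0.0) + order["total"]
    # BUG: status never changes to "refunded" and nothing checks it above,
    # so calling refund() twice credits the user twice. A scanner sees
    # valid code; only your business rules say this is wrong.

refund("order-1")
refund("order-1")
print(credits)   # {'alice': 100.0}: double credit
```

The fix is one status check. Knowing that the check belongs there is business knowledge, not pattern recognition.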

Novel and Chained Attacks

  • Zero-days: AI has seen known exploits. It hasn't seen tomorrow's.
  • Chained exploits: "Vuln A + Vuln B + misconfig C = RCE." AI might find A, B, C separately. Connecting them? Human.
  • Social engineering: No code to scan. AI can't pentest the human layer.

Context and Risk

  • "Is this vulnerability in our threat model?" — Maybe you accept risk on an internal tool. AI flags everything.
  • "What's our actual exposure?" — Is this service internet-facing? What data does it touch? AI doesn't know your architecture. (The sketch below shows one way to hand it that context.)
  • False positive fatigue: AI can over-flag. Tuning and triage — human work.
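
One way to close that gap is to keep a small asset-context map and re-rank raw findings with it. The fields and weights below are illustrative; what matters is that the context comes from you, not the scanner:

```python
# Asset context the scanner doesn't have. You maintain this.
ASSETS = {
    "billing-api": {"internet_facing": True,  "data": "payment"},
    "admin-wiki":  {"internet_facing": False, "data": "internal"},
}

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}
DATA_WEIGHT = {"internal": 1, "pii": 2, "payment": 3}

def priority(finding: dict) -> int:
    """Re-rank a raw finding using architecture context, not just severity."""
    asset = ASSETS.get(finding["asset"], {"internet_facing": True, "data": "pii"})
    score = SEVERITY[finding["severity"]] * DATA_WEIGHT[asset["data"]]
    if asset["internet_facing"]:
        score *= 2
    return score

findings = [
    {"asset": "admin-wiki",  "severity": "critical", "issue": "outdated dep"},
    {"asset": "billing-api", "severity": "medium",   "issue": "reflected XSS"},
]
for f in sorted(findings, key=priority, reverse=True):
    print(priority(f), f["asset"], f["issue"])
```

Note the outcome: an internet-facing medium on payment data outranks an internal critical. No scanner reaches that conclusion on its own.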

How to Use AI for Security

  1. Run AI-assisted scans as one input. Don't make it the only input. Combine with manual review and threat modeling.
  2. Tune for your context. Disable checks that don't apply. Prioritize by asset criticality. AI gives raw signal; you filter. (Steps 2 and 3 are sketched in code after this list.)
  3. Use AI for triage, not decisions. "Is this a real vuln or a false positive?" AI can suggest. You confirm.
  4. Keep threat modeling human-led. AI can suggest controls. It can't own "what are we protecting and from whom?"
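
A minimal sketch of steps 2 and 3 together, assuming a hypothetical scanner output format and made-up rule IDs. AI output goes in; a filtered queue that still requires a human decision comes out:

```python
# Hypothetical rule IDs; real scanners have their own naming schemes.
SUPPRESSED_RULES = {
    "missing-csrf-token",   # internal tool, accepted risk
    "http-no-tls",          # TLS terminated at the load balancer
}

def triage(raw_findings: list[dict]) -> list[dict]:
    """Filter AI signal down to a human-review queue. AI suggests; you confirm."""
    queue = []
    for finding in raw_findings:
        if finding["rule"] in SUPPRESSED_RULES:
            continue                             # tuned out for our context
        finding["status"] = "needs-human-review"  # never auto-close or auto-fix
        queue.append(finding)
    return queue

raw = [
    {"rule": "sql-injection", "file": "api/search.py"},
    {"rule": "http-no-tls",   "file": "deploy/svc.yaml"},
]
for finding in triage(raw):
    print(finding["status"], finding["rule"], finding["file"])
```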

Quick Check

AI finds OWASP Top 10 issues and outdated deps. What will it NOT find?

AI Disruption Risk for Security Engineer / AppSec

Moderate Risk (on a scale from Safe to Critical)

AI automates SAST, secret scanning, and OWASP checks — great for breadth. But business-logic flaws, novel attacks, and "does this fit our threat model?" are human work. Use AI for signal; you own triage and threat modeling.

Do This Next

  1. Run one AI-assisted security scan on a project you know. Compare its findings to what you'd manually flag. What did it miss?
  2. Document one business-logic risk AI wouldn't find. Make it part of your threat model. That's your value add.