
Test Writing With AI

5 min read

QA

AI tests cover the happy path. They miss edge cases, accessibility, and 'what would a malicious user do?' You own strategy.

Test Automation

AI-generated tests are maintainable if you own the patterns. Let AI write the first draft; you own the framework and structure.

Frontend

AI writes component tests. It doesn't know your design system's breakpoints or that 'Submit' is disabled until the form is valid.


TL;DR

  • AI is great at generating unit tests for well-structured, single-responsibility functions.
  • AI struggles with integration tests, E2E flows, and "what should we actually test?"
  • Use AI for coverage of the obvious. You decide what's worth testing and what edge cases matter.

"AI can write your tests" is half true. It can write some tests. Whether they're the right tests is your call.

What AI Does Well

Unit Tests for Pure Functions

Prompt: "Write Jest tests for this function that calculates order total."

What you get: Tests for valid inputs, edge cases like zero and negative (if you ask), maybe a few boundary values. Often correct. Fast to generate.

Why it works: Pure functions have clear inputs and outputs. AI can enumerate cases. Low context, high structure.
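
For instance, the output usually looks something like this. A minimal sketch: calculateOrderTotal and its signature are hypothetical stand-ins for your function, not from any real codebase.

```ts
// Hypothetical function under test; your real signature will differ.
function calculateOrderTotal(
  items: { price: number; qty: number }[],
  taxRate = 0,
): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  return subtotal * (1 + taxRate);
}

describe('calculateOrderTotal', () => {
  it('sums price * quantity across items', () => {
    expect(calculateOrderTotal([{ price: 10, qty: 2 }, { price: 5, qty: 1 }])).toBe(25);
  });

  it('returns 0 for an empty cart', () => {
    expect(calculateOrderTotal([])).toBe(0);
  });

  // The "if you ask" cases: boundaries like tax applied to the subtotal.
  it('applies the tax rate to the subtotal', () => {
    expect(calculateOrderTotal([{ price: 100, qty: 1 }], 0.1)).toBeCloseTo(110);
  });
});
```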

Test Skeletons for Components

Prompt: "Write React Testing Library tests for this LoginForm component."

What you get: A render check. Maybe an assertion that a button exists. Possibly a fill-and-submit happy path.

What's missing: Accessibility assertions. Keyboard navigation. "What if the API returns 401?" Error states. Loading states. Your actual UX flows.
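
A sketch of the gap. Everything here is assumed: a hypothetical LoginForm that takes an onSubmit prop and renders a role="alert" element on failure, plus jest-dom matchers. The first test is what AI gives you; the second is what you add.

```tsx
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { LoginForm } from './LoginForm'; // hypothetical component

// The skeleton AI typically produces: render, fill, click. No error paths.
it('submits the form', async () => {
  const onSubmit = jest.fn();
  render(<LoginForm onSubmit={onSubmit} />);
  await userEvent.type(screen.getByLabelText(/email/i), 'jane@example.com');
  await userEvent.type(screen.getByLabelText(/password/i), 'hunter2');
  await userEvent.click(screen.getByRole('button', { name: /submit/i }));
  expect(onSubmit).toHaveBeenCalled();
});

// The kind of test you add yourself: what happens when the API says no?
it('shows an error when credentials are rejected (401)', async () => {
  const onSubmit = jest.fn().mockRejectedValue({ status: 401 });
  render(<LoginForm onSubmit={onSubmit} />);
  await userEvent.type(screen.getByLabelText(/email/i), 'jane@example.com');
  await userEvent.type(screen.getByLabelText(/password/i), 'wrong');
  await userEvent.click(screen.getByRole('button', { name: /submit/i }));
  expect(await screen.findByRole('alert')).toHaveTextContent(/invalid/i);
});
```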

Mocking and Fixtures

Prompt: "Create mock data for a user with 3 orders."

What you get: A plausible JSON structure. Maybe TypeScript types.

What's missing: Data that triggers edge cases. Invalid data. Data that matches your actual API contract.
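
A sketch of the difference. The User and Order types here are hypothetical; align them with your real API contract.

```ts
// Hypothetical types -- replace with the shapes your API actually returns.
interface Order { id: string; total: number; status: 'paid' | 'refunded' | 'failed' }
interface User { id: string; email: string; orders: Order[] }

// What AI gives you: plausible, uniformly "nice" data.
export const mockUser: User = {
  id: 'user-1',
  email: 'jane@example.com',
  orders: [
    { id: 'order-1', total: 49.99, status: 'paid' },
    { id: 'order-2', total: 120.0, status: 'paid' },
    { id: 'order-3', total: 15.5, status: 'paid' },
  ],
};

// What you add: data that exercises edge cases.
export const mockUserWithEdgeCases: User = {
  ...mockUser,
  orders: [
    { id: 'order-1', total: 0, status: 'refunded' },   // zero total
    { id: 'order-2', total: -15.5, status: 'failed' }, // negative adjustment
    { id: 'order-3', total: 120.0, status: 'paid' },
  ],
};
```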

What AI Does Poorly

Integration Tests

Why: Integration tests depend on real (or realistic) services, DB state, and orchestration. AI doesn't know your infrastructure. It'll give you a sketch. You have to wire it up, handle flakiness, and decide what "success" means.

E2E and User Flows

Why: E2E tests are "click here, then here, then this should happen." AI can generate Playwright or Cypress code — but it doesn't know your app's flow, your selectors, or what "done" looks like. You end up rewriting half of it.
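
To make that concrete, here's the kind of Playwright sketch AI produces. The URL, labels, and success text are all guesses, and they're exactly what you end up replacing.

```ts
import { test, expect } from '@playwright/test';

// Structurally correct flow, but every selector and URL here is a guess.
test('user can check out', async ({ page }) => {
  await page.goto('https://example.com/cart'); // assumed URL
  await page.getByRole('button', { name: 'Checkout' }).click(); // assumed label
  await page.getByLabel('Card number').fill('4242 4242 4242 4242'); // assumed field
  await expect(page.getByText('Order confirmed')).toBeVisible(); // assumed success state
});
```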

Test Strategy

Why: "What should we test?" is a product and risk question. AI can't answer "we need to test payment flows because we've had 3 prod incidents." You decide scope, priority, and what's in vs. out of scope.

Flaky and Over-Specified Tests

Why: AI tends to assert on implementation details (class names, internal state) or on timing, producing tests that break when you refactor. You spend more time fixing tests than writing them. Human review catches this.
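
Here's the pattern in miniature, using a hypothetical SubmitButton component and jest-dom matchers. The first test is the over-specified style AI tends to produce; the second is the behavior-focused version that survives a refactor.

```tsx
import { render, screen } from '@testing-library/react';
import { SubmitButton } from './SubmitButton'; // hypothetical component

// Over-specified: breaks when a class name or DOM structure changes.
it('brittle: asserts on implementation details', () => {
  const { container } = render(<SubmitButton />);
  expect(container.querySelector('div > button.btn.btn-primary')).not.toBeNull();
});

// Behavior-focused: asserts on what the user actually sees.
it('resilient: asserts on the user-visible contract', () => {
  render(<SubmitButton />);
  expect(screen.getByRole('button', { name: /submit/i })).toBeEnabled();
});
```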

How to Use AI for Testing

  1. Generate first draft. "Write unit tests for this function." Get coverage of the obvious.
  2. Review and prune. Delete tests that don't matter. Fix assertions that are too brittle.
  3. Add what AI missed. Edge cases. Error paths. "What if the user does X?" That's your domain knowledge.
  4. Own the framework. AI doesn't know your team's patterns. Consistent structure, naming, and shared setup are on you; see the sketch after this list.
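
One way to own the pattern is a shared render helper that every test goes through. This is a minimal sketch: renderWithProviders and AppProviders are hypothetical names for whatever your team actually uses.

```tsx
import { render } from '@testing-library/react';
import { ReactElement, ReactNode } from 'react';

// Stand-in for your real provider stack (theme, router, query client, ...).
function AppProviders({ children }: { children: ReactNode }) {
  return <>{children}</>;
}

// You define this once; AI-generated tests are edited to conform to it.
export function renderWithProviders(ui: ReactElement) {
  return render(<AppProviders>{ui}</AppProviders>);
}
```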

Quick Check

AI generates Jest tests for a pure function. What's typically missing that you need to add?

Without AI: you manually write every test (happy path, edge cases, mocks, fixtures), hours per component. Or you skip tests because it's tedious.

Click "With AI" to see the difference →

Do This Next

  1. Generate tests for one function or component. Review the output. How many tests would you keep? What would you add?
  2. Document one "AI test that's wrong or useless." Build your intuition for when AI tests are worth the review time.