
Navigating Ambiguity

5 min read

TPM

Your job: turn "we need better analytics" into something buildable. AI needs that output. You produce it.

Tech Lead

Stakeholders say different things. You synthesize. AI can't do the synthesis — it wasn't in the three meetings.

Solutions Eng

The customer said "integration" but meant five different things. You tease that out. AI gets clean input from you.


TL;DR

  • Requirements are messy, contradictory, and incomplete. That's normal.
  • AI needs clear, structured input. Garbage in, garbage out.
  • Your job: turn ambiguity into clarity. AI works downstream of you.

AI is great when the problem is well-defined. "Write a function that does X." "Create an API with these endpoints." Clean input, clean output. Real work is messier. Someone has to clean it first. That someone is you.

Why Ambiguity Breaks AI

AI Assumes Clarity

  • "Build a dashboard." — What metrics? For whom? Real-time or batch? AI will guess. Its guess might be wrong.
  • "Improve performance." — Latency? Throughput? Perceived speed? AI doesn't know. It optimizes something. Maybe the wrong thing.
  • "Make it scalable." — 10 users or 10 million? AI assumes. You know.

Contradictory Inputs

  • Product says: "We need it fast." Engineering says: "We need it right." Sales says: "We need it by quarter end." AI can't resolve that. You broker.
  • "It should be simple and feature-rich." — Trade-off. AI might give you both and create a mess. You decide what "simple" means in practice.

Missing Information

  • "The customer wants better reporting." — What reports? What's "better"? Who's the audience? AI fills in generic answers. You discover the real requirements.
  • "We need to migrate off the legacy system." — Why? What's the trigger? What's in scope? AI can't run the discovery workshop. You do.

Your Value: The Clarification Layer

You Ask the Questions

  • "When you say X, do you mean A or B?" — Disambiguation. AI gets the answer. You get the question.
  • "What's out of scope?" — Boundaries. AI doesn't know what not to build. You define it.
  • "What does success look like?" — Criteria. AI optimizes for something. You define what.

You Synthesize

  • Three stakeholders, three opinions. You produce one coherent spec. AI works from that. You created it.
  • "We need A, B, and C — but we only have budget for two." Prioritization. AI can't do it. You do.

You Iterate

  • The first version of the requirements is always wrong. You learn that in review. You update. AI gets better input in round 2. You're the feedback loop.

How to Use This as a Moat

  1. Own the discovery. Before you prompt AI, do the messy work: interviews, whiteboarding, "what does that actually mean?" The better your input, the better AI's output. And the input is your skill.
  2. Document assumptions. When you give AI a task, write down what you're assuming. "We're optimizing for X, not Y." "Out of scope: Z." That doc is the contract. AI executes. You own the contract.
  3. Treat AI as a tool for the clarified problem. Once you've reduced ambiguity, AI accelerates. Before that, it amplifies confusion. Order matters.
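Step 2 above can be sketched as a short assumptions doc. Everything specific here (the project, the numbers, the owners) is a hypothetical example; the shape is what matters: an optimization target, explicit non-goals, and open questions with owners.

```
# Assumptions — reporting dashboard (hypothetical example)
Optimizing for:   analyst self-serve, not real-time ops
Scale assumption: ~200 internal users, batch refresh hourly
Out of scope:     mobile layout, PDF export
Open questions:   which metrics matter most? (owner: PM, due Friday)
```

AI executes against this. Any line that turns out wrong is a cheap edit in the doc, not expensive rework in code.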

Quick Check

Product says "We need it fast." Engineering says "We need it right." Sales says "By quarter end." What can AI do with that?

You get "Build a dashboard." You build something. It's wrong. "We wanted X, not Y." Rework. Or you spend weeks in meetings before you touch code.


Do This Next

  1. Take one vague ask from your backlog (or invent one). Write down 5 clarifying questions you'd ask before you could build it. That's the ambiguity-navigation skill. Practice it.
  2. Before your next AI prompt, add one explicit constraint. "Assume we're on Postgres." "Assume we have 2 months." See how it changes the output. Control the ambiguity.
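Step 2, sketched. The specifics (the job queue, Postgres) are illustrative, not prescriptive; the point is how one explicit constraint removes a whole axis of guessing.

```
Before (ambiguous — AI guesses the stack, the scale, the deadline):
  "Design a background job queue for our app."

After (one constraint added — the output narrows to something usable):
  "Design a background job queue for our app. Assume Postgres is
   the only datastore we can use; no Redis, no new infrastructure."
```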