Prompt Engineering: Not Just for Code

5 min read

Tech Writer: Same principles apply to docs, tutorials, and API references. Good prompts = better drafts.

Solutions Eng: Discovery questions, proposal outlines, demo scripts — all improve with better prompting.

TL;DR

  • Prompting well is a universal skill. It's how you steer any LLM, for any task.
  • Good prompts = context + role + task + output format + constraints. Not magic phrases.
  • The best prompters are clear thinkers. They know what they want before they type.

"Prompt engineering" sounds like a coding thing. It's not. It's the skill of giving AI enough context, direction, and structure so it produces something useful instead of something generic or wrong.

The Anatomy of a Decent Prompt

Think of it as a brief, not a magic spell. Every effective prompt has (most of) these elements:

  1. Context — What's the situation? "I'm debugging a Python service that times out under load."
  2. Role — What should the AI act as? "You're a senior SRE helping me trace a latency issue."
  3. Task — What exactly do you need? "Summarize these 3 log snippets and suggest the most likely root cause."
  4. Format — How should the output look? "Give me a bullet list, then one paragraph recommendation."
  5. Constraints — What to avoid? "Don't suggest we add more logging; we've already tried that."

You don't need all five every time. But the more ambiguous the task, the more of these you need.
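
If you assemble prompts programmatically, the brief maps straight onto a small helper. Here's a minimal Python sketch; the build_prompt helper and its labels are illustrative, not any standard API:

```python
# Minimal sketch: compose the five elements of a prompt brief into one string.
# The function name and labels are illustrative, not a standard library API.

def build_prompt(context: str, role: str, task: str,
                 output_format: str = "", constraints: str = "") -> str:
    """Assemble a prompt brief; optional elements are skipped when empty."""
    parts = [
        f"Context: {context}",
        f"Act as: {role}",
        f"Task: {task}",
    ]
    if output_format:
        parts.append(f"Output format: {output_format}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

prompt = build_prompt(
    context="I'm debugging a Python service that times out under load.",
    role="a senior SRE helping me trace a latency issue",
    task="Summarize these 3 log snippets and suggest the most likely root cause.",
    output_format="A bullet list, then a one-paragraph recommendation.",
    constraints="Don't suggest adding more logging; we've already tried that.",
)
print(prompt)
```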

The Big Mistake: Vague Requests

Bad: "Help me with this error."
Good: "I'm seeing this Java NullPointerException in production. Stack trace below. We're on Java 17, Spring Boot 3. No recent deploys. What are the 3 most likely causes?"

Bad: "Write something about our API."
Good: "Write a 2-paragraph intro for our REST API docs. Audience: external developers integrating with us. Tone: friendly but professional. Include that we support webhooks and rate limiting."

The gap isn't intelligence. It's specificity.

Prompting Patterns That Work Across Roles

  • Role + task (any domain): "As a data architect, suggest a schema for event streaming with GDPR retention."
  • Few-shot (when output format matters): "Here are 2 examples of our incident summaries. Write one for this alert."
  • Chain of thought (complex reasoning): "Think step by step: given these metrics, what's the most likely failure mode?"
  • Constraints first (to avoid bad answers): "Don't recommend Kubernetes. We're on VMs. Suggest VM-based solutions only."

These work for code, docs, analysis, support responses, and more.
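
Few-shot prompting is the easiest one to see in code: it's a chat transcript you build yourself. A minimal Python sketch follows; the incident examples are invented placeholders, and the role/content message shape is the common format most chat-style LLM APIs accept:

```python
# Minimal few-shot sketch. The two example summaries and the new alert are
# invented placeholders; swap in your team's real house-style examples.

examples = [
    ("Disk usage alert on db-01",
     "Summary: db-01 hit 92% disk. Cause: unrotated WAL files. Fix: rotation re-enabled."),
    ("5xx spike on checkout service",
     "Summary: 4-minute 5xx spike on checkout. Cause: bad config push. Fix: rolled back."),
]

messages = [{"role": "system",
             "content": "You write incident summaries in our house style. Match the examples."}]
for alert, summary in examples:
    messages.append({"role": "user", "content": f"Write an incident summary for: {alert}"})
    messages.append({"role": "assistant", "content": summary})

# The alert we actually want summarized.
messages.append({"role": "user",
                 "content": "Write an incident summary for: p99 latency on payments API above 2s for 10 minutes."})

# Pass `messages` to whichever chat-completion API you use; the example turns
# steer the model toward the same structure as your two samples.
```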

When to Iterate vs. Start Over

First response bad? Often it's quicker to refine: "Same task, but shorter" or "Add a section on error handling." Don't assume the AI "failed." Assume the prompt was incomplete.

If you've iterated 3+ times and it's still off, restart with a cleaner prompt. Sometimes a fresh framing fixes it.
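
In code, refine vs. restart is just a question of whether you keep the conversation history. A minimal sketch; chat() is a stand-in for whichever LLM client you actually use, and the drafts are placeholders:

```python
# Minimal sketch of refine vs. restart. `chat(messages)` is a placeholder for
# your actual LLM client call; the draft content is invented.

history = [
    {"role": "user", "content": "Draft release notes for v2.3 from this changelog: ..."},
    {"role": "assistant", "content": "<first draft: too long, wrong audience>"},
]

# Refine: keep the history and add one correction. Cheap, preserves context.
history.append({"role": "user",
                "content": "Same task, but half the length and add a section on breaking changes."})
# reply = chat(history)

# Restart: after 3+ rounds that still miss, drop the history and write one
# cleaner prompt that folds in everything you learned from the failed attempts.
fresh = [{"role": "user", "content": (
    "Draft release notes for v2.3. Audience: self-hosted admins. Max 200 words. "
    "Sections: Breaking changes, New features, Fixes. Changelog: ...")}]
# reply = chat(fresh)
```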

The same failure mode shows up everywhere: 'Help me with this error.' 'Write something about our API.' 'Make this code better.' 'I need a status update.' Each one leaves out the context, the concrete task, and the output format that would make the answer useful.

Quick Check

You ask the AI: 'Write a test for this function.' The output is a generic unit test that doesn't match your testing framework. What's the most efficient fix? Refine the same prompt rather than starting over: name your framework and paste one existing test as an example to match.

Do This Next

  1. Rewrite one prompt you used recently. Add context, role, and output format. Run it again. Compare.
  2. Save 3 prompt templates for tasks you do often (e.g., "explain this error," "draft status update," "summarize meeting notes"). Reuse and refine them; a starting point is sketched below.
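
Saved templates don't need tooling; a dictionary of format strings is enough to start. A minimal Python sketch; the template names and fields are illustrative, and the error details reuse the Spring Boot example from earlier:

```python
# Minimal sketch of reusable prompt templates; names and fields are illustrative.

TEMPLATES = {
    "explain_error": (
        "I'm seeing this error in {environment} on {stack}.\n"
        "Error: {error}\n"
        "What are the 3 most likely causes? Bullet list, most likely first."
    ),
    "status_update": (
        "Draft a status update for {audience}.\n"
        "Done: {done}\nIn progress: {in_progress}\nBlocked: {blocked}\n"
        "Tone: concise, no fluff. Max 120 words."
    ),
}

prompt = TEMPLATES["explain_error"].format(
    environment="production",
    stack="Java 17, Spring Boot 3, no recent deploys",
    error="NullPointerException in the order service",
)
```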