Why PMs should care about prompt engineering
The boring 80% of PM work — drafting specs, summarizing research, enumerating edge cases, writing stakeholder updates — is exactly what current LLMs are best at. A PM who has internalised five or six prompt patterns will save 5 to 10 hours a week. That time goes back into the 20% that actually moves the product: customer calls, hard tradeoffs, team coaching.
This is not about replacing your judgment. It is about removing the friction between your judgment and the artifacts that communicate it.
The five patterns I use weekly
Pattern 1: Spec drafting
The hardest part of writing a PRD is the first paragraph. Once you have a draft, editing is fast. The LLM is great at generating drafts.
The prompt I use:
You are an expert Technical Product Manager. Draft a one-page PRD for the following feature:
[Paste 3-5 sentence description of the feature]
Use this structure: Problem, User, Solution, Out of scope, Open questions, Success metrics. Keep it concise. Flag anything you are uncertain about.
The output is rough. I rewrite about 60% of it. But a draft that is even 40% right beats a blank page every time.
Pattern 2: Stakeholder summary
End-of-week updates, board prep, exec syncs. The same content needs three different framings for three different audiences.
The prompt I use:
Summarize the following weekly product update for [audience: CEO / engineering team / board].
[Paste raw update]
Constraints:
- [CEO: focus on customer impact and revenue]
- [Engineering: focus on tech debt and unblocks]
- [Board: focus on KPIs and strategic direction]
Keep it to 5 bullets. No fluff.
This pattern is pure leverage. One raw update becomes three audience-tailored summaries in 90 seconds.
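The fan-out is mechanical enough to script. A minimal sketch, assuming you paste the generated prompts into whatever LLM you use (the template text mirrors the pattern above; the audience/focus pairings are from the constraints list):

```python
# Pattern 2 as code: one raw update becomes one filled-in prompt per audience.
# Sending the prompts to an LLM is left out; this only does the templating.

SUMMARY_TEMPLATE = """Summarize the following weekly product update for {audience}.

{update}

Constraints:
- Focus on {focus}.
- Keep it to 5 bullets. No fluff."""

AUDIENCES = {
    "the CEO": "customer impact and revenue",
    "the engineering team": "tech debt and unblocks",
    "the board": "KPIs and strategic direction",
}

def build_summary_prompts(raw_update: str) -> dict[str, str]:
    """Return one audience-tailored prompt per entry in AUDIENCES."""
    return {
        audience: SUMMARY_TEMPLATE.format(
            audience=audience, update=raw_update.strip(), focus=focus
        )
        for audience, focus in AUDIENCES.items()
    }

prompts = build_summary_prompts("Shipped v2 onboarding. Churn down 1.2%.")
print(prompts["the board"])
```

Keeping the audience/focus pairs in one dict means adding a fourth audience is a one-line change.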
Pattern 3: Edge case enumeration
One of the most common causes of buggy releases is unenumerated edge cases. LLMs are excellent at generating exhaustive lists.
The prompt I use:
You are a senior QA engineer reviewing this feature spec:
[Paste spec or user story]
Generate a comprehensive list of edge cases the implementation must handle. Group them by: input edge cases, state edge cases, concurrency edge cases, failure mode edge cases, security edge cases. Be exhaustive.
The output is always 30-60 cases. About half are obvious. The other half catches things I would otherwise have missed in spec review and shipped to production.
Pattern 4: User research synthesis
You have 8 customer interview transcripts. You need themes by Friday. This used to take a full day; now it takes 30 minutes.
The prompt I use:
You are a senior UX researcher. Below are [N] customer interview transcripts about [topic].
[Paste transcripts]
Identify:
- The top 5 recurring themes, with quotes from at least 2 different interviews per theme
- The strongest single quote that captures each theme
- Any contradictions between interviews — where customers disagree
- Research gaps — questions we should have asked but did not
The contradictions section is the most valuable. It surfaces assumptions in the team's mental model that the data does not support.
Pattern 5: Roadmap reasoning
When you have 20 candidate features and need to pick 5 for the next quarter, the LLM is not going to make the decision for you — but it will surface the tradeoffs you should be thinking about.
The prompt I use:
You are a senior product strategist. Here are 20 candidate features for our [product description] in the next quarter:
[Paste list with one-line description per feature]
Our strategic priority for the quarter is [insert priority — e.g., "reduce time-to-first-value for new users"].
Cluster the features into themes. For each cluster, identify:
- How well it aligns with the strategic priority
- Likely engineering effort (T-shirt sized: S/M/L/XL)
- Risk if we do not do it
- The strongest argument against doing it
End with your recommended top 5, with reasoning.
I rarely take the recommended top 5 verbatim. I always take the cluster analysis and the "strongest argument against." Those are the things I would have spent hours generating manually.
The prompts I actually use (template form)
I keep these in a personal Notion page and copy-paste them. The few minutes I spent writing them once save me hours every month.
If you want to start your own collection, the formula is:
- Set the role. "You are a senior X."
- Provide the input. Paste the raw material.
- Specify the output structure. Bullets, sections, table — whatever you need.
- Add constraints. Length limits, audience, tone.
- Ask for what you actually want. Be specific about the artifact, not the process.
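The formula above can be captured as a small template function. A sketch, assuming nothing beyond the five parts listed (the function name and example values are my own, not a prescribed API):

```python
# The five-part prompt formula: role, input, structure, constraints, ask.
# Pure string assembly; paste the result into your LLM of choice.

def build_prompt(role: str, raw_input: str, structure: str,
                 constraints: list[str], ask: str) -> str:
    """Assemble a prompt from the five parts of the formula."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"       # 1. set the role
        f"{ask}\n\n"                 # 5. the specific artifact you want
        f"{raw_input}\n\n"           # 2. the raw material
        f"Use this structure: {structure}.\n"  # 3. output structure
        f"Constraints:\n{constraint_lines}"    # 4. length, audience, tone
    )

prompt = build_prompt(
    role="a senior product marketing manager",
    raw_input="[Paste the launch brief here]",
    structure="Headline, Key message, Proof points, CTA",
    constraints=["Under 200 words", "Audience: existing customers", "Plain tone"],
    ask="Draft a launch announcement for the following feature:",
)
print(prompt)
```

Every template in this post is an instance of this shape; once the skeleton is a function, new patterns are just new argument sets.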
Things I do not do
I do not vibe-prompt for high-stakes work. Pricing decisions, hiring decisions, strategic positioning — I write these myself. The LLM is a draft generator, not a strategist.
I do not paste sensitive customer data into public LLMs. Use enterprise tiers, on-prem models, or anonymize before pasting. PII into ChatGPT is a compliance incident waiting to happen.
I do not chain prompts unnecessarily. Most PM work is a single prompt with a clear output. Multi-step agent chains add cognitive overhead and rarely improve quality at this scale.
I do not prompt without proofreading. The LLM will hallucinate facts, miscount, and flatten nuance. I read every output before it goes anywhere.
Closing thought
Prompt engineering for PMs is the most underrated productivity gain available right now. It is not glamorous, it does not look impressive on Twitter, and it requires you to be honest about which parts of your job are actually drafting work in disguise. Once you accept that, the leverage compounds.
If you want to talk about specific prompt patterns for your product or team, book a free 30-minute strategy call — I will share my full prompt library and you can take what is useful.