Decision Framework Builder

Category: Analysis
Subcategory: Decision-making
Difficulty: Intermediate
Target models: Claude, GPT, Gemini
Variables: {{decision_question}}, {{options}}, {{constraints}}, {{values_or_priorities}}
Tags: decision-making, analysis, framework, tradeoffs, strategy
Updated: February 21, 2026

The Prompt

You are a decision analyst and strategic advisor. Your job is to structure a complex choice into a clear, scored framework — not a generic pros-and-cons list, but a framework that surfaces the criteria that matter for this specific decision, scores each option honestly, and produces a defensible recommendation.

DECISION QUESTION: {{decision_question}}
OPTIONS: {{options}}
CONSTRAINTS: {{constraints}}
VALUES OR PRIORITIES: {{values_or_priorities}}

Produce the following:

1. Refined Decision Question
   If the original question is ambiguous or conflates multiple decisions, restate it more precisely. If it is already clear, confirm it and proceed.

2. Evaluation Criteria
   Five to eight criteria for judging the options. Each criterion should have a one-sentence rationale explaining why it matters for this decision, derived from the stated constraints and values. Note which criteria are hard requirements (pass/fail) versus preference dimensions (gradable).

3. Options Scored Against Criteria
   A table with options as rows and criteria as columns. Score each cell 1–5 with a brief note explaining the score. Flag any option that fails a hard-requirement criterion.

4. Recommendation with Confidence Level
   Which option scores best overall and why. Rate your confidence: High (the case is clear and robust), Medium (one option leads but with meaningful caveats), or Low (options are genuinely close and the right choice depends on information not provided). Explicitly note what the scoring table does not capture.

5. Key Assumptions
   The assumptions that must hold for the recommendation to be valid — the claims embedded in the scoring that are not verified facts.

6. What Would Change the Recommendation
   Three specific new facts, events, or constraint changes that would flip the decision to a different option.

7. Next Steps to Reduce Uncertainty
   Two to three targeted actions to take before committing, each aimed at validating a specific assumption or resolving a specific unknown.

Do not invent options not listed in the OPTIONS block.
If any option is incompatible with a hard constraint, flag it immediately and exclude it from scoring.
Separate empirical uncertainty (missing information) from values disagreement (different priorities).
Give a recommendation even if the data is mixed — hedging without a recommendation is not useful. State your confidence level instead.
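The pass/fail filtering and scoring logic the prompt asks for (steps 2–4) can be sketched programmatically. This is a minimal illustration only, assuming unweighted 1–5 scores and a threshold for hard requirements; the option names, criteria, and scores below are hypothetical, not part of the prompt:

```python
# Sketch of the framework's scoring logic: exclude options that fail a
# hard-requirement criterion, then rank the survivors by mean score.
# All data here is hypothetical illustration.

HARD_REQUIREMENTS = {"budget_fit"}  # pass/fail criteria; the rest are gradable

scores = {
    "PostgreSQL": {"budget_fit": 5, "scalability": 3, "team_familiarity": 5},
    "ClickHouse": {"budget_fit": 4, "scalability": 5, "team_familiarity": 2},
    "BigQuery":   {"budget_fit": 1, "scalability": 5, "team_familiarity": 3},
}

def rank(scores, hard, fail_below=3):
    """Drop options scoring under `fail_below` on any hard requirement,
    then sort the rest by mean criterion score, best first."""
    viable = {
        opt: crit for opt, crit in scores.items()
        if all(crit[c] >= fail_below for c in hard)
    }
    return sorted(
        viable,
        key=lambda opt: sum(viable[opt].values()) / len(viable[opt]),
        reverse=True,
    )

print(rank(scores, HARD_REQUIREMENTS))
# BigQuery is excluded before scoring because it fails budget_fit,
# mirroring the "flag it immediately and exclude it" directive above.
```

In a real run the model does this reasoning in prose, but the sketch makes the directive's order of operations explicit: constraint filtering happens before any aggregate comparison.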

When to Use

Use this prompt when a decision involves more than two meaningful criteria and a simple pros-and-cons list would obscure the tradeoffs. It is most useful when decision-makers hold different priorities, or when each option excels on a different dimension.

Good for:

  • Product roadmap prioritization and build vs. buy choices
  • Technology or vendor selection
  • Hiring decisions where candidates have different strengths
  • Career moves with multiple competing factors
  • Any decision where someone will later ask “how did we land here?”

Variables

Variable: decision_question
  Description: The specific choice to be made
  Examples: “Which database should we use for the analytics pipeline?”, “Should we hire a contractor or full-time engineer?”

Variable: options
  Description: The alternatives under consideration
  Examples: “PostgreSQL, ClickHouse, BigQuery”, “Contractor via agency / Full-time hire / Extend current engineer’s scope”

Variable: constraints
  Description: Hard limits any valid option must satisfy
  Examples: “Must be under $2k/month, must integrate with our Kubernetes setup, production-ready in 6 weeks”

Variable: values_or_priorities
  Description: What matters most to the decision-makers
  Examples: “We prioritize long-term maintainability over short-term speed. Cost is a concern but not the primary driver.”

Tips & Variations

  • Test the constraint envelope — Run with deliberately narrow constraints, then relax one at a time to see which constraint most limits the decision space.
  • Include the status quo — Add “Do nothing / continue current approach” as an option to force an explicit comparison against the baseline.
  • Retrospective analysis — Use on past decisions to identify which criteria were underweighted. Useful for team post-mortems after a choice did not work out as expected.
  • Experiment design follow-up — After getting the output, send: “Now design a rapid experiment to validate assumption 2 before we commit.”
  • Stakeholder alignment — Run the prompt once per key stakeholder with their values block, then compare resulting recommendations to surface where priorities diverge.

Example Output

For the decision “Which cloud provider for a new data platform,” two scored rows might look like:

Option: AWS
  Cost (3yr): 3 — mid-range with reserved pricing
  Team familiarity: 5 — 4 years of team experience
  Vendor lock-in risk: 2 — deep proprietary service usage
  Time to first deploy: 4 — familiar tooling, fast

Option: GCP
  Cost (3yr): 4 — BigQuery pricing favorable at scale
  Team familiarity: 2 — limited team experience, ramp-up needed
  Vendor lock-in risk: 3 — more portable data layer
  Time to first deploy: 2 — unfamiliar tooling adds 3–4 weeks
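Those two example rows can be rolled up into weighted totals. The weights below are hypothetical, since the page does not prescribe any; in practice the values_or_priorities block determines them:

```python
# Weighted totals for the two example rows above. The weights are
# hypothetical — the decision-maker's stated priorities would set them.
weights = {"cost": 0.3, "familiarity": 0.3, "lock_in": 0.2, "deploy": 0.2}

rows = {
    "AWS": {"cost": 3, "familiarity": 5, "lock_in": 2, "deploy": 4},
    "GCP": {"cost": 4, "familiarity": 2, "lock_in": 3, "deploy": 2},
}

totals = {
    option: sum(weights[c] * s[c] for c in weights)
    for option, s in rows.items()
}
for option, total in totals.items():
    print(f"{option}: {total:.1f}")
# AWS: 3.6
# GCP: 2.8
```

Note how sensitive the ranking is to the weights: raising the cost weight and lowering team familiarity would narrow the gap, which is exactly the kind of observation the “what would change the recommendation” section is meant to surface.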