Decision Quality Checklist
Six elements that separate good decisions from lucky outcomes. Check them before you commit.
Last updated: April 2026
What is Decision Quality?
Decision Quality is the practice of evaluating your decision process before you know the outcome. The framework was pioneered by Ronald Howard at Stanford University in the 1960s and later formalized by Carl Spetzler, Hannah Winter, and Jennifer Meyer in "Decision Quality: Value Creation from Better Business Decisions" (2016). The core insight: you cannot control outcomes, but you can control the quality of the process that leads to the decision. A good decision can have a bad outcome (bad luck). A bad decision can have a good outcome (good luck). Judging decisions by their outcomes alone is one of the most common mistakes in management.
Think of decision quality as a chain with six links. A chain is only as strong as its weakest link. If your team nailed the Frame, Alternatives, Information, Values, and Reasoning but skipped Commitment (the people who execute weren't consulted), the decision will fail at implementation regardless of how strong the other five elements are. Checking all six is not bureaucracy. It is finding the one weak link that will break under pressure, while you still have time to strengthen it.
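The weakest-link idea is easy to make concrete. Below is a minimal Python sketch; the 0-to-1 ratings are an illustrative convention of ours, not part of the published framework. The point it demonstrates: overall quality is capped by the lowest-rated element, not the average.

```python
# Minimal sketch of the weakest-link idea: overall decision quality
# is bounded by the lowest-rated element, not the average.
# The 0.0-1.0 rating scale is illustrative, not part of the framework.

ELEMENTS = ["Frame", "Alternatives", "Information", "Values", "Reasoning", "Commitment"]

def weakest_link(ratings: dict[str, float]) -> tuple[str, float]:
    """Return the lowest-rated element; it caps overall quality."""
    missing = set(ELEMENTS) - ratings.keys()
    if missing:
        raise ValueError(f"Rate every element before deciding: {sorted(missing)}")
    return min(ratings.items(), key=lambda item: item[1])

# Illustrative ratings echoing the worked example below:
# five reasonable links, two weak ones.
ratings = {"Frame": 0.9, "Alternatives": 0.3, "Information": 0.8,
           "Values": 0.8, "Reasoning": 0.9, "Commitment": 0.2}
element, score = weakest_link(ratings)
print(f"Weakest link: {element} ({score:.1f}) -- strengthen it before committing")
```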
The Decision Quality framework is used in strategy consulting (McKinsey, Bain, and BCG use variants), pharmaceutical companies (deciding whether to advance a drug from Phase II to Phase III trials), oil and gas companies (multi-billion-dollar investment decisions), and corporate boards evaluating acquisitions. In German-speaking business, the concept overlaps with the Entscheidungslehre tradition and the systematic decision processes referenced in VDI/VDE guidelines. It is especially valuable for high-stakes, irreversible decisions where getting the process right matters more than getting the answer fast.
Frame
Can you state the decision in one sentence?
A well-framed decision has a clear scope, a specific timeline, and an identified decision maker. In practice, "framing" means writing the decision as one concrete sentence that everyone agrees on. Most decision failures start here: the team thinks they are deciding "which CRM to buy" but half the room is actually debating "whether we need a CRM at all." When the frame is weak, every subsequent step wastes effort because people are solving different problems. Test: can you state the decision in one sentence that every stakeholder would agree with?
What good looks like: We need to decide by Friday which CRM to pilot with the 8-person sales team for 90 days.
Red flag: We need to figure out our technology strategy. (Too broad to act on. Narrow it before proceeding.)
Alternatives
Would a smart outsider suggest an option you haven't considered?
At least three genuinely different options prevent the dangerous trap of binary thinking ("do it or don't"). Binary framing eliminates creative solutions that might be better than both options. Good alternatives are realistic (each one could actually be implemented), distinct (not minor variations of the same idea), and include at least one option that challenges assumptions. A team that only considers "AWS vs. staying on-premise" has not genuinely explored alternatives. Azure, Google Cloud, a hybrid approach, and managed hosting are all viable options that change the calculus.
What good looks like: We identified 4 options: Salesforce, HubSpot, Pipedrive, and building in-house. Each is genuinely viable.
Red flag: Our options are: do it, or don't do it. (Binary framing misses creative alternatives. Push for at least 3.)
Information
What would change your mind? Do you have that data?
Identify what you know, what you don't know, and what you need to find out before deciding. The goal is not perfect information (which never exists) but sufficient information: enough to differentiate between options on the criteria that matter. In practice, that means having data on the 3-5 factors that matter most for each option. When this element is weak, teams decide based on assumptions they never tested, then discover mid-implementation that a critical assumption was wrong. The cheapest time to gather missing data is before you commit.
What good looks like: We have pricing from all vendors, reference calls with 2 customers each, and a technical assessment of integration effort.
Red flag: We've read the vendor websites and one G2 review. (Not enough data to evaluate. Invest another day before deciding.)
Values
If two people evaluate the options, would they use the same criteria?
Success criteria must be explicit and agreed upon by all decision makers before options are evaluated. "What does good look like?" should have a specific, measurable answer. Without defined values, each person evaluates options against their own private criteria, which leads to the illusion of agreement: everyone votes for Option B, but for completely different reasons. When values are strong, two people evaluating the same options would weight the same criteria similarly. When values are weak, a CFO optimizes for cost while a CTO optimizes for scalability, and nobody realizes they are solving different equations.
What good looks like: The team agreed: success means 80% adoption within 90 days and a 20% reduction in lead response time.
Red flag: Success means the project goes well. (Too vague. Define measurable criteria before evaluating options.)
Reasoning
Could you explain your reasoning to a skeptic and have them follow it?
The logic connecting your information to your recommendation should be clear enough that a skeptic could follow it and challenge specific links. Good reasoning is documented: "We chose Option B because it scored highest on our three most-weighted criteria (integration, scalability, cost) in the Decision Matrix" (a minimal matrix sketch follows this section). Bad reasoning is implicit: "We went with Option B because it felt right." When reasoning is weak, the decision cannot be explained to someone who was not in the room, cannot be audited later, and cannot be improved next time because nobody knows which link in the logic chain was wrong.
What good looks like: We scored all 4 options on our 5 criteria using a Decision Matrix. The reasoning from scores to recommendation is documented.
Red flag: The CEO likes Salesforce, so we're going with Salesforce. (Decision by authority, not by reasoning. Run the matrix.)
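To show what documented reasoning can look like, here is a minimal weighted Decision Matrix sketch in Python. It reuses the CRM options from the Alternatives example above, but the criteria, weights, and 1-5 scores are invented for illustration; the point is that the path from scores to recommendation is explicit and auditable.

```python
# Minimal weighted Decision Matrix sketch: score each option on each
# criterion, multiply by the criterion weight, and sum. The criteria,
# weights, and scores below are invented for illustration.

weights = {"integration": 0.30, "scalability": 0.25, "cost": 0.20,
           "ease_of_use": 0.15, "vendor_support": 0.10}

scores = {  # 1 (poor) to 5 (excellent) on each criterion
    "Salesforce": {"integration": 5, "scalability": 5, "cost": 2, "ease_of_use": 3, "vendor_support": 4},
    "HubSpot":    {"integration": 4, "scalability": 3, "cost": 4, "ease_of_use": 5, "vendor_support": 4},
    "Pipedrive":  {"integration": 3, "scalability": 3, "cost": 5, "ease_of_use": 4, "vendor_support": 3},
    "In-house":   {"integration": 5, "scalability": 4, "cost": 1, "ease_of_use": 2, "vendor_support": 1},
}

def weighted_total(option_scores: dict[str, int]) -> float:
    """Sum each criterion score times its weight."""
    return sum(weights[criterion] * s for criterion, s in option_scores.items())

# Rank options from highest to lowest weighted total.
for option, option_scores in sorted(scores.items(), key=lambda kv: -weighted_total(kv[1])):
    print(f"{option:<11} {weighted_total(option_scores):.2f}")
```

Whatever tool you use, the deliverable is the same: a record that shows which criteria were weighted how, so a skeptic can challenge a specific weight or score rather than the conclusion as a whole.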
Commitment
Does everyone who needs to execute this decision agree with it?
The best analysis is worthless without buy-in from the people who will execute. Commitment means the implementers were part of the evaluation (not just informed after the fact), they understand and agree with the rationale, and they have the resources and timeline to execute. A decision that leadership makes without consulting the team who builds it typically fails not because the analysis was wrong but because the team either resists passively or discovers execution barriers that leadership never considered. Test: does everyone who needs to execute this decision agree with it, or are they merely compliant?
What good looks like: The sales team lead was part of the evaluation and publicly supports the choice. Training is scheduled.
Red flag: The team wasn't consulted. They'll find out at the all-hands. (No buy-in means no adoption. Involve implementers before deciding.)
Worked Example
A 50-person software company was deciding whether to migrate from on-premise servers to cloud infrastructure. The CTO proposed the migration at a leadership meeting, and the team was ready to approve a move to AWS. Before committing, the VP of Product suggested running a Decision Quality check. It took 25 minutes.
The team went through all 6 elements: Frame was strong ("Should we migrate our production infrastructure to the cloud within 6 months?" with clear scope, timeline, CTO as decision owner). Alternatives was weak: the team had only considered two options, AWS or staying on-premise. Nobody had evaluated Azure (which offered credits for their use case), Google Cloud (which had a managed Kubernetes service matching their architecture), or a hybrid approach. The team had anchored on AWS because the CTO had used it at a previous company.
Information and Values were strong: cost comparisons, performance benchmarks, and agreed success criteria (99.9% uptime, less than 200ms latency increase, migration within 6 months, cost within 15% of current). Reasoning was strong: a Decision Matrix with 5 criteria was prepared.
Commitment was weak: the IT operations team (3 people who would execute the migration) had not been consulted. When the DQ check surfaced this gap, the CTO invited the ops team lead to the next meeting. The ops lead immediately raised two issues: the current backup system was incompatible with any cloud provider's snapshot model, and one critical legacy application required direct hardware access that standard cloud VMs don't provide. Neither issue had been on anyone's radar.
| Element | Rating | Issue Found |
|---|---|---|
| Frame | Strong | Clear scope, timeline, and decision owner |
| Alternatives | Weak | Only AWS vs. on-premise considered. Missed Azure, GCP, hybrid approach |
| Information | Strong | Cost, performance, and feasibility data available |
| Values | Strong | Success criteria defined: uptime, latency, timeline, cost |
| Reasoning | Strong | Decision Matrix with 5 criteria prepared |
| Commitment | Weak | IT operations team (the implementers) not consulted |
That 25-minute check prevented a commitment to AWS made without exploring alternatives or securing buy-in from the team that would execute the migration. After adding Azure and hybrid as options, and involving the ops team, the final decision was a hybrid approach: non-critical services moved to Azure (which offered $50k in credits) while the legacy application stayed on-premise for 6 months until it could be refactored. The ops team designed the migration plan themselves, which meant they owned it instead of resisting it.
Pro tip: Use the checklist at the START of the decision process, not as a final review. Catching a weak element early (like missing alternatives) is cheap. Catching it after you've committed resources is expensive. The earlier you find the weak link, the less it costs to fix.
Pro tip: In team settings, have each person fill out the checklist independently before comparing results (a small comparison sketch follows these tips). The disagreements about which elements are strong are the most valuable finding. If three people rate "Commitment" as strong but one person rates it weak, that person knows something the others don't.
Pro tip: Pay special attention to "Alternatives" and "Commitment" because these are the two most commonly weak elements. Teams often skip to evaluating the first two options that come to mind (weak Alternatives) and decide without consulting the implementers (weak Commitment). These two failures cause more project delays than any analytical mistake.
Pro tip: When the score is 3-4 out of 6, resist the urge to proceed anyway. Address the weak elements before deciding, even if it takes an extra day or week. The cost of a 5-day delay is almost always lower than the cost of a 6-month project that fails because the frame was wrong or the team didn't have buy-in.
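The independent-check comparison from the second pro tip can be mechanized in a few lines. A small Python sketch, with invented names and ratings, that surfaces exactly the disagreements worth discussing:

```python
# Sketch of the independent-check comparison: each person rates every
# element before seeing anyone else's answers; any element without
# unanimous agreement is flagged for discussion. Names and ratings
# are invented for illustration.

checklists = {
    "Ana":   {"Frame": "strong", "Alternatives": "strong", "Commitment": "strong"},
    "Ben":   {"Frame": "strong", "Alternatives": "weak",   "Commitment": "strong"},
    "Chloe": {"Frame": "strong", "Alternatives": "strong", "Commitment": "weak"},
}

elements = ["Frame", "Alternatives", "Commitment"]
for element in elements:
    ratings = {person: marks[element] for person, marks in checklists.items()}
    if len(set(ratings.values())) > 1:  # not unanimous -> discuss before deciding
        dissenters = [p for p, r in ratings.items() if r == "weak"]
        print(f"{element}: no consensus -- ask {', '.join(dissenters)} what they see")
```

The output is a short list of elements to discuss, which keeps the comparison meeting focused on the gaps rather than re-litigating the elements everyone already agrees on.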
Frequently asked questions
Who developed the Decision Quality framework?
Ronald Howard at Stanford University developed the foundations in the 1960s. Carl Spetzler, Hannah Winter, and Jennifer Meyer formalized the six-element model in their book "Decision Quality: Value Creation from Better Business Decisions" (2016). The Strategic Decisions Group (SDG) has applied the framework to corporate decisions for over 30 years. It is now standard in strategy consulting and pharmaceutical decision-making.
How long does a Decision Quality check take?
10-15 minutes for an individual check. 20-30 minutes when a team of 4-6 people checks independently and then compares results. The comparison discussion is where the real value emerges, so don't skip it. For high-stakes decisions, a 30-minute DQ check can prevent months of wasted effort.
What if team members disagree on their ratings?
That disagreement IS the finding. If three people rate "Information" as strong but one person rates it weak, that person has identified a data gap the others missed. Explore the disagreement before moving on. The most dangerous decisions are the ones where weak elements go undetected because everyone assumed they were fine.
Do I need the full checklist for every decision?
The full 6-element check is designed for decisions with significant consequences. For everyday choices, a quick mental scan of the top 3 elements (Frame, Alternatives, Commitment) takes 2 minutes and still catches the most common failures. As a rule of thumb: if the decision is hard to reverse, run the full checklist.
How does Decision Quality relate to the Decision Matrix?
Decision Quality checks whether your process is sound. The Decision Matrix is one of the tools you use within that process (it addresses the "Reasoning" element by structuring how you score options). DQ is the meta-check: before you run a Decision Matrix, verify that you have the right Frame, enough Alternatives, sufficient Information, agreed Values, and real Commitment. The Matrix handles Reasoning.
Decision Quality Checklist (PDF)
One-page printable with all 6 elements
Related methods
Decision Matrix
Score options against weighted criteria for an objective, data-driven comparison. The go-to method for complex decisions with multiple factors.
Premortem Analysis
Imagine your project failed. Find the reasons before they happen.
Cognitive Biases
10 biases that distort decisions and how to counter them.