This post walks through Pugh Pairwise Selection, a variation on Pugh Controlled Convergence. I explain how it works and why it is useful.
Pugh Controlled Convergence is a systematic approach for comparing ideas across different criteria. Pugh Pairwise Selection extends the standard Pugh process by automating the weight assignment for the decision criteria: each criterion is compared against every other criterion in a pairwise fashion, and the weights are derived from those comparisons.
Each idea is compared against the others on each decision criterion. These scores are weighted by the perceived importance of that criterion and summed to create a final score for each idea. This process leads the team to think critically about which factors matter most in their design evaluation, and it can help remove bias toward any one given idea by evaluating the ideas on each criterion alone rather than comparing them outright. Automation can make that discussion richer by allowing participants to tweak the original decisions and observe the impact on the final scores. In practice I don't think one run of a Pugh matrix is sufficient to make a decision across many ideas, but it is a useful discussion tool that can help improve the brainstorming process.
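In symbols (my own shorthand, not standard Pugh notation): if $w_i$ is the weight of criterion $i$ and $s_{ij}$ is the score of idea $j$ on that criterion, then the final score of idea $j$ is

$$\text{score}_j = \sum_i w_i \, s_{ij}$$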
The process has 5 steps: (1) list the options, (2) list the decision criteria, (3) rank the criteria against each other, (4) evaluate each option against a baseline on every criterion, and (5) compute weighted final scores.
(Steps 1, 2) Listing options and decision criteria is self-explanatory: brainstorm it up.
(Step 3) Ranking decision criteria against each other can be tricky because it is hard to come up with a relative ranking of all the criteria in your head if there are a lot of them. I think a better approach is to compare the criteria one pair at a time. A matrix can help with this, for example:
| | uptime | engineering effort | operating cost | latency |
|--------------------|--------|--------------------|----------------|---------|
| uptime | = | < | < | < |
| engineering effort | | = | ^ | ^ |
| operating cost | | | = | ^ |
| latency | | | | = |
In this example, we compare 4 criteria for a software service. The assigned priorities are: uptime / service reliability is most important, then latency, then operating cost, then engineering effort.
Each cell in the upper-right triangle of the matrix compares two criteria directly. We fill each of these cells with a symbol to indicate which criterion we prefer: '^' for the one at the top (column), '<' for the one at the left (row), or '=' if they are equally important. Based on these selections, the computer can automatically generate weights for the criteria. Each criterion gets 1 point plus 1 additional point for every pairwise comparison it wins, and its weight is that point total divided by the sum of points across all criteria:
| | Points (wins + 1) | Weight |
|--------------------|-------------------|--------|
| uptime | 3 + 1 | 0.4 |
| engineering effort | 0 + 1 | 0.1 |
| operating cost | 1 + 1 | 0.2 |
| latency | 2 + 1 | 0.3 |
| **Total** | 10 | 1.0 |
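To make the weight calculation concrete, here is a minimal sketch in Python. The `pairwise_wins` counts are read off the example matrix above; the variable names are my own placeholders rather than part of any particular tool.

```python
# A minimal sketch of the weight calculation (Step 3), assuming the team has
# already filled in the pairwise matrix.

criteria = ["uptime", "engineering effort", "operating cost", "latency"]

# How many pairwise comparisons each criterion won in the matrix above.
pairwise_wins = {
    "uptime": 3,
    "engineering effort": 0,
    "operating cost": 1,
    "latency": 2,
}

# Each criterion starts with 1 point and earns 1 more point per win.
points = {c: pairwise_wins[c] + 1 for c in criteria}
total = sum(points.values())  # 10 in this example

# Normalize the points into weights that sum to 1.
weights = {c: points[c] / total for c in criteria}
print(weights)
# {'uptime': 0.4, 'engineering effort': 0.1, 'operating cost': 0.2, 'latency': 0.3}
```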
(Step 4) Options are evaluated against a baseline. I've used a 1-5 scale before where a higher score is always the 'better' option (e.g. lower cost, more throughput, etc.). 3 is the baseline measure, 4 / 2 represent somewhat better / somewhat worse than baseline, and 5 / 1 represent a major improvement / a major regression.
(Step 5) These values are multiplied by the criteria weights and then summed for each concept to compute a final score. The final score can be informative, but I think the conversations throughout this process are the most interesting part. When we prioritize decision criteria or rank ideas, we begin to inform that final score. The discussions along the way help the team identify issues in how they prioritize different criteria or how they compare the relative strengths of their concepts. At the end of the day, it's an imperfect process, but it was interesting all the same to use this tool to get a quantitative sense of our decision space. It forced us to look at the options differently, evaluate tradeoffs, and as a result develop a deeper understanding of the problem we were trying to solve.
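Here is a minimal sketch of the final scoring step, again in Python. The two options and their 1-5 ratings are invented purely for illustration; the weights are the ones derived in the previous snippet.

```python
# A minimal sketch of the final scoring step (Step 5).

weights = {"uptime": 0.4, "engineering effort": 0.1, "operating cost": 0.2, "latency": 0.3}

# Each option is rated on every criterion, with 3 as the baseline.
ratings = {
    "option A": {"uptime": 3, "engineering effort": 4, "operating cost": 2, "latency": 3},
    "option B": {"uptime": 5, "engineering effort": 2, "operating cost": 3, "latency": 4},
}

# Final score for an option = sum over criteria of (rating * criterion weight).
final_scores = {
    option: sum(weights[criterion] * score for criterion, score in rated.items())
    for option, rated in ratings.items()
}

for option, score in sorted(final_scores.items(), key=lambda kv: -kv[1]):
    print(f"{option}: {score:.2f}")
# option B: 4.00
# option A: 2.90
```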
I would recommend this process early in the concept stage of a project. It helps guide brainstorming by providing a mechanism to compare very different approaches to solving the same problem, and it helps keep the team aligned on developing solutions that are weighted towards the most important criteria.