A UX Research Prioritization Framework


One of the biggest challenges for teams that want to do more hands-on primary UX research is the sheer expense of it. Setting aside weeks or even months of UX designer time to shadow customers, run ethnographic studies, create thorough affinity diagrams, and distill genuine insights from them can feel like a lavish expense in even the most well-funded organizations.
The truth, however, is that without these insights, teams stand a far greater chance of misidentifying the underlying problem and therefore building the wrong solution. This outcome, of course, is much more costly than it would have been to send a few researchers into the field to properly define the problem space and design the right tool to address it in the first place.
That said, it would be foolish to think that every project needs a massive contextual inquiry study before designers can start work. I developed this framework to give my teams a means of systematically identifying which projects stand to capture the greatest benefit from deep research, and which can move forward with secondary research alone or with smaller, more targeted types of primary research.
Here’s how it works.
Every initiative that appears on a PM roadmap gets evaluated across two dimensions:
Y Axis: Uncertainty. This is a measure of how well we, the UX designers who will be doing the work, understand the problem that needs to be solved. If a PM has already gathered a large amount of customer research and can explain it with credibility and clarity, we may feel comfortable moving into the next phases of design.
By contrast, if no one has a clear handle on exactly what problem we’re trying to solve, or if it feels like there are a number of overlapping problems without clear borders, we’ll mark this as having a high degree of uncertainty.
X Axis: Risk. This is defined specifically as the risk we’re exposing users or customers to in the event that we get the solution wrong. For example, if the required feature is for a user to upload an avatar image to their account, we might choose to call this a fairly low-risk activity.
Conversely, if the requirement is for a feature that allows creating user groups carrying access and permission rights for thousands of users at a time, we'd likely identify a high degree of user/customer risk if we make a design or engineering mistake.
What emerges from all this is a top-right quadrant that highlights the initiatives that are not only poorly understood, but that also carry great risk.
This visualization also supports basic relative scaling via t-shirt sizes (shown as bubble size), so that initiatives can more easily be compared and contrasted with one another.
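To make the mechanics concrete, here is a minimal sketch of how the two-axis evaluation might be expressed in code. All of the specifics are assumptions for illustration: the 1–5 scales, the threshold of 4 for "high," the t-shirt size mapping, and the example initiatives are not part of the framework itself, only one possible way to operationalize it.

```python
# Illustrative sketch of the uncertainty/risk framework.
# Scales, thresholds, and example initiatives are hypothetical.
from dataclasses import dataclass

# Hypothetical mapping from t-shirt size to a relative bubble size.
TSHIRT_SIZES = {"S": 1, "M": 2, "L": 3, "XL": 4}

@dataclass
class Initiative:
    name: str
    uncertainty: int  # Y axis: 1 (well understood) .. 5 (poorly understood)
    risk: int         # X axis: 1 (low user/customer risk) .. 5 (high)
    size: str         # t-shirt estimate of relative scope

def needs_deep_research(item: Initiative, threshold: int = 4) -> bool:
    """Top-right quadrant: both poorly understood AND high risk."""
    return item.uncertainty >= threshold and item.risk >= threshold

roadmap = [
    Initiative("Avatar upload", uncertainty=2, risk=1, size="S"),
    Initiative("Bulk group permissions", uncertainty=5, risk=5, size="XL"),
]

# Keep only top-right-quadrant initiatives, then rank them: highest
# combined uncertainty + risk first, with scope as a tiebreaker.
candidates = [item for item in roadmap if needs_deep_research(item)]
candidates.sort(
    key=lambda item: (item.uncertainty + item.risk, TSHIRT_SIZES[item.size]),
    reverse=True,
)
for item in candidates:
    print(item.name)
```

Plotting uncertainty against risk with bubble size for scope (e.g. with a scatter chart) then reproduces the quadrant view described above; the code is just the filtering and ranking logic behind it.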
While this tool isn’t the only input we use for deciding what to research, it’s a fast and valuable lens for determining which areas to focus on and how much investment each is worth. From there we take the top-priority concerns and use other exercises to determine which research methods best fit the investment scope, the timeline, and the nature of the problem to be solved.