‘I want a pony!’ – or the critical difference between user research and market research

Research is not a new phenomenon in government. When you start a new project it is very possible that there is a wheelbarrow-full of previous, relevant research for you to review. Most policy, for example, is evidence-based. Similarly, when it comes to service delivery there is often no shortage of research – often in the form of market research.

[Image: An illustration of a pony with a rainbow-coloured mane. Caption: ‘I want a pony’ – a demonstration of a want, not a need.]

Market research goes wide not deep

Market research, usually drawn from focus groups and surveys, is appealing to many large organisations including government. It lets an organisation gather opinions from a reasonably large, geographically and demographically diverse audience.

When we talk about Criterion 1 of the Digital Service Standard – ‘Understand user needs, research to develop a deep knowledge of the users and their context for using the service’ – we rarely recommend starting with large-scale market research. Instead, we recommend that teams do user research (also known as design research).

What works is more important than what people prefer

When designing government services, we are not competing to win market share or even give people what they think they ‘want’ (ie ‘I want a pony’). Our main concern is to make sure that people know what they need to do and that they can do it as easily as possible. This is a win–win outcome. Increased digital uptake and reduced failure demand both mean less cost to deliver services, while better comprehension and fewer mistakes mean increased compliance and policy effectiveness. Better digital services are also more convenient and easy to use for the people who need to use them – a better user experience.

These priorities mean that usability (including accessibility) is our primary focus.

User research methods offer deeper insights

There is only one way to understand whether a service is more or less usable, and that is to observe someone attempting to use it – ideally to achieve a realistic outcome in a realistic context. For example, watching someone use existing websites to try to find out whether they are eligible for a benefit or grant based on their own circumstances, rather than asking them in a focus group room how they’d like to do it.

There is a good deal of evidence that usability testing requires only a small sample size to identify most usability issues. This is why we recommend doing a series of small studies instead of investing in one large-scale survey or a series of focus groups.

After each session we are able to apply the insights we’ve gained with the constant goal of attempting to improve the usability of the service before testing it again. Because we work in agile teams we try to do usability testing and subsequent improvements in every sprint.

By working in this iterative way we can be confident that the service we deliver will be more usable.

Once we have achieved usability for the widest possible audience (including usability for people who have particular access needs) we can start to consider questions of preference.

People prefer government services that work

In market research it is tempting to put pictures of websites in front of people and ask them which they prefer – which one feels more trustworthy or more secure or more modern? In real life, it is not the picture of the website that people have to interact with – it is the actual service. While the initial perception may have an impact for a second or two, the real impression comes from whether people can actually find, understand and undertake the task they need to do easily and successfully. People don’t choose to pick up the phone because they don’t like the look of a digital service. They call because it doesn’t let them get the job done.

Choose the right research tool for the research question at hand

It is important to recognise that we have a wide range of research methods available to us and that we should seek to use the right one for the job at hand. For example, small-scale usability studies won’t let you measure the prevalence of a particular trait across the population. But they are super effective for finding and fixing big usability issues.

Large-scale studies – including surveys, focus groups and randomised control trials (popular with behavioural insights experts) – can help provide certainty at scale and are an important part of the mix of government research. But they are not appropriate as the primary tools for either discovery research or research to improve the usability of a digital service.

Both qualitative and quantitative research are important and necessary, but in service design we should always start with rich, qualitative insights.

Join the Service Design Community

If you’d like to talk more about this with many others who work in government and care about user research and design, please join the DTA’s service design community.
