Asking the right questions as service designers — Meaningfully learning from our users
How asking the right questions in service design leads to real insights, deeper connections with users, and solutions that truly matter.
As service designers, the questions we ask shape the solutions we develop. None of us in the digital government space are strangers to “solutioning”, which typically means trying to fix problems that aren’t fully understood, or in some cases, don’t exist in the first place. The key to avoiding this is asking the right questions that dig into users’ experience of services. In this article, we’ll explore the different types of questions that help anyone designing a service better understand their users. We’ll cover the dangers of shallow insights and the importance of gathering data with intent, especially when addressing equity issues. We’ll also cover interdisciplinary analysis and how solving smaller problems can lead to larger, more sustainable systemic change.
By asking better questions, we build better tools and services.
The role of the service designer
When we’re interviewing users or creating surveys for them, the goal is to dig deep and understand their needs rather than rushing to fix problems we might not fully understand. Rushing to solve problems can create silos and lead to disconnected efforts to improve experiences. We can avoid this by asking the right questions.
Our job as service designers is to connect the dots and fill the gaps. Our role is similar to that of a doctor diagnosing a patient. The patient doesn’t just walk in and say “I need antibiotics”. Instead, they describe their symptoms, and the medical professional uses their subject-matter expertise to figure out the appropriate treatment. Similarly, users may not always know the best solution to their problems, but every user can tell you about their challenges. So we connect the dots.
Getting started with observations
Before we formally talk to our users, it’s a good idea to start working with what’s already there. Is there an issue that keeps coming up as a point of contention for a certain group of practitioners in our vicinity? Are people complaining about it on the internet? Has someone written or documented some of these concerns in the past? It’s important to gather what we can so we walk in as informed as possible when it’s time to curate a survey or design interviews with our users. It also prevents us from repeating work someone else may have already done. To that end, here’s an article I penned in 2022 about getting insights from users without actually talking to them.
Open-ended questions for better understanding
The best way to start learning from our users is to ask open-ended questions that let them describe their experiences in detail. Here are a few examples of good questions:
- How do you do your job?
- What do you like about your job?
- What tools do you use?
- What causes you frustration?
These types of questions help us get a picture of the user’s world. By asking how they work and what tools they use, we get a clearer understanding of the systems and challenges they deal with daily. This helps us understand the root causes of their problems rather than applying digital band-aids or superficial fixes.
To return to the medical analogy I shared earlier, users might be able to give us answers when we ask them what would solve their problems, but those answers are rarely the optimal solution. Users are experts in their own experiences, not system design. Asking for solutions leads to shallow insights, while asking about their pain points and frustrations lets us apply our own analysis and come up with the right interventions.
Bad questions lead to bad data
Some questions can seem helpful to us as survey designers, but they don’t actually provide a lot of meaningful information. For example:
- How would you rate this session on a scale of 1–10?
- What subject would you like to see discussed at this training?
Ratings can feel scientific, but they don’t carry a lot of depth. One person’s 5 can be another person’s 8 — or the same person’s 8 on a different day! Without context, it’s hard to know what these numbers mean. And asking users what subject or content they would like to see in a training puts the burden of designing the solution on them. That’s our job!
While users can tell us about their struggles, asking them to self-diagnose misses the opportunity for deeper insights and the ability to identify meaningful trends across participants.
The limits of self-assessment
Asking people to self-assess their skills, knowledge, or competencies — especially in the digital world — is a common practice in government, but it comes with quite a few limitations. Self-assessments are plagued with bias, as people have a tendency to overestimate or underestimate their abilities. This is a good time to mention the Dunning-Kruger effect, where those who lack competence in a particular area are often unaware of their shortcomings and tend to overrate themselves, while the opposite is true of those who are highly skilled.
In digital spaces, where skill sets have to evolve rapidly, asking our users to rate their competencies can paint a skewed picture. Relying on self-assessment without deeper exploration of the process and expected outcomes can lead to misguided solutions.
Self-assessment can definitely be helpful as an initial touchpoint to understand how our users see themselves relative to the tools and systems they use, but it should always be paired with more objective measures and deeper inquiry.
On starting small and scaling up
Oftentimes, organizations will try to solve service design problems by applying software solutions (chatbots being a recent popular example). But when we focus on fixing smaller, manageable problems — like improving a tool for a particular user group or addressing a niche frustration — we can create ripple effects that impact larger audiences.
I love using the example of Sam Farber, the founder of OXO. Sam noticed that his wife, who had arthritis, struggled with cooking because she found holding and using traditional kitchen tools painful. Instead of overhauling the entire kitchen, Sam created a more comfortable ergonomic grip for everyday tools. This tiny change, starting with a kitchen peeler, led to the creation of a whole line of products that are easier for everyone to use, not just people with disabilities. Focusing on niche challenges can lead to innovations that benefit the wider group, so don’t be afraid to start small.
Making equity and inclusion meaningful
Asking about demographics like age, gender, or race is useful, but only when done with a specific purpose in mind. Often, we see these questions lumped in as a convenient add-on to surveys about client experiences, but randomly collecting demographic data introduces bias that can skew our results. These questions also carry the danger of leading to surface-level consultations that don’t actually drive the type of change needed.
This doesn’t mean we don’t care about demographic data! When collected with care and sensitivity, it’s critical to improving equity and representation. But if our goal is to understand how different groups of people experience a service or product, we have to ask the right questions in a meaningful way. We should structure a hypothesis — such as identifying the barriers a specific community faces — and collect data with the goal of improving our services.
It is important to be clear in our intent: these questions should be asked when there is an observable disparity or inequity to be addressed. If there isn’t a clear purpose, it’s better not to ask them. Collecting demographic data for service design without intent can harm our analysis by introducing irrelevant information or tokenizing an equity-seeking community. It’s also important to be wary of extrapolation — just because the users in our group have a problem doesn’t mean those outcomes can be assumed of the greater population.
The value of interdisciplinary analysis
Service designers benefit from the input of multiple disciplines. We can’t get the full picture if we’re only using one lens to analyze the data. Whether it’s social science, behavioural science, or statistical analysis, each discipline offers unique insights that improve our work and outcomes.
Conclusion: ask better questions, get better results
The foundation of effective service design lies in the questions we ask. By focusing on open-ended, experience-driven questions, we can uncover deeper insights into how our users feel and think. By avoiding asking users to self-diagnose their problems and being cautious with self-assessments, we limit bias. When collecting demographic data, we must do so with intent, especially when addressing inequity. Solving small, specific problems can lead to larger and more meaningful change. With the right questions and interdisciplinary analysis, we can succeed in building services that truly meet our users’ needs.