Putting Perfect Participants in Every Session

November 5th, 2005 by Jared Spool :: see related comic

When putting together a design study, whether it is usability testing, field research, or a focus group, it turns out that the most critical activity is recruiting the right participants.

Over the past few years, we’ve interviewed several dozen user experience professionals, looking at the practices they use to conduct their research. As we dissected every activity involved in producing a successful study, we came to the conclusion that recruiting participants is the lynchpin that holds the study together.

If you recruit an inappropriate participant, there is very little you can do with task design, session facilitating, or data analysis that will turn the results into something useful. Yet, if you get an ideal participant, you can compensate for practically any amount of poor task design, facilitation, or analysis and still see valuable findings that will improve your design.

We were surprised by the importance of recruiting because we rarely see it discussed in any forum about conducting design studies. At most, you see people asking for names of recruiting agencies. We’ve never seen an in-depth discussion of the activities of recruiting and the best practices to follow.

Recruiting the wrong participant can have dramatic effects. It can slow down the research process, increase costs, and, in the worst-case scenario, create faulty results which waste valuable developer resources as they chase down the wrong issues. We recommend to our clients that they closely review their recruitment process to ensure they are executing best practices.

### Recruitment Best Practices

As we conducted our interviews, we compared the teams that were getting the best results from their studies to those teams that regularly struggled. What we found was fascinating.

The most successful teams all:

- Made building their lists of potential candidates a year-round activity, not just when they were preparing for a study
- Screened and recruited their own participants, instead of using an outside agency
- Saw the recruiting process as a technique to collect data about who is (and who isn’t) in their target audience and what defines each segment
- Used open-discussion interviewing as a screening method, instead of a flow-chart-based screener
- Had the recruiter meet frequently with the team to discuss what the recruiter was learning about the audience and how to better leverage the recruitment process

These teams saw interesting benefits from following these practices: they never had no-shows (instead of the typical 10%-25% expected by the struggling teams) and they never found themselves questioning or throwing out findings because they weren’t sure if the participant was “up to snuff.” This makes the study process more cost-effective, maximizing the value of every moment spent researching.

### Demographics vs. Experience and Behavior

In our study, we found that the struggling teams often screened candidates based on demographics, whereas the successful teams focused on candidates’ experience and their potential behaviors. The difference is subtle, but critically important.

It’s easy to think that the demographics of the target audience are the driving force behind selecting study participants. However, the best teams discovered that they learn more from their studies when they focus on the participant’s previous experience and how they will behave in the study. You don’t need someone who is in your target audience. You only need someone who behaves like people in your audience group.

Imagine you’re designing a new video game for an expensive gaming console and you want to learn whether the game’s controls are easy to learn and use during game play. Your target audience’s demographics might indicate that the majority will have reasonably high household incomes ($120,000 or more) and be relatively young (age 13-22).

It would be natural to eliminate candidates who come from households with smaller incomes or who fall outside the age range. But that would probably be a mistake and would not produce the best results.

Would a participant from a household with an $80,000 annual income really learn to play the game differently than one from a household that brings in $150,000 a year? How about someone from a household that borders on poverty? How will the differences in income affect the results of the study? Unless you’re clear about the effects, screening on income is likely to make the recruiter waste significant time disqualifying perfectly valid candidates without yielding any better participants.

Similarly, it’s not hard to imagine two 16-year-old boys who could be worlds apart when it comes to game play. What if one is an avid game player and the other shows no interest in gaming? You’d see dramatically different reactions to the game. Yet a 30-something who goes home and plays for 5 hours practically every night may have some great insights to share about the game, though he’s far from being in the age group.

In our study, the best teams often bypassed demographics and looked at experience and behavior instead. In our video game example, they’d likely look at the games the candidates regularly like to play and the amount of time they spend playing. The recruiter would talk to them about their playing history and assess if they would behave like an ideal user.

The recruiter would also pay close attention during the interview to assess how effectively the candidate describes their thought processes and whether they are comfortable with strangers. After all, a study participant who can’t speak their thoughts or is intimidated by the study’s observers will be of little use to the team. We noticed that many outside recruiting firms, paid to fill study sessions with “warm bodies,” often overlooked these key attributes.

### Solid Recruiting is Not Optional

When we talked to the struggling teams about why they weren’t employing the best practices, many told us they’d never really thought about it before. A few indicated that they felt it was a luxury they couldn’t afford.

However, one successful team’s manager told us about the catalyst behind her inspection of their recruiting process: They’d been holding regular usability tests, but she had failed to get upper management to attend and endorse the process, thus impeding the adoption of testing results in her organization.

One day, she managed to convince the CEO of her $18 billion company to attend a test. And, unfortunately, that was the test where the recruiting agency sent a participant who had managed to completely fool their screening process. Though the participant claimed he was an expert online shopper, he couldn’t use a mouse nor did he know how to type into a search box. It was clear he was there for the money and told the recruiter whatever they wanted to hear. (The agency subsequently swore they’d used him in studies before without problems, but upon investigation it turned out they ran short of time and grabbed the first warm body they could find to fill the slot.)

During that session, the CEO quickly became frustrated and left in the middle. At that moment, the manager decided to rip apart all of their recruiting practices and start from scratch — treating every session as one the CEO was attending. (Fortunately, the CEO did sit in on another test a little later. That participant was a perfect match and now usability testing is solidly endorsed in the organization.)

That manager found that having a solid recruiting process was not optional for her organization. She now understands the importance of investing to put perfect participants in every session.

_Jared M. Spool is the Founding Principal of the think-tank [User Interface Engineering][1]. He recently co-wrote the report [Recruiting without Fear][2]._

[1]:http://www.uie.com “User Interface Engineering”
[2]:http://www.uie.com/reports/recruiting_without_fear/ “Recruiting without Fear”

11 Responses to “Putting Perfect Participants in Every Session”
niblettes wrote:

I think most experienced designers would agree that behaviour and experience are more valuable insights than demographics, and not just in terms of recruiting either.

However getting at behaviour and experience is a tough sell because it’s costly. While marketing departments usually have a wealth of demographic data immediately available, they don’t often have behavioural (beyond purchasing behaviour) or customer experience information. This means having to start from scratch, forcing designers to beg for and justify more time and money.

So, have you learned what tactics and strategies people have used to successfully beg for and justify the extra time and money necessary to follow behaviour-based recruiting?

Robby Slaughter wrote:

Mr. Spool is so wrong. The “most critical activity” in any design study is *not* “recruiting the right participants.” What really matters is getting the stakeholders to believe that studying customers/users has real value and should significantly influence product design and implementation. I never cease to be amazed at how much people push back against the very idea of focus groups, paper prototypes, or usability studies. Naturally, the big-name gurus of large successful firms don’t have this problem. I suspect that 95% of usability enthusiasts, however, find this well-written article painfully irrelevant.

Kevin Cheng wrote:

Robby, you’re basically saying “doing user experience is important and convincing your internal team of that comes first” whereas I think Jared is going on the assumption that once you have that green lighted, one of the most important parts often overlooked is the recruitment phase.

Knowing how to do usability testing well, and thus yield good results, would certainly help your cause. I find one of the best ways to convince non-believers is to have them sit in on a test. If you only manage to convince them to do it -once-, then you’d best do it right, or the next time it’ll be that much harder because they can say, “those participants didn’t give any useful data. They weren’t even close to the people who’d use our stuff.”

Robby Slaughter wrote:

Kevin, you’re certainly right. I know that *I* was convinced of the importance of usability when I sat in on my first test. However, in my experience, getting folks to this level has been worse than pulling teeth.

Readers of Ok-Cancel, am I alone? Do you find in your jobs that usability is mostly a pipe dream, and that your boss won’t even allow you to test designs on your own time with your own money? That has been my experience. And, given the dismal design of almost every user interface out there, I suspect others are having the same problem.

Jared Spool wrote:

> The “most critical activity” in any design study is not “recruiting the right participants.”

I agree that buy in from stakeholders is very important. I think we can agree to disagree as to whether that’s an activity in putting together a design study or not. I happen to believe that it comes before you start putting the study together, so I stand by my original statement.

That being said, I’ve written an article on how effective teams get buy-in from stakeholders, called The Cost of Frustration. As the article states, it’s not just a matter of beating into their heads that usability practice is somehow important — it’s about finding where the pain points are in the organization and utilizing your skills as a usability practitioner to relieve that pain.

Hope this helps, Jared

J. Scott wrote:

This is a great article, Jared. Very insightful, and nice to see data from actual experience rather than theories. Good work.

Orion Adrian wrote:

This of course assumes that demographics end with age, gender, and income. I find that what we’re really talking about is the difference between highly correlated and perfectly correlated. Certain demographics are easy to get at, like age and gender, and others aren’t, like how often someone plays a video game, but they’re both demographics. What you’re saying is: put in the extra money and effort to find participants who are perfectly correlated.

Jared Spool wrote:

> What you’re saying is put in the extra money and effort to find participants who are perfectly correlated.

I think that would likely be a bad idea.

If perfect correlation exists (and with any reasonable population size, I doubt it does), getting close to it would probably increase costs exponentially.

And for what end? What you’re really trying to do is get a bunch of folks in your study who behave like the people you’re designing for. Instead of using a second-order approximation that says, “These folks have the same description of the people we’re looking for, therefore they’ll likely behave the same,” why not just go for folks who *do* behave the same?

It’s cheaper and gets you where you want to go faster.

Jay wrote:

> Readers of Ok-Cancel, am I alone? Do you find in your jobs that usability is mostly a pipe dream, and that your boss won’t even allow you to test designs on your own time with your own money?

It is possible to convince your boss to take usability seriously. If I did it (and am about to grow my team to 2), you can too. Your boss sounds a little, um, troubled and you might have to go around him diplomatically (or at least understand his bizarre request that you don’t pursue it on your own time).

Ron Zeno wrote:

I agree with Robby, to a point. Most important is getting stakeholders to believe in the value of spending the extra time and money to design for user needs and capabilities. Testing is only a small part of this. Recruiting is only a small part of testing. Yes, recruiting is usually done very poorly, but no more so than each and every other part.

Yes, usability is mostly a pipe dream. Of all the usability-related activities you can be involved in that can have a positive impact on the product, it’s best to focus on those that are extremely simple and highly effective.


OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad, and the people behind the industry of building interfaces: usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.