Five Tips for Designing Remote, Unmoderated User Testing Tasks

Carrying out remote, unmoderated user testing is growing more and more popular, with new tools and services popping up all the time to support it. The benefits are obvious – you can test as many people as you can get hold of, with no scheduling issues and no need to use up part of your busy day being there while participants work through the task. All you have to do is send the task out to your participants and wait for the screen and voice recordings to come back, ready to be analysed.

Not being there at the time of the test does have some drawbacks though, and I’ve seen some pretty disappointing results come out of these unmoderated sessions. At best these poor results mean you’ve wasted time and money, but at worst they can lead an inexperienced client or site manager to make changes based on not much more than a throwaway comment.

These are my tips on setting a task to make sure you generate useful, actionable results from your tests.

1) Make sure your site works
This one sounds obvious but a surprising number of test videos come back showing that part of the site doesn’t work as it should. This can really throw participants when there’s no one there to explain it. Some people can get really quite irate, even abandoning the whole test over relatively small bugs.

If you’re carrying out tests on an unfinished site, then it’s probably better to opt for a research technique that lets you be there to explain and encourage participants when any confusing issues arise.

If you’re testing with a work in progress then you’ll need to be extra clear about what you’re providing and what isn’t yet finished. I’ve seen participants completing tasks on clickable wireframes confused into commenting on the Lorem Ipsum content and even the visual design of wireframes. (They were indeed a “bit plain to look at.”)

2) Ask for action not opinion
The best tests are the ones where participants are given a realistic activity to do – to find something out, to buy something, to make a booking. If participants have strong opinions about anything on your site, they’ll mention these as they carry out the task anyway.

Sometimes site owners set tasks along the lines of “Click on all the pages and give feedback.” The results of these types of task are never particularly useful. For instance, participants may say things like “I love this navigation” or “This menu is cool” as they watch a snazzy bit of jQuery flip about, only to get totally lost when they actually try to use the navigation to do something real.

If you ask people what they think then nine times out of ten, their comments will be focused around look and feel of the site. A certain amount of feedback on this aspect of your site is fine, but it needs to be in a realistic context. “I love this blue” doesn’t mean much. “I’d be reluctant to give this site my card details because it looks cheap” is much more useful.
Of course, choosing a realistic activity can be the main challenge of designing a good task, which brings me to my next tip…

3) Don’t be too prescriptive
You need to make it clear to your participants what you need them to do, but you also need to leave them the freedom to act in a natural way. The whole point of user testing is to see what kind of behaviour your site elicits; if you’ve been too strict with your task, then all you’ll find out is how well a participant can follow instructions.

It’s often useful to ask participants to imagine they’re using the website to help someone else – to buy a present, or to find some information to pass on. This way, participants can still act naturally even if they don’t feel the site or service is relevant to their own situation and needs. Without that framing, a participant testing your hotel-booking website might say, “Of course, I wouldn’t do this in real life because I NEVER stay in hotels. Self-catering all the way, me.” Everything that follows is then little more than an act, as the participant focuses more on ticking the boxes of the task than on showing what they’d really do.

If you ask your participant to help someone else, for example “Book a hotel for a friend’s visit” (being sure to ask them to think of a specific friend) then it gives them a genuine person to act for, which is usually much more effective than asking them to second-guess a make-believe version of themselves. Forcing participants to be someone they’re not is never the best way to draw out authentic behaviour.

The other important part of this tip is about not giving too much away. Don’t use any of the wording from the site itself in your activity questions. The classic example is asking participants to imagine they have books scattered around the floor of their living room, rather than specifically directing them to the term “bookshelf.”

4) Give participants false details
Lots of site functionality requires participants to register with the site or complete some other kind of form using their personal details. Participants are almost always reluctant to use their real details so they make up dummy names and email addresses on the spot.

Sometimes this skews the testing of forms as validation messages go crazy when faced with names of only one character, invalid email address formats or telephone numbers with the wrong number of digits. Participants can be left feeling much more frustrated than they would be if they’d completed the form with their real details.

It can also cause real problems if participants need to re-use their test details later. I once saw a whole batch of participants abandon a task because they were asked to register and then log in again later in the same activity – every one of them had forgotten the made-up email address they’d stuck in the registration form.

This one’s a pretty easy one to get right – just remember to give your participants dummy details to use.
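If you’re preparing details for a large batch of participants, it can help to generate them programmatically so each participant gets a set that will pass typical form validation and can be looked up again later in the task. Here’s a minimal sketch of that idea in Python – the names, the `example.com` domain and the phone format (the UK’s reserved 07700 900xxx fictional range) are all illustrative choices, not anything prescribed by a testing tool:

```python
import random
import string

def make_dummy_details(participant_id):
    """Generate one participant's dummy details.

    Seeding the generator with the participant ID makes the details
    reproducible, so the same participant can be given the same email
    and password again later in the task.
    """
    rng = random.Random(participant_id)
    first = rng.choice(["Alex", "Sam", "Jamie", "Chris", "Morgan"])
    last = rng.choice(["Taylor", "Jones", "Smith", "Brown", "Davies"])
    email = f"{first.lower()}.{last.lower()}{participant_id}@example.com"
    # 07700 900000-900999 is reserved in the UK for fictional use
    phone = "07700 900" + "".join(rng.choice(string.digits) for _ in range(3))
    password = "Test-" + "".join(rng.choice(string.ascii_letters) for _ in range(8))
    return {"name": f"{first} {last}", "email": email,
            "phone": phone, "password": password}
```

Calling `make_dummy_details(7)` twice returns the same details both times, which is exactly the property you want when a task asks participants to register and then log in again later.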

5) Consider your task’s different parts

This is perhaps a bit more of an obscure one, but I’ve seen it cause a few problems, so it’s still worth mentioning: if your task has multiple activities, think about how they flow on from one another.

I once saw a task where participants were asked to book an appointment as one activity and cancel it as another. The only problem was that the cancelling activity came right after the booking part, so all the users still had their booking confirmation page on screen. The “cancel this appointment” button was pretty obvious from there, but of course users generally don’t want to cancel an appointment in the same session that they’ve booked it, so this was a bit useless. What the task really needed to be testing was how easy it was to cancel the appointment from the site’s home page.

This kind of thing is easy to guard against but it’s just something to bear in mind – make sure you get participants to close the site between activities.

Overall, I think it’s important to recognise that not everything mentioned in the sessions will be that relevant or helpful. The “think aloud” nature of the tests often means you’ll get a lot of commentary that sometimes won’t be much more than the participant filling the silence. As long as you don’t fall into the trap of reading into everything that comes out of the participants’ mouths, you should get some useful stuff from this technique.
