At this point, most business owners and webmasters understand the need for a great User Experience, but many don’t know how to get there. Refining any product or website takes experimentation, testing and research. Research is the heart of user experience, and it is this type of exploration that distinguishes UX from other types of design. There are many ways we can gather information about the products that we build and how people use them, but usability testing is the most basic and the most useful.

Unfortunately, testing is often skipped when designers and developers are rushing to launch changes. I’ve heard from plenty of clients who skipped usability testing because they thought it was either too expensive or too time consuming, so today I am going to show you ways to get this type of feedback quickly and inexpensively.

OK, It’s Easy, But Why Should I Test?

Usability testing allows us to quickly identify real problems with the product we are building. Simply observing real users perform tasks we have created for them is one of the easiest ways to overcome design hurdles, and it sets the foundation for a long user-feedback cycle that can be used to improve your products and services. So how do we observe these users?

There are three primary methods of usability testing: remote testing (moderated and unmoderated), lab testing, and guerrilla testing. I’ve written about the basics of each sort of usability testing, but today I want to focus on how you can actually start user testing. Since most of us won’t have access to a laboratory setting, I’ll focus on remote and guerrilla testing.

Remote testing is a favorite tactic of ours because you can conduct tests without being in the same physical location as the test participant, which is convenient if the people you need to test are geographically spread out, or if you need to get test results quickly. Moderated remote testing can be done simply by using a screenshare tool and chat functionality to perform a test, while unmoderated tests can be performed with third-party vendors that will handle recruitment and payment of users. All you have to do is prepare tasks, and the third party will provide you with a recording of the session.

And just like the name implies, guerrilla testing can be done by approaching potential testers with a laptop or tablet in a safe environment.

Both forms of testing can be accomplished very quickly by following these five simple steps. First, I always take the time to create a low-fidelity test plan document because it will save time when you need to deliver the study results.

Usability Testing Step 1 – Define Intention & Identify Your Goals

In this step, you are trying to identify what the product owner would like to uncover with this study. A good place to start is to identify what pain points the Product Owner is currently feeling with the site (like lower conversions) or what main task users should be performing on the site.

It is also important to identify how the information you uncover from the test will ultimately be used.  Is it being used for an entire site redesign or small changes that can create quick wins?
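A low-fidelity test plan doesn’t need to be fancy; it just needs to pin down the intention, the goals, and how the findings will be used. Here is one way to sketch that document as a simple structure (the field names and example values are my own, not a standard format):

```python
# A minimal, hypothetical test-plan structure. Field names and values
# are illustrative -- adapt them to your own study.
test_plan = {
    "intention": "Find out why checkout conversions dropped",
    "goals": [
        "Can users locate the product they want?",
        "Can users complete checkout without assistance?",
    ],
    "scope": "quick wins",  # or "full redesign"
    "deliverable": "lightweight action items for the dev team",
}

def summarize(plan):
    """Return a one-line summary, e.g. for a kickoff email."""
    return (f"{plan['intention']} "
            f"({len(plan['goals'])} goals, scope: {plan['scope']})")
```

Writing this down first keeps Step 5 honest: you already know what deliverable the study has to produce.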

Usability Testing Step 2 – Recruit Users

It is always best to use real users.  Of course, that’s not always possible, so an alternative is to recruit participants who match your target audience as closely as possible.

Here is an example of the type of custom recruitment you can do with a third-party service.  The example below is from a recent study I conducted.  Prior user research gave us a pretty good idea of our user definitions, including ethno-demographic, behavioral and attitudinal information.



It was crucial that the test participants matched this target audience, so we also used a screening form to qualify and disqualify candidates.
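A screening form is just a set of qualify/disqualify rules applied to each candidate. As a rough sketch (the criteria below are hypothetical, not from the actual study):

```python
# Hypothetical screener: qualify or disqualify candidates against the
# target-audience definition from prior research. All criteria here are
# made up for illustration.
def qualifies(candidate):
    return (
        25 <= candidate.get("age", 0) <= 45            # demographic fit
        and candidate.get("shops_online_monthly", False)  # behavioral fit
        and not candidate.get("works_in_ux", False)    # exclude industry insiders
    )

candidates = [
    {"name": "A", "age": 31, "shops_online_monthly": True, "works_in_ux": False},
    {"name": "B", "age": 52, "shops_online_monthly": True, "works_in_ux": False},
    {"name": "C", "age": 28, "shops_online_monthly": True, "works_in_ux": True},
]
recruited = [c["name"] for c in candidates if qualifies(c)]
```

Disqualifying industry insiders is a common screener rule: people who build interfaces for a living don’t behave like typical users.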


With that, we had our target audience, and you may have noticed I conducted only 3 usability tests. Believe it or not, that’s all we needed after we had the right users. Unlike surveys, we don’t need a huge sample size since the purpose of user testing is not to provide statistical relevance.  Testing feedback is meant to give us the insight to make the design better, not to uncover traits of our users.

The Nielsen Norman Group is a ground-breaking consulting group and leading voice in the UX community.  Their research has determined that you can get the best results from a usability study by testing no more than 5 participants.

So let’s take a look at this.  This graph demonstrates the insights you gain with each participant you add to the study.


Let’s tackle the painfully obvious fact first: zero test participants yields zero feedback. As soon as you have gathered feedback from your first test participant, however, your product knowledge increases by almost a third of all the information available to learn. If you’re unconvinced about the ease of usability testing, let that sink in: a single test can reveal almost a third of the problems with your site.

Each new user uncovers additional problems, but you’ll see that after 5 tests, you’ll quickly experience diminishing returns. You will continue to see the same problems over and over, so the insights become less valuable. There is no real benefit to continue to add more users, because you do not gain anything by observing the same thing over and over again.
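That curve comes from the Nielsen–Landauer model: the proportion of usability problems found with n participants is 1 − (1 − L)^n, where L is the share of problems a single participant reveals (about 31% in their research). A quick calculation shows the diminishing returns:

```python
# Nielsen & Landauer model of problem discovery in usability testing.
# L is the average share of problems a single participant reveals (~31%).
L = 0.31

def problems_found(n, L=L):
    """Proportion of usability problems found after testing n participants."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

One user gets you roughly a third of the problems, five users get you past 80%, and the remaining participants mostly re-confirm issues you have already seen.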

“Testing one user early is better than testing 50 near the end.” ~ Steve Krug, author of “Don’t Make Me Think”

Usability Testing Step 3 – Prepare the Tasks

Writing tasks is the trickiest part of usability testing. The goal is to craft tasks that answer the questions we identified in Step One – setting intention and defining goals – so every task you give users should map back to those questions.

However, we must be mindful that most human cognitive activity occurs in the subconscious.  Harvard Business School professor Gerald Zaltman talks about this in his book “How Customers Think”:

There is a huge difference between what people do and what they say: 95% of our cognitive tasks occur subconsciously. If we ask someone why they did something, they will not be able to tell us reliably, but they will be able to tell us how they felt about a particular task.  So it is more effective to ask actionable questions and then observe behavior than it is to ask someone why they did something.

With that in mind, how do we create tasks? First, understand that there are two types of tasks: open and closed. Closed tasks are specific to what users need to do, and they allow us to measure whether an interaction passed or failed.

Open tasks, on the other hand, offer less specific instructions, which allows users to explore freely. This helps you discover the areas of a site that spontaneously attract your customers, as well as what matters most to users.  But since participants control the direction of the task, you may not get the feedback you are looking for.  Just like anyone, users can easily wander onto tangents or down a rabbit hole that does not provide useful information.  It also prevents you from assigning a success rate, because there is no right or wrong way to explore.

So when you are writing your tasks, start by setting the stage with a scenario that encourages the participant to take ownership of the tasks as if they were truly performing them. The Nielsen Norman Group (get used to that name, folks) provides 3 tips for writing tasks that will drastically improve the outcome of your usability studies.

(1) Make the Task Realistic

User goal: Browse product offerings and purchase an item.
Poor task: Purchase a pair of orange Nike running shoes.
Better task: Buy a pair of shoes for under $40.

(2) Make the Task Actionable

User goal: Find movie and show times.
Poor task: You want to see a movie Sunday afternoon. Go to the site and tell me where you’d click next.
Better task: Use the site to find a movie you’d be interested in seeing on Sunday afternoon.

(3) Avoid Clues and Describing the Steps

User goal: Look up grades.
Poor task: You want to see the results of your midterm exams. Go to the website, sign in, and tell me where you would click to get your transcript.
Better task: Look up the results of your midterm exams.
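Tasks like these can be recorded alongside their success criteria, so that closed tasks stay measurable and open tasks are explicitly marked as exploratory. A hypothetical structure:

```python
# Hypothetical task records: closed tasks carry a measurable success
# criterion; open tasks are exploratory and aren't scored pass/fail.
tasks = [
    {"prompt": "Buy a pair of shoes for under $40.",
     "type": "closed",
     "success": "Order confirmation reached with an item priced <= $40"},
    {"prompt": "Explore the site and tell me what catches your attention.",
     "type": "open",
     "success": None},
]

# Only closed tasks contribute to a pass/fail success rate.
scorable = [t for t in tasks if t["type"] == "closed"]
```

Keeping the success criterion next to the prompt also makes Step 5 easier: you grade each session against criteria written before the test, not after.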

Usability Testing Step 4 – Conduct/Deploy the Test

When using a third-party service, deploying the test is as simple as paying for the test and clicking the “Have users start testing” button.  However, sometimes the study you are conducting requires a little more interaction, so remote moderated testing is the better choice.  Moderated testing is useful when the tasks you craft need guidance, or when you want to dig deeper into why a participant took a particular action. For these tests, I typically use a screenshare tool, because I can easily hand control over to the participant and record both the audio and screen activity.

Guerrilla testing is the most cost-effective way to get user feedback, and it’s the easiest method to employ. Guerrilla testing requires the same general preparation as remote testing.  The primary difference between remote and guerrilla testing is where the test is conducted and who participates.  I usually start guerrilla testing within the network of people I know, and then I reach out to strangers. A great way to get guerrilla testing done fast is to set up your laptop at a local coffee shop and ask patrons to look over your site in exchange for a cup of coffee.

To remove bias, it is always best to follow a script.  Even though it may sound a little clunky at first, a script ensures you give every participant the same information.  If you are moderating the test, you will probably need to share some important information with the participant up front.

Just like crafting your tasks, you should prepare a script that helps guide users through the test. Steve Krug’s book “Don’t Make Me Think” provides an excellent base for creating your own moderated testing script.

Usability Testing Step 5 – Analyze and Deliver Results

So now that you’ve run the test, you will have either a video supplied by a third party, your own video (if you moderated the test), or your notes from guerrilla testing. When reviewing the videos and notes, you are trying to connect the dots between the data you have just gathered and the improvements you need to make in the product.  You can uncover these design opportunities by paying close attention to the gaps between what people say and what you observe them doing.

I annotate the videos as I am watching them to pinpoint critical moments that provide insightful information.

Then, it’s time to stand and deliver, and you’ve got two primary ways of showing your results. You will either be delivering a professionally prepared findings report with recommendations for improvement to the product owner, or you will be delivering lightweight actionable items to your development team.  This is why it is really important to establish a test plan ahead of time, so that you know what type of deliverable you will be creating.

As UX professionals, our ability to analyze data and use it to improve design makes us very valuable.  Whether you are delivering your finding to a product owner or a member of your own team, it is a best practice to rate the severity of the usability issues you uncover on an impact scale and include recommendations for improvement.

Items rated HIGH can contribute to a user’s inability to perform critical functions within the site.  Items rated MEDIUM can contribute to errors or frustration.  Items rated LOW are minor nuisances that do not affect the user’s ability to perform critical interactions on the site.
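Encoding the scale keeps ratings consistent across findings and lets you sort the report so it leads with what blocks users. A sketch (the issues and recommendations below are invented examples):

```python
# Hypothetical findings list rated on the HIGH / MEDIUM / LOW scale above.
SEVERITY_ORDER = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

findings = [
    {"issue": "Footer links use low-contrast gray text",
     "severity": "LOW",
     "recommendation": "Increase contrast"},
    {"issue": "Coupon-code error blocks the entire checkout form",
     "severity": "HIGH",
     "recommendation": "Validate the code inline instead of failing the form"},
    {"issue": "Search results appear unsorted",
     "severity": "MEDIUM",
     "recommendation": "Default to relevance sorting"},
]

# Lead the report with the issues that block critical tasks.
report = sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])
```

Pairing every rated issue with a recommendation, as above, is what turns raw observations into the actionable deliverable your test plan promised.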



352 is an innovation and growth firm. Leading companies hire us to find billion-dollar opportunities, build killer new products and create hockey-stick growth. We bring grit and new-fashioned thinking to innovation, digital development and growth marketing.