I am back as I promised. This time with a plan: a foolproof plan covering "how" rather than "why". If you are not sure what I am talking about, please check my previous blog post on Why is it important to do usability testing?
Preparing a test plan
The first step is to create a test plan. This step is essential for determining what we are testing and what we will ask our users to do. Before preparing a test plan, we need to:
- Define what areas to concentrate on
- Determine potential usability issues
- Write a test plan
- Determine what tasks will be tested
A test plan should include:
- Test task (scenario)
- End state (answers)
Well-formed task scenarios make for smoother tests. Here are some quick phrasing templates to ponder over:
- "You (motivation)… X" — give the participant a realistic motivation
- "What is…", "Which X…", "How can…"
- Ask for specifics to see whether participants can find them (e.g. "Is X available?")
- "According to X (our app), what…"
- End the scenario with a question
Sequence the tasks in order of easy (orientation tasks), then difficult, then moderate, so that users don't get frustrated at the beginning of the test. This can happen if they face any difficulty in understanding their role.
Interact with participants before the test starts
- Explain to your participants that the objective is to test the software, not their intelligence; otherwise they may feel that their ability is being judged.
- Explain how the test material and record will be used
- Encourage participants to think aloud while using the product
- Prepare a pre-test questionnaire and a post-test questionnaire
Prepare a checklist for Usability Testing
- Success rates
- Time on task
- Errors made in performing the task
- Confusion(unexpected user actions)
- System features used / not used
- System bugs or failures
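The checklist above can be captured as a simple per-attempt record so the numbers fall out automatically at the end of a session. A minimal sketch in Python (the class and field names are my own illustration, not part of any standard kit):

```python
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    """One participant's attempt at one test task (illustrative fields)."""
    task: str
    completed: bool                 # success or failure on the task
    seconds_on_task: float          # time on task
    errors: int = 0                 # errors made while performing the task
    confusions: int = 0             # unexpected user actions
    features_used: list = field(default_factory=list)
    bugs_seen: list = field(default_factory=list)

def success_rate(observations):
    """Fraction of attempts that ended in task completion."""
    done = sum(1 for o in observations if o.completed)
    return done / len(observations)

# Two invented attempts at the same task, for illustration only
obs = [
    TaskObservation("find pricing page", True, 42.0, errors=1),
    TaskObservation("find pricing page", False, 95.0, errors=3, confusions=2),
]
print(success_rate(obs))  # 0.5
```

Recording observations in a uniform structure like this makes the later analysis step (success rates per task, average time on task) a matter of a few list comprehensions rather than re-reading notes.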
During the test
- Record the techniques and search patterns participants employ when attempting to work through a difficulty. For recording, we can use software like ScreenFlow or Silverback (Mac only)
- If participants are thinking aloud, record assumptions and inferences being made.
- Do not interrupt participants unless absolutely necessary
- If participants need help, respond carefully:
- Provide encouragement or hint
- Give general hints before specific hints
- Record the number of hints given
- Watch carefully for the signs of stress in participants:
- Sitting for a long time doing nothing
- Blaming themselves for the problems
- Flipping through documentation without really reading it
- Provide short breaks when needed
- Maintain a positive attitude, no matter what happens
- Keep a neutral tone with users: never let them know that you are excited, and don't let your body language signal that they are doing badly
- If users ask questions seeking confirmation that they are doing it right, never give direct answers. Use neutral acknowledgements like "Okay, uh huh", or turn the question around: "Is that what you were looking for?"
- Let them struggle and don’t over-moderate
After the test
- Hold a final interview with the participants: tell them what was learned during the test, and if you noticed any discomfort or any signal of confusion through "think aloud" during the test, ask them more about those moments.
- Provide a follow-up questionnaire that asks participants to evaluate the product or the tasks performed. The post-test questionnaire is generally used to gauge what users think of their performance. Also include questions like "What is the one thing that you would change…?"
- If video recording is required, get written permission and respect participants' privacy
- Do not generalize a single opinion into a finding unless 5-8 users indicate the same problem
Analyzing the collected data
- Video Data
- In less formal studies, video can be very useful for informally showing managers or disbelieving system designers exactly what problems users encounter.
- In more formal studies, two types of analysis can be performed: task-based analysis and performance-based analysis.
Task-based analysis is used to determine how users tackled the given tasks, where the major difficulties lie, and what can be done about them.
Performance-based analysis is used to obtain clearly defined performance measures from the data collected (task timing, frequency of correct task completion, frequency of errors, productive vs. unproductive time)
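The performance measures named above reduce to a few aggregations over the raw session logs. A minimal sketch, with the log structure and all numbers invented purely for illustration:

```python
from statistics import mean

# Raw per-attempt logs: (task, completed, total_seconds, errors, productive_seconds)
# The tuple layout and values are assumptions for this example.
logs = [
    ("checkout", True, 120.0, 0, 110.0),
    ("checkout", True, 200.0, 2, 150.0),
    ("checkout", False, 300.0, 5, 90.0),
]

completed = [entry for entry in logs if entry[1]]
completion_frequency = len(completed) / len(logs)          # correct completions
mean_task_time = mean(entry[2] for entry in logs)          # task timing
error_frequency = sum(entry[3] for entry in logs) / len(logs)
productive_ratio = sum(entry[4] for entry in logs) / sum(entry[2] for entry in logs)

print(completion_frequency, round(mean_task_time, 1),
      round(error_frequency, 2), round(productive_ratio, 2))
```

Each printed value maps to one of the measures in the list: completion frequency, mean task time, errors per attempt, and the productive share of total time.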
- Analyzing Questionnaire
- The quantitative type of questionnaire is often used when statistical data is required to back up the results of the test.
- These questionnaires have some form of rating scale associated with them.
- There are a number of different rating scales-
- Three-point scale, e.g. "Do you know how to copy text?" with answers YES, NO, DON'T KNOW
- Six-point scale, e.g. "Rate on the following scale" from 1 ("very useful") to 6 ("of no use")
- Likert scale i.e. “The help function in this system gives solutions to complex problems”, “Strongly Agree, Agree, Slightly Agree, Neutral, Slightly Disagree, Disagree, Strongly Disagree”
- Semantic differential scale
Once the questionnaire has been given to the selected population, the responses obtained on the different rating scales are converted into numerical values and statistical analysis is performed.
Usually, mean and standard deviation from the mean are the main statistics used in the analysis of most survey data.
Design the questionnaire to be as easy as possible and keep it short; aim for no more than two sides of paper.
Strategies for analyzing the test results
Deciding upon the strategy to adopt will depend on the circumstances of the usability testing, including the time available and main objectives of the testing.
Top-down analysis begins with an overall assessment of the user interface from the questionnaire, summarizing the major strengths and weaknesses.
This overall assessment can be collected from the results of the set of general questions on system usability.
General questions on the system
Please provide your views on the usability of the website by answering the questions below. There are no right or wrong answers.
- What are the best aspects of the system for the users?
- What are the worst aspects of the system for the users?
- Are there any parts of the system that you found confusing and difficult to fully understand?
- Were there any aspects of the system that you found particularly irritating although they did not cause major problems?
- What were the most common mistakes you made while using the system?
- What changes would you like made to the system to improve it from a user's point of view?
- Is there anything else about the system that you would like to add?
The answers to the general questions may highlight particular aspects of the interface causing problems, and a more detailed investigation may then be carried out to examine other instances where those aspects have been highlighted within the questionnaire.
A bottom-up analysis is a more detailed analysis that investigates the responses to each question within the questionnaire. This type of analysis will enable a more comprehensive picture of the interface to be generated. It is likely to yield a highly detailed specification of aspects of the interface requiring improvement, amendment, addition, etc. A detailed analysis of each criterion-based question will enable a summary of the interface to be drawn in terms of each criterion.
I have made a kit for you which you can use while conducting usability testing. You can download it from here. I hope you find it useful. Let me know if you have any queries by dropping a comment and I would be happy to help.
This post first appeared on C.R.A.P. Design Principles To Improve Visual Appeal, please read the original post: here