If your task is to perform usability testing of SaaS applications, having clients who use the app thousands of miles away from you may leave you feeling helpless. With a proper methodology, however, this disadvantage can be a blessing in disguise.
Unlike traditional corporate websites, e-commerce solutions, and social networking sites aimed at the general population, SaaS (Software-as-a-Service) web apps are often collections of tools for collecting, entering, manipulating, and using data for commercial purposes, designed for one or a few very specific user types that usually correspond to job titles.
Designing user interfaces for SaaS can be a very satisfying process if you start by researching and creating personas for each role, because your end users are so specific that you can concentrate on representing the interests of exactly those people. The challenge, however, emerges when you need to test your design and realize that:
- unlike with conventional websites, you cannot test random people, such as your friends and colleagues, because their demographics are irrelevant; it would basically be like asking your grandmother to test the new design of a surgical scalpel
- your pool of actual users is probably very small, far away, sometimes in a different time zone and most certainly busy; you need to acknowledge and respect this, and finally
- you need to sell the benefits of your work and how it will impact the user’s day-to-day tasks, announce the intent to run the tests far in advance, and be ready to work around their schedule to make this happen.
Setting the Goals of Your Testing
Do you know what you are trying to learn? Can you write it down as a set of questions? For example:
If I tell them to add a charge, will they know where to go and where to click?
If I ask them to delete an account, will they know they need to remove those charges first?
If they don’t remember, how will they recover, and will the system provide any assistance here?
If I ask them to describe what just happened, how can I quantify their response and experience?
By asking yourself these questions, you are slowly building your testing script and mentally preparing yourself for the possible outcomes of those tasks, which is essential for being a good, calm, and confident test facilitator.
Scripting and Preparing for the Remote Test
Once you have chosen the areas of the application you want to cover, create a testing script in the form of a checklist that you will use for all your tests (a minimal sketch of such a script follows the list below). This format lets you:
- Keep track of what you have covered and where you are; timing your script and noting timestamps can be a useful tool during the test, giving you a sense of elapsed time and whether you should pick up the pace
- Group tasks together, so you can ask qualitative, open-ended questions at the end
- Skip over a set of tasks in case a user gets stuck or the business does not use that area; think of skipping the financial sections for a non-profit client that does not charge its customers, and
- As a test facilitator, get into a better flow with each test you perform, because this type of testing is unnatural enough even without you sweating on the other side.
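To make this concrete, here is a minimal sketch of such a script as structured data, written in Python. It is purely illustrative: the group names, tasks, and fields are my own assumptions, not part of any particular testing tool, but they show how task grouping, skippability, and target timestamps can live in a single checklist.

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str    # what you ask the participant to do
    expected: str  # what a successful outcome looks like

@dataclass
class TaskGroup:
    name: str
    tasks: list[Task]
    skippable: bool = False  # e.g. financial tasks for a non-profit client
    target_minute: int = 0   # where you hope to be on the clock
    debrief: str = ""        # open-ended question asked after the group

# Hypothetical script built from the goal questions above.
script = [
    TaskGroup(
        name="Charges",
        target_minute=5,
        skippable=True,  # skip for clients who do not charge their customers
        tasks=[
            Task("Add a charge to this account.",
                 "Finds the charges area without prompting."),
            Task("Delete this account.",
                 "Realizes the existing charges must be removed first."),
        ],
        debrief="What did you think about what you just did?",
    ),
]

# Print the checklist you will keep in front of you during the session.
for group in script:
    flag = " (skippable)" if group.skippable else ""
    print(f"[{group.target_minute:02d} min] {group.name}{flag}")
    for task in group.tasks:
        print(f"  - {task.prompt}")
```

The same structure works just as well on paper or in a spreadsheet; the point is that grouping, skip rules, and timing live in one place.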
You will want the user to share their screen, so go ahead and set up your technology of choice; there have never been so many paid and free remote screen sharing, UI testing, screen recording, webcam capturing, two-way audio, and video streaming services available to bridge the physical gap between two computer users. Use them to your liking, but don’t forget two important things: the user’s consent to being recorded, and technical redundancy in session recording. If it turns out you can’t get these two things, ask a co-worker to sit in on the session and take notes independently; in the context of user testing, a second pair of eyes is extremely helpful.
I personally use the join.me screen sharing service, which comes with a phone number that I dial from a speakerphone and the user dials from their phone. I also record the session using my fully charged cell phone, which is in airplane mode and has sufficient storage available.
After setting up your testing environment, I recommend doing a dry run with one of your colleagues. Just as you would when preparing for a presentation, you need to be ready to conduct a remote test. Do a full recording of this session, time it, annotate the checklist, analyze the content, and adjust accordingly. Once you have done this, you know what to communicate to the user.
As a last step, communicate with the user. By this time, you have exchanged a few emails in which you have outlined the purpose of the test (remember, you are testing the system, not the user) and why you selected them. In this communication, remember to include:
- a short breakdown of what they will need to do before and during the test; ask them to compile a list of questions that you will answer after the test
- instructions on how to prepare their computer for screen sharing
- how long the test will take, and
- the legal side; you need their consent to being recorded, and they might need you to sign a non-disclosure agreement, since you will be looking at live, and possibly sensitive, production data.
Running the Test
Breathe. Get yourself a glass of water. Ensure nobody enters the room from which you are running the test. Have your check list ready. Have a post-it with the person’s name, phone number, and organization name in front of you.
After a short introduction, and once screen sharing and recording are working, reiterate the time frame and purpose of the test, and try to explain, in a personal manner, why this is so important to you: this person has taken time out of their busy schedule to help you do your job, so remember to appreciate and mention this. Explain the mechanics of the Thinking Aloud test, and acknowledge that it is not a natural setting, but that it really helps to capture that “raw stream of thoughts”, and that the real problems are sometimes revealed this way, before actual opinions are formed. Remind them that you will go through their list of questions after the test.
Now start with the first thing on your list and work your way down. While at it:
- Take notes as bullet points (a small sketch of a timestamped note-taker follows this list); don’t worry, you can always go through the recording after the test,
- Watch the clock and your timestamps; people will go on tangents or have questions. Note these down and, if you feel they might derail the test or break the flow, ask whether it is okay to answer them after the test, but be aware of the time,
- Don’t interrupt or bias the user in any way; it is okay if they cannot complete a task,
- After completing a group of tasks, before moving on, ask what they thought about what they just did; you can analyze the response later, but do not try to justify, defend, or give your personal opinions at this point; a simple ‘thank you’ and an assurance that they are doing well should suffice.
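If you type your notes, a tiny helper that stamps each bullet with the minutes elapsed since the session started spares you the clock arithmetic while keeping your eyes on the screen. Here is a minimal sketch of the idea (the class and its behavior are my own invention, not an existing tool):

```python
import time

class SessionNotes:
    """Bullet-point notes stamped with minutes since the session start."""

    def __init__(self):
        self.start = time.monotonic()
        self.bullets = []

    def note(self, text):
        elapsed_min = (time.monotonic() - self.start) / 60
        self.bullets.append((elapsed_min, text))
        print(f"[{elapsed_min:4.1f} min] {text}")

notes = SessionNotes()
notes.note("Hesitated before finding the charges area")
notes.note("Tangent: asked about exporting to Excel; answer after the test")
```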
After completing the Thinking Aloud portion of the test, you can break the structure and allow the user to ask questions. At this point, you are basically providing technical support/troubleshooting, which in the majority of cases means that you are about to discover an issue, while at the same time observing the person use the system.
Evaluating the Results and Reflecting on the Test
After conducting four or five tests, a pattern should emerge, and you will have a better understanding of your system. I tend to quickly sort my findings into categories such as “Issue, IA”, “Issue, UI”, “Idea, UI”, “Feature unknown”, and “Feature not used”, which then serve as the foundation for my usability report. I did not notice anything peculiar about evaluating remote usability test results, so you can interpret them using your usual methods.
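Mechanically, this categorization is just a tally. A minimal sketch, assuming each finding has already been tagged with one of the categories above (the findings themselves are invented examples):

```python
from collections import Counter

# Hypothetical tagged findings collected across four or five sessions.
findings = [
    ("Issue, IA", "Charges area hidden under account settings"),
    ("Issue, UI", "Delete button sits too close to Save"),
    ("Issue, UI", "Error message does not mention the open charges"),
    ("Feature unknown", "Nobody discovered the bulk export"),
    ("Idea, UI", "Inline hint when deleting an account that has charges"),
]

# Counting per category shows where the usability report should focus first.
counts = Counter(category for category, _ in findings)
for category, count in counts.most_common():
    print(f"{category}: {count}")
```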
Interestingly, I did discover that because you are not physically present and the person is on the phone, explaining what they are doing on their computer feels more natural to them, which makes conducting a remote Thinking Aloud test easier.
Also, the fact that users are sitting in their own workspace, uninterrupted by your presence, using their own computer the way they usually would, makes this exercise a good field study substitute. The ability to follow their mouse pointer and watch them use the system, make mistakes, miss elements, open multiple tabs, and use other tools on their computers helps reveal many UX issues and missing features that they either did not verbalize or simply managed to work around.
This test can be performed in the production environment as well as in beta environments, where a subset of users sees the new features for the first time, and we know that the cost of fixing issues found at this stage is significantly lower than the cost of fixing them post-release.
Remote usability testing, combined with the right participants, can be a great tool that can be employed quickly, often, and at little (or no) cost at many stages of software development. It is Agile-compatible, and it can help teams discover issues and build better systems.