Usability Testing (Chapter 1)

An introduction to usability testing

Introduction

Before I begin, I just want to say that I do not consider myself to be an expert in usability testing, user interface design or just about any other area.  I do, however, have some experience with these areas (both positive and negative) and I decided to write this article to share my experience with others.  The concepts and ideas I discuss in this article are not earth-shattering and many of you will already know all of this information.

My current projects have involved a need for wide consumer acceptance and usability.  One project will need to be shipped nation-wide and the other will be open to web-users from just about anywhere, anytime.  The details of these projects are not important, but what is important is the process I used to test, refine and re-test these projects and the ongoing assessment requirements I have uncovered.

Involving software developers in usability testing and UI design is (IMHO) critical to achieving maximum usability.  Software developers have an intimate understanding of what is possible and what is not.  We also understand what is involved in making a particular solution work.  Our involvement in the early stages of UI design and in later usability testing can mean the difference between a mediocre software package that does the job and an excellent software package that users appreciate, understand and use.

I have learned from experience that it is important to have a broad knowledge of the technologies, techniques, utilities and tools available to me.  With this knowledge, I can choose the appropriate language (VC++, VB, C#, ASP, PHP, etc.) and technologies (ADO, ODBC, JDBC, etc.) to achieve a desired result.  One area of knowledge that I have neglected in the past is non-developer product review.  I believe that this is an important area as well.  In the past year I have spent an enormous amount of time reviewing 3rd party consumer apps, games, business software, etc.  By doing this, I have gained a broader scope of user-oriented knowledge.  I can see how other companies have solved particular problems and learn from their successes and mistakes.

What is Usability Testing?

Usability testing is the process of working with end-users, directly and indirectly, to assess how users perceive a software package and how they interact with it.  This process will uncover areas of difficulty for users as well as areas of strength.  The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability.

This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer supported feedback.  Computer supported feedback is often (if not always) left out of this process.  It can be as simple as a timer on a dialog to monitor how long users take to complete it, plus counters to determine how often certain conditions occur (e.g. error messages, help messages, etc.).  Often this involves only trivial modifications to existing software, but it can yield a tremendous return on investment.
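As an illustration, here is a minimal sketch in standard C++ of the kind of instrumentation I have in mind (the class name, log format and file name are mine for the example, not taken from the actual project): time how long a dialog stays open, count how often named conditions occur, and append one line to a log file when the dialog closes.

    #include <chrono>
    #include <fstream>
    #include <map>
    #include <string>

    // Hypothetical helper (not from the actual project): times how long a
    // dialog stays open and counts named events (error messages, help
    // requests, etc.), appending one line to a log file when it is destroyed.
    class UsageMonitor
    {
    public:
        explicit UsageMonitor(const std::string& dialogName)
            : m_name(dialogName), m_opened(std::chrono::steady_clock::now()) {}

        // Call whenever a condition of interest occurs, e.g. Count("error_shown");
        void Count(const std::string& event) { ++m_counters[event]; }

        ~UsageMonitor()
        {
            using namespace std::chrono;
            const long long secondsOpen =
                duration_cast<seconds>(steady_clock::now() - m_opened).count();

            std::ofstream log("usage.log", std::ios::app);
            log << m_name << "\tseconds=" << secondsOpen;
            for (const auto& counter : m_counters)
                log << '\t' << counter.first << '=' << counter.second;
            log << '\n';
        }

    private:
        std::string m_name;
        std::chrono::steady_clock::time_point m_opened;
        std::map<std::string, int> m_counters;
    };

In a real application such an object would be created when the dialog is initialized and destroyed when the dialog closes; the point is simply that a few lines of timing and counting code can answer questions that users never think to report.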

Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability.  These changes should be directly related to real-world usability by average users.  As much as possible, documentation should be written supporting changes so that in the future, similar situations can be handled with ease.

When to Begin

This process can and should begin as early in the development process as possible.  It need not involve a large number of users; in fact, 3-10 users is probably ideal, depending on the scale of the product.  These users should ideally be intended users of the product (i.e. alpha/beta testers) and should represent a decent cross-section of the users targeted by the application.

The real question should be "when to end".  I believe that this is an incremental process with many beginnings and endings.  Ultimately, the job is not done until the product has reached the end of its lifecycle (not just its development cycle).  Each incremental step should be relatively short, with changes and supporting documentation made often (up until initial delivery of the product).  Once a product has been delivered it can be difficult to make these kinds of changes without affecting active users, so changes should then be considered more carefully, with special concern for how active users will be affected and how future users will benefit.

However, it is never too late to start.  Even if you are nearing the end of the development cycle, usability testing can still yield enormous results.  Sometimes even minor changes to the UI, help system, reports, etc can make the product more appealing to users.

How to Begin

Usability testing can be quite simple.  I think there are 4 basic ingredients that are essential to success.  These are:

  1. The usability testing person/team needs to include a software developer who is open-minded about changes and not offended by criticism.  The goal of usability testing is not to criticize, but to improve and learn.  If any member of the team is not ready to receive criticism with an open mind, the testing will almost certainly fail.  This person needs to have a good working knowledge of the workflow process the application is designed to facilitate and needs to have good communication skills.  Good note-taking skills are also essential.
  2. The users selected as test subjects need to be average users (not all power users, and not all entry-level users).  They should understand what the application is designed to do and should be able to communicate their needs reasonably well.  Often, these users will not be able to communicate their needs in technical terms, but they will be able to identify the problems they are having.
  3. The development person/team needs to be prepared to make changes on an incremental basis in a relatively fast-paced environment.  The best situation is to make needed changes quickly so that those changes can be incorporated into the continuing testing process.
  4. Patience.  The usability testing and refinement process can take some time and will sometimes go in the wrong direction.  By keeping changes small and incremental, it is usually easy to backtrack and rework the problem without significant setbacks.

As I mentioned in the introduction, there are 3 methods of feedback that need to be incorporated into your testing.  These are:

  1. Direct user feedback.  This type of feedback usually occurs by having the test users use the software on their own and report back to the usability team/person.  Ideally, reporting back should occur on a regular basis (daily, or weekly at the longest).
  2. Observed behavior.  This type of feedback can occur in conjunction with direct user feedback.  This occurs when the testing team/person observes how users use the software.
  3. Computer supported feedback.  This type of feedback can occur on an ongoing basis throughout the testing process.  As mentioned above, this is usually quite simple, involving timers and hit counters.

Each of these feedback methods should be used to achieve the ultimate goal.

How to Best Leverage Feedback Methods

I believe that there are "best practices" for most things, but I am not sure what those are in all situations.  I will, however, tell you some of the things I have found that work well.

Direct Feedback

  1. Provide users with notebooks, pre-printed forms, a simple program, a web page, etc. to record their problems as they encounter them (a sketch of such a program follows this list).  They tend to forget the details if this is not done.
  2. Regularly review the issues users report.
  3. Meet with users on a (somewhat) scheduled basis to discuss their issues and make sure that you fully understand them before proceeding.  Be prepared for this meeting by reviewing their issues beforehand.
  4. Keep a good communication dialog open with all users.
  5. Prioritize direct-feedback issues highly.  Users need to see results relatively quickly or they get discouraged with the process.
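The "simple program" mentioned in point 1 really can be simple.  Here is a minimal sketch (the file name and format are mine for the example): a small console tool that timestamps whatever the user types and appends it to a text file the usability person can review each day.

    #include <ctime>
    #include <fstream>
    #include <iostream>
    #include <string>

    // Hypothetical feedback recorder: the user types a one-line description
    // of a problem; the tool stamps it with the date and time and appends it
    // to a text file for the usability person to review later.
    int main()
    {
        std::cout << "Describe the problem (one line), then press Enter:\n";
        std::string note;
        std::getline(std::cin, note);
        if (note.empty())
            return 0;

        std::time_t now = std::time(nullptr);
        char stamp[32];
        std::strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M",
                      std::localtime(&now));

        std::ofstream log("user_feedback.txt", std::ios::app);
        log << stamp << '\t' << note << '\n';
        std::cout << "Recorded. Thank you.\n";
        return 0;
    }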

Observed Behavior

  1. Use multiple opportunities to observe behavior.  Whether you are training users, discussing problems found earlier, or just walking past their desks, these are opportunities to observe their behavior.  (I doubt many of us have one-way glass to watch users, so I won't go into that here.)
  2. Take notes (using the same forms, software, web pages, etc. as the users, if possible).  This will ensure you do not forget what you observed.
  3. Compare the notes you are taking against the notes users are taking.  If users are not reporting nearly as many problems as you are finding, it is possible that they are not comfortable with the process yet.
  4. Keep a good communication dialog open with all users.
  5. Don't interfere with their normal process.  The goal is not to train them on the "right way", but rather to have the software work "their way".
  6. Be prepared for the possibility that you will misinterpret user behavior.  Sometimes you might observe a user having an apparent problem when, in fact, you are misunderstanding what they are doing.
  7. Prioritize observed-behavior issues after direct-feedback issues.  Users are not expecting these changes and they can cause confusion.  Carefully review your observations and discuss them with your team and the users.

Computer Supported Feedback

  1. It is hard to know where to use computer supported feedback early in the development and testing cycles.  Therefore, start simple and grow from there.
  2. Be careful that the code that supports this does not interfere with the users' workflow and, above all, does not crash the software.  (Been there, done that.)
  3. If possible, log all computer-feedback issues into a simple database (Access worked well for me).  A sketch of a minimal, crash-safe logger follows this list.
  4. When reviewing the log, be very careful not to overlook issues or misinterpret the data.  (Your method need not be statistically valid, just reasonable.)
  5. When you see an issue in the data, try to corroborate it through the direct feedback and observed behavior methods.  This data can be very helpful in knowing what to look for when working with users.
  6. Consider carefully whether to leave this capability in the final product.  Only do so if the log can be disabled safely and has been thoroughly tested against all issues (especially unbounded growth in log size).
  7. I recommend against acting on computer feedback alone.
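As promised above, here is a minimal sketch of a crash-safe event logger (the function name, file name and column layout are mine for the example; my own implementation logged into an Access database): each event becomes one tab-delimited row that can later be imported into Access or any other database, and any failure in the logging code is swallowed so that instrumentation can never take the application down.

    #include <ctime>
    #include <fstream>
    #include <string>

    // Hypothetical event logger: writes one tab-delimited row per feedback
    // event (timestamp, user, event name, detail) to a text file that can
    // later be imported into Access or any other database for review.
    // Any failure is deliberately swallowed so that instrumentation can
    // never interfere with, or crash, the application itself.
    void LogFeedbackEvent(const std::string& user,
                          const std::string& eventName,
                          const std::string& detail)
    {
        try
        {
            std::time_t now = std::time(nullptr);
            char stamp[32];
            std::strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S",
                          std::localtime(&now));

            std::ofstream log("feedback_events.txt", std::ios::app);
            log << stamp << '\t' << user << '\t'
                << eventName << '\t' << detail << '\n';
        }
        catch (...)
        {
            // Never let a logging problem reach the user.
        }
    }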

Real World Examples

To support what I am saying, I will discuss a few of my experiences.

Entering dates and times

In one of my current projects there is a need for users to enter dates on various forms and for various purposes.  Some of these situations require that the user be able to quickly enter the data on a form (including the dates) and enter forms in fast succession.  

In the early version of the application, dates were entered with a masked edit-control.  Early on I observed that users were having problems entering dates.  This was backed up when I discussed it with the users.  I tried various masks, only to increase confusion and problems.  I tried using a date picker control, but users quickly let me know that this slowed them down.  I decided to just let them enter the dates in a free-form edit control.  When I did this, I integrated a log to track which users were entering dates and what they were entering.  Users liked this method much better (as they informed me).  

I was not pleased with this solution, though, because it allowed too much freedom and room for error.  Upon reviewing the log I discovered that not only were users entering dates and times in numerous different formats (e.g. MM/DD/YY, MM/DD/YYYY, DD-MM-YYYY, MM-DD), they were also entering a lot of "invalid dates" (e.g. "Yesterday", "tomorrow", etc.).  I reviewed other applications which allow date entry and discovered that MS Outlook allows dates to be entered in just about any conceivable way.

I decided to develop an edit control that mimicked the MS Outlook style date entry edit control.  Once this solution was integrated into the system, I heard nothing more from the users and when reviewing the log, the number of "invalid dates" had been reduced tremendously. The result of this control is available here on Code Project at http://www.codeproject.com/useritems/dateparser.asp.
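The actual control is the one linked above; purely to give a flavour of the approach, here is a minimal sketch of free-form date parsing.  The formats and keywords handled are a small, arbitrary subset chosen for illustration, not the control's real logic.

    #include <cctype>
    #include <cstdio>
    #include <ctime>
    #include <string>

    // Illustrative only: accept a few common date formats plus a few natural
    // words.  Returns true and fills in 'result' on success.  The real
    // Outlook-style control linked above handles far more cases.
    bool ParseFreeFormDate(std::string text, std::tm& result)
    {
        for (char& c : text)   // lower-case for keyword comparison
            c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));

        std::time_t now = std::time(nullptr);
        result = *std::localtime(&now);    // default to today

        if (text == "today")     return true;
        if (text == "yesterday") { result.tm_mday -= 1; std::mktime(&result); return true; }
        if (text == "tomorrow")  { result.tm_mday += 1; std::mktime(&result); return true; }

        int m = 0, d = 0, y = 0;

        // MM/DD/YYYY or MM-DD-YYYY (two-digit years treated as 20xx).
        if (std::sscanf(text.c_str(), "%d/%d/%d", &m, &d, &y) == 3 ||
            std::sscanf(text.c_str(), "%d-%d-%d", &m, &d, &y) == 3)
        {
            result.tm_mon  = m - 1;
            result.tm_mday = d;
            result.tm_year = (y < 100 ? y + 2000 : y) - 1900;
            return std::mktime(&result) != static_cast<std::time_t>(-1);
        }

        // MM/DD with the current year assumed.
        if (std::sscanf(text.c_str(), "%d/%d", &m, &d) == 2)
        {
            result.tm_mon  = m - 1;
            result.tm_mday = d;
            return std::mktime(&result) != static_cast<std::time_t>(-1);
        }

        return false;   // unrecognized; the caller can ask the user to retry
    }

The point is that accepting several formats and normalizing them internally keeps the speed of free-form entry while eliminating most "invalid dates".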

Searching for records in the database

In the same application where the date-entry problems occurred, users need to look up various records from the database.  We had identified this need early in the pre-development stage and had provided a search edit-control on a navigation bar on the left of the screen.  This feature was well received and users did not report any problems using it.

When observing users' behavior, however, it was apparent that they often needed to make multiple search attempts to find the record they were looking for.  They would try entering the name of an organization, only to have to enter it again with a different spelling before it was found.  Since users were not reporting any problems, I decided to test what was actually happening.

My next step was to begin logging search attempts, result sets and whether or not the user opened a record returned by the search.  I let this run for about a week before reviewing the results.  When I reviewed the log, I discovered that users were having particular trouble with names like "Saint Peters", "Mount Saint Helens", etc.  I spoke with the users about my findings and the response was basically that they had this problem with all the software they used and thought it was normal.

I was not happy with this answer, so I looked into ways to improve the search results.  The solution was fairly simple.  I created a table with a set of common name prefixes, suffixes, address labels, etc.  I then wrote an algorithm to take a string, parse it for these common situations and rebuild the string.  I then created an extra column in every table for each column I needed to search, and placed the rebuilt version of the string in this new column each time a database update occurred.  The algorithm basically took a name like "Saint Peters" and made it into "ST PETERS", while "Mount Saint Helens" became "Mt St Helens".  Addresses were also dealt with, so that "1600 W Palm Road" would become "1600 WEST Palm".  Other details of the algorithm attempted to deal with spelling situations by removing duplicate characters, extra white space, etc.
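Here is a minimal sketch of the normalization step.  The replacement list is abbreviated and invented for the example, and I have picked one canonical direction for each replacement; the real version drove its replacements from a database table.

    #include <cctype>
    #include <map>
    #include <sstream>
    #include <string>

    // Illustrative normalizer: upper-case the text, split it into words,
    // replace common prefixes/suffixes/address labels with a canonical form
    // and collapse extra whitespace.  The real version drove its replacement
    // list from a database table and stored the result in an extra column
    // alongside each searchable column.
    std::string NormalizeForSearch(const std::string& input)
    {
        static const std::map<std::string, std::string> replacements = {
            { "SAINT", "ST" }, { "MOUNT", "MT" }, { "FORT",   "FT" },
            { "WEST",  "W"  }, { "EAST",  "E"  }, { "ROAD",   "RD" },
            { "STREET", "ST" }
        };

        std::string upper;
        for (char c : input)
            upper += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));

        std::istringstream words(upper);
        std::string word, result;
        while (words >> word)              // also collapses extra whitespace
        {
            const auto it = replacements.find(word);
            if (it != replacements.end())
                word = it->second;
            if (!result.empty())
                result += ' ';
            result += word;
        }
        return result;
    }

The important design point is that the search string typed by the user is run through the same normalization as the stored column, so both sides of the comparison agree regardless of which form was originally entered.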

When this new search algorithm was introduced, I was able to see an improvement in the search results by reviewing the log.  It was obvious that most search requests were now fulfilled on the first attempt.  (This search mechanism is now known as "quick search", not because it returns results faster, but because it finds the desired records on the first attempt.)  I hope to post an article with the code for this search mechanism soon.

Where to put the buttons on a dialog?

Early on in the development of one project I had a dispute with a co-worker about the placement of buttons on a dialog: should they be on the upper right, aligned vertically, or on the lower right, aligned horizontally?  To resolve this dispute I redesigned most of the dialogs in the app (probably about 25) so that they had buttons in both places.  I then logged which buttons the users clicked.
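The logging needed for this was nothing more than a tally of which button set was used on which dialog; a sketch of the idea (the function and file names are mine for the example):

    #include <fstream>
    #include <string>

    // Hypothetical click logger: each duplicated button calls this with the
    // dialog's name and which set it belongs to, producing a tally that is
    // trivial to aggregate per dialog later.
    void LogButtonChoice(const std::string& dialogName, bool verticalSet)
    {
        std::ofstream log("button_choices.txt", std::ios::app);
        log << dialogName << '\t'
            << (verticalSet ? "vertical-upper-right" : "horizontal-lower-right")
            << '\n';
    }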

The results were interesting.  On small dialogs there did not appear to be any preference among users for which buttons they used.  For dialogs of greater height, however, users almost unanimously preferred buttons horizontally aligned on the lower right.

We redesigned all of the dialogs in the app to consistently have the buttons on the lower right, aligned horizontally.  We decided to change even the small dialogs to ensure consistency.

This was a simple case, but I believe it improved the final product.

What to expect

Users usually appreciate being involved in the development process.  They feel like their opinions matter, and once the process begins they usually become better at identifying and describing their problems.  In the early stages their feedback can be sporadic and poorly described.  Often they have difficulty identifying the problems they are having because it is a new software package, and sometimes they seem to think that the problems occur only because they are not yet proficient with it.

As much as possible try to instill in the users the idea that their feedback is important and that any problems they are having are important to the process.  This can result in a deluge of problems in the early stages (especially if usability testing occurs late in the development cycle).  This can seem overwhelming but should be considered an opportunity instead of a problem.  By reviewing the problems and prioritizing solutions, you can manage the process and control the ultimate outcome.

Once the testing has been going on for a while, things will probably slow down, with fewer problems and issues.  This is an excellent opportunity to begin leveraging computer supported feedback.  At this point users are using the software and exercising many parts of its implementation.  Computer supported feedback becomes more valuable at this point because usage is more defined and earlier problems have been dealt with.  Computer assisted feedback can identify more subtle problems that are often overlooked in earlier testing.  (The real-world examples above show some ways to leverage computer supported feedback.)

Another thing to expect is set-backs.  Sometimes the changes introduced will result in more problems instead of the desired solution.  This is why it is important to keep changes small and incremental as much as possible: set-backs will then be minor and easy to overcome.  (A good source control/version control package can help too.)  When set-backs do occur, documentation is critical to avoid the same problem later in this project or others.  Set-backs can be turned into opportunities by discussing with the users why the proposed solution did not work or wasn't what they needed.  Many users will perceive this as you diligently attempting to meet their needs and will be inclined to participate even more.

The rewards of usability testing will be:

  1. Improved user perception of your product.  Because you have involved typical users in the process, the final product is typically better geared for the target audience.  Also, as a result of the testing you may have established an excellent reference for the quality of your work.
  2. Reduced technical support requirements.  Hopefully, the usability testing has uncovered and corrected many problems (not just "bugs") in the application.  Also, the overall user experience is improved, eliminating the need for many support calls.
  3. Better documentation of the application.  Throughout the testing process notes have been collected and organized.  These notes can be given to the documentation team/person to better develop product documentation.  The "How do I...", "Where is the..." type questions have already been uncovered and answered.  All that remains is to document the information in a user-friendly way.
  4. Better preparedness for future projects.  As the usability testing proceeds your ability to predict problems before they occur will improve.  This will expedite later projects greatly.
  5. Pride in the product.  Speaking for myself, I can safely say that the finished product is of much finer quality after the usability testing and refinements than it was before.  I feel very good about telling the sales staff to sell it wholeheartedly.
  6. An advantage over competitors.

Postscript

I hope you have found this article useful, or at least interesting.  If you have any comments, please let me know.

I hope to post another article soon to expound further on this subject. I would also like to develop some simple tools for automating feedback and expediting the documentation requirements.  If I have time, I will develop these and post them in the future.
