Another thing that came out of my discussions with thought leaders is how to design a usability test. A few thoughts on that.
When I first considered this project, I thought I could do an "open ended" field usability test, where I would solicit users at large to submit to a usability test of a program. They would do this test on their own computers, according to a set of scenarios that I would design for them. I imagined that, while users would carry their own experiences with them, they would be able to execute the test by following a set of user personas for the program. Each persona would describe a general user of the program, with enough detail to tell the tester what that persona would be like.
In discussing this option with thought leaders, I realized that is not the right way to do a test. There's a lot of value in doing an in-person usability test. Bring the test subject into a lab (the lab can be formal or informal - even a desk in an office) and ask that person to execute a series of workflow tasks. For example, to test a word processor, we might ask testers to type a few short paragraphs of text (provided for them), start a new tab, search and replace text, print, change the font, and do other typical tasks. The key is that these scenarios should demonstrate typical, average use of the program.
The test scenarios do not (necessarily) need to exercise every function of the program, as long as they demonstrate how a typical user of average knowledge would use the program. Generally, those test results can be applied to other parts of the program as well.
In this study, I am interested in what makes for good usability in open source software. The usability test design for this need not be complicated: I will create a bootable USB "flash" drive that runs Fedora, a Linux distribution. This flash drive can easily be rewritten between tests, restoring the drive to a known, default state so that each usability tester has the same starting point as the first.
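As a rough sketch of the reset step between tests, something like the following could rewrite the live image onto the drive. The image filename and the device path are placeholders (your drive will appear as some `/dev/sdX`; check with `lsblk` before running anything), and the actual `dd` command is left commented out so the sketch is safe to run as-is:

```shell
#!/bin/sh
# Restore the test USB drive to a known default state between sessions.
# IMAGE and DEVICE below are hypothetical placeholders, not real paths.
IMAGE="Fedora-Workstation-Live.iso"
DEVICE="/dev/sdX"   # replace with your actual USB drive; verify with lsblk!

# The actual write, commented out for safety in this sketch:
#   dd if="$IMAGE" of="$DEVICE" bs=4M status=progress conv=fsync
echo "Would write $IMAGE to $DEVICE"
```

Rewriting from the same image every time is what guarantees a clean, identical starting point for each tester.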
The usability lab can be as simple as an office with a closed door (to prevent distractions). To start the test, I would provide some background information for the tester: why we are doing the test, what they will be expected to do (the test scenarios), and how they should act during the test (ignore the observer, speak aloud what is going through your mind at each step, etc.). Usability testers should be drawn from a wide pool, including users who are similar to the target audience: general users. Since I work at a university, potential testers might include students, faculty, and staff. I may also draw on family and friends.
After each test, I will ask the testers about their experience, and possibly explore areas that looked particularly interesting or challenging for them. What did they expect to see during this part of the test? What should this other screen have shown you, since you said you were doing X? To wrap up, I will engage the tester in a brief "plus/delta" exercise: What worked well in the program? What features were particularly easy to use? But what other features were more challenging to use? Where did you feel confused?
The plus/delta will be important for the results. I will need to report both the pluses and the deltas in the final publication, though I intend to focus on the features that contributed to good usability, so that other open source developers might mimic those successful features in their own programs.