Let's jump back to the topic of usability in open source software. Last week, I interviewed Eric S. Raymond (quotes below are from that discussion). He provided an interesting view into open source developers: the process of usability is antithetical to how open source software is created. Open source developers prefer functionality over appearance and, by extension, typically put no emphasis on interface usability. While some open source projects have a maintainer with good taste who dictates that good taste on the interface, Raymond commented that most programmers believe "menus and icons are like the frosting on the cake after you've baked it," and that any effort to correct usability problems is "swimming against a strong cultural headwind."

By way of providing context, let me take a step back to another usability test I performed on the FreeDOS website earlier this spring. This was a complete redesign of the website, which had not previously undergone any kind of usability study. The old website was a hodgepodge of content by different editors, each with a different agenda.
To redesign the website, I started by creating user personas that represented our user base, and usage scenarios typical of those users. FreeDOS users typically fall into one of three types: casual users who want to use FreeDOS to play old DOS games, people who want to use FreeDOS to run a legacy application, and technical users who use FreeDOS in embedded systems. From that information base, I derived the new site by asking "what does each persona need to find?" and designing navigation and content areas that responded to those needs. Through a process of iteration, I arrived at a prototype of the new website, and invited the FreeDOS community to help test the new website.
The usability evaluation was a typical prototype test: I asked each tester to exercise the new prototype as if they were one of the user personas, and according to the usage scenarios. Could the testers find the information they needed? At the end of the usability test, I asked the testers to respond to a formal questionnaire about using the new website. The questionnaire also included a section where testers identified what was working and what needed improvement with room to provide detailed suggestions.
This first evaluation led to several improvements in the prototype, followed by another prototype test with a questionnaire. Through this iterative process, I arrived at the FreeDOS website that you see today.
The process of asking usability testers to evaluate a prototype and comment on it via a questionnaire was invaluable. In that usability evaluation, I was interested in both what was working and what needed improvement: essentially a plus-delta exercise without using the terms "plus" and "delta". The "plus" items helped me identify which features were good, and the "delta" items let me focus on problem areas to improve in the next iteration of the prototype website.
And frankly, the "delta" items would have been no use to me if I had not been interested in using that feedback to improve the website.
Our discussion left us both with the realization that if my study is to have a positive contribution to the open source community, it needs to "focus first on the good examples, rather than trying to fix the bad."
Therefore, I cannot use diagnostic usability testing in my study. I had originally planned to do a case study on the usability of an open source software program. My unwritten assumption was that I would start with user personas and usage scenarios, working with the author of a candidate program to understand the user base and their typical actions. And (I assumed) the study would generate output similar to my work on the FreeDOS website: what is working and what needs improvement. I might even have created an animated, non-functioning prototype of an existing program, and asked usability testers to evaluate that prototype against the usage scenarios.
However, after my discussion with Raymond, I need to modify that assumption. Instead, I plan to study an open source program of a suitable size, one that has been successful in addressing usability. The result of the case study wouldn't be a diagnostic analysis of usability issues, but a summary of what works in open source usability: a critical analysis of successful usability.
The benefits of this kind of usability study are immediate to the open source community. In my experience, and in Raymond's, most open source programmers are more likely to imitate successful designs than to apply the rigor of usability studies to their own programs. If my study is to benefit the open source community as a whole, I need to change how I approach the case study.
That brings me back to the questions "Who does that?" and "What steps?"
In this model, the usability testers can be anyone willing to participate. When I tested the FreeDOS website, my testers were members of the open source software community, many directly from the FreeDOS Project. For this study, however, the testers do not need to be members of the open source software project under review; they do not even need to be part of any open source software project. The minimum requirement for these critical analysis testers is a willingness to use the program. Some baseline familiarity with the program would be helpful, but should not be required for a usability test.
The steps in this critical analysis would likely be the same as those I applied in the FreeDOS website usability test: starting with user personas and usage scenarios, ask each tester to exercise the program according to those scenarios. At the end of the test, ask the testers to respond to a formal questionnaire about their experience with each scenario. But in this case, the focus of the questions would be on what worked well rather than what needs improvement.
And that leads directly to the output of the study: a publishable result that identifies what works well in software usability, in a format that allows other open source developers to mimic those successful aspects of the design in their own projects.