Friday, July 14, 2017

How I put Linux in the enterprise

I recently wrote an article for OpenSource.com that tells the story about How I introduced my organization to Linux. Here's the short version:

I used to work in higher ed. In the late 1990s, we moved to a new student records system. We created an "add-on" web registration system, so students could register on-line—still a new idea in 1998. But when we finally went live, the load crushed the web servers. No one could register. We tried to fix it, but nothing worked.

Instead, we just shifted everything to Linux, and it worked! No code changes, just a different platform. That was our first time using Linux in the enterprise. When I left the university some seventeen years later, I think about two-thirds of our enterprise servers ran on Linux.

There's a lot going on behind the scenes here, so I encourage you to read the full article. The key takeaways aren't really the move to Linux. Instead, I use this as an example for how to deploy a big change in any environment: Solve a problem, don't stroke an ego. Change as little as possible. Be honest about the risks and benefits. And communicate broadly. These are the keys to success.

Friday, June 30, 2017

FreeDOS is 23 years old

I have been involved in open source software for a long time, since before anyone coined the term "open source." My first introduction to Free software was GNU Emacs on our campus Unix system, when I was an undergraduate. Then I discovered other Free software tools. Through that exposure, I decided to install Linux on my home computer in 1993. But as great as Linux was at the time, it had few applications like word processors and spreadsheets, so it was still limited: great for writing programs and analysis tools for my physics labs, but not (yet) for writing class papers or playing games.

So my primary system at the time was still MS-DOS. I loved DOS, and had since the 1980s. While the MS-DOS command line was under-powered compared to Unix, I found it very flexible. I wrote my own utilities and tools to expand the MS-DOS command line experience. And of course, I had a bunch of DOS applications and games. I was a DOS "power user." For me, DOS was a great mix of function and features, so that's what I used most of the time.

Microsoft Windows was also a thing in the 1990s, but if you remember Windows 3.1, you know that Windows wasn't a great system. Windows was ugly and difficult to use. I preferred to work at the DOS command line, rather than clicking around the primitive graphical user interface offered by Windows.

With this perspective, I was a little distraught to learn in 1994, through Microsoft's interviews with tech magazines, that the next version of Windows would do away with MS-DOS. It seemed MS-DOS was dead. Microsoft wanted everyone to move to Windows. But I thought "If Windows 3.2 or 4.0 is anything like Windows 3.1, I want nothing to do with that."

So in early 1994, I had an idea. Let's create our own version of DOS! And that's what I did.

On June 29, 1994, I made a little announcement to the comp.os.msdos.apps discussion group on Usenet. My post read, in part:
Announcing the first effort to produce a PD-DOS.  I have written up a
"manifest" describing the goals of such a project and an outline of
the work, as well as a "task list" that shows exactly what needs to be
written.  I'll post those here, and let discussion follow.
That announcement of "PD-DOS" or "Public Domain DOS" later grew into the FreeDOS Project that you know today. And now, FreeDOS is 23 years old!

All this month, we've asked people to share their FreeDOS stories about how they use FreeDOS. You can find them on the FreeDOS blog, including stories from longtime FreeDOS contributors and new users. In addition, we've highlighted several interesting moments in FreeDOS history, including a history of the FreeDOS logo, a timeline of all FreeDOS distributions, an evolution of the FreeDOS website, and more. You can read everything on our celebration page at our blog: Happy 23rd birthday to FreeDOS.

Since we've received so many "FreeDOS story" contributions, I plan to collect them into a free ebook, which we'll make available via the FreeDOS website. We are still collecting FreeDOS stories for the ebook! If you use FreeDOS, and would like to contribute to the ebook, send me your FreeDOS story by Tuesday, July 18.

Monday, June 5, 2017

Help us celebrate 23 years of FreeDOS

This year on June 29, FreeDOS will turn 23 years old. That's pretty good for a legacy 16-bit operating system like DOS. It's interesting to note that we have been doing FreeDOS for longer than MS-DOS was a thing. And we're still going!

There's nothing special about "23 years old" but I thought it would be a good idea to mark this year's anniversary by having people contribute stories about how they use FreeDOS. So over at the FreeDOS Blog, I've started a FreeDOS blog challenge.

If you use FreeDOS, I'm asking you to write a blog post about it. Maybe your story is about how you found FreeDOS. Or about how you use FreeDOS to run certain programs. Or maybe you want to tell a story about how you installed FreeDOS to recover data that was locked away in an old program. There are lots of ways you could write your FreeDOS story. Tell us about how you use FreeDOS!

Your story can be as short or as long as you need to talk about how you use FreeDOS.

Write your story, post it on your blog, and email me so I can find it. Or if you don't have a blog of your own, email your story to me and I'll put it up as a "guest post" on the FreeDOS Blog.

I'm planning to post a special blog item on June 29 to collect all of these great stories. So you need to write your story by June 28.

Tuesday, May 23, 2017

Please run for GNOME Board

Update: the election is over. Congratulations to the new Board members!
Are you a member of the GNOME Foundation? Please consider running for Board.

Serving on the Board is a great way to contribute to GNOME, and it doesn't take a lot of your time. The GNOME Board of Directors meets every week via a one-hour phone conference to discuss various topics about the GNOME Foundation and GNOME. In addition, individual Board members may volunteer to take on actions from meetings—usually to follow up with someone who asked the Board for action, such as a funding request.

At least two current Board members have decided not to run again this year. (I am one of them.) So if you want to run for the GNOME Foundation Board of Directors, this is an excellent opportunity!

If you are planning on running for the Board, please be aware that the Board meets 2 days before GUADEC begins to do a formal handoff, plan for the upcoming year, and meet with the Advisory Board. GUADEC 2017 is 28 July to 2 August in Manchester, UK. If elected, you should plan on attending meetings this year on 26 and 27 July in Manchester, UK.

To announce your candidacy, just send an email to foundation-announce that gives your name, your affiliation (who you work for), and a few sentences about your background and interest in serving on the Board.

Friday, May 19, 2017

Can't make GUADEC this year

This year, the GNOME Users And Developers European Conference (GUADEC) will be hosted in beautiful Manchester, UK between 28th July and 2nd August. Unfortunately, I can't make it. I missed last year, too. The timing is not great for me.

I work in local government, and just like last year, GUADEC falls during our budget time at the county. Our county budget is set every two years. That means during an "on" year, we make our budget proposals for the next two years. In the "off" year, we share a budget status.

I missed GUADEC last year because I was giving a budget status in our "off" year. And guess what? This year, department budget presentations again happen during GUADEC.

During GUADEC, I'll be making our county IT budget proposal. This is our one opportunity to share with the Board our budget priorities for the next two years, and to defend any budget adjustment. I can't miss this meeting.

Wednesday, May 17, 2017

GNOME and Debian usability testing

Intrigeri emailed me to share that "During the Contribute your skills to Debian event that took place in Paris last week-end, we conducted a usability testing session" of GNOME 3.22 and Debian 9. They have posted their usability test results at Intrigeri's blog: "GNOME and Debian usability testing, May 2017." The results are very interesting and I encourage you to read them!

There's nothing like watching real people do real tasks with your software. You can learn a lot about how people interact with the software, what paths they take to accomplish goals, where they find the software easy to use, and where they get frustrated. Normally we do usability testing with scenario tasks, presented one at a time. But in this usability test, they asked testers to complete a series of "missions." Each "mission" was a set of two or more goals. For example:

Mission A.1 — Download and rename file in Nautilus

  1. Download a file from the web, a PDF document for example.
  2. Open the folder in which the file has been downloaded.
  3. Rename the downloaded file to SUCCESS.pdf.
  4. Toggle the browser window to full screen.
  5. Open the file SUCCESS.pdf.
  6. Go back to the File manager.
  7. Close the file SUCCESS.pdf.

Mission A.2 — Manipulate folders in Nautilus

  1. Create a new folder named cats in your user directory.
  2. Create a new folder named to do in your user directory.
  3. Move the cats folder to the to do folder.
  4. Delete the cats folder.

These "missions" take the place of scenario tasks. My suggestion to the usability testing team would be to add a brief context that "sets the stage" for each "mission." In my experience, that helps testers get settled into the task. This may have been part of the introduction they used for the overall usability test, but generally I like to see a brief context for each scenario task.

The usability test results also include a heat map, to help identify any problem areas. I've talked about the Heat Map Method before (see also “It’s about the user: Applying usability in open source software.” Jim Hall. Linux Journal, print, December 2013). The heat map shows your usability test results in a neat grid, coded by different colors that represent increasing difficulty:

  • Green if the tester didn't have any problems completing the task.
  • Yellow if the tester encountered a few problems, but generally it was pretty smooth.
  • Orange if the tester experienced some difficulty in completing the task.
  • Red if the tester had a really hard time with the task.
  • Black if the task was too difficult and the tester gave up.

The colors borrow from the familiar green-yellow-red color scheme used in traffic signals, and which most people can associate with easy-medium-hard. The colors also suggest greater levels of "heat," from green (easy) to red (very hard) and black (too hard).

To build a heat map, arrange your usability test scenario tasks in rows, and your testers in columns. This gives you a colorful grid. Look across each row for "hot" rows (lots of black, red, and orange) and "cool" rows (lots of green, with some yellow). Focus on the hot rows; these are where testers struggled the most.
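If you keep your observations in a plain text file, a few lines of Bash can generate that grid as an HTML table. Here's a minimal sketch; the results.csv file and its one-line-per-observation format are my own assumptions:

#!/bin/bash
# heatmap.sh - a minimal sketch: turn usability test observations into an HTML heat map.
# Assumes a hypothetical results.csv, grouped by task, one line per task and tester:
#   task,tester,color     (color is one of: green, yellow, orange, red, black)

infile="${1:-results.csv}"

echo '<table border="1" cellpadding="6">'

# one row per scenario task, one colored cell per tester
cut -d, -f1 "$infile" | uniq | while IFS= read -r task; do
  printf '  <tr><td>%s</td>\n' "$task"
  awk -F, -v t="$task" '$1 == t { printf "    <td bgcolor=\"%s\">&nbsp;</td>\n", $3 }' "$infile"
  echo '  </tr>'
done

echo '</table>'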


Intrigeri's heat map suggests some issues with B1 (install and remove a package), C2 (temporary files) and C3 (change default video player). There's some difficulty with A3 (create a bookmark in Nautilus) and C4 (add and remove world clocks), but these seem secondary. Certainly these are issues to address, but the results suggest to focus on B1, C2 and C3 first.

For more, including observations and discussion, go read Intrigeri's article.

Saturday, May 6, 2017

Not running for Board this year

After some serious thinking, I've decided not to run for the GNOME Foundation Board of Directors for the 2017-18 session.

As the other directors are aware, I've over-committed myself. I think I did a good job keeping up with GNOME Board issues, but it was sometimes a real stretch. And due to some budget and planning items happening at work, I've been busier in 2017 than I planned. I've missed a few Board meetings due to meeting conflicts or other issues.

It's not fair to GNOME for me to continue to be on the Board if I'm going to be this busy. So I've decided not to run again this year, and let someone with more time take my seat.

However, I do plan to continue as director for the rest of the 2016-17 session.

Thursday, May 4, 2017

How I found Linux

Growing up through the 1980s and 1990s, I was always into computers. As I entered university in the early 1990s, I was a huge DOS nerd. Then I discovered Linux, a powerful Unix system that I could run on my home computer. And I have been a Linux user ever since.

I wrote my story for OpenSource.com, about How I got started with Linux.

In the article, I also talk about how I've deployed Linux in every organization where I've worked. I'm a CIO in local government now, and while we have yet to install Linux in the year since I've arrived, I have no doubt that we will someday.

Tuesday, April 18, 2017

A better March Madness script?

Last year, I wrote an article for Linux Journal describing how to create a Bash script to build your NCAA "March Madness" brackets. I don't really follow basketball, but I have friends that do, so by filling out a bracket at least I can have a stake in the games.

Since then, I realized my script had a bug that prevented any rank 16 team from winning over a rank 1 team. So this year, I wrote another article for Linux Journal with an improved Bash script to build a better NCAA "March Madness" bracket. In brief, the updated script builds a custom random "die roll" based on the relative strength of each team. My "predictions" this year are included in the Linux Journal article.

Since the games are now over, I figured this was a great time to see how my bracket performed. If you followed the games, you know that there were a lot of upsets this year. No one really predicted the final two teams for the championship. So maybe I shouldn't be too surprised if my brackets didn't do well either. Next year might be a better comparison.

In the first round of the NCAA March Madness, you start with teams 1–16 in four regions, so that's 64 teams that compete in 32 games. In that "round of 64," my shell script correctly predicted 21 outcomes. That's not a bad start.

March Madness is single-elimination, so for the second round, you have 32 teams competing in 16 games. My shell script correctly guessed 7 of those games. So just under half were predicted correctly. Not great, but not bad.

In the third round, my brackets suffered. This is the "Sweet Sixteen" where 16 teams compete in 8 games, but my script only predicted 2 of those games.

And in the fourth round, the "Elite Eight" round, my script didn't predict any of the winners. And that wrapped up my brackets.

Following the standard method for how to score "March Madness" brackets, each round has 320 possible points. In round one, assign 10 points for each correctly selected outcome. In round two, assign 20 points for each correct outcome. And so on, doubling the points per game in each round. From that, the math is pretty simple.

round one:    21 × 10 = 210
round two:     7 × 20 = 140
round three:   1 × 40 =  40
round four:    0 × 80 =   0
total:                  390
My total score this year is 390 points. As a comparison, last year's script (the one with the bug) scored 530 in one instance, and 490 in another instance. But remember that there were a lot of upsets in this year's games, so everyone's brackets fared poorly this year, anyway.
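If you want to check the math, here's the same scoring as a quick Bash sketch; pass the number of correct picks in each round as arguments:

#!/bin/bash
# score.sh - a quick sketch of the standard bracket scoring described above.
# Usage: ./score.sh 21 7 1 0   (correct picks in rounds one through four)

value=10    # points per correct pick in round one; doubles every round
total=0

for correct in "$@"; do
  total=$(( total + correct * value ))
  value=$(( value * 2 ))
done

echo "Total score: $total"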

Maybe next year will be better.

Did you use the Bash script to help fill out your "March Madness" brackets? How did you do?

Monday, April 3, 2017

How many testers do you need?

When you start a usability test, the first question you may ask is "how many testers do I need?" The standard go-to article on this is Nielsen's "Why You Only Need to Test with 5 Users" which gives the answer right there in the title: you need five testers.

But it's important to understand why Nielsen picks five as the magic number. MeasuringU has a good explanation, but I think I can provide my own.

The core assumption is that each tester will uncover a certain number of issues in a usability test, assuming good test design and well-crafted scenario tasks. The next tester will uncover about the same number of usability issues, but not exactly the same issues. So there's some overlap, and some new issues too.

If you've done usability testing before, you've observed this yourself. Some testers will find certain issues, other testers will find different issues. There's overlap, but each tester is on their own journey of discovery.

Exactly how many usability issues each tester uncovers is up for some debate. Nielsen uses his own research and asserts that a single tester can uncover about 31% of the usability issues. Again, that assumes good test design and scenario tasks. So one tester finds 31% of the issues, the next tester finds 31% but not the same 31%, and so on. With each tester, there's some overlap, but you discover some new issues too.

In his article, Nielsen describes a function to demonstrate the number of usability issues found vs the number of testers in your test, for a traditional formal usability test:
1 - (1 - L)^n

…where L is the proportion of issues one tester can uncover (Nielsen assumes L=31%) and n is the number of testers.

I encourage you to run the numbers here. A simple spreadsheet will help you see how the value changes for increasing numbers of testers. What you'll find is a curve that grows quickly then slowly approaches 100%.
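If you'd rather skip the spreadsheet, here's a minimal Bash sketch that prints that curve, assuming Nielsen's L = 31%:

#!/bin/bash
# nielsen.sh - print the portion of usability issues found for one to fifteen
# testers, using 1 - (1 - L)^n with Nielsen's assumption of L = 31%.

awk 'BEGIN {
  L = 0.31
  for (n = 1; n <= 15; n++)
    printf "%2d testers: %4.1f%% of issues found\n", n, (1 - (1 - L)^n) * 100
}'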


Note at five testers, you have uncovered about 85% of the issues. Nielsen's curve suggests a diminishing return at higher numbers of testers. As you add testers, you'll certainly discover more usability issues, but the increment gets smaller each time. Hence Nielsen's recommendation for five testers.

Again, the reason that five is a good number is because of overlap of results. Each tester will help you identify a certain number of usability issues, given a good test design and high quality scenario tasks. The next tester will identify some of the same issues, plus a few others. And as you add testers, you'll continue to have some overlap, and continue to expand into new territory.

Let me help you visualize this. We can create a simple program to show this overlap. I wrote a Bash script to generate SVG files with varying numbers of overlapping red squares. Each red square covers about 31% of the gray background.
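Here's a minimal sketch of that idea; the canvas size, square size, and output file name are just example values for illustration:

#!/bin/bash
# overlap.sh - draw n red squares at random positions, each covering about 31%
# of a gray background, and save the result as an SVG file.

n="${1:-5}"
size=400                  # 400x400 gray background
side=223                  # sqrt(0.31) * 400, so each square covers about 31% of the area
outfile="overlap-$n.svg"

{
  echo "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"$size\" height=\"$size\">"
  echo "<rect x=\"0\" y=\"0\" width=\"$size\" height=\"$size\" fill=\"#999\"/>"

  for i in $(seq 1 "$n"); do
    x=$(( RANDOM % (size - side) ))
    y=$(( RANDOM % (size - side) ))
    echo "<rect x=\"$x\" y=\"$y\" width=\"$side\" height=\"$side\" fill=\"#c00\"/>"
  done

  echo "</svg>"
} > "$outfile"

echo "wrote $outfile"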


If you run this script, you should see output that looks something like this, for different values of n. Each image starts over; the iterations are not additive:

n=1

n=2

n=3

n=4

n=5

n=10

n=15

As you increase the number of testers, you cover more of the gray background. And you also have more overlap. The increase in coverage is quite dramatic from one to five, but compare five to ten, or ten to fifteen. Certainly there's more coverage (and more overlap) at ten than at five, but not significantly more coverage. And the same going from ten to fifteen.

These visuals aren't meant to be an exact representation of the Nielsen iteration curve, but they do help show how adding more testers gives significant return up to a point, and then adding more testers doesn't really get you much more.

The core takeaway is that it doesn't take many testers to get results that are "good enough" to improve your design. The key idea is that you should do usability testing iteratively with your design process. I think every usability researcher would agree. Ellen Francik, writing for Human Factors, refers to this process as the Rapid Iterative Testing and Evaluation (RITE) method, arguing "small tests are intended to deliver design guidance in a timely way throughout development." (emphasis mine)

Don't wait until the end to do your usability tests. By then, it's probably too late to make substantive changes to your design, anyway. Instead, test your design as you go: create (or update) your design, do a usability test, tweak the design based on the results, test it again, tweak it again, and so on. After a few iterations, you will have a design that works well for most users.

Sunday, April 2, 2017

A throwback theme for gedit

This isn't exactly about usability, but I wanted to share it with you anyway.

I've been involved in a lot of open source software projects, since about 1993. You may know that I'm also the founder and coordinator of the FreeDOS Project. I started that project in 1994, to write a free version of DOS that anyone could use.

DOS is an old operating system. It runs entirely in text mode. So anyone who was a DOS user "back in the day" should remember text mode and the prevalence of white-on-blue text.

For April 1, we used a new "throwback" theme on the FreeDOS website. We rendered the site using old-style DOS colors, with a monospace DOS VGA font.

Even though the redesign was meant only for a day, I sort of loved the new design. This made me nostalgic for using the DOS console: editing text in that white-on-blue, without the "distraction" of other fonts or the glare of modern black-on-white text.

So I decided to create a new theme for gedit, based on the DOS throwback theme. Here's a screenshot of gedit editing a Bash script, and editing the XML theme file itself:



The theme uses the same sixteen-color palette as DOS. You can find the explanation of why DOS has sixteen colors at the FreeDOS blog. I find the white-on-blue text to be calming, and easy on the eyes.

Of course, to make this a true callback to earlier days of computing, I used a custom font. On my computer, I used Mateusz Viste's DOSEGA font. Mateusz created this font by redrawing each glyph in Fontforge, using the original DOS CPI files as a model. I think it's really easy to read. (Download DOSEGA here: dosega.zip)

Want to create this on your own system? Here's the XML source to the theme file. Save this in ~/.local/share/gtksourceview-3.0/styles/dosedit.xml and gedit should find it as a new theme.
<?xml version="1.0" encoding="UTF-8"?>
<!--
  reference: https://developer.gnome.org/gtksourceview/stable/style-reference.html
-->
<style-scheme id="dos-edit" name="DOS Edit" version="1.0">
<author>Jim Hall</author>
<description>Color scheme using DOS Edit color palette</description>
<!--
  Emulate colors used in a DOS Editor. For best results, use a monospaced font
  like DOSEGA.
-->

<!-- Color Palette -->

<color name="black"           value="#000"/>
<color name="blue"            value="#00A"/>
<color name="green"           value="#0A0"/>
<color name="cyan"            value="#0AA"/>
<color name="red"             value="#A00"/>
<color name="magenta"         value="#A0A"/>
<color name="brown"           value="#A50"/>
<color name="white"           value="#AAA"/>
<color name="brightblack"     value="#555"/>
<color name="brightblue"      value="#55F"/>
<color name="brightgreen"     value="#5F5"/>
<color name="brightcyan"      value="#5FF"/>
<color name="brightred"       value="#F55"/>
<color name="brightmagenta"   value="#F5F"/>
<color name="brightyellow"    value="#FF5"/>
<color name="brightwhite"     value="#FFF"/>

<!-- Settings -->

<style name="text"                 foreground="white" background="blue"/>
<style name="selection"            foreground="blue" background="white"/>
<style name="selection-unfocused"  foreground="black" background="white"/>

<style name="cursor"               foreground="brown"/>
<style name="secondary-cursor"     foreground="magenta"/>

<style name="current-line"         background="black"/>
<style name="line-numbers"         foreground="black" background="white"/>
<style name="current-line-number"  background="cyan"/>

<style name="bracket-match"        foreground="brightwhite" background="cyan"/>
<style name="bracket-mismatch"     foreground="brightyellow" background="red"/>

<style name="right-margin"         foreground="white" background="blue"/>
<style name="draw-spaces"          foreground="green"/>
<style name="background-pattern"   background="black"/>

<!-- Extra Settings -->

<style name="def:base-n-integer"   foreground="cyan"/>
<style name="def:boolean"          foreground="cyan"/>
<style name="def:builtin"          foreground="brightwhite"/>
<style name="def:character"        foreground="red"/>
<style name="def:comment"          foreground="green"/>
<style name="def:complex"          foreground="cyan"/>
<style name="def:constant"         foreground="cyan"/>
<style name="def:decimal"          foreground="cyan"/>
<style name="def:doc-comment"      foreground="green"/>
<style name="def:doc-comment-element" foreground="green"/>
<style name="def:error"            foreground="brightwhite" background="red"/>
<style name="def:floating-point"   foreground="cyan"/>
<style name="def:function"         foreground="cyan"/>
<style name="def:heading0"         foreground="brightyellow"/>
<style name="def:heading1"         foreground="brightyellow"/>
<style name="def:heading2"         foreground="brightyellow"/>
<style name="def:heading3"         foreground="brightyellow"/>
<style name="def:heading4"         foreground="brightyellow"/>
<style name="def:heading5"         foreground="brightyellow"/>
<style name="def:heading6"         foreground="brightyellow"/>
<style name="def:identifier"       foreground="brightyellow"/>
<style name="def:keyword"          foreground="brightyellow"/>
<style name="def:net-address-in-comment" foreground="brightgreen"/>
<style name="def:note"             foreground="green"/>
<style name="def:number"           foreground="cyan"/>
<style name="def:operator"         foreground="brightwhite"/>
<style name="def:preprocessor"     foreground="brightcyan"/>
<style name="def:shebang"          foreground="brightgreen"/>
<style name="def:special-char"     foreground="brightred"/>
<style name="def:special-constant" foreground="brightred"/>
<style name="def:specials"         foreground="brightmagenta"/>
<style name="def:statement"        foreground="brightmagenta"/>
<style name="def:string"           foreground="brightred"/>
<style name="def:type"             foreground="cyan"/>
<style name="def:underlined"       foreground="brightgreen"/>
<style name="def:variable"         foreground="cyan"/>
<style name="def:warning"          foreground="brightwhite" background="brown"/>

</style-scheme>
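If you save the XML above as dosedit.xml, installing it is just a matter of copying it into place:

mkdir -p ~/.local/share/gtksourceview-3.0/styles
cp dosedit.xml ~/.local/share/gtksourceview-3.0/styles/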

Friday, March 31, 2017

Screencasts for usability testing

There's nothing like watching a real person use your software to finally understand the usability issues your software might have. It's hard to get that kind of feedback through surveys or other indirect methods. I find it's best to moderate a usability test with a few testers who run through a set of scenario tasks. By observing how they attempt to complete the scenario tasks, you can learn a lot about how real people use your software to do real tasks.

Armed with that information, you can tweak the user interface to make it easier to use. Through iteration (design, test, tweak, test, tweak, etc) you can quickly find a design that works well for everyone.

The simple way to moderate a usability test is to watch what the user is doing, and take notes about what they do. I recommend the "think aloud" protocol, where you ask the tester to talk about what they are doing. If you're looking for a Print button, just say "I'm looking for a Print button" so I can make note of that. And move your mouse to where you are looking, so I can see what you are doing and where you are looking. In my experience, testers adapt to this fairly quickly.

In addition to taking your own notes, you might try recording the test session. That allows you to go back to the recording later to see exactly what the tester was doing. And you can share the video with other developers in your project, so they can watch the usability test sessions.

Screencasts are surprisingly easy to do, at least under Linux. The GNOME desktop has a built-in screencast function, to capture a video of the computer's screen.

But if you're like me, you may not have known this feature existed. It's kind of hard to get to. Press Ctrl+Alt+Shift+R to start recording, then press Ctrl+Alt+Shift+R again to stop recording.

If that's hard for you to remember, there's also a GNOME Extension called EasyScreenCast that, as the name implies, makes screencasts really easy. Once you install the extension, you get a little menu that lets you start and stop recording, as well as set options. It's very straightforward. You can select a sound input, to narrate what you are doing. And you can include webcam video, for a picture-in-picture video.

Here's a sample video I recorded as part of the class that I'm teaching. I needed a way to walk students through the steps to activate Notebookbar View in LibreOffice 5.3. I also provided written steps, but there's nothing like showing rather than just explaining.



With screencasts, you can extend your usability testing. At the beginning of your session, before the tester begins the first task, start recording a screencast. Capture the audio from the laptop's microphone, too.

If you ask your tester to follow the "think aloud" protocol, the screencast will show you the mouse cursor, indicating where the tester is looking, and it will capture the audio, allowing you to hear what the tester was thinking. That provides invaluable evidence for your usability test.

I admit I haven't experimented with screencasts for usability testing yet, but I definitely want to do this the next time I mentor usability testing for Outreachy. I find a typical usability test can last upwards of forty-five minutes to an hour, depending on the scenario tasks. But if you have the disk space to hold the recording, I don't see why you couldn't use the screencast to record each tester in your usability test. Give it a try!

Monday, March 27, 2017

Testing LibreOffice 5.3 Notebookbar

I teach an online CSCI class about usability. The course is "The Usability of Open Source Software" and provides a background on free software and open source software, and uses that as a basis to teach usability. The rest of the class is a pretty standard CSCI usability class. We explore a few interesting cases in open source software as part of our discussion. And using open source software makes it really easy for the students to pick a program to study for their usability test final project.

I structured the class so that we learn about usability in the first half of the semester, then we practice usability in the second half. And now we are just past the halfway point.

Last week, my students worked on a usability test "mini-project." This is a usability test with one tester. By itself, that's not very useful. But the intention is for the students to experience what it's like to moderate their own usability test before they work on their usability test final project. In this way, the one-person usability test is intended to be a "dry run."

For the one-person usability test, every student moderates the same usability test on the same program. We are using LibreOffice 5.3 in Notebookbar View in Contextual Groups mode. (And LibreOffice released version 5.3.1 just before we started the usability test, but fortunately the user interface didn't change, at least in Notebookbar-Contextual Groups.) Students worked together to write scenario tasks for the usability test, and I selected eight of those scenario tasks.

By using the same scenario tasks on the same program, with one tester each, we can combine results to build an overall picture of LibreOffice's usability with the new user interface. Because the test was run by different moderators, this isn't statistically useful if you are writing an academic paper, and it's of questionable value as a qualitative measure. But I thought it would be interesting to share the results.

First, let's look at the scenario tasks. We started with one persona: an undergraduate student at a liberal arts university. Each student in my class contributed two use scenarios for LibreOffice 5.3, and three scenario tasks for each scenario. That gave a wide field of scenario tasks. There was quite a bit of overlap. And there was some variation in quality, with some great scenario tasks and some not-so-great scenario tasks.

I grouped the scenario tasks into themes, and selected eight scenario tasks that suited a "story" of a student working on a paper: a simple lab write-up for an Introduction to Physics class. I did minimal editing of the scenario tasks; I tried to leave them as-is. Most of the scenario tasks were of high quality. I included a few not-great scenario tasks so students could see how the quality of the scenario task can impact the quality of your results. So keep that in mind.

These are the scenario tasks we used. In addition to these tasks, students provided a sample lab report (every tester started with the same document) and a sample image. Every test was run in LibreOffice 5.3 or 5.3.1, which was already set to use Notebookbar View in Contextual Groups mode:
1. You’re writing a lab report for your Introduction to Physics class, but you need to change it to meet your professors formatting requirements. Change your text to use Times New Roman 12 pt. and center your title

2. There is a requirement of double spaced lines in MLA. The paper defaults to single spaced and needs to be adjusted. Change paper to double spaced.

3. After going through the paragraphs, you would like to add your drawn image at the top of your paper. Add the image stored at velocitydiagram.jpg to the top of the paper.

4. Proper header in the Document. Name, class, and date are needed to receive a grade for the week.

5. You've just finished a physics lab and have all of your data written out in a table in your notebook. The data measures the final velocity of a car going down a 1 meter ramp at 5, 10, 15, 20, and 25 degrees. Your professor wants your lab report to consist of a table of this data rather than hand-written notes. There’s a note in the document that says where to add the table.

[task also provided a 2×5 table of sample lab data]

6. You are reviewing your paper one last time before turning it into your professor. You notice some spelling errors which should not be in a professional paper. Correct the multiple spelling errors.

7. You want to save your notes so that you can look back on them when studying for the upcoming test. Save the document.

8. The report is all done! It is time to turn it in. However, the professor won’t accept Word documents and requires a PDF. Export the document as a PDF.
If those don't seem very groundbreaking, remember the point of the usability test "mini-project" was for the students to experience moderating their own usability test. I'd rather they make mistakes here, so they can learn from them before their final project.

Since each usability test was run with one tester, and we all used the same scenario tasks on the same version of LibreOffice, we can collate the results. I prefer to use a heat map to display the results of a usability test. The heat map doesn't replace the prose description of the usability test (what worked vs. what the challenges were) but it does provide a quick overview that allows focused discussion of the results.

In a heat map, each scenario task is on a separate row, and each tester is in a separate column. At each cell, if the tester was able to complete the task with little or no difficulty, you add a green block. Use yellow for some difficulty, and orange for greater difficulty. If the tester really struggled to complete the task, use a red block. Use black if the task was so difficult the tester was unable to complete the task.

Here's our heat map, based on fourteen students each moderating a one-person usability test (a "dry run" test) using the same scenario tasks for LibreOffice 5.3 or 5.3.1:


A few things about this heat map:

Hot rows show you where to focus

Since scenario tasks are on rows, and testers are on columns, you read a heat map by looking across each row and looking for lots of "hot" items. Look for lots of black, red, or orange. Those are your "hot" rows. And rows that have a lot of green and maybe a little yellow are "cool" rows.

In this heat map, I'm seeing the most "hot" items in setting double space (#2), adding a table (#5) and checking spelling (#6). Maybe there's something in adding a header (#4) but this scenario task wasn't worded very well, so the problems here might be because of the scenario task.

So if I were a LibreOffice developer, and I did this usability test to examine the usability of MUFFIN, I would probably put most of my focus on making it easier to set double space, add tables, and check spelling. I wouldn't worry too much about adding an image, since that's mostly green. Same for saving, and saving as PDF.

The heat map doesn't replace prose description of themes

What's behind the "hot" rows? What were the testers trying to do, when they were working on these tasks? The heat map doesn't tell you that. The heat map isn't a replacement for prose text. Most usability results need to include a section about "What worked well" and "What needs improvement." The heat map doesn't replace that prose section. But it does help you to identify the areas that worked well vs the areas that need further refinement.

That discussion of themes is where you would identify that task 4 (Add a header) wasn't really a "hot" row. It looks interesting on the heat map, but this wasn't a problem area for LibreOffice. Instead, testers had problems understanding the scenario task. "Did the task want me to just put the text at the start of the document, or at the top of each page?" So results were inconsistent here. (That was expected, as this "dry run" test was a learning experience for my students. I intentionally included some scenario tasks that weren't great, so they would see for themselves how the quality of their scenario tasks can influence their test.)

Different versions are grouped together

LibreOffice released version 5.3.1 right before we started our usability test. Some students had already downloaded 5.3, and some ended up with 5.3.1. I didn't notice any user interface changes for the UI paths exercised by our scenario tasks, but did the new version have an impact?

I've sorted the results so the 5.3.1 columns are off to the right; the headers show which columns represent LibreOffice 5.3 and which are 5.3.1. I don't see any substantial difference between them. The "hot" rows from 5.3 are still "hot" in 5.3.1, and the "cool" rows are still "cool."

You might use a similar method to compare different iterations of a user interface. As your program progresses from 1.0 to 1.1 to 1.2, etc, you can compare the same scenario tasks by organizing your data in this way.

You could also group different testers together

The heat map also lets you discuss testers. What happened with tester #7? There's a lot of orange and yellow in that column, even for tasks (rows) that fared well with other testers. In this case, the interview revealed that tester was having a bad day, and came into the test feeling "grumpy" and likely was impatient about any problems encountered in the test.

You can use these columns to your advantage. In this test, all testers were drawn from the same demographic: a university student around 18-22 years old, who had some to "moderate" experience with Word or Google Docs, but not LibreOffice.

But if your usability test intentionally included a variety of experience levels (a group of "beginner" users, "moderate" users, and "experienced" users) you might group these columns appropriately in the heat map. So rather than grouping by version (as above) you could have one set of columns for "beginner" testers, another set of columns for "moderate" testers and a third group for "experienced" testers.

Tuesday, March 21, 2017

LibreOffice 5.3.1 is out

Last week, LibreOffice released version 5.3.1. This seems to be an incremental release over 5.3 and doesn't seem to change the new user interface in any noticeable way.

This is both good and bad news for me. As you know, I have been experimenting with LibreOffice 5.3 since LibreOffice updated the user interface. Version 5.3 introduced the "MUFFIN" interface. MUFFIN stands for My User Friendly Flexible INterface. Because someone clearly wanted that acronym to spell "MUFFIN." The new interface is still experimental, so you'll need to activate it through Settings→Advanced. When you restart LibreOffice, you can use the View menu to change modes.

So on the one hand, I'm very excited for the new release!

But on the other hand, the timing is not great. Next week would have been better. Clearly, LibreOffice did not have my interests in mind when they made this release.

You see, I teach an online CSCI class about the Usability of Open Source Software. Really, it's just a standard CSCI usability class. The topic is open source software because there are some interesting usability cases there that bear discussion. And it allows students to pick their own favorite open source software project that they use in a real usability test for their final project.

This week, we are doing a usability test "mini-project." This is a "dry run" for the students to do their own usability test for the first time. Each student is running the test with one participant, but everyone is using the same program. We're testing the new user interface in LibreOffice 5.3, using Notebookbar in Contextual Groups mode.

So we did all this work to prep for the usability test "mini-project" using LibreOffice 5.3, only for the project to release version 5.3.1 right before we do the test. So that's great timing, there.

But I kid. And the new version 5.3.1 seems to have the same user interface path in Notebookbar-Contextual Groups. So our test should yield the same results in 5.3 or 5.3.1.

This is an undergraduate class project, and will not generate statistically significant results like a formal usability test in academic research. But the results of our test may be useful, nonetheless. I'll share an overview of our results next week.

Saturday, March 18, 2017

Will miss GUADEC 2017

Registration is now open for GUADEC 2017! This year, the GNOME Users And Developers European Conference (GUADEC) will be hosted in beautiful Manchester, UK between 28th July and 2nd August.

Unfortunately, I can't make it.

I work in local government, and just like last year, GUADEC falls during our budget time at the county. Our county budget is on a biennium. That means during an "on" year, we make our budget proposals for the next two years. In the "off" year, we share a budget status.

I missed GUADEC last year because I was giving a budget status in our "off" year. And guess what? This year, department budget presentations again happen during GUADEC.

During GUADEC, I'll be making our budget proposal for IT. This is our one opportunity to share with the Board our budget priorities for the next two years, and to defend any budget adjustment. I can't miss this meeting.

Friday, March 17, 2017

Learn Linux

Recently, a student asked me about career options after graduation. This person was interested in options that involved open source software, other than a "developer" position. Because this blog is about open source software, I thought you might be interested in an excerpt of my recommendation:
These days, if you want to get a job as a systems administrator, I recommend learning Linux. Linux administrators are in high demand in pretty much any metro area. In the Twin Cities metro area, it's hard not to have a job if you know Linux.

Red Hat is the most popular Linux distribution for the enterprise. So if you learn one Linux system, learn Red Hat Linux. While it's okay to use GUI tools to manage Linux, you should know how to maintain Red Hat Linux without a GUI. Because when you're running a server, you won't have a GUI.

A good way to learn this is to install Linux (Red Hat Linux, but Fedora can be "close enough") and run it in runlevel 3. Set up a cheap PC in your house and use it as an NFS and CIFS file server. Install a web server on it. Set up DNS on it. Learn how to edit config files at the command line. How to partition, format, and mount a disk (even a USB flash drive) from the command line. How to install packages from the command line. Learn how to write Bash scripts to automate things.

If you want a book, try Linux Systems Administration by O'Reilly Press. For formal training, I recommend Red Hat Sys Admin I and Red Hat Sys Admin II.
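To make that concrete, here are a few examples of the kinds of commands I'm talking about, assuming a Red Hat-style system (the package names and device names are just examples):

# boot to a text console instead of a GUI (the modern equivalent of runlevel 3)
systemctl set-default multi-user.target

# install a web server, DNS, NFS, and Samba/CIFS from the command line
# (dnf on Fedora; use yum instead on Red Hat Enterprise Linux)
dnf install httpd bind nfs-utils samba

# partition, format, and mount a spare disk or USB flash drive
fdisk /dev/sdb
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt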
My advice mirrors my own background. My undergraduate degree was in physics, with a major in mathematics, yet I managed a successful career in IT. For me, it was about following my interests, and doing what I enjoyed doing. There's nothing like getting paid to do something you wanted to do anyway.

When I got my first job, I was a Unix systems administrator for a small geographics company. We used a very old Unix system called Apollo AEGIS/DomainOS, but had a few HP-UX and SunOS systems. I introduced a few Linux systems to do back-office work like DNS, YP, LPD, etc. My second job was a working manager for a document management company, and while we were mostly AIX, HP-UX and Novell, I installed Linux to run our core "backoffice" services (DNS, file, web, etc). At my third job (working manager at a Big-Ten University) we ran a mix of systems, including AIX and Solaris, but I replaced some of our "Big Iron" AIX systems with Linux to save our web registration system.

From there, I was promoted into larger roles and greater responsibility, so I left my systems administration roots behind. At least, professionally. I still run Linux at home, and I maintain my own Linux server to run a few personal websites.

But I still remember my start as a Unix and Linux systems administrator. I was fortunate that my first boss took a chance on me, and let me learn on the job. That first boss was a supportive mentor, and helped me understand the importance of learning the ropes, of understanding how to do something at the command line before you can use the "shortcut" of a GUI tool. So I encourage others to do the same. Yes, modern Linux has lots of GUI tools to make things easy. But it's better to know how to do it "the old fashioned way" as well.

Saturday, March 4, 2017

Open source branding

I recently discovered this 2016 article from Opensource.com about branding in open source software. The article encourages projects to kill off extra brand names to help your project get recognized.

The article describes the issue in more detail, but here's a summary:
Let's say you are the maintainer for an open source software project, which I'll make up. Let's call it the Wibbler Framework.

Maybe this is a website builder. Developers can use Wibbler to create awesome, dynamic websites really easily. Wibbler is based around different modules that you can load on your website to do different things.

One day, you get a great idea for a new chart display component. So you spend the weekend writing a module that provides a super simple way for websites to display data in different formats.

You think the new module is pretty cool, so you give it a name: ChartZen.
Pretty realistic scenario, right?

But the problem is this: you've added an extra "brand" to your project. When new users find your project, they are confused. Is it Wibbler, or is it ChartZen? How does ChartZen connect to Wibbler? Do I need to get Wibbler if I just want to use ChartZen? Do I need to run two different servers, one to run Wibbler and another for ChartZen?

By adding the second name, you've confused the original project.

It gets worse if you continue to add new "branding" to every new module. Add a new name for every new module, and you end up with a confusing forest of competing "brands": Wibbler, ChartZen, AwesomeEdit, FontForce, FormsNirvana, DBConnSupreme, and Pagepaint.

It's better to just name each component after the main Wibbler project. Keep the Wibbler brand intact. ChartZen becomes "Wibbler Charts," which is easier to remember anyway. And it's immediately clear what "Wibbler Charts" does: it's a component for Wibbler that makes charts. Similarly, you also have "Wibbler Editor," "Wibbler Font Picker," "Wibbler Forms," "Wibbler Database Connector," and "Wibbler Page Designer."

How do you manage your open source software project's "brand"? Do you have different names for different components? Or do you maintain one core "brand" that everything connects to?

Sunday, February 26, 2017

How to get started in open source software

A friend pointed me to the Open Source Guides website, a collection of resources for individuals, communities, and companies who want to learn how to run and contribute to an open source project. I thought it was very interesting for new contributors, so I thought I'd share it here.

The website provides lots of information about starting or joining an open source project. There's lots to read, but I hope that doesn't seem like too much for people interested in getting started in open source software. Open Source Guides has several sections for new developers:
  1. How to Contribute to Open Source
  2. Starting an Open Source Project
  3. Finding Users For Your Project
  4. Building Welcoming Communities
  5. Best Practices for Maintainers
  6. Leadership and Governance
  7. Getting Paid for Open Source Work
  8. Your Code of Conduct
  9. Open Source Metrics
  10. The Legal Side of Open Source
I'm not connected with Open Source Guides, but I think it's a great idea!

Of course, there are other ways to learn about open source software, how to get involved and how to start your own project. Eric Raymond's essay series The Cathedral and the Bazaar is the typical go-to reference about open source software. Along similar lines, Opensource.com has a very brief primer on open source contributor guidelines. And there are countless other articles and books I could mention here, including a few articles written by me.

But I'm interested in the open source software community, and anything that helps new folks get started in open source software will help the community grow. So if you're interested in getting involved in open source software, I encourage you to read the Open Source Guides.

Friday, February 24, 2017

Top open source projects

TechRadar recently posted an article about "The best open source software 2017" where they list a few of their favorite open source software projects. It's really hard for an open source software project to become popular if it has poor usability—so I thought I'd add a few quick comments of my own about each.

Here you go:

The best open source office software: LibreOffice


LibreOffice hasn't changed its user interface very substantially for a very long time. In the recent LibreOffice 5.3 release, they introduced a new interface option, which they call the MUFFIN (My User Friendly Flexible INterface).

The new interface has several modes, including Single Toolbar, Sidebar, and Notebookbar. The last mode, Notebookbar, is interesting. This is very similar in concept to the Microsoft Office Ribbon. People who come from an Office background and are used to how Ribbon behaves, and how it changes based on what you are working on, should like the Notebookbar setting.

To comment on the new interface: I think this is an interesting and welcome direction for LibreOffice. I don't think the current user interface is bad, but I think the proposed changes are a positive step forward. The new MUFFIN interface is flexible and supports users the way they want to use LibreOffice. I think it will appeal to current and new users, and "lower the bar" for users coming to LibreOffice from Microsoft Office.

The best open source photo editor: GIMP


I use GIMP at home for a lot of projects. Most often, I use GIMP to create and edit images for my websites, including the FreeDOS Project website. Although we've recently turned to SVG where possible on the FreeDOS website, for years all our graphics were made in the GIMP.

A few years ago, I asked readers to suggest programs that have good usability (I also solicited feedback through colleagues via their blogs and their readers). Many people talked about GIMP, the open source graphics program (very similar to Photoshop). There were some strong statements on either side: About half said it had good usability, and about half said it had bad usability.

In following up, it seemed that two types of users thought GIMP had poor usability:

  • Those who used Photoshop a lot, such as professional graphics editors or photographers.
  • Those who never used Photoshop, and only tried GIMP because they needed a graphics program.

So GIMP is an interesting case. It's an example of mimicking another program perhaps too well, but (necessarily) not perfectly. GIMP has good usability if you have used Photoshop occasionally, but not if you are an expert in Photoshop, and not if you are a complete Photoshop novice.

The best open source media player: VLC


I haven't used VLC, part of the VideoLAN project, in a few years. I just don't watch movies on my computer. But looking at the screenshots I see today, I can see VLC has made major strides in ease of use.

The menus seem obvious, and the buttons are plain and simple. There isn't much decoration to the application (it doesn't need it) yet it seems polished. Good job!

The best open source video editor: Shotcut


This is a new project for me. I have recorded a few YouTube videos for my private channel, but they're all very simple: just me doing a demo of something (usually related to FreeDOS, such as how to install FreeDOS.) Because my videos aren't very complicated, I just use the YouTube editor to "trim" the start and end of my videos.

Shotcut seems quite complicated to me, at first glance. Even TechRadar seems to agree, commenting "It might look a little stark at first, but add some of the optional toolbars and you'll soon have its most powerful and useful features at your fingertips."

I'm probably not the right audience for Shotcut. Video is just not my interest area. And it's okay for a project to target a particular audience, if they are well suited to that audience.

The best open source audio editor: Audacity


I used Audacity many years ago, probably when it was still a young project. But even then, I remember Audacity as being fairly straightforward to use. For someone (like me) who occasionally wanted to edit a sound file, Audacity was easy to learn on my own. And the next time I used Audacity, perhaps weeks later, I quickly remembered the path to the features I needed.

Those two qualities (Learnability and Memorability) are two of the key characteristics of good usability. We cover this topic in my online class about usability. The five key characteristics of Usability are: Learnability, Efficiency, Memorability, Error Rates, and Satisfaction. Although that last one is getting close to "User eXperience" ("UX"), which is not the same as Usability.

The best open source web browser: Firefox


Firefox is an old web browser, but still feels fresh. I use Firefox on an almost daily basis (when I don't use Firefox, I'm usually in Google Chrome.)

I did usability testing on Firefox a few years ago, and found it does well in several areas:

Familiarity: Firefox tries to blend well with other applications on the same operating system. If you're using Linux, Firefox looks like other Linux applications. When you're on a Mac, Firefox looks like a Mac application. This is important, because UI lessons that you learn in one application will carry over to Firefox on the same platform.

Consistency: Features within Firefox are accessed in a similar way and perform in a similar way, so you aren't left feeling like the program is a mash of different coders.

Obviousness: When an action produces an obvious result, or clearly indicated success, users feel comfortable because they understand what the program is doing. They can see the result of their actions.

The best open source email client: Thunderbird


Maybe I shouldn't say this, but I haven't used a desktop email client in several years. I now use Gmail exclusively.

However, the last desktop email client I used was definitely Thunderbird. And I remember it being very nice. Sometimes I explored other desktop email programs like GNOME Evolution or Balsa, but I always came back to Thunderbird.

Like Firefox, Thunderbird integrated well into whatever system you use. Its features are self-consistent, and actions produce obvious results. This shouldn't be surprising, though. Thunderbird was originally a Mozilla project.

The best open source password manager: KeePass


Passwords are the keys to our digital lives. So much of what we do today is done online, via a multitude of accounts. With all those accounts, it can be tempting to re-use passwords across websites, but that's really bad for security; if a hacker gets your password on one website, they can use it to get your private information from other websites. To practice good security, you should use a different password on every website you use. And for that, you need a way to store and manage those passwords.

KeePass is an outstanding password manager. There are several password managers to choose from, but KeePass has been around a long time and is really solid. With KeePass, it's really easy to create new entries in the database, group similar entries together (email, shopping, social, etc.) and assign icons to them. And a key feature is generating random passwords. KeePass lets you create passwords of different lengths and complexity, and provides a helpful visual color guide (red, yellow, green) to suggest how "secure" your password is likely to be.

Monday, February 20, 2017

More about DOS colors

In a followup to my discussion about the readability of DOS applications, I wrote an explanation on the FreeDOS blog about why DOS has sixteen colors. That discussion seemed too detailed to include on my Open Source Software & Usability blog, but it was a good fit for the FreeDOS blog.

It's an interesting overview of how color came to be encoded on PC-compatible computers. The brief overview is this:

CGA, the Color/Graphics Adapter from the early PC-compatible computers, could mix red (R), green (G), and blue (B). So that's eight colors, from 000 Black to 111 White.

Add an "intensifier" bit, and you have sixteen colors, eight colors from 0000 Black to 0111 White, and another eight colors from 1000 Bright Black to 1111 Bright White.

There's a bit more about the background and the bit-pattern to represent colors. Read the full article for more: Why DOS has sixteen colors

The readability of DOS applications

My recent article about how web pages are becoming hard to read had me wondering: I grew up with DOS, and I still work with DOS, so what's the readability of DOS applications?

Web pages are mostly black-on-white or dark-gray-on-white, but anyone who has used DOS will remember that most DOS applications were white-on-blue. Sure, the DOS command line was white-on-black, but almost every popular DOS application used white-on-blue. (It wasn't really "white," but we'll get there.) Do an image search for any DOS application from the 1980s and early 1990s, and you're almost guaranteed to find a forest of white-on-blue images like these:

FreeDOS 1.2 install program

As Easy As, the shareware DOS spreadsheet

Free Point of Sale for DOS, by Dale Harris

DosZip, the file manager for DOS

FreeDOS EDIT

DOS is an old operating system. DOS stands for "Disk Operating System" and was designed to let you run applications. People like to think of DOS as a command-line operating system, and while you could manipulate file contents at the command prompt with a limited set of tools, DOS didn't have the rich set of command-line tools that Unix and Linux enjoy. You mostly used the DOS command line to run different applications.

As an operating system interface, DOS was entirely text-based. It used BIOS services on your computer to do most of its work, including displaying text. With DOS, you had a color palette of sixteen colors, enumerated 0 (black) to F (bright white). But most users didn't know the 0–F codes; they knew the ANSI escape codes, which cover eight colors in a "normal" and a "bright" mode, for sixteen in all. I've also included the RGB color representation of each:
Code  Normal   RGB          Bright          RGB
30    Black    0,0,0        Gray            85,85,85
31    Red      170,0,0      Bright Red      255,85,85
32    Green    0,170,0      Bright Green    85,255,85
33    Brown    170,85,0     Yellow          255,255,85
34    Blue     0,0,170      Bright Blue     85,85,255
35    Magenta  170,0,170    Bright Magenta  255,85,255
36    Cyan     0,170,170    Bright Cyan     85,255,255
37    White    170,170,170  Bright White    255,255,255
You used these codes by loading an ANSI driver (called ANSI.SYS on MS-DOS, or NANSI.SYS on FreeDOS) and entering an ANSI escape sequence. Most people I know used the ANSI escape codes to make their DOS prompt more colorful, which you could do by using $E to represent an ESC character. For example, you could set red text with $E[31m or bright red text with $E[31;1m. Similarly, the range 40–47 represented background colors.
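
For example, a typical colorful prompt setup looked something like this. This is just an illustrative sketch; the driver path is an example and will vary by system.
REM In CONFIG.SYS, load the ANSI driver (use NANSI.SYS on FreeDOS):
DEVICE=C:\DOS\ANSI.SYS

REM Then in AUTOEXEC.BAT, set a bright-white-on-blue prompt and reset colors afterward:
PROMPT $e[1;37;44m$p$g$e[0m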

You may wonder about the brown/yellow line. That's not a typo; it wasn't really "yellow" and "bright yellow" although some references did call them that.

The general trend in the RGB color representations is that the "normal" colors use the values 0 and 170 (brown, again, is the odd one out with its 85 green component), while the "bright" colors replace 0 with 85 and 170 with 255.

If you have any familiarity with DOS, you should remember most applications using white-on-blue text. But how readable was this text? Using the Bash script from my article about calculating contrast ratios of text (below), we can compute the readability of a few common color configurations from popular DOS applications.

White text on a blue background was generally considered, at the time, easier to read and prettier to look at than plain white-on-black. And if you'd sprung for that expensive monitor that could display colors, you wanted color. So white-on-blue quickly became a de facto standard.

Remember that DOS didn't support styling of text. You couldn't do italics or bold. Instead, applications such as word processors used colors to represent styles. Most text would be displayed as white-on-blue. Bold text was bright-white-on-blue, italic text was often green-on-blue or cyan-on-blue, and headings were often yellow-on-blue. Error messages might appear in white-on-red or black-on-red, with the title in bright-white-on-red. Warnings might be black-on-brown or white-on-brown, with yellow-on-brown titles. Status lines were frequently black-on-cyan or black-on-white.

Let's examine the contrast ratios of these color combinations:
Colors                 Contrast ratio
White on Blue          5.71
Bright White on Blue   13.29
Yellow on Blue         12.45
Green on Blue          4.26
Cyan on Blue           4.63
Black on Red           2.70
White on Red           3.33
Bright White on Red    7.75
Black on Brown         4.00
White on Brown         2.25
Yellow on Brown        4.91
Black on Cyan          7.32
Black on White         9.03
The W3C definition of the contrast ratio falls in the range 1 to 21 (typically written 1:1 or 21:1). The higher the contrast ratio, the more the text will stand out against the background. For example, black text on a white background has a contrast ratio of 21:1.
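
As a rough check of the numbers in the table above (the W3C formula is simply (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminance values of the lighter and darker colors): DOS white (170,170,170) has a relative luminance of about 0.40, and DOS blue (0,0,170) about 0.029, so white-on-blue works out to roughly (0.40 + 0.05) / (0.029 + 0.05), or about 5.7, in line with the 5.71 in the table.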

The W3C says body text should have a contrast ratio of at least 4.5:1, with headings at least 3:1. But that seems to be the bare minimum. The W3C also recommends at least 7:1 for body text and at least 4.5:1 for headings.

But it's also important to remember that DOS text was quite large, compared to today's standards. By default, DOS used an 80-column, 25-line display. Even on a modest 15-inch display (not unreasonable for the time) each character is around .15" [3.81mm] wide and .36" [9.144mm] high. That's quite large compared to today's websites that may use 11pt text. (Assuming your DPI is set correctly for your display, if 72pt is an inch, 11pt is about .152" [3.86mm] high.)

With text at that scale, I think that means the minimum contrast ratio for DOS applications can be somewhere between the W3C's recommendations for body text and heading text. Let's assume a round number of about 4:1.

So how do DOS applications stack up? Notice that white-on-blue is a very comfortable 5.71. Actually, in the above examples, all text on the blue background is quite readable. Other colors are quite clear, as well. Only black-on-red (2.70), white-on-red (3.33) and white-on-brown (2.25) fall below the recommended minimum of 4:1.

Let's examine the DOS application screenshots. The FreeDOS 1.2 installer uses black-on-white (9.03) for its main text, with list selection in yellow-on-blue (12.45). The FreeDOS EDIT program uses white-on-blue (5.71) for its main text, with its menu in black-on-white (9.03) and status bar in black-on-cyan (7.32).

The As Easy As spreadsheet used white-on-blue (5.71) for its main text and data entry line, with comments in green-on-blue (4.26), column and row labels in black-on-white (9.03), a black-on-white (9.03) status bar, and a white-on-black (9.03) hint line.

These are all very easy to read colors, even by today's standards. I'm not suggesting that websites switch to a white-on-blue color scheme, but it is interesting to note that even with a simple color palette, DOS applications were doing okay for readability.

Saturday, February 18, 2017

Calculating contrast ratios of text

In a comment on my other article about how web pages are becoming hard to read, Shaun referenced the W3C Web Content Accessibility Guidelines. They provide an algorithm to determine if your text meets minimum accessibility guidelines.

The W3C definition of the contrast ratio requires several calculations: given two colors, you first compute the relative luminance of each (L1 and L2) then calculate the contrast ratio. The ratio will fall in the range 1 to 21 (typically written 1:1 or 21:1). The higher the contrast ratio, the more the text will stand out against the background. For example, black text on a white background has a contrast ratio of 21:1.

The W3C says body text should have a contrast ratio of at least 4.5:1, with headings at least 3:1. But that seems to be the bare minimum. The W3C also recommends at least 7:1 for body text and at least 4.5:1 for headings.

Calculating this can be a chore, so it's best to automate it. Shaun implemented the algorithm in XSLT so he could test the various colors in websites. I created a similar implementation using Bash. It's a little ugly, but I thought I'd share it here:

First, you need a way to input colors. I wanted something that could interpret different representations of colors: in html and css, black is the same as #000 or #000000 or rgb(0,0,0). When evaluating the readability of my text, I might want to use any of these.

Fortunately, there's a neat tool in GNOME to provide that input. GNOME Zenity is a scripting tool to display GTK+ dialogs. It supports many modes to read input and display results. One of the input modes is a color selector, which you use this way:
zenity --color-selection
You can give it other options to set the window title and provide a default color. Zenity returns the selected color on standard output. So to present two dialogs, one to read the text color and another to read the background color, you simply do this:
color=$( zenity --title 'Set text color' --color-selection --color='black' )
background=$( zenity --title 'Set background color' --color-selection --color='white' )
Zenity returns values like rgb(255,140,0) and rgb(255,255,255), which is convenient because the W3C calculation for luminance starts from values in the range 0 to 255. I wrote a simple function to pull apart the RGB values. There are probably simpler ways to parse RGB, but a quick and easy way is to let awk split the value at the commas. That means a value like rgb(255,140,0) gets split into rgb(255 and 140 and 0), so the R value is a substring of the first field starting at the fifth character, G is simply the second field, and B is the third field with the trailing parenthesis removed.

Once I have the RGB values, I calculate the luminance using bc. The funky math with e() and l() is there to get around a limitation in bc: the formula requires a fractional power, and bc can only do integer powers. But since x^2.4 = e( 2.4 * l(x) ), where l() is the natural logarithm and e() is the exponential function from bc's math library (the -l option), you can get there using e() and l():
function luminance()
{
        R=$( echo $1 | awk -F, '{print substr($1,5)}' )
        G=$( echo $1 | awk -F, '{print $2}' )
        B=$( echo $1 | awk -F, '{n=length($3); print substr($3,1,n-1)}' )

        echo "scale=4
rsrgb=$R/255
gsrgb=$G/255
bsrgb=$B/255
if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
}
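To see that workaround on its own, you can try a quick one-liner at the shell. The value 0.5 raised to the 2.4 power is about 0.189, and the e()/l() trick should give you something very close to that (the exact digits depend on bc's precision):
echo 'scale=4; e( 2.4 * l(0.5) )' | bc -l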
Once you have the luminance values of the text color and background color, you can compute the contrast ratio. The W3C formula to do this is quite simple, but it requires knowing which color is lighter and which is darker, so that takes an extra step in bc. I wrote this Bash function to calculate the ratio from the two luminance values:
function contrast()
{
        echo "scale=2
if ( $1 > $2 ) { l1=$1; l2=$2 } else { l1=$2; l2=$1 }
(l1 + 0.05) / (l2 + 0.05)" | bc
}
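With both functions defined in your shell, you can spot-check the DOS numbers from the readability article above. For example, DOS white on DOS blue should come out right around 5.71:
lum1=$( luminance 'rgb(170,170,170)' )   # DOS "white"
lum2=$( luminance 'rgb(0,0,170)' )       # DOS blue
contrast $lum1 $lum2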
With those functions, it's fairly straightforward to write a Bash script that reads two colors, then computes the contrast ratio. My script also uses Zenity to output the data:
#!/bin/bash

# read color and background color:

color=$( zenity --title 'Set text color' --color-selection --color='black' )
if [ $? -ne 0 ] ; then
        echo '** color canceled - assume black'
        color='rgb(0,0,0)'
fi

background=$( zenity --title 'Set background color' --color-selection --color='white' )
if [ $? -ne 0 ] ; then
        echo '** background canceled - assume white'
        background='rgb(255,255,255)'
fi

# compute luminance:

function luminance()
{
        # same as the luminance function shown above
        R=$( echo $1 | awk -F, '{print substr($1,5)}' )
        G=$( echo $1 | awk -F, '{print $2}' )
        B=$( echo $1 | awk -F, '{n=length($3); print substr($3,1,n-1)}' )

        echo "scale=4
rsrgb=$R/255
gsrgb=$G/255
bsrgb=$B/255
if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 * l((rsrgb+0.055)/1.055) )
if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 * l((gsrgb+0.055)/1.055) )
if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 * l((bsrgb+0.055)/1.055) )
0.2126 * r + 0.7152 * g + 0.0722 * b" | bc -l
}

lum1=$( luminance $color )
lum2=$( luminance $background )

# compute contrast

function contrast()
{
        # same as the contrast function shown above
        echo "scale=2
if ( $1 > $2 ) { l1=$1; l2=$2 } else { l1=$2; l2=$1 }
(l1 + 0.05) / (l2 + 0.05)" | bc
}

rel=$( contrast $lum1 $lum2 )

# print results

( echo "Color is $color on $background"
echo "Contrast ratio is $rel"

if [ ${rel%.*} -ge 4 ] ; then
        echo "Ok for body text"
else
        echo "Not good for body text"
fi
if [ ${rel%.*} -ge 3 ] ; then
        echo "Ok for title text"
else
        echo "Not good for title text"
fi) | zenity --text-info --title='Contrast Ratio'
With this script, I have a handy way to calculate the contrast ratio of two colors: text color vs background color. For black text on a white background, the contrast ratio is 21.00, the most visible. The #333 dark gray on white has a contrast ratio of 12.66, which is fine. And the lighter #808080 gray on white has a contrast ratio of 3.95, too low for normal text but acceptable for large text like headings. Very light #ccc gray on white has a contrast ratio of 1.60, which is way too low.

Wednesday, February 15, 2017

I can't read your website

An article at Backchannel discusses an interesting trend in website design, and how the web became unreadable. It's a good read, but I'll summarize briefly:

Web pages are becoming too hard to read.

Put another way, a popular trend in web design is to use low-contrast text. Maybe that looks really cool, but it is also really hard to read. From the article: "I thought my eyesight was beginning to go. It turns out, I’m suffering from design."

I've noticed this trend too, and I do find it hard to read certain websites. Even this blog used to use #333 dark gray text on white, just because I thought it looked better that way. And to be honest, others were doing it, so I did it too. But when I changed the text to black on white, I found my blog easier to read. I hope you do too.

The colors you choose for your text can affect the readability of your site. This is directly connected to usability.

Don't believe me? Here is a sample paragraph, repeated using different colors.

White on black:
Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before.
Black on white:
Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before.
White on dark gray:
Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before.
Dark grey on white:
Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before.
White on gray:
Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before.
Gray on white:
Space: the final frontier. These are the voyages of the starship Enterprise. Its continuing mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no one has gone before.
Which one do you find easiest to read?

Saturday, February 11, 2017

Experimenting with LibreOffice 5.3

I finally installed LibreOffice 5.3 to try it out. (This is actually version 5.3.0.3.) This version comes with a new interface called MUFFIN, which I wrote about as LibreOffice updating its user interface.

MUFFIN stands for My User Friendly Flexible INterface. Because someone clearly wanted that acronym to spell "MUFFIN." The new interface is still experimental, so you'll need to activate it through Settings→Advanced. When you restart LibreOffice, you can use the View menu to change modes. The new interface has several modes:
  1. Default
  2. Single Toolbar
  3. Sidebar
  4. Notebookbar
You can probably guess what the first three modes are about. These just tweak the interface in different ways, but I'd say it's still very "LibreOffice-y."

The last mode, Notebookbar, is interesting. It is very similar in concept to the Microsoft Office Ribbon. People who come from an Office background, and are used to how the Ribbon behaves and how it changes based on what you are working on, should like the Notebookbar setting.

And in Notebookbar, you have a few options:
  1. Contextual groups
  2. Contextual single
  3. Tabbed
For me, "Tabbed" was the default when I activated Notebookbar. LibreOffice functions are divided into different tabs, which are clearly labelled. New tabs appear and disappear as suits the context of what you are working on. For example, if you insert a table, then when you go into the table, you get a "Table" tab, with different table-oriented actions like adding a new row or column.

Here are a few quick screenshots of the different tabs in Notebookbar. The "Home" tab is the default, so that's my first screenshot:

I haven't experimented too much with the other modes in Notebookbar, but "Contextual single" gives you a single action bar loaded with icons. I find it too busy, even though there's a lot of empty space in it. The single bar just "feels" too busy.

"Contextual groups" is closer to what you might think of as the "Microsoft Office Ribbon." Rather than adding new tabs to expose new functionality, the Notebookbar changes the content of the bar to add features as they are needed. If you insert a new table, then a table style menu appears. Exit the table, and the Notebookbar removes the table style menu in favor of other actions.

I'll need more time to explore and experiment with Notebookbar. My first impression is that I like it, and that I prefer tabs to contextual groups. I may share more on this blog as I continue to learn the new interface.