Wednesday, October 17, 2012

Usability Testing of Web Sites

Both Thomsen-Scott and Tolliver et al. detail the process of performing usability testing for library websites. While Thomsen-Scott focuses mainly on the process of the testing itself (and the importance of the user from beginning to end), Tolliver discusses the testing through the lens of help from consultants.

I found Thomsen-Scott's article a very accessible introduction to the world of usability testing. She explained in thorough detail three testing methods--formal usability tests, focus groups, and cognitive walkthroughs--and the steps the University of North Texas libraries took in applying the first two to their "Ask a Librarian" and home pages.

The formal usability tests involved direct observations of users as they navigated the website, performing tasks pre-assigned by the testers. This method included sophisticated video-capture software to track users' movements throughout the website, as well as the more low-tech think-aloud method, in which users spoke their thoughts out loud as they worked. Thomsen-Scott made sure to note that participants were given chocolate or candy after the testing, both as a reward and as encouragement for those who had difficulty using the site. Acknowledgement in the form of sweets goes a long way!
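For readers who, like me, had trouble picturing what "pre-assigned tasks" look like in practice, here is a minimal sketch of a low-tech session logger in Python. To be clear, this is my own illustration, not anything from the article: the tasks, field names, and workflow are all invented placeholders.

```python
import time

# Hypothetical task list for a formal usability test; the actual UNT
# tasks are not given in the article, so these are placeholders.
TASKS = [
    "Find the 'Ask a Librarian' chat link from the home page",
    "Locate this week's library hours",
]

def run_session(participant_id):
    """Time each pre-assigned task and capture the observer's notes --
    a rough stand-in for the think-aloud observations described above."""
    results = []
    for task in TASKS:
        input(f"[{participant_id}] Press Enter to start: {task}")
        start = time.monotonic()
        notes = input("Observer notes (think-aloud comments, problems): ")
        results.append({
            "participant": participant_id,
            "task": task,
            "seconds": round(time.monotonic() - start, 1),
            "notes": notes,
        })
    return results

if __name__ == "__main__":
    for row in run_session("P01"):
        print(row)
```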

Focus groups, another low-tech method, gathered participants together to discuss issues with the libraries' website in an open, free-flowing manner. Again, the comfort of the participants was on the testers' minds: when one focus group did not communicate as openly as expected, the next group was hosted in a more welcoming room and a more conversational format to encourage discussion.

Although UNT did not use cognitive walkthroughs as part of its usability testing, Thomsen-Scott helpfully explains this method--in which evaluators step through a set of tasks themselves, asking at each step whether a new user would know what action to take and would recognize that they are making progress--for librarians who might find it useful in their own testing.

Thomsen-Scott's focus on the users extended from the very rationale for the usability testing--making sure the websites deliver what users want--to the methodology of the testing: candy was given to test participants; focus groups were held in welcoming rooms to encourage discussion; and pizza is suggested for librarian friends who volunteer to help with cognitive walkthroughs. Thomsen-Scott shows awareness of the needs of others, not only in her examples of her libraries' usability tests but also in her clear, accessible tutorial on usability testing. As she points out, with the increase in online courses and other offerings from university libraries, websites constantly need to be updated to meet users' needs. Librarians have a good model to follow in Thomsen-Scott's example.

Meanwhile, Tolliver et al. focus on the testing itself as experienced with the help of a consultant affiliated with the University of Michigan libraries. Whereas Thomsen-Scott performed the usability review as an update to an existing system, Tolliver describes a complete overhaul of the UM website, a project whose scale necessitated usability tests during the design and development phase. This illustrates the flexibility of usability tests: they may be conducted at any point in the development life cycle, with the timing most likely determined by the scope of the project.

Tolliver states that the benefits of using a consultant for testing, rather than existing library staff, are the expert experience a consultant brings; a neutral, outsider perspective; and the time saved by librarians' not having to teach themselves how to conduct tests. This last argument is not convincing, however: as Tolliver notes, usability testing is important, especially given the need to keep revising websites as services and expectations change. Rather than pay a consultant each time a library performs usability testing, it would be more cost-effective in the long run for librarians to learn the skills themselves; this would take time up front, but the savings over the longer term would justify it. Also, Tolliver notes that the consultants, while "expert" at testing, were not so expert at creating the content for the testing; rather, they had to defer to the librarians on what usability tasks needed to be tested. Finally, the consultants played only a small role during follow-up testing, giving feedback to librarians but not much else.

The scope of the two library systems' projects differed vastly, and perhaps the scale of the UM overhaul is what made a consultant necessary, but Tolliver does not make this argument himself. Both Tolliver and Thomsen-Scott do agree, however, on the importance of usability testing for library websites.


Wednesday, October 3, 2012

Rethinking the ILS

Marshall Breeding in "Re-Integrating the Integrated Library System" and Sai Deng in "Beyond the OPAC" both cite Google in explaining the need to improve library information systems, and, although they differ in how they approach the problem, both offer compelling solutions.

Breeding takes a broader view, looking at the ILS as a whole and describing its history from the 1970s, when it offered near-complete automation of library functions. But as Breeding notes, the ILS came of age when libraries held only print materials, and it was never really updated to account for the ever-increasing number of digital materials in a library's collection. This has necessitated numerous add-ons such as link resolvers, metasearch products, and electronic resource management applications. While these add-ons adequately handle digital content, and in fact allow for customization of an ILS, Breeding notes that constructing this patchwork of ILS plus assorted add-ons requires "a lot of planning, design, and coordination"--that is, it is labor-intensive.
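To make "link resolver" a little more concrete: these add-ons typically work by accepting an OpenURL, an ordinary URL that carries citation metadata as query parameters, and routing the user to the library's licensed copy. Below is a minimal Python sketch of building such a URL; the resolver address and citation values are hypothetical, and a real resolver product would be configured by the library itself.

```python
from urllib.parse import urlencode

# Hypothetical link-resolver endpoint; each library configures its own.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(citation):
    """Encode citation metadata as OpenURL-style query parameters.
    The link resolver uses these to locate the library's copy of the item."""
    params = {
        "genre": "article",
        "issn": citation["issn"],
        "volume": citation["volume"],
        "spage": citation["start_page"],
        "date": citation["year"],
        "atitle": citation["title"],
    }
    return RESOLVER_BASE + "?" + urlencode(params)

# Placeholder citation, not a real reference.
print(build_openurl({
    "issn": "0000-0000",
    "volume": "12",
    "start_page": "34",
    "year": "2005",
    "title": "An Example Article",
}))
```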

The obvious solution--creating an ILS that incorporates the functionality of all of these add-ons--is cost-prohibitive, Breeding concedes. Rather, he envisions gradual improvement of the ILS-plus-add-ons framework: as these supplements are still fairly new, it will take some time and adjustment before the combination becomes more workable. Either way, as Breeding notes, users will demand improvement, and with Google's and Amazon's simple interfaces just one click away, libraries have no choice but to comply.

Deng agrees that Google's interface is much better than most libraries' and approaches improvement strictly with regard to the OPAC: while Breeding looks at the ILS as a whole, Deng examines only the OPAC component, using the example of Google's and Yahoo's personalized interfaces and a case study of a collection website to argue for more personalization of OPAC interfaces.

Deng describes the process of creating a website for specialized data within an ILS--in this case, faculty-produced literature at Wichita State University. While the steps to create the initial website are numerous and complicated, Deng notes that once it has been constructed, altering it to create yet another tailored site is much simpler: subsequent sites can be customized by language, user type, library type, author, subject, or topic (see the sketch below). The advantages--"better web presentation, easier discovery, and greater user attention"--are hard to dismiss.
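As a rough illustration of why those subsequent sites are "much simpler" to derive: once the records carry consistent metadata, each tailored site is essentially a facet filter over the same underlying data. Here is a minimal Python sketch, with invented fields and records, since the article does not spell out Wichita State's actual schema.

```python
# Invented record structure for faculty-produced literature; the actual
# Wichita State metadata fields are not specified in the article.
RECORDS = [
    {"author": "A. Smith", "dept": "Chemistry", "lang": "eng",
     "subject": "catalysis", "title": "Notes on Surface Catalysis"},
    {"author": "B. Jones", "dept": "History", "lang": "eng",
     "subject": "regional history", "title": "Kansas on the Plains"},
]

def tailored_view(records, **facets):
    """Return only the records matching every requested facet --
    one query per 'subsequent site' customized by language, author,
    subject, and so on."""
    return [r for r in records
            if all(r.get(key) == value for key, value in facets.items())]

# A department-specific site is just another facet query over the same data.
for record in tailored_view(RECORDS, dept="Chemistry", lang="eng"):
    print(record["title"])
```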


Wichita State University's Faculty Research Showcase


Both Deng and Breeding offer solutions to the challenges Google, Amazon, and Yahoo have posed for libraries, and both have merit: while Breeding's argument to keep improving what we have works on the large scale, Deng's suggestion to do what we can to enhance the OPAC is a more focused approach.