ABSTRACT
In a practical and user-centered model for online archival description, what navigational features are effective, efficient, and user-valued components for an academic archives' online finding aid? Using Princeton University's finding aid website as a prototype, this research study collected quantitative as well as qualitative data from ten relatively inexperienced online finding aid users as they interacted with and reacted to the finding aid interface. Major navigational difficulties experienced by users included ambiguous and/or unintuitive labeling, unclear relationships between tabs, and insufficient visual cues for certain navigational features. In contrast, user-valued navigational aids included centralized hyperlinked content, nested and hierarchical content tabs, and a collection-level search bar. The article concludes with ten pragmatic guidelines for archival professionals trying to solve the ongoing puzzle of online finding aid usability.
The traditional archival finding aid was a physical document crafted by an archivist that expressed the structure and content of a collection of materials accessible only from the controlled environment of a supervised reading room. In the last few decades, however, the archival finding aid has transitioned from static document to online interface. Online archival description represents a major step forward in that it facilitates enhanced discovery through remote interaction with collections and allows for wider and easier access to previously sequestered archival materials. Yet users increasingly expect the online finding aid to act as the sole point of access to archival collections, and that expectation has proven a very tall order for archivists to fill.
The uniqueness and diversity of archival collections, their complicated provenance and context, and their often intricate hierarchical structure all make effective presentation of archival information on the Web a challenge. In the past, archivists have been accused of developing and implementing online archival description without considering user needs.1 Arguably, the profession is still operating outside the user-centered systems movement when it comes to tools and interfaces for the online presentation of archival materials.2 The last two decades of professional discourse about online archival description reveal that, while many extolled the merits of Encoded Archival Description (EAD) for online finding aids early on,3 significant room remains for improvement for online finding aids, especially in the realms of usability, navigation, and user interface design.4
To date, a few dozen usability studies have focused on online archival interfaces, and while most of these were relatively small in scope and scale,5 some were more extensive.6 Taken together, these studies point to several predominant and widespread usability issues, including but not limited to confusing profession-specific jargon, lengthy blocks of unstructured text, long lists of folders and subfolders, and numerous links embedded throughout extensive descriptive hierarchies.7 Suggested solutions to these challenges include simplified labeling terminologies,8 advanced keyword search options,9 and “quick links” for topical searching.10 As a profession, we are just beginning to understand what the ideal user interface might look like for online archival content, and certainly no model specific to finding aid navigation has been proposed yet. In more recent years, several studies have called for further progress and rigor in archival research investigations of online user behavior and information-seeking.11 However, even as many institutions transition to newer archival information management systems and user interfaces, relatively few have considered the added value that improved navigational features could offer online researchers.
In response to this lacuna, this study asks the central research question: what navigational features are effective, efficient, and user-valued components within an academic archives' online finding aid interface? Discovering the answer requires understanding the needs and expectations of users, testing vetted navigational models, and marrying two fields that, until recently, have been siloed in their respective disciplines—online archival description and Web usability.
Literature Review
Online Finding Aids: The Good, the Bad, and the Ugly
Now nearly two decades old, online finding aids have a complicated history within the archives profession. When EAD and online finding aids were new to the scene, they received a wealth of scholarly support and attention. The American Archivist dedicated its entire summer and fall issues of 1997 to a discussion of EAD and its implementation.12 These issues heralded EAD as a potentially groundbreaking technology that the archival community should support and contribute to. Early proponents of EAD were confident in the schema's features, optimistic about its incorporation into professional practice, and even went so far as to imply that EAD finding aids were the logical next step for archival description. Overall, the sense existed that it was never too soon for any institution to begin adopting EAD and putting archival content online, as its merits were obvious and significant.
While EAD's reception was undeniably positive in these initial moments, Dennis Meissner could see that online finding aids would need substantial reengineering in terms of look, feel, and structure before they could be effective as online collection descriptions.13 He stressed the need “to create finding aids that contain sufficient wayfinding tools to enable users to understand them and the materials they describe without the mediation of archivists” in the context of the virtual environment.14 In the following decade, online archival description and its EAD schema would come under a significant amount of fire as practitioners began to question the functionality, display, and effectiveness of finding aids in the context of the World Wide Web and its increasingly demanding users.
Just a year after the release of EAD 1.0, Wendy Duff and Penka Stoyanova asked users what information about archival materials they would like to see online and how they would prefer it to be displayed.15 In the first usability study of its kind, these researchers used focus group feedback to critique existing finding aid interfaces. Their results indicated that users had trouble with abbreviations and specialized terminology like “linear extent” and “fonds,” and preferred archival information presented on the page according to bibliographic display guidelines rather than current archival practice.16 The authors recognized that more research was needed on multilevel description, but suggested that archivists consult current research on system designs and conduct more usability studies to provide better interfaces for users.17 Luckily, others heard their call for more usability testing.
In 2001, Burt Altman and John Nemmers conducted research that pointed to navigation as a central concern for online finding aid functionality, because users needed to be aware of “where they were” in the collection at all times.18 They also discovered a need for both basic and advanced search interfaces to allow for different types of searching within a collection.19 Elizabeth Yakel's usability study a few years later revealed similar findings: the structure of the finding aid proved difficult for study participants, and many stated that they had “gotten lost” within the descriptive hierarchy.20 In addition, Yakel's subjects had trouble understanding archival terminology and how to best search for information within archival websites.21
Another study by Jihyun Kim determined that because of significant element inconsistencies across institutions, users did not understand the meaning of labels when moving from one website to another.22 Kim also discovered that data elements in the EAD tag library were not being sufficiently utilized, meaning finding aids did not provide users with diverse or granular access points. Finally, and importantly, Kim determined that EAD finding aids tended to contain narrative forms of information and long container lists without appropriate navigational elements, making it difficult for users to identify critical information and determine its location within the finding aid hierarchy.23 As a result, browsing within and across collections proved to be a time-consuming and inefficient activity that did not assist in information retrieval.24
Responding to Kim's note that “search functions are a growing necessity on EAD sites,”25 Xiaomu Zhou analyzed fifty-eight EAD websites and their search capabilities, revealing that a disappointingly small number of EAD finding aids were supported by search functions. Those finding aids that did allow searching did not arrange search results for users in a structured way.26 Zhou lamented that “It is unfortunate that archivists' focus has been on the issue of encoding finding aids rather than the subsequent process of delivery of archival information via a web interface.”
After a decade of implementation, a consensus was growing within the archival community that unresolved interface issues—particularly overall usability and navigational functionality—represented significant barriers to access and use of online archival description. Summing up the literature and taking into account their professional experiences during a website redesign effort in 2008, J. Gordon Daines and Cory Nimer cited four major problems with online finding aids to date: 1) unintuitive, profession-specific jargon and inconsistently implemented labeling practices; 2) long narratives, big blocks of text, and difficult-to-browse container lists; 3) poor access to item-level content due to ineffective or nonexistent search functionalities; and 4) confusing hierarchical organization and display of content that resulted in users feeling “lost.”27
That same year, Richard Cox declared that despite our having entered the “golden age of archival description, . . . EAD's goal of easy access has been more dream than realization.”28 Cox continued his critique by stating that archivists have been creating their online description “in violation of system analysis . . . and carrying out their descriptive work apart from and with little knowledge of how researchers find and use archival sources.”29
Online Finding Aid Users: Who Are They and What Do They Want?
Despite Cox's accusation, since the advent of EAD, several researchers employing usability and other types of studies have made an effort to understand who the target audience is for online archival content and what their information needs might be.
In a 2004 effort to inform developers about user requirements for new online services, Anna Sexton and the other members of the LEADERS Project asked the important question: “Who uses archival repositories' online description?” The LEADERS team's research identified several types of end users for online archival content, including “personal leisure” users, “individuals using archives as part of their professional occupation,” and “those using archives to support an educational or training program.”30 Sexton's team also determined that a majority of archives users approach online finding aids through “an interest of individuals, families, or organizations,” while the remainder of searchers tend to frame their research topically and temporally.31 Finally, the project's research revealed that most users enter the online archival context already knowing what they are looking for and with some kind of subject area knowledge, yet the majority are inexperienced and uncomfortable with online finding aids as a research tool.
Rosalie Lack's research at the California Digital Library seems to concur; her focus groups, questionnaires, interviews, and usability testing indicate that, for most novice users, the concept of finding aids is extremely difficult to comprehend because new users don't immediately understand the usefulness of a list of physical objects without direct access to the objects via a digital interface.32 Echoing this finding, Christopher Prom also noted that inexperienced searchers expect finding aids to include digitized materials and not just serve as a guide to physical collections.33 Wendy Scheir's writing tends to confirm this; she explained that interactions with online finding aids are sometimes “confounding and frustrating for novice users” who are often unfamiliar with both the subject matter of the content and the inherent structure of archival description.34
Gretchen Gueguen at East Carolina University investigated the typical users of digitized special collection materials in an attempt to support multiple access interfaces and suit the needs of two distinct user groups—undergraduate students and humanities researchers. Her results showed that humanities scholars prefer to first search more broadly across archival materials, and, therefore, benefit from browsing a large and diverse set of resources.35 In contrast, undergraduate students, despite having a higher competency in online library tools, have little to no familiarity with online finding aids and do not find them an effective searching platform. Rather, the students she interacted with prefer to engage with a curated, online exhibit interface that directs their focus and provides item-level descriptions for already digitized materials.36
Daines and Nimer later confirmed that their primary user group—college students and casual researchers—reacted positively to the item-level display feature of their new interface and were able to find the information that they wanted more quickly within that context.37 However, the site's secondary audience—advanced researchers—tended to select the expandable tree menu feature within the new interface, believing that it provided greater context for the materials being displayed.38 Wendy Duff and Catherine Johnson offered a thoughtful explanation for these tendencies. They argued that historians represent a separate, distinct, and advanced group of archives users, because while historians' research methods may seem “haphazard” and their discovery path almost “accidental,” in actuality they are “systematic and purposeful in the way they go about building contextual knowledge” from broad queries across a massive amount of archival material.39
In summation, most studies to date identify at most three categories of users (casual researchers, college students, and professional researchers) and at least two levels of users (advanced and novice) who tend to interact with online archival description in very different ways. These distinct user groups have divergent information needs and use different search strategies to accomplish their research goals. Such distinctions are crucial to remember when evaluating the effectiveness of a chosen navigational model for online finding aids.
Research Methodology
This research study focuses exclusively on Princeton University's finding aid website as it existed between September 2014 and May 2015.40 This particular website was chosen because of the range of possible user interactions it encourages and supports.41 The finding aids can be navigated and searched in several distinct ways: 1) a treelike menu of contents on the left can be browsed by clicking on the nested tabs under “Contents and Arrangement”; 2) the contents of a collection can be viewed at the item level by clicking on the hyperlinks for each series, subseries, or item in a central content area on the page; 3) a single collection can be searched by using the search box at the top of the page; and 4) the items within each collection can be reordered by date or title using a special sorting feature located in the item listings' column header. In addition, the interface provides unique Web 2.0 features and plentiful help documentation. Furthermore, Shaun Ellis and Maureen Callahan, both of whom were involved in creating the interface in question, documented and articulated the logic, purpose, and process behind the site's creation.42 The study also benefited from communication with the team that built the website's interface.
Website usability studies represent an effort to evaluate a website's interface by testing it with a group of representative users.43 In this case, the testing group was composed of ten44 English-speaking undergraduate student volunteers at a large state university, none of whom had visual, speech, or motor impairments; all received a small amount of financial compensation for their time and effort.45 Undergraduate students represent a critical population of users that archives attempt to reach with online finding aids, and, therefore, testing the usability of these interfaces with this particular population was both appropriate and essential. The demographics of this study's user group can be seen in Table 1.
All participants were asked to complete typical tasks often attempted by finding aid users by utilizing the existing navigational features on the Princeton University Library's finding aid website. Each participant was given the same set of ten common tasks, with guiding questions corresponding to each one, to be completed solely within the confines of the website within a period of thirty minutes or less. Table 2 shows a generic version of each task and explains the navigational decision that each task required users to make to be successful.
The following usability metrics were collected from each participant's effort to complete a given task: 1) total time spent; 2) the degree of success based on time-sensitive benchmarks; and 3) the number of “clicks” used before completion. In addition to these tasks, participants were also asked to comment on their experiences in brief written pre- and posttest surveys (free-response questions), a reflective interview with the researcher using think-aloud interview protocols (“think-alouds”), and, finally, a Likert-scale user satisfaction survey based on industry standards and best practices (System Usability Scale).
Results
By reviewing written participant responses to the pre- and posttest questionnaires and looking at the System Usability Scale (SUS) survey results, this study shows what participants liked and disliked about the finding aid website interface, how they felt about its design and organization, and what aspects of the interface they found straightforward or confusing. Usability data points and trends in verbal user feedback collected from think-aloud style interviews also indicate the level of effectiveness and satisfaction users experienced within the chosen interface. Taken collectively, these results can suggest more generalizable usability guidelines, not just in the context of Princeton University Archives, but also for the broader community of stakeholders, be they academic archives, cultural heritage institutions, consortia, or developers.
Survey Results
Before being asked to complete tasks within a specific collection on Princeton's finding aid website, participants were given two minutes to explore the website on their own. Participants were encouraged to navigate around a simple, small collection and the website freely. Afterward, each was asked to write about his or her experience on the website for a full five minutes in a pretest questionnaire, with particular attention to liked features, disliked features, aesthetics, and points of confusion. Table 3 synthesizes participants' initial responses to the website.
Participants then completed their ten assigned tasks within a single and well-developed collection finding aid on the website. Afterward, they were again given five minutes to respond in writing about their experiences on a posttest questionnaire. Table 4 shows participants' responses to the posttest questionnaires, after they had become more familiar with the website and its functions.
As Tables 3 and 4 make clear, at least half of the study participants enjoyed the conciseness of the website and its text, the simple and uncluttered layout of the finding aids as well as the color scheme, and the hierarchically informed viewing it enabled. However, half of respondents indicated that the “Comments Box” at the bottom of every page was more confusing than helpful. Though nearly half of all participants expressed appreciation for an easy-to-find search bar, the same number of participants was disappointed in the lack of visual icons or images available in the finding aids. In addition, some participants found the labels attached to the left-hand tabs unintuitive and the subject terms applied to each collection overly vague.
While a few of the above questionnaire comments are undeniably negative, the results of the SUS survey (see Figure 1), on the whole, reveal a high level of satisfaction with the website, with an average SUS score of 84.5. Since a combined SUS score of over 70 is considered to be above average,46 it seems that all participants rated the website “above average” in terms of usability.
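For readers unfamiliar with how the SUS instrument yields a 0–100 figure, a minimal sketch of the standard scoring calculation appears below. It is written in TypeScript for illustration only and is not drawn from the study's own instruments; the sample responses are hypothetical.

```typescript
// Compute a System Usability Scale (SUS) score from one participant's ten
// Likert responses (1 = strongly disagree ... 5 = strongly agree). Per the
// standard SUS scoring rules, odd-numbered items (positively worded)
// contribute (response - 1) and even-numbered items (negatively worded)
// contribute (5 - response); the 0-40 raw sum is then scaled to 0-100.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly ten responses");
  }
  const rawSum = responses.reduce(
    (total, response, i) => total + (i % 2 === 0 ? response - 1 : 5 - response),
    0
  );
  return rawSum * 2.5;
}

// A hypothetical participant who rated the site favorably:
console.log(susScore([5, 1, 4, 2, 5, 1, 4, 2, 4, 1])); // 87.5
```

Averaging such individual scores across the ten participants produces the group score of 84.5 reported above.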
A closer look at specific usability metrics yields even more fruitful data about exactly how users navigated the archival description on Princeton's finding aid website and whether or not that navigation should be considered easy and effective.
Usability Results
One of the most basic ways of determining which tasks might be more difficult to navigate than others is to consider “time on task” data, or the amount of time a participant needs to successfully complete a given task. The average “time on task” for each of the ten tasks presented to participants in this study is shown in Figure 2. These averages indicate that while tasks 8 and 9 were the most time consuming (requiring an average of almost one full minute to complete), tasks 2, 3, 6, and 7 were typically accomplished more quickly (in less than thirty seconds on average), suggesting that they were easier to achieve than the others.
Another way to determine the level of success for each task is to compare each participant's completion time to a set of benchmark completion times. In this case, the benchmarks selected by the researcher were 1) the larger group's average completion time for each task; and 2) twice that value. Any participant who completed a task at or before the first benchmark is classified in Figure 3 as having completed that task “with ease.” Similarly, any participant who took longer to complete his or her task than the first benchmark, but was successful at or before the second benchmark, is classified in the chart as having completed that task “with difficulty.” Any participant who took longer than the second benchmark was considered unsuccessful.
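A minimal sketch of this classification logic, assuming completion times measured in seconds and using the group average as the first benchmark, might look like the following (the function and type names are illustrative, not taken from the study):

```typescript
type Outcome = "with ease" | "with difficulty" | "unsuccessful";

// Classify one completion time against the group's average for that task:
// at or under the average is "with ease"; over the average but at or under
// twice the average is "with difficulty"; anything slower is "unsuccessful."
function classifyCompletion(timeSeconds: number, groupAverage: number): Outcome {
  if (timeSeconds <= groupAverage) return "with ease";
  if (timeSeconds <= groupAverage * 2) return "with difficulty";
  return "unsuccessful";
}

// A hypothetical task with a thirty-second group average:
console.log(classifyCompletion(25, 30)); // "with ease"
console.log(classifyCompletion(45, 30)); // "with difficulty"
console.log(classifyCompletion(70, 30)); // "unsuccessful"
```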
By classifying the data in this way, we can see that at least 50% of participants were able to complete all tasks “with ease,” and, in most cases, only one in ten participants was unable to complete a given task as defined; these data, on the whole, represent an overwhelmingly positive group success rate. However, less-than-ideal results are also presented here. A large percentage (40%–50%) of participants could not complete half of the ten tasks—tasks 3, 4, 8, 9, and 10—“with ease.” The navigational decisions relating to each of these include where to find citation information, where to locate the creator's biographical information, how to find a subseries in the collection hierarchy, how to reorder collection contents, and how to find a single item within the collection. The fact that a large percentage of participants completed these tasks only “with difficulty” raises the question of whether navigational inefficiencies are to blame. Efficiency measures like the total number of mouse clicks per task can be helpful indicators of whether participants typically made more navigational errors during certain tasks.
Figure 4 shows two sets of data: 1) the optimal number of mouse clicks for each task—that is, the number of mouse clicks necessary to complete a task in the most efficient way—and 2) the average number of mouse clicks used by all participants for each task in the study.47 The data are overlaid here to show which tasks the participant group performed most efficiently and which it typically performed inefficiently, that is, with far more than the necessary mouse clicks.
These results indicate that the least efficiently executed task, by far, was task 4—finding the creator's biography within the collection's finding aid. Users seemed to make frequent navigational errors when trying to complete this task, which could indicate that the preferred or intended navigational path to the creator's biography is confusing, unintuitive, or simply not apparent to end users. Other tasks that revealed high inefficiencies (those averaging at least twice the optimal number of mouse clicks) included tasks 1, 5, 8, 9, and 10. These tasks included performing a global search across all collections, looking for similar items on the same subject as the current collection using subject terms, finding subseries information within the collection hierarchy, determining how to reorder collection contents, and finding a single item of interest within the collection. This implies that the most efficient pathway for completing common tasks on the website is not apparent to end users. Click inefficiencies can be key indicators of “lostness” on the part of the user, who makes navigational errors by going down inefficient paths during task-oriented movements because of some degree of disorientation.48
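Both measures are easy to operationalize. The sketch below, written for illustration rather than taken from the study, computes a simple click-efficiency ratio from the data described above, alongside one common formulation of the “lostness” measure discussed in the usability literature (the variable names are mine):

```typescript
// Click efficiency: the ratio of the optimal click count to the clicks a
// participant actually used. A value near 1.0 indicates a near-optimal
// path; a value at or below 0.5 means the participant used at least twice
// the necessary clicks, the threshold treated here as highly inefficient.
function clickEfficiency(optimalClicks: number, actualClicks: number): number {
  return optimalClicks / actualClicks;
}

// "Lostness" (Smith's measure, as discussed by Tullis and Albert):
// S = total pages visited, N = distinct pages visited, R = minimum pages
// required. Scores near 0 suggest efficient navigation; higher scores
// suggest disorientation.
function lostness(totalVisited: number, distinctVisited: number, minimumRequired: number): number {
  return Math.sqrt(
    (distinctVisited / totalVisited - 1) ** 2 +
      (minimumRequired / distinctVisited - 1) ** 2
  );
}

// Hypothetical values: a task needing 3 clicks that took 9 clicks, and a
// session of 9 page views across 6 distinct pages on a 3-page optimal path.
console.log(clickEfficiency(3, 9).toFixed(2)); // "0.33"
console.log(lostness(9, 6, 3).toFixed(2));     // "0.60"
```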
User Feedback
During the researcher-led interviews when participants were encouraged to think aloud about their experiences with the finding aid, verbal data were collected to confirm how “lost” or confused users felt. In addition, participants were asked which navigational features they preferred to use to complete their tasks and why. Tables 5 and 6 represent common responses from the participant group during these think-alouds.
These usability data and written survey responses seem to correlate with some of the navigational breakdowns (see Table 5) participants expressed during the verbal response portion of testing. For example, four participants specifically mentioned labeling as a “dislike” in their posttest questionnaires, and the issue came up again as a major navigational failure during the think-alouds. As previously mentioned, task 4, wherein users had to locate the content creator's biography by finding the correct tab label, was the least efficiently executed task. Similarly, the completion rates for task 4, as well as for task 3, which required users to locate the preferred citation for the collection using tab labels, showed that 50% of users could not complete these tasks “with ease.” User comments in the first row of Table 5 support this: tab labels confused rather than clarified the proper navigational path for end users in several cases.
One potential, but still unvetted, solution to this vocabulary dilemma is to keep label titles as they are and provide guidance and context for them by inserting hover captions over each label, which would pop up anytime the mouse moved over them. These hover captions, which have been met with positive results in past experiments,49 could briefly note what kinds of information each tab housed and therefore prevent confusion.
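As a rough illustration of how cheaply such captions could be retrofitted, the sketch below relies on the browser's native title tooltip, which appears automatically on hover. The tab selector and caption text are hypothetical placeholders, not Princeton's actual markup or labels:

```typescript
// Attach a brief explanatory caption to each navigational tab so that a
// short description pops up when the mouse hovers over the label. The
// ".finding-aid-tab" selector and the captions below are hypothetical.
const tabCaptions: Record<string, string> = {
  "Contents and Arrangement":
    "Browse the series, subseries, and items in this collection",
  "Biography": "Background information about the collection's creator",
  "Access and Use": "How to view, cite, and request these materials",
};

document.querySelectorAll<HTMLElement>(".finding-aid-tab").forEach((tab) => {
  const caption = tabCaptions[tab.textContent?.trim() ?? ""];
  if (caption) {
    tab.title = caption; // browsers render the title attribute as a tooltip
  }
});
```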
The other navigational failure many study participants mentioned is that the series-level tabs located in the left-hand menu bar under the “Contents and Arrangement” tab are not clearly related or connected to that tab in any visual way except by proximity. This confusion may help to explain why 50% of users did not complete tasks 9 and 10, both of which required interacting with collection contents, “with ease,” and why high levels of click inefficiency characterized these same tasks. Finally, as the last few comments in Table 5 hint, task 8, which required users to interact with the collection contents by reordering items, showed equally high levels of click inefficiency, and only half of all study participants completed it with ease.
According to the participant feedback given in the think-aloud interviews, these navigational failures are not the result of inappropriate navigational components, but rather the result of insufficient user-friendly visual cues. The reorderable item columns have no visual indication of “clickability” until the mouse pointer hovers over the column header. In the same way, the “Contents and Arrangement” tab and the lower-level series tabs share no visual indicators that might signal to users that they relate to the same content.
Connecting users, especially inexperienced or first-time users, to specific interface features requires clear and obvious visual cues. Responding to this very issue, one study participant made a practical suggestion that could clarify the less-than-clear relationship between the “Contents and Arrangement” tab and the lower-level series tabs: simply hide the series tabs until the “Contents and Arrangement” tab is selected, making it clear that the information in all of these tabs is related and connected. In the case of the too-subtle reordering feature—a small, hidden up or down arrow in the column header that appears only when the mouse rolls over it—it might be more logical to present the component as an explicit button or set of buttons labeled “Reorder Contents.” This would highlight the feature's functionality and draw attention to its usefulness for the end user. Both suggestions are simple to prototype, as the sketch following this paragraph shows.
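In the sketch below, the class names are hypothetical stand-ins for Princeton's actual markup; the first half collapses the series tabs until their parent tab is clicked, and the second half replaces the hover-only sorting arrow with an explicitly labeled button:

```typescript
// Hide the series-level tabs until the parent "Contents and Arrangement"
// tab is selected, making the parent-child relationship explicit.
// All selectors here are hypothetical placeholders.
const parentTab = document.querySelector<HTMLElement>(".contents-and-arrangement-tab");
const seriesTabs = document.querySelectorAll<HTMLElement>(".series-tab");

seriesTabs.forEach((tab) => (tab.hidden = true)); // collapsed by default

parentTab?.addEventListener("click", () => {
  seriesTabs.forEach((tab) => (tab.hidden = !tab.hidden)); // toggle on selection
});

// Replace the hover-only sorting arrow with an explicit, labeled control
// that advertises the reordering feature. The button simply delegates to
// whatever sort handler the page already wires to the column header.
const reorderButton = document.createElement("button");
reorderButton.textContent = "Reorder Contents";
reorderButton.addEventListener("click", () => {
  document.querySelector<HTMLElement>(".column-sort-arrow")?.click();
});
document.querySelector(".item-list-header")?.appendChild(reorderButton);
```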
It may seem surprising that most participants in the study, instead of working exclusively within one of the navigational systems supported by the finding aid's interface, tended to split their efforts between several navigational systems, depending on the tasks they needed to perform. In fact, several participants explained their use of the two collection navigational systems as cooperative rather than mutually exclusive. For example, one participant noted that “At the highest level of the collection, the nested tabs on the left were useful, but to explore sub-series and items I preferred to work directly in the central contents box with the hyperlinks.” This, of course, is in line with data collected from both the pre- and posttest questionnaires, wherein half of all study participants mentioned the benefit of having a hierarchical contents list in the menu, and nearly as many commented on the navigational affordances of a readily accessible search box at the collection level, in addition to centralized content hyperlinks and a visible breadcrumb menu at the top of the page.
Conclusion: The Model
This usability study of Princeton University's finding aid website offers critical information about how end users of online archival content interact with and navigate around the online finding aids of academic archives. In an effort to translate these results into practical guidelines for archivists, the major findings of this study have been synthesized into a working model for online finding aid navigation. The recommendations presented below represent ten critical pieces of this functional model of the still-to-be-solved usability puzzle for online archival description. The hope is that archivists and developers alike can use these guidelines to make iterative, if small, steps toward improving online finding aid interfaces. While usability considerations and user-interface changes can be labor intensive and challenging to implement, it is important to know that even slight adjustments can yield significantly better user experiences. Furthermore, simply being aware of and vocal about the problems that users face in online finding aids is critical and foundational to moving our profession and finding aid technologies forward.
1. Use words and select titles that make sense to users; that is, make labels inclusive and intuitive.
2. Provide context for end users by maintaining collection hierarchy in the presentation of archival contents such as series, subseries, and container lists.
3. Give users a way to visually explore and browse through collection contents without “losing their place.”
4. Provide easy and quick access to individual items within a collection by minimizing the number of clicks needed to view item-level content.
5. Implement a navigational system that can present content at varying degrees of granularity to avoid information overload for users; in other words, allow users to hide lower-level detail when they don't want to see it.
6. Allow for keyword searching at the collection level and at the global level across the entire finding aid website.
7. Provide sufficient visual cues for special navigational features, such as drop-down menus, sorting buttons, clickable lists, and so on.
8. When possible, supply users with collection-specific visual content in the form of related images, icons, or graphics.
9. Keep the interface uncluttered and concise to support clarity and ease of use.
10. Do not add Web 2.0 features without cause or a consideration of user preferences.
Several of the above recommendations align with the “do's and don't's” of user-friendly finding aids outlined by Joyce Chapman of North Carolina State University.50 Like many researchers before her, Chapman noted that archival terminology is often confusing to users and therefore should be avoided or explained wherever possible.51 In addition, she suggested that navigational menus mimicking a table of contents with links to specific sections of the finding aid can prove useful, as can the “ctrl-F” in-page search function when a collection-level search box is not available.52 Furthermore, Chapman argued that clear and easy-to-find help documentation is another important way to support users.53 While none of the test participants in this study used Princeton's online help documentation, the interface did provide multiple routes and opportunities for them to access such information. Help documentation can act as a security blanket for novice users who are altogether unfamiliar with finding aids, and certainly further research is needed on the best way to provide help documentation within the online finding aid environment.
Many other aspects of finding aid usability remain unexplored. This study uncovered very little data about how to best facilitate global, repository-wide searching, yet users undeniably value this navigational feature. Princeton's finding aid website uses faceted search categories for site-level queries so that searchers can narrow their results by date, subject, language, and so on. However, it remains to be seen whether users value faceted search within online archival finding aids.54 In addition, this research study focused on participants who self-identified as either beginner or intermediate finding aid users. It would be logical to test whether more experienced finding aid users—professional researchers, historians, and genealogists—would reveal the same navigational preferences in a similar study. Finally, much more needs to be understood about the way Web 2.0 features can be appropriately implemented to enhance the user experience in online finding aid interfaces. Though the “Comments Box” feature on Princeton's finding aid website seemed to generate more confusion than praise from test participants, recent studies point to moderate amounts of user interest in what have been called “participatory” finding aids—those that allow for user annotations and contributions.55 Other Web 2.0 features that remain underresearched in the context of finding aids include tagging, word clouds, hover captions, and even saving and starring features to allow users to revisit their favorite results or queries later.56 To date, little has been determined about the potential effectiveness or efficiency of these kinds of interactive features for online archival description. Future research should explore these new opportunities with the same verve that the past two decades of researchers exhibited in their pursuit and refinement of EAD.
Appendix
Notes
Richard J. Cox, “Revisiting the Archival Finding Aid,” Journal of Archival Organization 5, no. 4 (2008): 5–32.
Ciaran B. Trace and Andrew Dillon, “The Evolution of the Finding Aid in the United States: From Physical to Digital Document Genre,” Archival Science 12, no. 4 (2012): 516.
A. J. Gilliland-Swetland, “Popularizing the Finding Aid: Exploiting EAD to Enhance Online Discovery and Retrieval in Archival Information Systems by Diverse User Groups,” Journal of Internet Cataloging 4, nos. 3–4 (2001): 199–225; L. A. Morris, “Developing a Cooperative Intra-Institutional Approach to EAD Implementation: The Harvard/Radcliffe Digital Finding Aids Project,” The American Archivist 60, no. 4 (1997): 388–407; Janice E. Ruth, “Encoded Archival Description: A Structural Overview,” The American Archivist 60, no. 3 (1997): 310–29; Steven J. DeRose, “Navigation, Access, and Control Using Structured Information,” The American Archivist 60, no. 3 (1997): 298–309.
Wendy Duff and Penka Stoyanova, “Transforming the Crazy Quilt: Archival Displays from a User's Point of View,” Archivaria 45 (1998): 44–79; Elizabeth Yakel, “Encoded Archival Description: Are Finding Aids Boundary Spanners or Barriers for Users?,” Journal of Archival Organization 2, nos. 1–2 (2004): 63–77; Jihyun Kim, “EAD Encoding and Display: A Content Analysis,” Journal of Archival Organization 2, no. 3 (2004): 41–55; Rosalie Lack, “The Importance of User-Centered Design: Exploring Findings and Methods,” Journal of Archival Organization 4, nos. 1–2 (2007): 69–86; Xiaomu Zhou, “Examining Search Functions of EAD Finding Aids Web Sites,” Journal of Archival Organization 4, nos. 3–4 (2007): 99–118; Cory Nimer and J. G. Daines, “What Do You Mean It Doesn't Make Sense? Redesigning Finding Aids from the User's Perspective,” Journal of Archival Organization 6, no. 4 (2008): 216–32; J. G. Daines and Cory L. Nimer, “Re-Imagining Archival Display: Creating User-Friendly Finding Aids,” Journal of Archival Organization 9, no. 1 (2011): 4–31; Morgan G. Daniels and Elizabeth Yakel, “Seek and You May Find: Successful Search in Online Finding Aid Systems,” The American Archivist 73, no. 2 (2010): 535–68; Joyce Celeste Chapman, “Observing Users: An Empirical Analysis of User Interaction with Online Finding Aids,” Journal of Archival Organization 8, no. 1 (2010): 4–30; Christina J. Hostetter, “Online Finding Aids: Are They Practical?,” Journal of Archival Organization 2, nos. 1–2 (2004): 117–45.
Duff and Stoyanova, “Transforming the Crazy Quilt,” 44–79; Burt Altman and John R. Nemmers, “The Usability of Online Archival Resources: The Polaris Project Finding Aid,” The American Archivist 64, no. 1 (2001): 121–31; Yakel, “Encoded Archival Description,” 63–77; Anna Sexton, Chris Turner, Geoffrey Yeo, and Susan Hockey, “Understanding Users: A Prerequisite for Developing New Technologies,” Journal of the Society of Archivists 25, no. 1 (2004): 33–49; Wendy Scheir, “First Entry: Report on a Qualitative Exploratory Study of Novice User Experience with Online Finding Aids,” Journal of Archival Organization 3, no. 4 (2006): 49–85; Lack, “The Importance of User-Centered Design,” 68–86; Nimer and Daines, “What Do You Mean It Doesn't Make Sense?,” 216–32; Chapman, “Observing Users,” 4–30; Rita D. Johnston, “A Qualitative Study of the Experiences of Novice Undergraduate Students with Online Finding Aids” (master's thesis, MLIS, University of North Carolina at Chapel Hill, 2008); Jane Stevenson, “‘What Happens If I Click on This?’: Experiences of the Archives Hub,” Ariadne 57 (2008), http://www.ariadne.ac.uk/issue57/stevenson/; Tracy Jackson, “I Want to See It: A Usability Study of Digital Content Integrated into Finding Aids,” Journal for the Society of North Carolina Archivists 9, no. 2 (2012): 20–77.
Christopher J. Prom, “User Interactions with Electronic Finding Aids in a Controlled Setting,” The American Archivist 67, no. 2 (2004): 234–68; Dawne E. Howard, “The Finding Aid Container List Optimization Survey: Recommendations for Web Usability” (master's thesis, MLIS, University of North Carolina at Chapel Hill, 2006).
Thomas J. Frusciano, “Online Finding Aids, Catalog Records, and Access? Revisited,” Journal of Archival Organization 9, no. 1 (2011): 1–3.
Danielle L. Fasig, “Usability Evaluation of Finding Aids for Archives” (master's thesis, MLIS, University of North Carolina at Chapel Hill, 2013).
Altman and Nemmers, “The Usability of Online Archival Resources,” 121–31.
Chapman, “Observing Users,” 4–30.
Trace and Dillon, “The Evolution of the Finding Aid,” 516; Jody L. DeRidder, Amanda Axley Presnell, and Kevin W. Walker, “Leveraging EADs for Access to Digital Content: A Cost and Usability Analysis,” The American Archivist 75 (Spring/Summer 2012): 169; Rachel Hu, “Methods to Tame the Madness: A Practitioner's Guide to User Assessment Techniques for Online Finding Aid and Website Design,” RBM: A Journal of Rare Books, Manuscripts, and Cultural Heritage 13 (Fall 2012): 190.
Daniel V. Pitti, “Encoded Archival Description: The Development of an Encoding Standard for Archival Finding Aids,” The American Archivist 60, no. 3 (1997): 268–83; Kris Kiesling, “EAD as an Archival Descriptive Standard,” The American Archivist 60, no. 3 (1997): 344–54; Ruth, “Encoded Archival Description,” 310–29; DeRose, “Navigation, Access, and Control,” 298–309; Elizabeth H. Dow, “EAD and the Small Repository,” The American Archivist 60, no. 4 (1997): 455; Morris, “Developing a Cooperative Intra-Institutional Approach,” 388–407.
Dennis Meissner, “First Things First: Reengineering Finding Aids for Implementation of EAD,” The American Archivist 60, no. 4, special issue on Encoded Archival Description: Part 2—Case Studies (1997): 372–87.
Meissner, “First Things First,” 387.
Duff and Stoyanova, “Transforming the Crazy Quilt,” 44–79.
Duff and Stoyanova, “Transforming the Crazy Quilt,” 65.
Duff and Stoyanova, “Transforming the Crazy Quilt,” 66.
Altman and Nemmers, “The Usability of Online Archival Resources,” 126–27.
Altman and Nemmers, “The Usability of Online Archival Resources,” 126–27.
Yakel, “Encoded Archival Description,” 63–77.
Yakel, “Encoded Archival Description,” 63–77.
Kim, “EAD Encoding and Display,” 41–55.
Kim, “EAD Encoding and Display,” 52.
Kim, “EAD Encoding and Display,” 52.
Kim, “EAD Encoding and Display,” 54.
Zhou, “Examining Search Functions,” 99–118.
Nimer and Daines, “What Do You Mean It Doesn't Make Sense?,” 216–32.
Cox, “Revisiting the Archival Finding Aid,” 9–10.
Cox, “Revisiting the Archival Finding Aid,” 5.
Sexton, Turner, Yeo, and Hockey, “Understanding Users,” 43.
Sexton, Turner, Yeo, and Hockey, “Understanding Users,” 44.
Lack, “The Importance of User-Centered Design,” 68–86.
Prom, “User Interactions with Electronic Finding Aids,” 234–68.
Scheir, “First Entry,” 71.
Gretchen Gueguen, “Digitized Special Collections and Multiple User Groups,” Journal of Archival Organization 8, no. 2 (2010): 96–109.
Gueguen, “Digitized Special Collections and Multiple User Groups,” 97.
Nimer and Daines, “What Do You Mean It Doesn't Make Sense?,” 216–32.
Nimer and Daines, “What Do You Mean It Doesn't Make Sense?,” 216–32.
Wendy M. Duff and Catherine A. Johnson, “Accidentally Found on Purpose: Information-Seeking Behavior of Historians in Archives,” The Library Quarterly 72, no. 4 (2002): 494.
While Princeton's finding aid website (http://findingaids.princeton.edu/) remains relatively unchanged at the time of this publication, it is understood that alterations will inevitably be made that would make observing the features and navigation discussed herein difficult or even impossible. Therefore, six screen shots of the website have been provided to readers as a visual reference in an appendix. In addition, the Internet Archive has captured the website about a dozen times between the start and end dates of the study (September 2014 and April 2015). See Internet Archive Wayback Machine, https://web.archive.org/web/20150815000000*/http://findingaids.princeton.edu/. Readers are encouraged to interact with this historically accurate version of the website to experience the navigational elements referenced herein, even though much of the content may be missing due to incomplete Web captures.
“Princeton University Library Finding Aids,” Princeton University Library, http://findingaids.princeton.edu/.
Shaun Ellis and Maureen Callahan, “Prototyping as a Process for Improved User Experience with Library and Archives Websites,” Code4lib Journal 18 (2012), http://journal.code4lib.org/articles/7394.
“Usability Testing,” Usability.gov, http://www.usability.gov/how-to-and-tools/methods/usability-testing.html#.
In a 2012 article, Jakob Nielsen argued that, for qualitative usability studies, more than five testing participants does not result in appreciably more usability insights. I chose to be conservative and double that number in recruiting my own testing participants, with the support of Carnegie Foundation funding, so that any statistical results would carry greater confidence. See Jakob Nielsen, “How Many Test Users in a Usability Study?” (June 4, 2012), Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting, http://www.nngroup.com/articles/how-many-test-users/.
This research was supported by a $200 Carnegie grant awarded to support graduate research in the field of information and library science.
“System Usability Scale,” Usability.gov, https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html; Jeffrey Rubin and Dana Chisnell, Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, 2nd ed. (Indianapolis: Wiley, 2008), 42–43.
The optimal number of mouse clicks for each task was calculated by determining the shortest possible pathway to the desired search result and then counting the number of mouse clicks that specific pathway required.
Tom Tullis and Bill Albert, Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics (Boston: Elsevier/Morgan Kaufmann, 2008), 89.
Chapman, “Observing Users,” 13–14.
Joyce Celeste Chapman, “Do's and Don't's: A Primer for User-Friendly Finding Aid Design,” Journal for the Society of North Carolina Archivists 8 (Fall 2010): 2–28.
Chapman, “Do's and Don't's,” 11.
Chapman, “Do's and Don't's,” 14–15, 17.
Chapman, “Do's and Don't's,” 9–10.
Rachel Walton, “Searching High and Low: Faceted Navigation as a Model for Online Archival Finding Aids,” Journal for the Society of North Carolina Archivists 12, nos. 1–2 (2015): 65–99.
Lara Farley, “The Participatory Finding Aid and the Archivist: How User Annotations Are Changing Everyone's Role,” Archival Issues 35, no. 2 (2014): 79–98.
Chapman, “Observing Users,” 25–26.