Archival instruction pedagogy is shifting from traditional lecture-based show-and-tell approaches to hands-on strategies that fall within the realm of active or inquiry-based instruction. Archivists are beginning to assess their instruction sessions using reaction assessments, learning assessments, performance assessments, and blended approaches, gathering data that illustrate the efficacy of the pedagogy employed and thereby shed light on how archives contribute in meaningful ways to student learning. These studies tend to focus on the cognitive rather than the affective impact of instruction, frequently rely on methods that depend on written course assignments for their analysis (e.g., citation analysis), and are never comparative. Employing a classic experimental design, with transformational teaching theory and the ARCS (Attention, Relevance, Confidence, and Satisfaction) Model as a foundation, this study seeks to fill this gap in the archival literature. By assessing the affective impact of two different instruction techniques (show-and-tell vs. inquiry-based) with a non-discipline-specific approach, this research suggests that undergraduate students at the Environmental Design Archives who participated in inquiry-based instruction felt significantly more confident handling archival materials, more excited by the materials, more comfortable contributing to the discussion, and more appreciative of the archival materials they encountered than their peers in the control group.

The use of primary resources in the undergraduate classroom is not a new phenomenon. This teaching strategy has received a great deal of attention during the past twenty years, due in part to a Carnegie Foundation–commissioned report published in the late 1990s entitled Reinventing Undergraduate Education: A Blueprint for America's Research Universities.1 Reinventing Undergraduate Education recommends creating opportunities for undergraduates to engage in research with primary sources. This report articulates inquiry-based learning as an effective strategy for teaching students early in their undergraduate careers, claiming increased engagement, improved critical-thinking skills, and greater confidence among students using primary sources within an archival repository.2 Responding to this push for including primary resources in the classroom, archivists transitioned from perceiving themselves solely as facilitators of access to primary sources to seeing themselves as educators with pedagogical strategies behind the instructional support they provide.3 As a result, archival instruction pedagogy is shifting from a traditional lecture-based approach to hands-on strategies falling within the realm of active or inquiry-based instruction.

Archivists are beginning to assess their instruction sessions through reaction assessments,4 learning assessments, performance assessments, and blended approaches.5 In archival literature, these studies tend to focus on the cognitive impact of instruction,6 frequently use methods that depend on written course assignments for their analysis,7 and are never comparative. This experimental study addresses this gap in the archival literature by examining the affective impact of two different instruction techniques (show-and-tell8 vs. inquiry-based9) for one-shot archival instruction sessions10 among undergraduate students at the UC Berkeley Environmental Design Archives (EDA).

The EDA is a self-supporting research unit within the College of Environmental Design that collects and preserves the work of the San Francisco Bay region's historically significant architects, landscape architects, designers, and architectural photographers. The archives provides primary source materials for scholarly research, teaching support, preservation, and public service. Over the course of an academic year, the archives leads instructional sessions primarily for undergraduate classes in a variety of disciplines, grade levels, and divisions: architecture, landscape architecture, art history, and UC Berkeley extension courses. Most students who participate in the EDA's instruction sessions produce drawings and models of design projects rather than written assignments such as term papers. Common forms of impact assessment, such as citation analysis of written course assignments,11 are not applicable for analyzing the impact of instruction sessions involving the undergraduate students served by the EDA.

The diverse academic backgrounds of the student population served by the EDA and lack of text-based assignments make it an ideal repository to develop and test a non-discipline-specific assessment tool that measures the affective impact of two different instruction pedagogies. The assessment tool created for this study enabled the exploration of the following research questions: Does instruction technique affect students'

  • confidence in finding and using the materials in the archives?

  • perception of engagement?

  • satisfaction with the archival experience?

Comprehending the shift from passive teaching paradigms to more active ones requires an understanding of educational theory, more specifically the instruction paradigm, the learning paradigm, and Bloom's taxonomy.

Paradigm Shift: From Teaching to Learning

The mid-1990s saw a subtle yet profound paradigm shift in the realm of American higher education. Universities went from perceiving themselves as “institution[s] that exist to provide instruction,” characterized as the “instruction paradigm,” to “institution[s] that exist to produce learning,” or the “learning paradigm.”12 Under the instruction paradigm, the lecture-based instruction method is the primary function and product of the institution.13 The learning paradigm emphasizes the process of learning, where lecturing becomes one of many possible techniques employed in the classroom, all evaluated on their ability to create significant learning experiences.14 This shift is a result of scholarship, primarily in the realm of education, that illustrates the inefficacies of the instruction paradigm, which Alan Guskin characterizes as “contrary to almost every principle of optimal settings for student learning.”15 Substantial research illustrates that lecture-based instruction fails to help students “retain information after a course is over, develop an ability to transfer knowledge to novel situations, develop skill in thinking or problem solving, and achieve affective outcomes, such as motivation for additional learning or a change in attitude.”16

Active Learning, Transformational Theory, and the ARCS Model

As educators shift away from the instruction paradigm, they increasingly move toward active learning models that are part of a broad approach to classroom instruction called transformational teaching. This approach “involves creating dynamic relationships between teachers, students, and a shared body of knowledge to promote student learning and personal growth.”17 Teachers using this approach view themselves as facilitators who establish a vision for the course and create learning experiences that “transcend the boundaries of the classroom” by “promoting ample opportunities for preflection and reflection.”18 Studies show that students who experience active learning demonstrate increased engagement, better conceptual understanding, and greater persistence than students taught through traditional lecturing.19 Transformational theory focuses on a student's disposition toward learning, emphasizing the affective domain (emotional growth or attitude) rather than the cognitive domain (mental skills or knowledge) of Bloom's taxonomy, a foundational theoretical framework for student learning that divides learning behaviors into three categories: cognitive, affective, and psychomotor.20 The focus of this theory lies in how teaching changes the attitudes, emotions, interests, motivation, self-efficacy, and values of the students involved (affective domain) and less on acquiring or retaining specific facts, concepts, and principles (cognitive domain).

Transformational teaching aims at “improving students' self-regulatory capabilities, instilling in students self-directed learning skills, enhancing students' learning-related attitudes and values, or promoting students' beliefs about their capability to acquire, synthesize, analyze, and use knowledge in a way that is meaningful to their lives.”21 John Keller describes this affective, or behavioral, component to learning in his ARCS (Attention, Relevance, Confidence, and Satisfaction) Model, published in 1984. Keller argues that attention, relevance, confidence, and satisfaction—four essential human characteristics and their associated motivational dynamics—create a critical foundation that enables and stimulates impactful learning.22 Keller explains that instruction must obtain and sustain the student's attention, present relevant materials to the student's studies, design the learning materials and environment to establish and foster the student's confidence, and enable students to experience satisfaction with the learning experience (resulting from extrinsic and intrinsic factors).23

The Paradigm Shift's Influence on Library and Archival Standards and Frameworks

Despite library and information professionals' recognition of the importance of the affective domain, it lacks a presence in many of the professional standards and frameworks for instruction and information literacy. Standards, frameworks, and how-to guides such as the “Association of College & Research Libraries (ACRL) Framework for Information Literacy for Higher Education”; Guidelines for Primary Source Literacy; Visual Literacy for Libraries; TeachArchives.org; Using Primary Sources; Teaching with Primary Sources; and Past or Portal? Enhancing Undergraduate Learning through Special Collections and Archives all advocate active learning.24 These writings that frame and influence our profession address the cognitive skills that are byproducts of active learning, but they largely do not incorporate affective competencies or the “emotional abilities that students must acquire to successfully navigate the research process”—which are arguably just as important.25 Conducting more impact-based studies that focus on how instruction affects students' attitudes, emotions, interests, motivation, confidence, self-efficacy, and values would allow the affective domain and associated competencies to gain more prominence in library and archival science.

Practice of Assessment

Since the 1980s, the archival profession has urged its members to study their users.26 Despite this, user-based evaluation of archival instruction remains limited. Magia Krause's 2008 study, for which she interviewed twelve leading scholars of teaching with primary sources to provide insight on pedagogical strategies and assessment practices, illustrates why.27 Krause found that although the participants possessed no formal educational training, they held strong opinions about the benefits of active or inquiry-based instruction and were aware of effective ways to integrate these techniques into their teaching.28 Despite being acutely aware of the importance and benefits of collecting feedback, the archivists who participated in the study undertook no formal assessment practices due to busy schedules and limited staff.29

Studies that Focus on Affective Impact

Although the affective domain has been viewed as a “nebulous and an unwieldy topic,” some scholars in the field of library and information science focus on it, most prominently Carol Kuhlthau, Constance Mellon, and Sharon Bostick. Their research suggests that the affective domain plays a significant role in directing cognition and action throughout the student learning experience in the context of information literacy within a library setting.30 In the archival setting, the body of literature on instruction is growing. Assessment techniques are rarely part of the reported experiences,31 although this is changing.32 In their article, Anne Bahde and Heather Smedberg categorize and reflect on several common evaluation techniques employed to assess the impact of archival instruction, including reaction assessments, learning assessments, performance assessments, and blended approaches.33 The first category addressed, reaction assessments, comprises studies that evaluate instruction sessions from the students' perspective.34 Reaction assessments focus on the cognitive domain, which deals with “recall or recognition of knowledge and the development of intellectual abilities and skills.”35 These studies give students the opportunity to appraise the most and least useful aspects of the instruction sessions and assess comprehension of concepts like understanding the difference between primary and secondary sources.36

More recent reaction assessments, beginning with the Archival Metrics Project's student module, include both the cognitive and affective domains.37 Student confidence and satisfaction are the most common affective variables that archivists aim to measure. In addition to the cognitive competencies articulated above, archivists also assess students' perceptions of the archival experience by measuring satisfaction with the session, confidence with using archival collections and finding aids, and willingness to return.38 These studies suggest that as a result of archival instruction, students possess more cognitive skills concerning how to use an archives and analyze primary sources as well as affective competencies such as increased confidence as researchers.39 Surprisingly, very few studies measure students' perceptions of engagement during the instruction session. Students and Faculty in the Archives, one of the first studies to explicitly look at the engagement of students using pre- and postassessments, found that participants experienced increased engagement and excitement about their coursework as a result of the active learning pedagogy employed.40

The emphasis of assessment is shifting from solely the cognitive domain to include aspects of the affective domain, namely confidence and satisfaction. However, engagement, one of the most significant components of active learning, is frequently ignored.41 This study seeks to fill this gap in the archival literature by employing a classic experimental research design as well as transformational teaching theory and the ARCS Model as a foundation.

True or classic experimental design was employed to identify the cause-and-effect relationship and to assess the magnitude of the effects produced. This choice of research method involved precise guidelines for the selection of the study population and careful design of the survey instrument and instruction protocol.

Study Population: Selection and Recruitment

This study examines the affective impact of two different instruction techniques among undergraduate students enrolled in architecture and landscape architecture courses in the College of Environmental Design at UC Berkeley. Undergraduate course listings in the Landscape Architecture and Architecture Departments of the College of Environmental Design were reviewed at the beginning of the fall 2016 semester. The reference archivist solicited faculty teaching courses relevant to archival materials held by the EDA. Responding faculty met with the reference archivist, and together they scheduled instruction sessions and selected collection materials. The reference archivist taught instruction sessions using two different pedagogical techniques and assessed affective impact by administering pre- and postinstruction questionnaires.

Table 1.

Sample Population by Class


Out of 11 faculty members solicited via email, 3 agreed to participate—2 taught landscape architecture courses, Plants in Design and Fundamentals of Landscape Design, and 1 taught an architecture course: Introduction to Environmental Design. Across 3 classes, 81 students participated in the pre-instruction questionnaire, the instruction session, and the postinstruction questionnaire—40 students in the control group (show-and-tell instruction) and 41 students in the treatment group (inquiry-based instruction).42

A classic experimental design was chosen as a framework to assess the affective impact of two different instructional techniques: show-and-tell vs. inquiry-based. The learning objectives constructed with the professors or graduate student instructors informed the content of each instruction session. The control group received show-and-tell lecture-based archival instruction that took the form of a lecture on the materials with time at the end of the session for questions. The treatment group received inquiry-based instruction that included two interactive exercises, one requiring each student to interpret archival objects (guided by a set of questions) and another requiring use of a finding aid. Pre- and postinstruction questionnaires were administered to both the control and the treatment groups to assess the impact of each instruction technique. Professors or graduate student instructors, who were not aware of the premise of the study, divided each class into two groups (control and treatment43). Professors and graduate student instructors received no guidelines stipulating how they should divide each class, a limitation of this study discussed later in this article.

Pre- and postinstruction questionnaires were administered in person to yield a better response rate and asked students only for their first and last initials to enable the matching of pre- and postinstruction questionnaires. After matching and assigning a unique ID to each survey set (e.g., 1A for the pre-instruction questionnaire and 1B for the post), the initials recorded on each survey were removed and destroyed. A key code was created to anonymize and track the questionnaires.44 Before they were administered to students, both pre- and postinstruction questionnaires were piloted and approved by the Institutional Review Board (IRB).
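The matching and anonymization workflow described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual tooling: the record layout and function name are hypothetical, and real initials can collide within a class, a complication this simplified sketch ignores.

```python
def match_surveys(pre_surveys, post_surveys):
    """Pair pre- and postinstruction questionnaires by respondent
    initials, assign each matched pair an anonymous ID (1A/1B, 2A/2B, ...),
    and drop surveys without a counterpart (students present for only
    one of the two questionnaires). Returns the anonymized records, with
    initials stripped, plus the key code linking initials to IDs, which
    in the study was destroyed after matching."""
    post_by_initials = {p["initials"]: p for p in post_surveys}
    matched, key_code = [], {}
    next_id = 1
    for pre in pre_surveys:
        initials = pre["initials"]
        post = post_by_initials.get(initials)
        if post is None:
            continue  # unmatched pre-survey: discarded
        key_code[initials] = str(next_id)
        for suffix, record in (("A", pre), ("B", post)):
            anon = {k: v for k, v in record.items() if k != "initials"}
            anon["id"] = f"{next_id}{suffix}"
            matched.append(anon)
        next_id += 1
    return matched, key_code
```

For example, a pre-survey from “CD” with no matching post-survey would be discarded, while a matched “AB” pair would become records 1A and 1B with the initials removed.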

Survey Design

The survey design informed what statistical tests could be executed on the data collected, so the research questions and survey questions were developed in tandem. I selected three emotional states (confidence, engagement, and satisfaction) that focused on affective, not cognitive, impact and defined each variable with three indicators (or observations). The three variables and associated indicators are reflected in the research question and based on transformational theory, the ARCS Model, and variables used in previous studies.45 Each indicator became a survey question in the form of a forced-choice Likert scale. This scale, from 1 to 4 (1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree), prevented respondents from selecting a neutral response between agree and disagree.

The pre-instruction questionnaire consisted of three demographic questions and three statements with corresponding forced-choice Likert scales to measure the students' confidence in using the archives. The postinstruction questionnaire consisted of fourteen questions in total: five demographic questions and nine statements (one statement for each indicator). (See Appendix A for the pre- and postinstruction questionnaires.)

Instruction Protocol: Design

Control and treatment groups within each class were shown the same set of primary sources (see Appendix B). Creating instruction protocols for both techniques (show-and-tell vs. inquiry-based) helped mitigate confounding variables between the control and treatment groups. The protocols developed for each type of instruction consisted of instruction objectives and outcomes, a list of archival materials used in the instruction session, and a mechanics section that outlined the instruction session itself.46

Table 2.

Research Question and Corresponding Variables, Indicators, and Definitions


Instruction Objectives and Outcomes

Each instruction session used the “ACRL Visual Literacy Competency Standards for Higher Education” as a basis, more specifically Standards 3 and 4, outlined in Table 3.47 Building upon the visual literacy framework, I tailored each instruction session to the objectives and outcomes distilled from one-on-one meetings with the professor or graduate student instructor of each class, reflected in the mechanics document created for the control and treatment groups (see Table 4).

Table 3.

ACRL Visual Literacy Standards 3 and 4

Table 4.

Mechanics for Control and Treatment Groups


Data analysis for this study consisted of two stages: descriptive statistics and top-level analysis, and inferential statistics in the form of two-sample t-tests.48 The first step involved entering the raw data into Excel and grouping the 81 pre-/postinstruction questionnaires' data by control (40 students) and treatment groups (41 students). The second step involved calculating summary statistics to illuminate larger trends in the data set.

Finally, two-sample t-tests were performed using the statistical software SPSS Version 24 and a level of significance of 0.05 (α = 0.05) to determine the existence of a significant difference in means between the control and treatment groups.
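The tests themselves were run in SPSS; as an illustration of the underlying computation, the two-sample t statistic assuming unequal variances (Welch's formulation, matching the unequal-variances tests reported below) can be computed in a few lines of pure Python. The sample Likert scores here are hypothetical; converting t and the degrees of freedom to a p-value for comparison against α = 0.05 requires the t-distribution's CDF, available in packages such as SciPy.

```python
import math

def welch_t(a, b):
    """Two-sample t statistic assuming unequal variances, with
    Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    # unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    se2 = var_a / na + var_b / nb  # squared standard error of the difference
    t = (mean_a - mean_b) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((var_a / na) ** 2 / (na - 1) + (var_b / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical Likert scores (1-4) for one indicator:
treatment = [3, 4, 4, 3]
control = [2, 3, 3, 2]
t, df = welch_t(treatment, control)  # t ≈ 2.449, df ≈ 6.0
```

The difference in means is declared statistically significant when the two-sided p-value implied by t and df falls below the chosen significance level of 0.05.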

Table 5.

Comparison of Summary Statistics


Demographics: Gender, Major, Year in Studies, Prior Archival Experience

The pre-instruction questionnaire asked participants about their self-identified gender, major, year in studies, and prior archival experience. The control and treatment groups' gender distributions exhibited slight variation. Both groups consisted of more females than males.

Fields of study or academic majors varied considerably between the control and treatment groups. The most common majors among both groups include architecture, landscape architecture, sustainable environmental design, and urban studies.

All participating students were undergraduates. The year-in-studies distribution was similar between the control and treatment groups. First-year students represented the largest subset of participants, 45% in the control group and 46% in the treatment group. Third-year students made up the second largest group, with fourth-year students following close behind.

FIGURE 3.

Year in studies


The majority of students in both the control and treatment groups possessed no prior archival experience (80% of the control group, 95% of the treatment group). Students in the treatment group reported less archival experience than their counterparts. In the control group, 8 students had experienced an archives (7 had visited an archives once, and 1 had visited three times). Only 2 students in the treatment group possessed archival experience, both having visited an archives only once before visiting the EDA.

Pre-instruction/Postinstruction Comparison within Treatment and Control Groups

Paired two-sample t-tests were executed within the control and treatment groups, respectively, to determine statistical significance for the three pre- and postconfidence indicators. Both the control and treatment groups experienced a statistically significant difference in average scores for all three confidence indicators: 1) navigating the EDA's website to determine relevant materials for a project or research, 2) interpreting a finding aid, and 3) properly handling primary source materials. This difference suggests that both instruction techniques positively affected students who participated. For the finding aid and handling indicators, the treatment group exhibited lower p-values49 than the control group, which suggests the efficacy of inquiry-based instruction. For the website indicator, the control group exhibited a lower p-value than the treatment group, as seen in Table 6, suggesting that lecture-based instruction is potentially more effective at conveying skills associated with navigating the EDA's website.
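A paired t statistic of the kind used for these within-group pre/post comparisons can be sketched as follows; the pre/post scores shown are hypothetical (the reported analysis used SPSS), and the resulting t and degrees of freedom would be converted to a p-value via the t-distribution.

```python
import math

def paired_t(pre, post):
    """Paired t statistic for pre/post scores from the same students;
    returns t and the degrees of freedom (n - 1)."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]  # per-student change
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical confidence scores for five students, before and after:
pre_scores = [1, 2, 2, 1, 2]
post_scores = [3, 3, 4, 2, 3]
t, df = paired_t(pre_scores, post_scores)  # t ≈ 5.715, df = 4
```

Unlike the two-sample test, the paired test operates on each student's own change in score, which is why matching pre- and postinstruction questionnaires was essential.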

FIGURE 4.

Prior archival experience


Table 6.

Pre/Postinstruction T-tests for Confidence Indicators within Control and Treatment Groups


Postinstruction Comparison Control vs. Treatment: Confidence, Engagement, and Satisfaction

Two-sample t-tests assuming unequal variances were implemented to determine the statistical significance of the difference in means between the control and treatment groups' postinstruction scores for the nine indicators.

Confidence

The difference in average scores given for the statement “I feel comfortable handling archival materials properly” was the only statistically significant confidence indicator, with a p-value of 0.03867. In the treatment group, 100% of students responded with a 3 or 4 on the Likert scale, creating the polarized histogram in Figure 5. There was no statistically significant difference in confidence using a finding aid or navigating the EDA's website.

FIGURE 5.

Confidence handling materials


Table 7.

T-test Output: Confidence Handling Materials


Engagement

Engagement, or a student's perceived affective reactions and sense of connectedness to the instruction sessions, reflected 1) excitement about materials, 2) attention sustained by the instruction session, and 3) comfort with contributing to the discussion. T-tests revealed a statistically significant difference between averages for two of the three indicators: excitement about materials and comfort with contributing to the discussion.

FIGURE 6.

Excitement about materials


Table 8.

T-test Output: Excitement about Materials


The difference in means between reported scores for the attention-sustained indicator exhibited no statistical significance; instruction technique did not significantly affect students' attention. Students “agreed” or “strongly agreed” that the session sustained their attention in both the control group (90% gave scores of 3 or 4 on the Likert scale) and the treatment group (92%).

FIGURE 7.

Comfort contributing to discussion


FIGURE 8.

Attention sustained


Table 9.

T-test Output: Comfort Contributing to Discussion


Satisfaction

Satisfaction, or a student's perceived personal emotional reaction to the instruction session, reflected 1) enjoyment of experience, 2) appreciation for the materials shown, and 3) eagerness to return on one's own. T-tests revealed a statistically significant difference between averages for only the appreciation indicator.

Instruction technique did not affect students' enjoyment of the archival experience or their eagerness to return. Only 1 student in each group reported not enjoying the experience; in both the control and treatment groups, 97% of students felt enjoyment (marking 3 or 4 on the Likert scale). Of the control group, 93% felt eager to return, as did 95% of the treatment group.

Regardless of pedagogy, students indicated being positively affected by the instruction sessions they experienced. Examination of the raw data for the five indicators found not to be statistically significant shows that students overwhelmingly gave positive scores of 3, “agree,” or 4, “strongly agree,” for navigating the EDA's website, knowing how to use a finding aid, sustaining attention, enjoying the archival experience, and eagerness to return, in both the treatment and control groups. Students felt they learned these concepts. These positive outcomes are possibly due to the relevance of the materials presented, the communication skills of the reference archivist, or a plethora of other variables.

FIGURE 9.

Appreciation for materials


Table 10.

T-test Output: Appreciation for Materials


Four out of nine indicators exhibited statistical significance, illustrating the efficacy of active learning. Despite more students in the control group having prior experience at the EDA (8 students, compared to 2 in the treatment group), students who received inquiry-based instruction felt significantly more confident handling archival materials, more excited by the materials presented to them, more comfortable contributing to the discussion, and more appreciative of the archival materials they encountered. Within the context of this experimental study, creating a learning environment that encourages students to use inquiry to reach critical, creative, and dialogical thinking proves more effective than lecture-based show-and-tell instruction.

Twenty-one surveys were discarded during the pre- and postinstruction questionnaire matching process, as some students were present for the pre-instruction questionnaire but not the postinstruction questionnaire or vice versa. The results of the study, therefore, cannot claim to represent the larger undergraduate student population; they describe only the 81 students who participated, making this study exploratory. Professors or graduate student instructors did not divide students into two groups using simple random sampling in the form of a lottery method, which would have ensured that each person within a particular class had the same likelihood of being chosen for each group, consequently introducing a potential for internal biases. Use of forced-choice Likert scales could also have affected the results: forcing respondents to choose between “agree” and “disagree” without providing a neutral middle ground may frustrate participants and prevent them from accurately reporting their views.50 Variations in the demographics, namely academic majors, between the treatment and control groups possibly affected the results. Landscape architecture majors in the Plants in Design course, for example, may have scored higher than undeclared majors because they possess a greater incentive to learn or exhibit more interest in the materials.

Future research will consist of replicating this study at other universities among students from a variety of academic disciplines and will address the following potential limitations: sample size, the selection of students for treatment and control groups, use of forced-choice Likert scales, and variation in demographics. This study was created for students at the university level; adapting this tool for a K–12 audience is possible with little modification.

Assessing and understanding the information needs of archival users is critical in providing essential data to inform practices and programs at archival institutions, and such data can also be used to advocate on behalf of a repository, especially in a climate of limited resources. Despite the essentiality of this kind of data, archivists are only beginning to evaluate their instruction sessions using reaction assessments, learning assessments, performance assessments, and blended approaches. Gathering data to illustrate the efficacy of the instruction pedagogy employed and thereby shedding light on how archives contribute in meaningful ways to student learning is imperative. Existing studies tend to focus on the cognitive rather than the affective impact of instruction, largely use methods that depend on written course assignments for their analysis (e.g., citation analysis), and are never comparative. This comparative study fills a gap in the archival literature by using a data collection tool independent of a written assignment and focusing on the affective impact of instruction. In doing so, this experimental study exemplifies both the affective and cognitive competencies acquired by students as a result of the archival one-shot instruction session. More specifically, it illustrates that inquiry-based learning provides students with the basic human characteristics and associated motivational dynamics—attention, confidence, and satisfaction—critical to instill the motivation to learn.51 The establishment of this emotional foundation makes the educational experience richer, and students, as a result, learn more effectively.52 The active learning pedagogy gives students more agency, allowing them to perceive themselves as contributors to their scholarship rather than solely consumers of it, in turn providing newfound confidence in their abilities. The results of this experimental study show this emotional growth.
Across the board, regardless of pedagogy, the instruction sessions students experienced positively affected them. However, the students who participated in inquiry-based instruction experienced far more positive results than did their peers.

Chris Marino is the curator of the Environmental Design Archives (EDA) at UC Berkeley, directing a full archival program for architecture, landscape architecture, and planning collections. Previously, she served as reference and outreach archivist at the EDA and at UC Santa Barbara's Architecture and Design Collection at the Art, Design & Architecture Museum. Marino received her master of library and information science with an archival studies specialization from UCLA and a bachelor's degree in ethnic studies from UCSD.

Appendix A: Pre- and Postinstruction Questionnaires

Environmental Design Archives: Student Pre-Test

You are invited to take part in a research survey about instruction sessions at the Environmental Design Archives. Your participation will require approximately 2 minutes, and is completely voluntary. If you have questions about this survey, contact ________________________

Please provide your first and last initials only:

  1. What is your gender?

    • □ Female

    • □ Male

    • □ Do not want to disclose

  2. What is your Major?

    Please answer in the space provided below:

  3. What year are you in your studies?

    Mark only one

    • □ First year

    • □ Second year

    • □ Third year

    • □ Fourth year

    • □ Fifth year

    • □ Graduate Student

Thank you for completing the survey.

Environmental Design Archives: Student Post-Test

You are invited to take part in a research survey about instruction sessions at the Environmental Design Archives. Your participation will require approximately 3 minutes, and is completely voluntary. If you have questions about this survey, contact ______________________________

Please provide your first and last initials only: ________________

  1. What is your gender?

    • □ Female

    • □ Male

    • □ Do not want to disclose

  2. What is your Major?

    Please answer in the space provided below:

  3. What year are you in your studies?

    Mark only one

    • □ First year

    • □ Second year

    • □ Third year

    • □ Fourth year

    • □ Fifth year

    • □ Graduate Student

  4. Prior to coming to the Environmental Design Archives, had you ever been to an Archives before?

    • □ Yes

    • □ No

  5. If yes, how many times have you been to an Archives?

    Please answer in the space provided below:

Thank you for completing the survey.

Appendix B: Station Descriptions for Three Classes Surveyed

LD Arch 101: Fundamentals of Landscape Design

LD 111: Plants in Design

ED1: Introduction to Environmental Design

1“Reinventing Undergraduate Education: A Blueprint for America's Research Universities” (Stony Brook, N.Y.: Boyer Commission on Educating Undergraduates in the Research University, 1998), https://eric.ed.gov/?id=ED424840.

2“Reinventing Undergraduate Education.”

3Peter Carini, “Archivists as Educators: Integrating Primary Sources into the Curriculum,” Journal of Archival Organization 7, nos. 1–2 (2009): 41–50, https://doi.org/10.1080/15332740902892619.

4These are questionnaires, surveys, and evaluations that evaluate instruction sessions from the students' perspective.

5Anne Bahde and Heather Smedberg, “Measuring the Magic: Assessment in the Special Collections and Archives Classroom,” RBM: A Journal of Rare Books, Manuscripts, and Cultural Heritage 13, no. 2 (2012): 153, https://rbm.acrl.org/index.php/rbm/article/view/380/380.

6Magia G. Krause et al., “Undergraduates in the Archives: Using an Assessment Rubric to Measure Learning,” American Archivist 73, no. 2 (2010): 507–34; Eleanor Mitchell, Peggy Seiden, and Suzy Taraba, eds., Past or Portal? Enhancing Undergraduate Learning through Special Collections and Archives (Chicago: Association of College and Research Libraries, a division of the American Library Association, 2012); Wendy Duff et al., “The Development, Testing, and Evaluation of the Archival Metrics Toolkits,” American Archivist 73, no. 2 (2010): 569–99, https://doi.org/10.17723/aarc.73.2.00101k28200838k4; Eilean Hooper-Greenhill, “Measuring Learning Outcomes in Museums, Archives and Libraries: The Learning Impact Research Project (LIRP),” International Journal of Heritage Studies 10, no. 2 (2004): 151–74, https://doi.org/10.1080/13527250410001692877; Bahde and Smedberg, “Measuring the Magic”; Wendy M. Duff and Joan M. Cherry, “Archival Orientation for Undergraduate Students: An Exploratory Study of Impact,” American Archivist 71, no. 2 (2008): 499–529; Morgan Daniels and Elizabeth Yakel, “Uncovering Impact: The Influence of Archives on Student Learning,” Journal of Academic Librarianship 39, no. 5 (2013): 414–22.

7Lorrie A. Knight, “Using Rubrics to Assess Information Literacy,” Reference Services Review 34, no. 1 (2006): 43–55; Stacey Knight-Davis and Jan S. Sung, “Analysis of Citations in Undergraduate Papers,” College & Research Libraries 69, no. 5 (2008): 447–58; Krause et al., “Undergraduates in the Archives,” 507–34; Chris Leeder, Karen Markey, and Elizabeth Yakel, “A Faceted Taxonomy for Rating Student Bibliographies in an Online Information Literacy Game,” College & Research Libraries 73, no. 2 (2012): 115–33; Michelle McCoy, “The Manuscript as Question: Teaching Primary Sources in the Archives—The China Missions Project,” College & Research Libraries 71, no. 1 (2010): 49–62; Anne Middleton, “An Attempt to Quantify the Quality of Student Bibliographies,” Performance Measurement and Metrics 6, no. 1 (2005): 7–18; Beth A. Mohler, “Citation Analysis as an Assessment Tool,” Science & Technology Libraries 25, no. 4 (2005): 57–64; Thomas L. Reinsfelder, “Citation Analysis as a Tool to Measure the Impact of Individual Research Consultations,” College & Research Libraries 73, no. 3 (2012): 263–77.

8These are lecture-based instruction techniques in which the reference archivist presents materials from the collection and talks to the students about the history and significance of each object, leaving time to answer questions when students have them.

9This is an instruction technique in which the reference archivist creates learning exercises surrounding the materials presented. In this model, the reference archivist avoids giving the answer but encourages students to actively question the materials, ideas, and content to be learned as a group.

10“One-shot archival instruction session” within the context of this paper refers to a one-time, hour-long presentation and discussion during which a reference archivist introduces students from a particular course to archival policies/procedures and presents relevant primary source materials.

11Bahde and Smedberg, “Measuring the Magic,” 162.

12Robert B. Barr and John Tagg, “From Teaching to Learning—A New Paradigm for Undergraduate Education,” Change: The Magazine of Higher Learning 27 (November 1995): 13, https://doi.org/10.1080/00091383.1995.10544672.

13Barr and Tagg, “From Teaching to Learning,” 13.

14Barr and Tagg, “From Teaching to Learning,” 15.

15Alan E. Guskin, “Restructuring the Role of Faculty,” Change: The Magazine of Higher Learning 25, no. 5 (1994): 18.

16L. Dee Fink, Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses, rev. and updated edition, Jossey-Bass Higher and Adult Education Series (San Francisco: Jossey-Bass, 2013), 3.

17George M. Slavich and Philip G. Zimbardo, “Transformational Teaching: Theoretical Underpinnings, Basic Principles, and Core Methods,” Educational Psychology Review 24, no. 4 (2012): 569, https://doi.org/10.1007/s10648-012-9199-6.

18Slavich and Zimbardo, “Transformational Teaching,” 569.

19Peter Armbruster et al., “Active Learning and Student-Centered Pedagogy Improve Student Attitudes and Performance in Introductory Biology,” CBE Life Sciences Education 8, no. 3 (2009): 203–13, https://doi.org/10.1187/cbe.09-03-0025; Donna Dahlgren et al., “Do Active Learning Techniques Enhance Learning and Increase Persistence of First-Year Psychology Students?,” Journal of the First-Year Experience & Students in Transition, no. 1 (2005); Louis Deslauriers, Ellen Schelew, and Carl Wieman, “Improved Learning in a Large-Enrollment Physics Class,” Science 332, no. 6031 (2011): 862–64, https://doi.org/10.1126/science.1201783; Scott Freeman et al., “Prescribed Active Learning Increases Performance in Introductory Biology,” CBE Life Sciences Education 6, no. 2 (2007): 132–39, https://doi.org/10.1187/cbe.06-09-0194; David C. Haak et al., “Increased Structure and Active Learning Reduce the Achievement Gap in Introductory Biology,” Science 332, no. 6034 (2011): 1213–16, https://doi.org/10.1126/science.1204820; Jennifer K. Knight and William B Wood, “Teaching More by Lecturing Less,” Cell Biology Education 4, no. 4 (2005): 298–310, https://doi.org/10.1187/05-06-0082; Ralph W. Preszler, “Replacing Lecture with Peer-Led Workshops Improves Student Learning,” CBE Life Sciences Education 8, no. 3 (2009): 182–92, https://doi.org/10.1187/cbe.09-01-0002; Michael Prince, “Does Active Learning Work? A Review of the Research,” Journal of Engineering Education 93, no. 3 (2004): 223–31, https://doi.org/10.1002/j.2168-9830.2004.tb00809.x; Bryan K. Saville et al., “A Comparison of Interteaching and Lecture in the College Classroom,” Journal of Applied Behavior Analysis 39, no. 1 (2006): 49–61, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1389610/; Catherine Ueckert, Alison Adams, and Judith Lock, “Redesigning a Large-Enrollment Introductory Biology Course,” CBE Life Sciences Education 10, no. 2 (2011): 164–74, https://doi.org/10.1187/cbe.10-10-0129.

20Benjamin S. Bloom, ed., Taxonomy of Educational Objectives, the Classification of Educational Goals, by a Committee of College and University Examiners. Handbook I (New York: David McKay Co., Inc., 1956), 28. Created in 1956 by a committee under the leadership of educational psychologist Dr. Benjamin Bloom, this theoretical framework delineates three domains of learning: cognitive (mental skills or knowledge), affective (emotional growth or attitude), and psychomotor (manual or physical skills). Instructional designers, educators, and trainers use this taxonomy frequently for learning assessment.

21Slavich and Zimbardo, “Transformational Teaching,” 575.

22John M. Keller, “Strategies for Stimulating the Motivation to Learn,” Performance + Instruction 26, no. 8 (October 1, 1987): 1–7, https://doi.org/10.1002/pfi.4160260802.

23Keller, “Strategies for Stimulating the Motivation to Learn,” 1–2.

24D. Muller, “Framework for Information Literacy for Higher Education,” Association of College & Research Libraries (February 9, 2015), http://www.ala.org/acrl/standards/ilframework; “Guidelines for Primary Source Literacy—FinalVersion—Summer 2017,” https://www2.archivists.org/sites/all/files/Guidelines%20for%20Primary%20Souce%20Literacy%20-%20FinalVersion%20-%20Summer2017_0.pdf; Nicole E. Brown et al., Visual Literacy for Libraries: A Practical, Standards-Based Guide (American Library Association, 2016); “Our Teaching Philosophy,” TeachArchives.org, http://www.teacharchives.org/articles/our-teaching-philosophy/; Anne Bahde, Heather Smedberg, and Mattie Taormina, eds., Using Primary Sources (Santa Barbara, Calif.: Libraries Unlimited, 2014); Lisa Janicke Hinchliffe and Christopher J. Prom, Teaching with Primary Sources (Chicago: Society of American Archivists, 2016); Mitchell, Seiden, and Taraba, Past or Portal?

25Robert Schroeder and Ellysa Stern Cahoy, “Valuing Information Literacy: Affective Learning and the ACRL Standards,” Portal: Libraries and the Academy 10, no. 2 (2010): 127, https://doi.org/10.1353/pla.0.0096.

26Elsie T. Freeman, “In the Eye of the Beholder: Archives Administration from the User's Point of View,” American Archivist 47, no. 2 (1984): 111–23.

27Magia G. Krause, “‘It Makes History Alive for Them’: The Role of Archivists and Special Collections Librarians in Instructing Undergraduates,” Journal of Academic Librarianship 36, no. 5 (2010): 401–11, https://doi.org/10.1016/j.acalib.2010.06.004.

28Krause, “‘It Makes History Alive for Them’,” 406.

29Krause, “‘It Makes History Alive for Them’,” 407–8.

30Char Booth, Reflective Teaching, Effective Learning: Instructional Literacy for Library Educators (Chicago: American Library Association, 2011); Sharon L. Bostick, “The Development and Validation of the Library Anxiety Scale” (Wayne State University, 1992); James R. Davis and Bridget D. Arend, Facilitating Seven Ways of Learning: A Resource for More Purposeful, Effective, and Enjoyable College Teaching (Sterling, Va.: Stylus Publishing, 2012); Marika Cifor and Anne J. Gilliland, “Affect and the Archive, Archives and Their Affects: An Introduction to the Special Issue,” Archival Science 16, no. 1 (2016): 1–6, https://doi.org/10.1007/s10502-015-9263-3; Carol Kuhlthau, Seeking Meaning: A Process Approach to Library and Information Services (Westport, Conn.: Libraries Unlimited, 2004); Constance A. Mellon, “Library Anxiety: A Grounded Theory and Its Development,” College & Research Libraries 47, no. 2 (1988).

31Bahde and Smedberg, “Measuring the Magic,” 153.

32Members of the SAA Reference, Access, and Outreach Section's Teaching with Primary Sources Working Group maintain a bibliography that compiles resources focusing on the use of primary sources in elementary, secondary, and collegiate classrooms, https://www.zotero.org/groups/76402/teaching_with_primary_sources/items/collectionKey/2BKBRTH8/?.

33Bahde and Smedberg, “Measuring the Magic,” 156.

34Bahde and Smedberg, “Measuring the Magic,” 156.

35Bloom, Taxonomy of Educational Objectives, 7.

36Shan Sutton and Lorrie Knight, “Beyond the Reading Room: Integrating Primary and Secondary Sources in the Library Classroom,” Journal of Academic Librarianship 32, no. 3 (2006): 320–25, https://doi.org/10.1016/j.acalib.2006.03.001; Krause et al., “Undergraduates in the Archives.”

37“Student Researchers—Archival Metrics,” https://sites.google.com/a/umich.edu/archival-metrics/home/the-toolkits/student-researchers. The Archival Metrics Project began in 2008 and sought to promote a culture of assessment by creating standardized user-based evaluation tools and other performance measures.

38“Student Researchers—Archival Metrics”; Duff et al., “The Development, Testing, and Evaluation of the Archival Metrics Toolkits”; Duff and Cherry, “Archival Orientation for Undergraduate Students”; Emily Rimland, “Assessing Affective Learning Using a Student Response System,” Portal: Libraries and the Academy 13, no. 4 (2013): 385–401, https://doi.org/10.1353/pla.2013.0037.

39“Student Researchers—Archival Metrics”; Duff et al., “The Development, Testing, and Evaluation of the Archival Metrics Toolkits”; Duff and Cherry, “Archival Orientation for Undergraduate Students”; Rimland, “Assessing Affective Learning Using a Student Response System.”

40“Our Findings,” TeachArchives.org, http://www.teacharchives.org/articles/our-findings/.

41TeachArchives.org is the exception.

42Twenty-one surveys were discarded during the pre- and postinstruction questionnaire matching process, as some students were present for the pre- and not the postinstruction questionnaire, or vice versa.

43The EDA can only accommodate fifteen students per instruction session due to limited space. I used space as the reason for splitting each class into two groups, not instruction technique.

44The key code will be retained for three years after the completion of the study. During the course of the three years, the key code will be kept in the same locked file cabinet. Three years after the completion of the study, the key code will be shredded.

45Keller, “Strategies for Stimulating the Motivation to Learn”; Slavich and Zimbardo, “Transformational Teaching.”

46The instruction protocols created as a part of the study are modeled on those presented in Bahde, Smedberg, and Taormina, Using Primary Sources.

47American Library Association, ACRL Visual Literacy Competency Standards for Higher Education (October 27, 2011), http://www.ala.org/acrl/standards/visualliteracy.

48A t-test is a statistical test of the differences between sample populations, assessing how data about the sample population differ from what is observed in the actual population. Similar to a z-test, the findings of a t-test tell researchers at what value(s) on the normal curve the null hypothesis can be rejected, indicating a change in the sample population greater than what can be expected by chance. However, there is always a difference between what researchers observe and what occurs in the actual population, generating a standard error. The likelihood of error increases when the sample size (N) is small. Definition taken from M. Allen, The Sage Encyclopedia of Communication Research Methods, vols. 1–4 (Thousand Oaks, Calif.: SAGE, 2017).

49The p-value, or probability value, is the probability for a given statistical model that, when the null hypothesis is true, the sample mean difference between two compared groups would be the same as or of greater magnitude than the actual observed results. Definition taken from Ronald L. Wasserstein and Nicole A. Lazar, “The ASA's Statement on p-Values: Context, Process, and Purpose,” The American Statistician 70, no. 2 (2016): 129–33.
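To make the statistics defined in notes 48 and 49 concrete, the following minimal sketch computes a Welch's two-sample t statistic and an approximate two-sided p-value for two groups of Likert-scale responses. The data are invented for illustration and are not the study's results; the p-value uses a normal approximation to the t distribution, which is only adequate for reasonably large samples.

```python
from statistics import NormalDist, mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic with a normal-approximation
    two-sided p-value (illustrative; use a t distribution for small N)."""
    n_a, n_b = len(sample_a), len(sample_b)
    var_a, var_b = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    # Standard error of the difference between the two sample means
    se = (var_a / n_a + var_b / n_b) ** 0.5
    t = (mean(sample_a) - mean(sample_b)) / se
    # Two-sided p-value via the standard normal CDF
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Hypothetical 5-point Likert responses for treatment and control groups
treatment = [4, 5, 5, 4, 5, 4, 5, 5, 4, 5]
control = [3, 3, 4, 2, 3, 4, 3, 3, 2, 3]
t, p = welch_t(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A small p-value here would indicate a between-group difference larger than chance alone would be expected to produce, which is the logic behind the significance claims reported in the study.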

50Arlene Fink, How to Conduct Surveys: A Step-by-Step Guide, 4th ed. (Los Angeles: SAGE, 2009), 52.

51Keller, “Strategies for Stimulating the Motivation to Learn,” 1–2.

52Hinchliffe and Prom, Teaching with Primary Sources, 38.

53M. W. Gallagher, “Self-Efficacy,” Encyclopedia of Human Behavior (Academic Press, 2012), https://doi.org/10.1016/B978-0-12-375000-6.00040-9.

54Jeremy D. Finn, “Withdrawing from School,” Review of Educational Research 59, no. 2 (1989): 117–42.