Objective

We report differences in final examination scores achieved by students under two different teaching strategies in an introductory skills course.

Methods

Multiple choice examination scores from six consecutive academic calendar sessions over 18 months (n = 503) were compared. Two groups were used: Cohort A (n = 290) comprised students enrolled in the course during the 3 consecutive academic sessions before an instructional change, and Cohort B (n = 213) comprised students enrolled during the 3 consecutive academic sessions following the instructional change, which introduced a more active learning format. Statistical analyses included the 2-tailed independent t-test, one-way ANOVA, Tukey's honestly significant difference (HSD) test, and effect size.

Results

The 2-tailed independent t-test revealed a significant difference between the two groups (t = −3.71, p < .001; 95% confidence interval [CI] 1.29–4.20). A significant difference was found between the highest and lowest performing subgroups in Cohort A (F = 3.343, p = .037); for Cohort A subgroups 1 and 2, Tukey's HSD was p < .028. In Cohort B, no difference was found among subgroups (F = 1.912, p = .150, HSD p > .105).

Conclusion

Compared to previous versions of the same course taught by the same instructor, students in the new course design performed better, suggesting that active learning techniques help improve student achievement.

Instructors often face two distinct challenges in the classroom: the constant effort to improve teaching and the effort to improve student learning. There is clear research evidence that best practices in higher education involve active and collaborative instructional strategies.1,2 These practices have a solid foundation in learning theory and are more effective than traditional lecture and discussion across most, if not all, dimensions of student learning. Well known among the many authors who have summarized the best practices in higher education are Pascarella and Terenzini,1 Kuh et al.,2 and Fairweather.3 Fairweather states,

“Improvement of teaching may be judged by the degree of fit between what instructors do in the classroom with instructional approaches found in the literature to improve student learning rather than on a direct assessment of student learning outcomes.”3 

In the current educational literature, several definitions of active and collaborative learning are found. Meyers and Jones described active learning as techniques that increase student engagement with material and are aligned with student learning outcomes.4 Meyers and Jones posit that active learning derives from two basic assumptions: that learning is an active endeavor and that people learn in a variety of ways.4 Others have stated that active learning occurs when students are engaged in more activities than just listening, and are involved in dialog, debate, writing, and problem solving, as well as higher-order thinking.5 Small group work, as described by McKeachie, may consist of presentations and debates, journaling, role playing, learning games, field experiences, case studies, class discussions, and simulations.6

According to Fairweather, instructors often seek to improve from an 80% to a 95% course level of instructional efficiency when making changes to instructional methods.3 However, determining whether increases in student learning result from adopting a learner-centered paradigm as an instructional reform is quite challenging.7–9

At our institution, faculty members are encouraged to foster meaningful student-faculty interactions using a variety of active and collaborative learning methods discussed in detail by several investigators.1,10–14 Faculty development and training in these techniques are the foundation of the university's quality enhancement plan. Expectations for student performance are to be communicated plainly and set at attainable levels. Once active learning techniques are developed and implemented in courses, faculty members also are expected to assess how the change in teaching has affected student learning, using formative and summative assessments.

Accordingly, this study describes a reform of teaching toward the “unfolding case” method and reports the findings of a retrospective appraisal of student learning outcomes using final examination scores of students in a clinical course.

The study was approved by the Life University institutional review board for human subjects. Students who were enrolled in a chiropractic clinical skills course from six consecutive academic calendar sessions over 18 months (n = 503) were identified. Cohort A (n = 290) was enrolled in the course during the three consecutive academic sessions before an instructional change (subgroups 1–3), while Cohort B (n = 213) was enrolled during the three consecutive academic sessions following the instructional change (subgroups 4–6).

The course is required in the program and is offered at the end of the second year, just before students enter internship. The learning environment was a moderate-sized lecture hall with class sizes ranging from 50 to 130 students. The course met weekly for a two-hour lecture and four hours of lab.

The original course instruction consisted of:

  1. Weekly homework of individual reading assignments with specific questions to be completed in preparation for “lecture” each week.

  2. In the classroom, students were required to complete a “prelecture” individual readiness assessment test (iRAT), in multiple choice and True/False format.

  3. Following the iRAT, the instructor lectured to the class using a PowerPoint (Microsoft Corp, Redmond, WA)–based presentation on the reading assignments.

At the midpoint of these six consecutive academic sessions, three course modifications were made as the instructional reform:

  1. The entire set of preparatory lecture–based presentations was modified into recorded Voice Over PowerPoint (VOP) presentations using Camtasia Studio (TechSmith Corp, Okemos, MI) and placed on Blackboard (Blackboard Inc, Washington, DC), the online course management system. Students were assigned to view the VOP in addition to their reading assignments/question sets in preparation for class. The VOP required 15 to 30 minutes of student study time outside of structured class time, intended as a review of the required reading material.

  2. The iRATs also were modified to test information from the VOP as well as the reading assignments.

  3. Most importantly, the third modification provided an opportunity for active learning in the classroom through case-based group activity.

Since the instructor no longer delivered a PowerPoint lecture in the classroom, more class time was available for active learning through case-based methods. Students were instructed to form groups for graded and nongraded case-based activities. The groups were given case scenarios in which the students were to either identify all key clinical components in an “unfolding” process (described below) or simply identify several of the specific steps in a “mini” case vignette.

The workload for students in the new course design did not increase significantly after adding the VOP preparation, since the VOP were intended for review and reinforcement of the reading assignments. The laboratory portion of the course was unchanged; the focused instructional change was adding active learning strategies to the lecture setting, as the laboratory environment already consisted of student-centered active learning activities.

For the “unfolding” case process, students were given the facts of the case in a series of steps and a specific amount of time to complete the group work, and then the instructor facilitated discussion with students reporting their findings.

The assessment of student learning outcomes in this course was done through multiple choice question (MCQ) examinations. Though the delivery of information changed in the new course design, the assessment of student learning did not; student examination scores were used for both cohorts. The final examination was cumulative, evaluated higher-order thinking per Bloom's taxonomy, and used integrative questions with clinical vignettes. The same type of examination was given to all students. The questions on the final examinations for each cohort were of similar content, though not identical stems or distractors. The final examinations consisted of 50 MCQs, each with a choice of the correct answer from four possible choices. Each question was valued at two points, for a total of 100 points. Students had to score 70% or greater on the written final examination to pass the course.
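As a worked check of the scoring scheme (an inference from the figures above, not a separate statement in the paper), the 70% pass mark corresponds to

    0.70 \times 100 \text{ points} = 70 \text{ points} = 35 \text{ of } 50 \text{ questions answered correctly}.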

Analysis of de-identified data was performed for pre-post comparisons of examination scores to assess any change in mean examination scores and in course pass rates. Data were stored on a password-protected institutional computer in the instructor's office.

The most common statistical terms for course and program assessment include means, standard deviations, and frequency distributions of student scores.8 PAR Score Analysis (Scantron Corp, Eagan, MN) was used. Additional statistical analyses using SPSS v19 (IBM Corp, Armonk, NY) included independent t-tests, ANOVA, and effect size.
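For readers without SPSS, the same analyses can be reproduced with open-source tools. The following is a minimal sketch in Python; the file name exam_scores.csv and its column names (score, cohort, subgroup) are hypothetical stand-ins for the de-identified score data, and the sketch illustrates the reported analysis steps rather than the authors' actual code.

    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical input: one row per student with columns
    # "score" (0-100), "cohort" ("A" or "B"), "subgroup" (1-6)
    df = pd.read_csv("exam_scores.csv")
    a = df.loc[df["cohort"] == "A", "score"]  # traditional instruction, n = 290
    b = df.loc[df["cohort"] == "B", "score"]  # active learning, n = 213

    # 2-tailed independent t-test, equal variances assumed
    t, p = stats.ttest_ind(a, b, equal_var=True)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # One-way ANOVA and Tukey's HSD among the subgroups of each cohort
    for cohort in ("A", "B"):
        sub = df[df["cohort"] == cohort]
        groups = [g["score"].to_numpy() for _, g in sub.groupby("subgroup")]
        f_stat, p_anova = stats.f_oneway(*groups)
        print(f"Cohort {cohort}: F = {f_stat:.3f}, p = {p_anova:.3f}")
        print(pairwise_tukeyhsd(sub["score"], sub["subgroup"]).summary())

    # Cohen's d using the pooled standard deviation
    n1, n2 = len(a), len(b)
    sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                 / (n1 + n2 - 2))
    d = (b.mean() - a.mean()) / sp
    print(f"Cohen's d = {d:.3f}")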

PAR Score Analysis demonstrated similar reliability coefficients (KR20 and point-biserial) for the examinations. Cohort B, the active learning group, showed significantly higher final examination scores than Cohort A, the traditional instruction group (Table 1). Grade frequency distributions (Fig. 1) provided grade comparisons of Cohorts A and B. For each group, mean and median scores were equal, and the shapes of the two cohorts' distributions were similar. Cohort B had a higher percentage of A grades, Cohort A had a higher percentage of F grades, and the percentages of Bs and Cs were similar in both cohorts.

Table 1. Descriptive Statistics of Final Examination Scores Over Six Academic Terms

Figure 1. Frequency distribution scores of student achievement between groups.

The 2-tailed independent t-test with equal variances was significant (t = −3.71, p < .001, 95% confidence interval [CI] 1.29–4.20). One-way ANOVA was used to determine whether the subgroups of each cohort were similar (Cohort A subgroups 1–3 and Cohort B subgroups 4–6). A significant difference was found between the highest and lowest performing subgroups in Cohort A (F = 3.343, p = .037); for Cohort A subgroups 1 and 2, Tukey's honestly significant difference (HSD) was p < .028. In Cohort B, no difference was found among subgroups (F = 1.912, p = .150, Tukey's HSD p > .105). Finally, Cohen's d was calculated and used to determine the effect size (d = 0.337).
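The paper reports Cohen's d but not its formula. For reference, a standard formulation for two independent groups, using the pooled standard deviation (the same computation sketched in the code above), is

    d = \frac{\bar{x}_B - \bar{x}_A}{s_p}, \qquad
    s_p = \sqrt{\frac{(n_A - 1)\,s_A^2 + (n_B - 1)\,s_B^2}{n_A + n_B - 2}}.

As a consistency check, for independent samples d can also be recovered from the t statistic as d = t\sqrt{1/n_A + 1/n_B} = 3.71 \times \sqrt{1/290 + 1/213} \approx 0.335, in line with the reported d = 0.337.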

Compared to previous versions of the same course taught by the same instructor, students in the new course design performed better: we found a significant increase in student learning when we compared the mean scores of the two cohorts. This study demonstrated lower failure rates, higher total examination points, and higher scores on similar final examinations in Cohort B, the students who received case-based instruction. We interpreted the results as a modest overall improvement in student learning from a change in instructional strategies that sought to promote best practices in higher education instruction.

An effect size of 0.337 was obtained. According to Cohen's interpretation, this is a small-to-medium effect.15 According to McMillan and Foley, the What Works Clearinghouse, a division of the U.S. Department of Education's Institute of Education Sciences, has suggested that a “small” effect per Cohen is “probably meaningful and important for practice.”16 The use of effect size helps to better demonstrate the magnitude and importance of the results. “The effect size transforms abstract statistical significance testing into concrete measures of the relationship or difference.”16 Thus, in our study, the use of case-based learning activities in the course appeared to have low-to-moderate practical significance, meaning that the average student who received case-based activities in class (e.g., unfolding cases) was more proficient at answering the integrative multiple choice test questions than the students who did not receive the case-based activities.

The use of MCQ-style examinations as the assessment of student learning for this course remained consistent for all academic terms in this report. This consistency allowed a retrospective evaluation of the impact of the new teaching method on student learning. Well-written MCQs are an effective assessment of student learning and have been used to test students across the health professions.17,18 Although it has been postulated that this type of testing assesses only lower recall thinking in Bloom's taxonomy rather than deeper knowledge, there is evidence that well-constructed MCQs with extended multiple choice questions or clinical vignettes in the stem can be used to evaluate higher-level analytical thinking in students.17–19 The literature also identifies that MCQs can test higher-order thinking, especially when coupled with active learning techniques. Yoder and Hochevar20 as well as McConnell et al.,21 in two different reports, demonstrated that students who engaged in active learning activities had higher scores, decreased variability among groups, and greater retention on MCQ examinations compared to groups for which active learning was not used.

Much of the research on active learning indicates that it works well in the classroom setting to improve student learning. Studies have shown that active learning produces higher achievement and more positive relationships among students. McKeachie6 and Silberman22 have written that retention levels are enhanced when active learning methods are used. Johnson et al. found that students reported better relations with each other and even healthier psychologic adjustment when active learning was implemented.23 Similarly, research findings by Sousa24 and Stice25 suggested increased retention of information with active learning, while little lecture information was retained when students were unengaged. This was supported further by the National Science Foundation Engineering Education Coalition, which developed a visual model called the “Cone of Learning” supporting the findings of Sousa,24 Stice,25 and others26 that students tend to retain information when they are involved in “doing the real thing.” Longitudinal studies by Felder et al. showed that cohorts of students instructed using active learning techniques (from open-ended questions to small group work) performed much better than a comparison group on a variety of measures, including retention, graduation, and pursuit of graduate study.27 Springer et al. conducted a meta-analysis of multiple studies in the fields of science, math, engineering, and technology (SMET) and interpreted the results as showing that when students are involved in “well structured small groups,” the outcomes are higher academic achievement, generally more favorable attitudes toward learning, and increased persistence in SMET courses and programs.28

Michael reports that active learning indeed works, although some critics feel the importance of the evidence for active learning is overstated and that a separate problem is deciding when an improvement is practically significant.29 Colliver et al.30 revisited the study of Springer et al.28 and reached a quite different interpretation. Their findings indicate that the meta-analysis does not support the application of small-group learning in medical education, and they raise questions about meta-analysis in education, with implications for evidence-based education.30 Prince reviewed and examined the evidence for the effectiveness of active learning and found broad but uneven support for the core elements of active, collaborative, cooperative, and problem-based learning.31 Furthermore, Andrews et al. reported that in a typical college biology course, the active learning only superficially resembles the strategies used by educational researchers and “lacks constructivists' elements necessary to improve learning.”32 The term “constructivist elements” relates directly to the brief explanation of learning theories herein.

The constructivist principles include four key elements: knowledge is constructed from experience, learning results from personal interpretation of knowledge, learning is an active process, and learning is a collaborative process.33–36 In the 1950s, Skinner posited that knowledge is not simply “transmitted unchanged from teacher to student,” but that learning is an active process.33 The educational theorist John Dewey was ahead of his time in describing the importance of individual and collaborative experiences for learning.34 He redesigned the learning environment so that students were engaged collaboratively, with the teacher as a facilitator and guide. Jean Piaget is known for his theory of “knowing the world by building new experiences on old experiences” and for coining the phrase “constructivist view of learning.”35 Notably, Lev Vygotsky, Piaget's student, added the view that students learn better by engaging with “more capable others.”36 Finally, Benjamin Bloom, a visionary 20th-century theorist, viewed education as goal attainment rather than competition.37 He said it was important to acknowledge individual differences and that the learning environment provided was crucial for effective learning. He found that high-achieving, successful adult learners resulted from relationships with mentors. The three “Domains of Learning” developed by Bloom remain important in education today: Cognitive, Affective, and Psychomotor. Also, Bloom's “Taxonomy of Learning for Higher Order Thinking Skills” is well known to educators: evaluation, synthesis, analysis, application, comprehension, and knowledge.37

Another area of learning theory relates to “learning style,” which Claxton and Ralston have defined as “a student's consistent way of responding to and using stimuli; an inclination towards a particular learning modality.”38 Ways to evaluate or measure learning styles include the visual, auditory, kinesthetic model; Kolb's learning style inventory; and the Myers-Briggs type indicator.39–41 Instructors in higher education must present information using all three styles: kinesthetically, visually, and auditorily (by lectures or presentations). This gives all learners the opportunity to become involved and exposes each learner to the other two methods of reinforcement.

The goal of teaching, then, is to improve student learning by using “best practices” in teaching strategies that help cultivate independence in learning (self-learning), inspire natural curiosity, and encourage self-reflection. Active learning strategies that are useful, that have a medium-to-low impact on resources, and that create an environment of student engagement within the context of a lecture include the one-minute paper, think-pair-share, guided discussion, and the “muddiest point.”42–44 Problem-based learning (PBL) provides interactive and clinically integrated learning and increases evidence-based practice in medical and dental education; however, PBL has had equivocal success in many healthcare settings.45–48 Closely related to PBL is the use of cases in the classroom, or “case-based learning,” which has been shown to improve student learning in a variety of settings.49–52 Engagement, observation, and critical reflection are valued aspects of clinical decision-making skills that lend focus to teaching strategies using a case-based method. Using cases in the classroom yields significant opportunity for discussion with other students and for peer tutoring, and is well grounded in the constructivist model.35 Real-world clinical tasks should be provided using group work and practical feedback.52 It is well known that a traditional lecture-based format does not prepare students for clinical care. Healthcare providers are expected to think autonomously, solve problems, and cope with the unexpected. Active learning allows students to engage in activities that force them to reflect upon how they are using and building their clinical skills.32,53–55

The chiropractic educational literature provides several reports of improved learning outcomes using a variety of active learning strategies. One study demonstrated that an integrated case-based teaching style, compared to a traditional approach, improved the student learning environment with fewer contact hours in lectures and practical sessions, yet students reached the same level of ability to answer fact-based and problem-solving questions.49 Another report showed that using a combination of course notes, more formative tests, non-MCQ examinations, and specific, detailed, and timely feedback improved knowledge retention.55 Academic performance reportedly was improved in another study by using a structured self-study guide, classroom clinical simulations, course management software-based learning, and self-assessment tasks.56 Collaborative testing is another method that has shown statistically significant increases in course performance, attitudes of confidence, and critical thinking in students.57,58 Finally, Good reported that writing case reports was an example of independent, self-directed learning that improved clinical reasoning, the ability to integrate information, and the use of the literature to support patient care.59

This is summed up best by Fairweather, who further describes the “fuzzy” nature of assessing the impact of instructional change:3 

We clearly recognize that student learning is not a singular goal (e.g., content knowledge, synthesis, and problem-solving), nor is it necessarily limited to a single classroom setting. Improving teaching and learning in the classroom clearly is aligned with many of the more complex learning objectives, such as the retention of knowledge over time, the application of knowledge to solve unfamiliar problems, and commitment to lifelong learning, which must be assessed in subsequent learning experiences rather than in the immediate classroom environment.3 

Our study describes the relationship between the cohorts but cannot provide a definitive answer as to why one cohort had higher scores. Many factors may play into this relationship, such as outside work schedules, other examination schedules, or even individual attitudes and learning styles. It may be possible to compare these students' performance using grade point averages as well as benchmark test outcomes (e.g., objective structured clinical examinations and/or board examinations) in approximately two years.

The literature on best practices in higher education encourages active learning strategies to improve student learning. Our report on a retrospective analysis of student examination scores showed that students who were provided regular practice via prescribed (graded and nongraded) active-learning exercises using case-based group activities performed better than students who received traditional lecture without active learning activities.

The authors thank Brent Russell, MS, DC for his help using the statistical software.

The authors have no conflicts of interests to declare.

References

1. Pascarella E, Terenzini P. How College Affects Students: A Third Decade of Research. San Francisco, CA: Jossey-Bass; 2005.
2. Kuh G, Kinzie J, Schuh J, Witt E. Student Success in College: Creating Conditions That Matter. Washington, DC: Association for the Study of Higher Education; 2005.
3. Fairweather J. Linking evidence and promising practices in science, technology, engineering, and mathematics (STEM) undergraduate education: a status report for the National Academies National Research Council Board of Science Education [internet]. Michigan State University. August 2004, cited August 6, 2011;304(23).
4. Meyers C, Jones TB. Promoting Active Learning: Strategies for the College Classroom. San Francisco, CA: Jossey-Bass; 1993.
5. Bonwell CC, Eison JA. Active Learning: Creating Excitement in the Classroom. ASHE-ERIC Higher Education Report No. 1. Washington, DC: George Washington University; 1991.
6. McKeachie WJ. Teaching Tips. 10th ed. New York, NY: Houghton Mifflin; 1999.
7. Barr R, Tagg J. From teaching to learning: a new paradigm for undergraduate education. Change. 1995;27:12–15.
8. Critical review: analyzing data and interpreting results [internet]. Cited July 21, 2011. Available from: http://educationforthe21stcentury.org/2011/01/critical-review-analyzing-data-and-interpreting-results/
9. Doyle T. Helping Students Learn in a Learner-Centered Environment. 1st ed. Sterling, VA: Stylus Publishing LLC; 2008.
10. Astin AW. Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. New York, NY: American Council on Education/Macmillan; 1991.
11. Chickering AW, Gamson ZF. Seven principles for good practice in undergraduate education. AAHE Bull. 1987;39(7):3–7.
12. Chickering AW, Reisser L. Education and Identity. 2nd ed. San Francisco, CA: Jossey-Bass; 1993.
13. Kuh GD, Schuh J, Whitt E. Involving Colleges. San Francisco, CA: Jossey-Bass; 1991.
14. Pascarella ET. Identifying excellence in undergraduate education: are we even close? Change. 2001;33(3):19–23.
15. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Erlbaum; 1988.
16. McMillan JH, Foley J. Reporting and discussing effect size: still the road less traveled? Pract Assess Res Eval. 2013;16(14):1–12.
17. Azer SA. Assessment in a problem-based learning course: twelve tips for constructing multiple choice questions that test students' cognitive skills. Biochem Mol Biol Educ. 2003;31(6):428–434.
18. Fellenz MR. Using assessment to support higher level learning: the multiple choice item development assignment. Assess Eval High Educ. 2004;29(6):703–719.
19. Velan GM, Jones P, McNeil HP, Kumar RK. Integrated online formative assessments in the biomedical sciences for medical students: benefits for learning. BMC Med Educ. 2008;8:52.
20. Yoder J, Hochevar C. Encouraging active learning can improve students' performance on examinations. Teach Psychol. 2005;32(2):91–94.
21. McConnell D, Steer D, Owens K. Assessment and active learning strategies for introductory geology courses. J Geoscience Educ. 2003;51(2):205–216.
22. Silberman M. Active Learning: 101 Strategies to Teach Any Subject. Needham Heights, MA: Allyn and Bacon; 1996.
23. Johnson DW, Johnson RT, Smith K. Active Learning: Cooperation in the College Classroom. Edina, MN: Interaction Book Company; 1991.
24. Sousa DA. How the Brain Learns: A Classroom Teacher's Guide. Thousand Oaks, CA: Corwin Press; 2000.
25. Stice JE. Using Kolb's learning cycle to improve student learning. J Eng Educ. 1987;77(5):291–296.
26. The National Science Foundation (NSF) Engineering Education Coalition Program [internet]. Grant number 9802942. 2002. Cited May 15, 2013. Available from: http://www.foundationcoalition.org
27. Felder RM, Felder GN, Dietz EJ. A longitudinal study of engineering student performance and retention versus comparisons with traditionally taught students. J Eng Educ. 1998;87(4):469–480.
28. Springer L, Stanne ME, Donovan SS. Effects of small-group learning on undergraduates in science, mathematics, engineering, and technology: a meta-analysis. Rev Educ Res. 1999;69:21–51.
29. Michael J. Where's the evidence that active learning works? Adv Physiol Educ. 2006;30(4):159–167.
30. Colliver JA, Feltovich PJ, Verhulst SJ. Small group learning in medical education: a second look at the Springer, Stanne, and Donovan meta-analysis. Teach Learn Med. 2003;15(1):2–5.
31. Prince M. Does active learning work? A review of the research. J Eng Educ. 2004;93(3):223–231.
32. Andrews TM, Leonard MJ, Colgrove CA, Kalinowski ST. Active learning not associated with student learning in a random sample of college biology courses. CBE Life Sci Educ. 2011;10:394–405.
33. Skinner BF. Are theories of learning necessary? Psychol Rev. 1950;57:193–216.
34. Dewey J. Experience and Education. New York, NY: Collier Books; 1938.
35. Piaget J. The Science of Education and the Psychology of the Child. New York, NY: Grossman; 1970.
36. Vygotsky LS. Mind in society: the development of higher psychological processes. In: Cole M, Gauvain M, eds. Readings on the Development of Children. 2nd ed. Cambridge, MA: Harvard University Press; 1971.
37. Bloom BS. Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I, Cognitive Domain. New York, NY: McKay; 1956.
38. Claxton CS, Ralston Y. Learning Styles: Their Impact on Teaching and Administration. Washington, DC: American Association for Higher Education; 1978.
39. Fleming ND. I'm different; not dumb. Modes of presentation (VARK) in the tertiary classroom. In: Zelmer A, ed. Research and Development in Higher Education: Proceedings of the 1995 Annual Conference of the Higher Education and Research Development Society of Australasia (HERDSA). HERDSA; 1995;18:308–313.
40. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Upper Saddle River, NJ: Prentice Hall; 1984.
41. Briggs-Myers I, McCaulley MH, Quenk NL, Hammer AL. MBTI® Manual: A Guide to the Development and Use of the Myers-Briggs Type Indicator® Instrument. 3rd ed. [internet]. 1998:3–44. Available from: https://www.cpp.com
42. Russell AT, Comello RJ, Wright DL. Teaching strategies promoting active learning in healthcare education. J Educ Hum Dev. 2007;1(1):1–8.
43. Faust JL, Paulson DR. Active learning in the college classroom. J Excell Coll Teach. 1998;9(2):3–24.
44. Angelo TA, Cross KP. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd ed. San Francisco, CA: Jossey-Bass; 1993.
45. Raza A, Coomarasamy A, Khan KS. Best evidence continuous medical education. Arch Gynecol Obstet. 2009;280(4):683–688.
46. Polyzois I, Claffey N, Mattheos N. Problem-based learning in academic health education: a systematic literature review. Eur J Dent Educ. 2010;14(1):55–65.
47. Gwee MCE. Problem-based learning: a strategic learning system design for the education of healthcare professionals in the 21st century. Kaohsiung J Med Sci. 2009;25:231–239.
48. Fink LD. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. San Francisco, CA: Jossey-Bass; 2003.
49. Gemmell HA. Comparison of teaching orthopaedics using an integrated case-based curriculum and a conventional curriculum: a preliminary study. Clin Chiropr. 2007;10(1):36–42.
50. Irby D. Three exemplary models of case-based teaching. Acad Med. 1994;69(12):947–953.
51. Manning B. The case for cases. Paper presented at: Professional and Organizational Network in Higher Education Annual Conference; October 17, 1997; Haines City, Florida.
52. Herreid CF. “Clicker” cases: introducing case study teaching into large classrooms. J Coll Sci Teach. 2006;36(2):43–47.
53. Taylor M. Teaching Generation neXt: pedagogy for today's learners. In: A Collection of Papers on Self-Study and Institutional Improvement. 26th ed. The Higher Learning Commission; 2010:192–196.
54. Hoiriis KT, McAulay B. A summary of pedagogical practices that encourage lifelong learning skills. Paper presented at: World Federation of Chiropractic Educational Conference Proceedings; October 14–18, 2010; Madrid, Spain.
55. Bruno PA, Ongaro A, Fraser I. Long-term retention of material taught and examined in chiropractic curricula: its relevance to education and clinical practice. J Can Chiropr Assoc. 2007;51(1):14–18.
56. Jamison JR. Teaching diagnostic decision making: student evaluation of a diagnosis unit. J Manipulative Physiol Ther. 2006;29:315.e1–315.e9.
57. Meseke CA, Bovée ML, Gran DF. Impact of collaborative testing on student performance and satisfaction in a chiropractic science course. J Manipulative Physiol Ther. 2009;32(4):309–314.
58. Meseke CA, Nafziger RE, Meseke JK. Student course performance and collaborative testing: a prospective follow-up study. J Manipulative Physiol Ther. 2008;31(8):611–615.
59. Good CJ. Student-generated case reports. J Chiropr Educ. 2009;23(2):165–173.

Author notes

This article was received October 19, 2012, revised May 24, 2013 and June 4, 2013, and accepted June 13, 2013.