Objective

This study compares the results of an objective structured clinical examination (OSCE) between 2 groups of students before an internship and after 6 months of clinical practice during the internship.

Methods

Seventy-two students participated, with 36 students in each cohort. The OSCEs were performed in the simulation laboratory before the participants' clinical practice internship and after 6 months of the internship. Students were tested in 9 stations for clinical skills and knowledge. The same procedures were repeated for both cohorts. The t test was used for unpaired parametric samples and Fisher's exact test was used for comparison of proportions.

Results

There was no difference in the mean final score between the 2 groups (p = .34 for test 1; p = .08 for test 2). The performance of the students in group 1 did not differ significantly between the tests before and after 6 months of clinical practice, but in group 2 there was a significant decrease in the average score after 6 months of clinical practice.

Conclusions

There was no difference in the cumulative average score for the 2 groups before and after 6 months of clinical practice in the internship. There were differences within the cohorts, however, with a significant decrease in the average score in group 2. Issues pertaining to test standardization and student motivation for test 2 may have influenced the scores.

The growth of the chiropractic profession internationally has raised important questions about health care.1,2  Chiropractic is a relatively new profession in Brazil and is growing steadily. Although there has been fast growth and consistent training of professionals in the country, there is still no official regulation of the profession. The Brazilian Chiropractic Association has strived to regulate the chiropractic profession in Brazil since 2001, but has had little success. A critical mass of qualified chiropractors has developed after 15 years of chiropractic training programs in Brazil, which justifies the claim for formal recognition of the profession under federal law.3

These factors contribute to a growing concern about education, especially in the area of clinical training, which needs to be constantly improved in order to establish credibility for the profession.4,5  The training of chiropractors in Brazil follows clinical guidelines that recommend extensive clinical practice under supervision.3  Recent efforts have emphasized the importance of education and interprofessional collaboration for the reform of the health system in different countries.6–9  However, there is a lack of literature concerning the quality of higher education in chiropractic and its relevance to clinical practice.1,5,7

The chiropractic program at University Anhembi Morumbi–Laureate International Universities (UAM), located in São Paulo, Brazil, is a 4½-year-long program (9 semesters) with more than 5000 hours of study, of which over 1000 hours are dedicated to the clinical internship. To ensure patient safety and qualified professional practice of chiropractic, there is a need for a system of evaluation of students' cognitive and practical skills. To evaluate the interns' progress, the university uses the objective structured clinical examination (OSCE).

The OSCE aims to overcome the limitations of commonly used subjective theoretical or practical tests, which do not allow the examiner to assess a student's cognitive and practical skills together. In the OSCE, it is possible to evaluate in the same test a student's skills in the areas of interview, physical examination, diagnosis, and therapeutics.10,11  The OSCE, introduced by Harden and colleagues in 1975,12 is a practical method of assessment that is now accepted in several medical programs, mainly because it emphasizes objective evaluation of the student rather than the subjective assessment usual in other forms of testing.9  It has been widely adopted in North America since the 1990s as the primary method for assessing clinical skills in the United States of America and Canada, and it is also used in countries such as Australia, New Zealand, Japan, Germany, the United Kingdom, and India.12,13

The exam consists of 3 steps: (1) definition of the clinical condition to be understood by the interns; (2) definition of the tasks to be performed, depending on each condition; and (3) incorporation of these tasks within simulation-based stations.14  The number of stations can vary from 18 to 20. Normally, each station is short (3 to 20 minutes) and has specific instructions, depending on the condition to be addressed. The examiner evaluates these conditions using a standardized checklist, which relates to the skills that each student should demonstrate at each station. This tool must accurately assess clinical reasoning skills, history taking, physical examination, diagnostic approach, patient positioning, and other procedures.15

The OSCE has been used since 2004 at UAM to assess students enrolled in the 7th semester of the chiropractic program (before starting the internship) and after 6 months of clinical practice during the internship of the 8th semester. This study reports the results of the application of the OSCE to chiropractic students by comparing 2 groups of students before the internship and after 6 months of clinical practice.

Methods

Participants

Students who had taken the courses Integrated Clinic (7th semester) and Internship I (8th semester) as part of the bachelor of chiropractic degree program at the UAM were included in this study. Two different classes of students participated in the test, referred to hereafter as group 1 (n = 36) and group 2 (n = 36). Students who were approved (scored a minimum of 6.0 points) in an initial OSCE were allowed to enroll in the 8th semester of the chiropractic program and start the Internship I course. After 6 months, these interns were tested again with an OSCE. In the 2nd OSCE test, all procedures were similar to the 1st test for the students and examiners, and the stations were essentially the same (Fig. 1). The ethics committee of the UAM approved this observational study.

Figure 1. Flowchart of the turnover of the OSCE.

Facilities

The university simulation laboratory was used for both OSCEs. The laboratory was equipped with 12 rooms containing audio- and video-recording devices, each room separated by a mirrored window (Fig. 2). This setting allowed examiners to view and listen to all the procedures performed by the students during the OSCE without interference. Each room was prepared for the test so that all necessary equipment (examination or treatment tables, sphygmomanometers, reflex hammers, stethoscopes, etc.) was present for the student to use to complete the tasks required during the examination.

Figure 2. Student during the assessment performing the examination while the examiner stands outside the room behind mirrored glass.

Stations

In each station, reasoning and clinical skills were assessed using clinical cases divided into different topics: clinical interview, physical examination, cervical spine, thoracic spine, lumbar spine, lower limbs, upper limbs, imaging, and clinical report. Each station had different questions, depending on the case. The patient was trained to answer all relevant questions. In each station room, written instructions provided the guidelines needed to perform the required procedures. During the assigned procedure at each station, students could make notes of their findings; these notes were for their own use and were not inspected or rated by the examiners. Each station allowed 10 minutes for the student to perform the required procedure. To evaluate the student at each station, an assessment checklist was prepared by each examiner.

Logistics

The students were divided into 4 groups of 9, and the 1st small group started the test while the other students remained in an isolated room under the supervision of a coach; they had no communication with the students who were already being tested. A monitor was responsible for controlling the time of the test, and every 10 minutes the monitor authorized the students to enter their station rooms and start the test. After 5 minutes in a station, the monitor warned the students of the time remaining and did so again when there were 2 minutes left. The test was composed of 9 stations lasting 10 minutes each; thus, the length of each complete circuit was 90 minutes.
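As a concrete illustration of this timing protocol, the following sketch (a hypothetical reconstruction in Python, not software used in the study) generates the announcements the monitor makes during one 90-minute circuit:

# Hypothetical sketch of the circuit timing described above:
# 9 stations of 10 minutes each, with warnings at 5 and 8 minutes.
STATIONS = 9
STATION_MINUTES = 10
CIRCUIT_MINUTES = STATIONS * STATION_MINUTES  # 90 minutes per circuit

def circuit_events(start_minute: int = 0) -> list[tuple[int, str]]:
    """Events the monitor announces during one group's circuit."""
    events = []
    for station in range(1, STATIONS + 1):
        t = start_minute + (station - 1) * STATION_MINUTES
        events.append((t, f"enter station {station}"))
        events.append((t + 5, "warning: 5 minutes elapsed"))
        events.append((t + 8, "warning: 2 minutes remaining"))
    return events

for minute, event in circuit_events()[:6]:  # first 2 stations only
    print(f"{minute:3d} min: {event}")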

After the completion of each circuit, in which each student was tested in the 9 stations, the student left the laboratory and did not have contact with the remaining group. The monitor then called the next group of 9 students to start the test. This routine was repeated until all the students were tested. The evaluation coordinator collected the score for each station and calculated each student's final score for the examination as the average of the station scores. Scores ranged from a minimum of 0 to a maximum of 10; the minimum passing score was set at 6.0. The same testing procedures were performed for both group 1 and group 2, using the same standards.
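A minimal sketch of this scoring rule (illustrative only; the station scores below are hypothetical) is:

# Final score: arithmetic mean of the 9 station scores (each 0-10),
# with 6.0 as the minimum passing score, as described above.
from statistics import mean

PASSING_SCORE = 6.0

def final_score(station_scores: list[float]) -> float:
    assert len(station_scores) == 9, "each student is tested in 9 stations"
    return mean(station_scores)

scores = [7.0, 6.5, 8.0, 5.0, 6.0, 7.5, 8.5, 6.0, 7.0]  # hypothetical
print(round(final_score(scores), 2))          # 6.83
print(final_score(scores) >= PASSING_SCORE)   # True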

Examiners

The professors of the university chiropractic program were responsible for the OSCE student assessment. The examiner responsible for each station previously formulated a checklist for assessing student performance. While the students were performing the procedures required in each station, the professors evaluated them, registering the score on the previously designed evaluation form. Each professor was responsible for 1 station. After evaluating all the students, each examiner gave the scores to the OSCE coordinator, who was also a chiropractic program professor. The same procedures were repeated in group 1 and group 2.

Simulated Patients

Each examiner was responsible for selecting a person to participate as the simulated patient for the station. This volunteer was trained in advance in a standardized manner and was able to mimic the signs and symptoms predetermined by the examiner, in accordance with the procedures required for the station's clinical case. The same procedures were repeated in group 1 and group 2.

Evaluation Criteria

The evaluation of the student was based on the checklist for each station. Each examiner distributed the station's points according to the importance of the elements deemed necessary in each topic (anamnesis, general physical examination, specific physical examination, diagnostic hypothesis, and treatment plan). The final score was the arithmetic mean of the 9 stations. The same evaluation procedures were used in both group 1 and group 2.
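As an illustration of how a station's points might be distributed across these elements, here is a hedged sketch; the items, weights, and awarded points are hypothetical, not the study's actual checklists:

# Hypothetical station checklist: the examiner distributes the station's
# 10 points across the elements according to their importance.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    element: str
    max_points: float  # points allotted to this element by the examiner
    awarded: float     # points the student earned

def station_score(items: list[ChecklistItem]) -> float:
    assert abs(sum(i.max_points for i in items) - 10.0) < 1e-9
    return sum(i.awarded for i in items)

checklist = [
    ChecklistItem("anamnesis", 2.0, 1.5),
    ChecklistItem("general physical examination", 2.0, 2.0),
    ChecklistItem("specific physical examination", 3.0, 2.0),
    ChecklistItem("diagnostic hypothesis", 2.0, 1.5),
    ChecklistItem("treatment plan", 1.0, 1.0),
]
print(station_score(checklist))  # 8.0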

Statistical Analysis

Statistical analysis was performed using SPSS 11.0 software (SPSS Inc, Chicago, IL). The t test was used for unpaired parametric samples and Fisher's exact test was used for comparison of proportions, with significance set at p < .05 for both. Two stations (clinical report and clinical interview) were excluded from the data analysis because their questions were not standardized, making comparisons before and after 6 months and between groups impossible.
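For readers who want to reproduce these comparisons, the following sketch shows equivalent tests in Python with SciPy rather than SPSS; the score arrays and counts are placeholders, not the study's data:

# Unpaired (independent-samples) t test between two groups' final scores
from scipy import stats

group1 = [7.2, 6.8, 7.5, 6.9, 7.1]  # hypothetical final scores
group2 = [7.6, 7.3, 7.8, 7.4, 7.5]  # hypothetical final scores
t_stat, p = stats.ttest_ind(group1, group2)
print(f"t = {t_stat:.3f}, p = {p:.3f}")  # significant if p < .05

# Fisher's exact test comparing approval proportions (2x2 table:
# approved vs not approved in each group; counts are illustrative)
odds_ratio, p = stats.fisher_exact([[25, 11], [25, 11]])
print(f"p = {p:.3f}")  # p = 1.000 for identical proportions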

Results

Seventy-two students participated in the OSCE test. Both groups had 36 subjects. In group 1, 25 of 36 students (80.6%) were approved, with an average score of 7.20 points (Table 1). The highest average was at the physical examination station (8.8 points) and the lowest averages were at the thoracic spine and radiology stations (6.4 points) (Tables 2 and 3). After 6 months, the same 25 students underwent the 2nd OSCE test, with an average score of 7.0 points. The highest average was at the physical examination station (8.10 points) and the lowest average was at the radiology station (4.5 points) (Tables 2 and 3). The average score from the 1st to the 2nd test was not significantly different for this group (p = .424) (Table 4).

Table 1. Percentage of Students Approved on the OSCE

Table 2. Highest and Lowest Scores by Station

Table 3. Average Scores of Students in Each Station in Test 1 and Test 2 in Both Groups

Table 4. Comparison Between the Average Score in Test 1 and Test 2

In group 2, the results of the 1st test showed that 25 of 36 students (80.6%) were approved, with an average of 7.5 points (Table 1). The highest average was at the physical examination station (8.18 points) and the lowest average was at the cervical spine station (6.5 points) (Tables 2 and 3). After 6 months, the same 25 students underwent the 2nd OSCE test, with an average of 6.61 points. The highest average was at the thoracic spine station (7.74 points), and the lowest average was at the lower extremities station (5.10 points) (Tables 2 and 3). The average score from the 1st to the 2nd test was statistically different for this group (p = .002). There was no significant difference between the average grades of the 2 groups (p = .338 for the 1st test; p = .185 for the 2nd test) (Table 4).

Discussion

The OSCE is a versatile examination tool that has been used at many levels of education, including undergraduate, postgraduate, continuing education, and certifying examinations.11

The OSCE provides a structured test process that involves the evaluation of skills, conditions, procedures, and scenarios. It is considered an impartial evaluation method because it is standardized, so the student is not tested subjectively, and because it is rated by more than 1 examiner.16,17  The OSCE protocol includes feedback to the student, which helps avoid favoritism and reliance on the personal opinion of the examiner. Traditional methods of assessment lead to variation in the analysis of the student's skills and complicate the evaluation of the student during the internship.18,19  Similar to traditional methods of clinical evaluation, the OSCE has 3 variables: the student, the patient, and the examiner. However, because the OSCE is a more objective and standardized test, the results can be compared more easily from 1 year to another.19

In the present study, we observed that the average score in the 1st test was higher in both groups when compared with the scores after 6 months of internship. This may be because the 1st test is part of the Integrated Clinic course, which is a prerequisite for Internship I; students who do not achieve the minimum score of 6.0 cannot start the internship. For this reason, the students tend to expend more effort in the 1st OSCE. When we examined the stations separately, we noted that in both groups the students had the highest average scores at the physical examination station in both tests 1 and 2. However, lower scores were observed in test 1 at the imaging station, indicating that more effort from the students is needed for this subject (Table 2). The data showed that imaging deserves special attention, both in teaching and in the students' efforts to review the content.

Group 1 improved its scores at the cervical spine, lumbar spine, and upper limbs stations from test 1 to test 2, while group 2 had better scores in test 2 at the cervical spine, lumbar spine, and thoracic spine stations when compared to test 1 (Table 3). Some of these variations between the 2 tests were significant. The OSCE can objectively evaluate students' achievement, and we believe that the variations were due to the difficulty inherent in each test and the influence of the stress the students feel at the moment of the assessment. In addition, the improvement in test 2 may be related to clinical practice.

In general, the OSCE method may be considered one of the best ways to evaluate students' skills because, during the assessment, the examiner uses the same criteria and checklists to evaluate all students' performances, which makes the final score more reliable and objective.20  One of the most important disadvantages of this type of assessment is the time needed for its preparation. As with many educational advances, the benefits are achieved in part through greater dedication by educators; this commitment requires time from the examiners both during the preparation of the test and during its administration.18  Another possible disadvantage is that this approach can give the student the feeling that knowledge and skills are being evaluated separately, discouraging him or her from looking at the patient as a whole.19  In addition, the emotional stress that the students are subjected to before and during the evaluation can be considered a limitation of the OSCE, because the students know that their acceptance into the internship depends on this evaluation and that failing to achieve the minimum score means failing to be accepted.21

The OSCE is intended to measure clinical skills. Although not emphasized, cognitive abilities are also assessed during the OSCE. Students must know the right questions to ask, the proper examinations to perform, and the correct information to share in the context of particular clinical case scenarios.22 

Having completed this study, we feel there may be a need for more studies to evaluate the development of the students. For example, an OSCE administered on a 3rd occasion, during the 2nd quarter of Internship II, could potentially assess more complex educational processes. We propose that in the 1st test, only clinical reasoning should be evaluated; in the 2nd test, the student's ability to integrate clinical reasoning and the treatment program; and in the 3rd test, the combination of clinical reasoning, the treatment program, and management of the patient.

Limitations

There are several limitations to this study. One limitation is the lack of standardization of the checklists used for the examination stations. While we feel that the level of difficulty was similar in all stations, the questions and checklists differed because the cases were not the same. Thus, there is a need to standardize the checklists and the cases in each station. Standardization can reduce subjectivity in the students' evaluation, but it will require agreement among all the professors responsible for each station.

Another limitation is the level of student commitment to the 2nd test. Student commitment clearly varied in this study because the 1st test is a high-stakes examination; if students fail this test, they must wait 1 year to be part of the next group of interns. However, if they fail the 2nd test, there is not as much at stake. The 2nd test is part of a group of scores, and failure in this examination does not mean they will lose a year of study.

Conclusion

The performance of the chiropractic students on the OSCE in group 1 was not significantly different between the tests before and after 6 months of clinical practice in the internship cycle. However, in group 2, there was a significant decrease in the average score after 6 months of clinical practice. There was no significant difference in the mean final score between the 2 groups.

The authors thank University Anhembi Morumbi–Laureate International Universities for the support to accomplish this research.

This research was conducted as an internal study at the institution and was funded in full by University Anhembi Morumbi–Laureate International Universities. The authors declare that there are no conflicts of interest to disclose.

References

1. Mayer JM. The quality of chiropractic college education: a survey of practicing chiropractors. J Chiropr Educ. 1999;13:131–136.
2. Riva JJ, Lam JM, Stanford EC, Moore AE, Endicott AR, Krawchenko IE. Interprofessional education through shadowing experiences in multi-disciplinary clinical settings. Chiropr Osteopat. 2010;18:31.
3. Bracher ESB, Benedicto CC, Facchinato APA. Quiropraxia/chiropractic. Rev Med. 2013;92:173–182.
4. Winterstein JF. The search for intra-professional harmony. J Chiropr Humanit. 1996;6:2–10.
5. Organização Mundial da Saúde. Diretrizes da OMS sobre formação básica e a segurança em quiropraxia. Novo Hamburgo: Feevale; 2006.
6. Wong JJ, Di Loreto L, Kara A, et al. Assessing the attitudes, knowledge and perspectives of medical students to chiropractic. J Can Chiropr Assoc. 2013;57:18–31.
7. Karim R. Building interprofessional frameworks through educational reform. J Chiropr Educ. 2011;25:38–43.
8. Gatterman AAM. The state of the art of research on chiropractic education. J Manipulative Physiol Ther. 1997;20:179–184.
9. Winterstein J. Philosophy of chiropractic: a contemporary perspective. J Am Chiropr. 1994;38(pt 2):64–71.
10. Sandoval GE, Valenzuela PM, Monge MM, et al. Analysis of a learning assessment system for pediatric internship based upon objective structured clinical examination, clinical practice observation and written examination. J Pediatr (Rio J). 2010;86:131–136.
11. Argawal A, Batra B, Sood AK, et al. Objective structured clinical examination in radiology. Indian J Radiol Imaging. 2010;20:83–88.
12. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. Br Med J. 1975;1:447–451.
13. OSCE home. What is Objective Structured Clinical Examination, OSCEs?
14. Berkenstadt H, Ziv A, Gafni N, Sidi A. Incorporating simulation-based objective structured clinical examination into the Israeli National Board Examination in anesthesiology. Anesth Analg. 2006;102:853–858.
15. Hickling FW. A comparison of the objective structured clinical examination results across campuses of the University of the West Indies. W Indian Med J. 2005;54:139–143.
16. Amaral FTV, Troncon LEA. Participation of medical students as examiner in an objective structured clinical examination. Rev Bras Educ Med. 2007;31:81–89.
17. Smee S. ABC of learning and teaching in medicine: skill based assessment. BMJ. 2003;326:703–706.
18. Morag E, Lieberman G, Volkan K, Shaffer K, Novelline R, Lang EV. Clinical competence assessment in radiology: introduction of an objective structured clinical examination in the medical school curriculum. Acad Radiol. 2001;8:74–81.
19. Stevenson HM, Downie WW, Wilson GM. Medical education: assessment of clinical competence using objective structured examination. BMJ. 1975;1:447–451.
20. Megale L. Evaluation Processes in Medical School: Student Performance Against the Competencies in Pediatrics and Its Significance for Teaching. Belo Horizonte: Universidade Federal de Minas Gerais Faculdade de Medicina; 2011.
21. Troncon LEA. Clinical skills assessment: limitations to the introduction of an "OSCE" (objective structured clinical examination) in a traditional medical school. São Paulo Med J. 2004;122:12–17.
22. Prislin MD, Fitzpatrick CF, Lie D, Giglio M, Radecki S, Lewis E. Use of an objective structured clinical examination in evaluating student performance. Fam Med. 1998;30:338–344.

Author notes

Ana Paula Facchinato is the chiropractic course coordinator and an associate professor at University Anhembi Morumbi (Almeida Lima, 1134, Mooca, 03164-000, São Paulo, Brazil; [email protected]). Camila Benedicto is an associate professor at University Anhembi Morumbi (Almeida Lima, 1134, Mooca, 03164-000, São Paulo, Brazil; [email protected]). Aline Mora is a chiropractic resident at University Anhembi Morumbi (Almeida Lima, 1134, Mooca, 03164-000, São Paulo, Brazil; [email protected]). Dayane Cabral is an associate professor at University Anhembi Morumbi (Almeida Lima, 1134, Mooca, 03164-000, São Paulo, Brazil; [email protected]). Djalma Fagundes is an associate professor at University Anhembi Morumbi (Almeida Lima, 1134, Mooca, 03164-000, São Paulo, Brazil; [email protected]). Address correspondence to Ana Paula A. Facchinato, Anhembi Morumbi University, Rua Doutor Almeida Lima, 1134, Mooca, 03164-000, São Paulo, Brazil; [email protected]. This article was received April 25, 2011; revised April 30, 2012, April 7 and October 10, 2014; and accepted October 26, 2014.

Concept development: APF, CCB, DMC. Design: APF, CCB, DJF. Supervision: APF, CCB, DMC, DJF. Data collection/processing: APF, CCB, DMC. Analysis/interpretation: APF, AGM, DMC, DJF. Literature search: APF, CCB, AGM. Writing: APF, CCB, AGM, DJF. Critical review: APF, AGM, DMC, DJF.