Abstract
Context: Appropriate methods for evaluating clinical proficiencies are essential in ensuring entry-level competence.
Objective: To investigate the common methods athletic training education programs use to evaluate student performance of clinical proficiencies.
Design: Cross-sectional design.
Setting: Public and private institutions nationwide.
Patients or Other Participants: All program directors of athletic training education programs accredited by the Commission on Accreditation of Allied Health Education Programs as of January 2006 (n = 337); 201 (59.6%) program directors responded.
Main Outcome Measure(s): The institutional survey consisted of 11 items regarding institutional and program demographics. The 14-item Methods of Clinical Proficiency Evaluation in Athletic Training survey consisted of respondents' demographic characteristics and Likert-scale items regarding clinical proficiency evaluation methods and barriers, educational content areas, and clinical experience settings. We used analyses of variance and independent t tests to assess differences among athletic training education program characteristics and the barriers, methods, content areas, and settings regarding clinical proficiency evaluation.
Results: Of the 3 methods investigated, simulations (n = 191, 95.0%) were the most prevalent method of clinical proficiency evaluation. An independent-samples t test revealed that more opportunities existed for real-time evaluations in the college or high school athletic training room (t189 = 2.866, P = .037) than in other settings. The orthopaedic clinical examination and diagnosis (4.37 ± 0.826) and therapeutic modalities (4.36 ± 0.738) content areas were scored the highest for sufficient opportunities for real-time clinical proficiency evaluations. An inadequate volume of injuries or conditions (3.99 ± 1.033) and injury/condition occurrence not coinciding with the clinical proficiency assessment timetable (4.06 ± 0.995) were barriers to real-time evaluation. One-way analyses of variance revealed no difference between athletic training education program characteristics and the opportunities for and barriers to real-time evaluations among the various clinical experience settings.
Conclusions: No one primary barrier hindered real-time clinical proficiency evaluation. To determine athletic training students' clinical proficiency for entry-level employment, athletic training education programs must incorporate standardized patients or take a disciplined approach to using simulation for instruction and evaluation.
Key Points
Of 3 commonly used evaluation methods for student performance of clinical proficiencies (real time, simulations, standardized patients), simulations were used most frequently.
Opportunities for real-time evaluation were greater in high school and collegiate athletic training rooms than in other settings. Orthopaedic clinical examination and diagnosis, therapeutic modalities, conditioning and rehabilitative exercise, and risk management were the content areas most often evaluated in real time.
Athletic training education programs should either incorporate the use of standardized patients or take a disciplined approach to using simulation in clinical proficiency instruction and evaluation.
The fourth edition of the Athletic Training Educational Competencies1 contains the clinical proficiencies for effective preparation of the entry-level athletic trainer. Proficient is defined in the fourth edition as “performing with expert correctness and facility.”1(p3) The clinical proficiencies represent “a listing of the student's clinical training before entering the profession” and guide decision making and skill integration.1(p3) The proficiencies should be a measure of “real-life” application.1(p3) The successful development of clinical proficiencies must represent a significant focus of the student's clinical experience,2 and the proficiencies must be organized in such a way that faculty and staff of the athletic training education program (ATEP) can evaluate and monitor student progress over time.1
Certainly, then, the primary goal of clinical education is to aid in the acquisition, development, and mastery of these clinical proficiencies.3 It is important that clinical proficiencies be evaluated in a manner similar to their application in real life. For instance, inexperienced surgeons make surgical errors that could be avoided if their skills were first evaluated (and then corrected) in a scenario that mimicked surgery performed on an actual patient.3 Similarly, the athletic training clinical proficiencies must be evaluated in a realistic fashion. Although a certified athletic trainer (AT) is thought to be competent upon passing the Board of Certification examination,4 current testing methods do not necessarily evaluate clinical proficiencies. Rather, this responsibility lies chiefly with the accredited ATEPs.2 However, we found no investigations in the literature concerning how this responsibility was met. The purpose of our study was to investigate the common methods ATEPs use to evaluate student performance of clinical proficiencies. The following research questions guided this investigation:
What common methods (eg, real time, simulations, standardized patients [SPs]) are used to evaluate student performance of clinical proficiencies?
What athletic training education proficiency content areas lend themselves more easily to real-time clinical proficiency evaluation?
Do barriers exist that generally hinder the common methods of clinical proficiency evaluation?
Are there sufficient opportunities in a variety of clinical education settings for real-time clinical proficiency evaluation?
Are there differences between the demographics/characteristics of an ATEP and the methods, content areas, settings, and barriers regarding clinical proficiency evaluation?
METHODS
Respondents
All directors of ATEPs (except at the researchers' institution) accredited by the Commission on Accreditation of Allied Health Education Programs as of January 2006 (n = 337) were solicited via postal mail to participate in this study. The program directors (PDs) were to complete an institutional survey and distribute the Methods of Clinical Proficiency Evaluation in Athletic Training (MCPEAT) survey to the person most responsible for coordinating clinical proficiency evaluation at their institution. If the PD was primarily responsible for this, then that person also completed the MCPEAT survey. A total of 201 PDs (59.6%) completed the institutional survey. A total of 199 programs (59.19%) returned the MCPEAT survey, which was primarily completed by PDs (n = 148, 74.4%) and coordinators of clinical education (n = 42, 21.1%). Respondents represented all National Athletic Trainers' Association districts and were affiliated with either the National Collegiate Athletic Association (NCAA) or National Association of Intercollegiate Athletics. Respondent demographics are presented in Table 1.
Procedures
Institutional review board approval was obtained before the study began. Survey packets contained the following items: a cover letter providing instructions and the need and purpose for the study; 2 survey instruments; a complimentary pen (to stimulate interest and to improve response rate); and an addressed, postage-paid return envelope. Program directors were instructed to complete the institutional survey. The MCPEAT survey was to be distributed by the PD to the individual most responsible for coordinating clinical proficiency evaluation within the ATEP. If no such individual served in this role, then the PD also completed this survey. The PD was instructed to return both completed surveys in the enclosed envelope within a 3-week period. Informed consent was implied upon completion and return of the institutional and MCPEAT surveys. Both surveys were coded to track participating institutions. A reminder e-mail was sent to the PD at the beginning of the week in which the surveys were to be returned. The institutions that had not responded received follow-up e-mails and phone calls for an additional 2 weeks. All principal investigators were blinded as to who returned completed surveys. All data entry, coding, and follow-up e-mails and phone calls were completed by a graduate assistant not directly associated with the investigation.
Instrumentation
Two structured focus groups were held (one at the 2005 Great Lakes Athletic Trainers' Association Winter Meeting and Clinical Symposium and the other at the 2005 National Athletic Trainers' Association Annual Meeting and Clinical Symposia) to determine which constructs were appropriate for evaluation of clinical proficiencies. With the information obtained through the focus group discussions, we developed 2 instruments. The institutional survey consisted of 11 items regarding institutional and program demographics and characteristics (eg, town or city population, NCAA division, number of Approved Clinical Instructors [ACIs], financial reimbursement of ACIs). The 14-item MCPEAT survey consisted of 9 items regarding demographic characteristics of the respondent (eg, primary title, years as an ACI) and 3 common evaluation methods, including definitions (ie, real time, simulation, SP). In addition, 4 Likert-scale items (range, 1 = strongly disagree to 5 = strongly agree) assessed respondents' perceptions regarding opportunities for real-time clinical proficiency evaluations in various clinical education settings (eg, collegiate athletic competition, corporate/industrial setting, high school athletic practice) and in the educational content areas (eg, risk management and injury prevention, pharmacology, conditioning and rehabilitative exercise), as well as barriers to real-time clinical proficiency evaluation (eg, inadequate volume of injuries, insufficient number of ACIs, patient health care is often a priority). Item 14 solicited qualitative comments. Note that at the time this study was conducted, the third edition of the Athletic Training Educational Competencies5 was being used.
Respondents were invited to provide comments for the 2 questions on sufficient opportunities to engage in real-time clinical proficiency evaluations and other barriers to real-time evaluations of clinical proficiencies. The 3 methods of clinical proficiency evaluation examined in this research were defined in the MCPEAT survey (Table 2).
Five PDs reviewed both surveys for clarity and format, and improvements were made accordingly. Test-retest reliability was assessed with 18 programs. We computed ϕ correlation coefficients to measure agreement on dichotomous questions. The median coefficients were .787 and .609 for the institutional survey and MCPEAT survey, respectively. For nondichotomous data, Pearson product moment coefficients of correlation were used to determine the test-retest reliability on applicable questions. The median coefficients were .954 and .635 for the institutional and MCPEAT surveys, respectively. Although the reliability measures for the institutional survey were high, the measures for the MCPEAT survey were lower. This finding could be due in part to the nature of estimating how clinical proficiencies are evaluated, because no evaluation standards currently exist. To obtain an additional reliability measure, each survey contained 3 identical questions regarding the method of clinical proficiency evaluation used by that ATEP. We calculated ϕ correlation coefficients to measure agreement between the responses of the PD and the individual responsible for clinical proficiency evaluation at each institution (if different from the PD). The median coefficient for this measure was .720.
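For readers who wish to reproduce this kind of reliability check, the following is a minimal sketch, assuming dichotomous items coded 0/1 and Likert items coded 1 through 5. The data values, variable names, and the choice of Python with SciPy are illustrative assumptions; the analyses reported here were run in SPSS.

```python
import numpy as np
from scipy.stats import pearsonr

def phi_coefficient(x, y):
    # For 0/1-coded dichotomous items, the phi coefficient equals
    # the Pearson correlation of the binary codes, so pearsonr
    # can be reused for the computation.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return pearsonr(x, y)[0]

# Hypothetical test and retest responses from the 18 retest
# programs for one dichotomous item (1 = yes, 0 = no).
test   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
retest = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1]
print(f"phi = {phi_coefficient(test, retest):.3f}")

# For nondichotomous (eg, Likert-scale) items, the Pearson
# product moment correlation is applied directly.
likert_test   = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 3, 4, 4, 2, 5, 3]
likert_retest = [4, 4, 3, 4, 2, 5, 5, 3, 4, 5, 3, 4, 3, 2, 5, 3]
r, _ = pearsonr(likert_test, likert_retest)
print(f"Pearson r = {r:.3f}")
```

The survey-level figures reported above (.787, .609, .954, .635, .720) are medians of such per-item coefficients; taking the median dampens the influence of any single poorly performing item.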
Data Analysis
Descriptive statistics were computed on all items from both surveys. We used an analysis of variance to analyze differences between select demographics and characteristics of the ATEPs (eg, population of town, number of students in the professional phase of ATEP) and the barriers, methods, content areas, and settings regarding clinical proficiency evaluation. In addition, an independent-samples t test was calculated to analyze the differences between select demographics and characteristics of the ATEP (eg, compensation for ACIs, number of ACIs associated with the ATEP) and the methods, settings, and opportunities for feedback regarding clinical proficiency evaluation. The α level was set at .05, and Bonferroni corrections were used for multiple comparisons. The minimum target sample size of respondents was 30, which yielded a power of .92 for detecting a large effect. Sample sizes of 25 and 20 yielded powers of .86 and .76, respectively. Data analysis was performed using SPSS (version 13.0; SPSS Inc, Chicago, IL).
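As an illustration of the inferential tests described above, here is a minimal sketch, again assuming hypothetical Likert-scale data and Python with SciPy in place of the SPSS procedures actually used; the group labels and sample sizes are invented for demonstration.

```python
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert barrier ratings grouped by a program
# characteristic with 3 levels (eg, town population category).
small  = rng.integers(1, 6, size=40)
medium = rng.integers(1, 6, size=40)
large  = rng.integers(1, 6, size=40)

# One-way ANOVA across the 3 categories.
f_stat, p_anova = f_oneway(small, medium, large)
print(f"ANOVA: F = {f_stat:.3f}, P = {p_anova:.3f}")

# Independent-samples t test for a dichotomous characteristic
# (eg, programs that compensate ACIs vs those that do not).
compensated   = rng.integers(1, 6, size=60)
uncompensated = rng.integers(1, 6, size=60)
t_stat, p_t = ttest_ind(compensated, uncompensated)
print(f"t test: t = {t_stat:.3f}, P = {p_t:.3f}")

# Bonferroni correction for k comparisons: each P value is judged
# against alpha / k rather than alpha itself.
alpha, k = .05, 3
print(f"Bonferroni-adjusted per-test alpha: {alpha / k:.4f}")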
Although this study was not qualitative in nature, a sufficient number of comments were provided to warrant qualitative analysis. Written data were collected from 2 MCPEAT survey questions: (1) Do you feel that your students engage in a sufficient number of real-time clinical experiences to adequately prepare them as entry-level ATs? (2) List other barriers that may hinder real-time evaluation in your ATEP.
We used interpretative coding to analyze all qualitative data.6 This process involved taking each individual comment (coding) and developing categories of concepts, which focused on respondents' perspectives, issues, and concerns. The concept categories then were organized into themes using pattern analysis,6 in which labels were assigned to the themes to capture their meaning. Three analysts evaluated the data to ensure trustworthiness and accurate interpretation.
RESULTS
Institutional Results
According to 153 of the respondents (76.1%), the PD primarily coordinated the overall plan for clinical proficiency evaluation at the institution. Most respondents (n = 194, 97.5%) still tracked the completion of clinical proficiencies on paper, whereas a few (n = 24, 12.1%) used an online matrix or Web site. Most respondents (n = 168, 83.6%) did not track whether clinical proficiencies were evaluated in real time.
Nearly all respondents (n = 185, 93%) first required evaluation of clinical proficiencies in a controlled classroom or laboratory setting. A total of 48 respondents (24.1%) stated that their students were required to have these same clinical proficiencies reevaluated during clinical experiences within the same semester, whereas 71 (35.7%) reported that students were reevaluated by the end of the next semester.
A total of 64 institutions provided compensation to their ACIs, and 137 did not. Of the 64 providing compensation, 35 (54.7%) stated that the level of compensation was adequate, and 29 (45.3%) described it as inadequate. Regarding the methods used for clinical proficiency evaluation, an independent-samples t test revealed no difference between respondents from institutions that provided compensation for ACIs and those from institutions that did not. According to 72 (39.6%) of the respondents, their students were required to have these same clinical proficiencies reevaluated during clinical experiences within the same semester (of these, 8.8% [n = 6] indicated within a week and 4.4% [n = 3] within a month), whereas 71 (39.0%) noted that students had to be reevaluated during clinical experiences by the end of the next semester. Furthermore, 39 (21.4%) of the respondents noted that clinical proficiencies were reevaluated either by the end of the last semester or according to other timelines. Similarly, an independent-samples t test revealed no difference in evaluation methods between institutions providing designated release time for ACIs and those that did not. Most respondents (89.6%, n = 180) had fewer than 20 ACIs associated with their ATEP. An independent-samples t test revealed that ATEPs with 10 or more ACIs differed from those with fewer than 10 ACIs in having more opportunities for real-time clinical proficiency evaluation at high school athletic practices (t176 = 4.035, P < .001) and competitions (t178 = −3.113, P = .002).
Methods of Clinical Proficiency Evaluation in Athletic Training Survey Results
Descriptive statistics for real-time, simulated, and SP methods of clinical proficiency evaluation are presented in Table 3. A total of 178 (89.4%) of the respondents evaluated clinical proficiencies in real time, but only 48 (27.0%) evaluated more than 50% of all clinical proficiencies in real time. This finding indicates that other methods are used more than half the time for evaluation of clinical proficiencies. Half of the respondents (n = 100) indicated that their students engaged in a sufficient number of real-time clinical proficiency evaluations to prepare them for entry-level practice.
Respondents were asked to provide comments as to whether they felt their students engaged in a sufficient number of real-time clinical experiences. Representative comments for the 2 themes and their subthemes emerging from these data are presented in Figure 1. Theme 1, "Students do engage in a sufficient number of real-time evaluations," describes how students regularly engage in real-time clinical proficiency evaluations. Its first subtheme, quality of clinical education, included comments that students are presented with real-time experiences daily and that attempts are made each day to incorporate those encounters into a student's clinical experience. The second subtheme, qualifying comments, included observations that although the number of real-time clinical experiences is sufficient, not all of those experiences occur at the appropriate time based on students' learning needs. For example, a student who recently learned how to properly secure an individual to a spine board may not have a timely clinical experience in which this skill can be practiced and evaluated. Theme 2, "Students do not engage in a sufficient number of real-time evaluations," described how real-time evaluations are not feasible due to either the time demands of the ACI or the lack of specific clinical proficiency evaluation opportunities. The first subtheme, ACI role strain, addressed the time demands and the various roles and duties (eg, patient care, administrative tasks, student education) of ATs who are serving as ACIs. The second subtheme, insufficient opportunities for real time, described the insufficient occasions for real-time experiences.
Concerning other methods of clinical proficiency evaluation, most of the respondents (n = 186, 93.5%) used simulated clinical proficiency evaluations. Of these respondents, 96 (51.6%) used these evaluations more than half the time, and 162 (81.4%) used scenarios in which students integrate skills to solve clinical problems. Furthermore, 113 (56.8%) of the respondents used SPs to evaluate clinical proficiencies; 40 (35.4%) used this approach to conduct clinical proficiency evaluations more than 50% of the time.
Respondents generally reported sufficient opportunities to provide feedback during and after real-time, simulated, and SP clinical proficiency evaluations. We noted sex differences in perceptions as to whether opportunities to provide this feedback were sufficient. Compared with male ACIs, female ACIs more often reported having sufficient time and opportunity to provide meaningful feedback after real-time and simulated (independent-samples t tests: t187 = −3.589, P < .001, and t189 = −2.638, P = .009, respectively) clinical proficiency evaluations.
Educational Content Areas and Clinical Proficiency Evaluation
Descriptive statistics regarding the 12 educational content areas and respondents' perceptions as to whether opportunity was sufficient for real-time clinical proficiency evaluation in each area are presented in Table 4. The Orthopaedic Clinical Examination and Diagnosis (4.37 ± 0.826), Therapeutic Modalities (4.36 ± 0.738), Conditioning and Rehabilitative Exercise (4.28 ± 0.775), and Risk Management and Injury Prevention (4.21 ± 0.763) educational content areas were scored the highest, with more than 85% of respondents agreeing or strongly agreeing that sufficient opportunities existed in each of these content areas for real-time clinical proficiency evaluations. The Nutritional Aspects of Injury and Illness (2.93 ± 1.008) and Psychosocial Intervention and Referral (2.76 ± 1.045) educational content areas were scored the lowest, with 40% of respondents disagreeing or strongly disagreeing that sufficient opportunities existed in each of these content areas for real-time clinical proficiency evaluations.
Clinical Experience Settings
Descriptive statistics on clinical experience settings and their ability to provide sufficient opportunities for real-time clinical proficiency evaluations are presented in Table 5. The collegiate or high school athletic training room (4.25 ± 1.010), collegiate athletic practice (4.03 ± 1.063), and high school athletic practice (3.99 ± 1.068) settings scored the highest, with more than 70% of respondents agreeing or strongly agreeing that these settings provided sufficient opportunities for real-time clinical proficiency evaluations.
Independent-samples t tests revealed that respondents reported more opportunities for real-time clinical proficiency evaluations in collegiate athletic practice (t191 = 3.551, P = .008), collegiate athletic competition (t190 = 3.364, P = .001), and high school athletic competition (t179 = 2.601, P = .010) than in other clinical experience settings (eg, college/high school athletic training room, corporate/industrial setting, rehabilitation clinic). A 1-way analysis of variance revealed no difference between the population of the town in which the ATEP was located and the opportunities for real-time evaluations among the clinical education settings.
Barriers to Real-Time Clinical Proficiency Evaluation
Descriptive statistics for barriers to real-time clinical proficiency evaluations are presented in Table 6. Most respondents (n = 150, 75.4%) either agreed or strongly agreed that a barrier to real-time clinical proficiency evaluation was that the actual occurrence of an injury or condition does not conveniently coincide with the evaluation timetable established for a particular clinical proficiency. In addition, 78.4% (n = 156) of the respondents agreed or strongly agreed that an inadequate volume of injuries or conditions was a barrier to real-time evaluation. We also noted that 24.6% (n = 49) of the respondents agreed or strongly agreed that a coach or administrator who provided minimal support for clinical education was a barrier to real-time evaluation. A 1-way analysis of variance revealed no difference between the population of the town in which the ATEP was located and barriers to clinical proficiency evaluation.
Respondents also were asked to comment about other barriers they believed hindered real-time evaluation in their ATEPs. Representative comments regarding the 2 themes and various subthemes that emerged from these data are presented in Figure 2. The first theme, ACI priorities (3 subthemes), included comments regarding the various job-related responsibilities (eg, patient care, administrative tasks, student education) of the ACI-AT. Its first subtheme, ACI attitudes toward clinical proficiency evaluation, indicated that some ACIs were not willing to allow students to perform real-time clinical proficiency evaluations or were not dedicating time to student evaluation. The second subtheme was ACI role strain, which in this case referred to the strain of providing both patient care (listed as the higher priority) and student education. Comments described how ATs feel that they already are overworked with job responsibilities, and although they are interested and willing to serve as ACIs, the time and effort needed to evaluate clinical proficiencies is often a problem. The third subtheme was lack of ACI interest, which described how patient care is the primary interest of the ACI-AT and student education is of less interest. The second theme, opportunities for clinical education in collegiate athletics, represented the high importance of health care provision in collegiate athletics. Collegiate athletes (particularly in high-profile sports) are often considered "superstars," with the expectation that they will receive their care only from ATs or physicians. These expectations diminish clinical experiences for students and related real-time opportunities for clinical proficiency evaluations.
DISCUSSION
Institutional Survey
Regarding the methods used for clinical proficiency evaluation, we found no difference between the ATEPs that provided their ACIs with release time and compensation and those that did not. Perhaps those factors associated with role strain cannot be superseded by monetary or even time compensation. Of the 64 institutions that provided compensation, 29 (45.3%) reported the compensation as inadequate. More research is needed to understand the relationship between compensation and release time provided to these ACIs and performance of their duties relative to various common methods of clinical proficiency evaluation. It appears that students in ATEPs with 10 or more ACIs have increased opportunities for real-time evaluations (particularly in collegiate and high school athletic training rooms, high school athletic practices and competitions, and orthopaedic sports medicine clinics). We are unsure as to whether the difference between the number of ACIs and real-time clinical proficiency evaluations is due to more real-time opportunities at those particular settings, more ACIs overall to evaluate students, or a combination of both.
Very few (n = 33, 16.4%) of the ATEPs tracked the various methods by which clinical proficiencies are being evaluated. Presently, medical school clinical education standards require that the types of patients (real or simulated) students encounter be quantified.7 This monitoring helps to ensure that medical students have adequate clinical education experiences. We believe that ATEPs also should track how clinical proficiencies are being evaluated. With this information, the educational content areas and clinical proficiencies that need to be more carefully evaluated via simulations or with SPs can be determined. A need for comprehensive monitoring of clinical proficiency evaluations in ATEPs today is evident. Our findings demonstrated that 19.6% (n = 39) of the ATEPs did not require reevaluation of clinical proficiencies during the same or the next semester. We wonder, then, which criteria are being used to determine student progression in the ATEP.
Methods of Clinical Proficiency Evaluation
Real-time clinical proficiency evaluation was defined as the time when an athletic training student was engaged directly with an actual patient or athlete. Although a majority of ATEPs (89.4%) reported that clinical proficiencies are evaluated in real time, only 24% reported that real-time evaluations of clinical proficiencies are used more than half of the time. This finding indicates that most respondents are more often using methods other than real-time evaluation. At least half of those responding would prefer more real-time clinical proficiency evaluations for their students. It is unlikely that opportunities will be sufficient for real-time clinical proficiency evaluations, regardless of how many clinical hours per week a student engages in, due to the unpredictability of these opportunities occurring at the right place and at the right time. This partially explains why other methods (simulations, SPs) are used for clinical proficiency evaluations.
A simulation was defined as a scenario or clinical situation in which a student evaluates a mock patient or athlete who portrays a fake injury or condition (eg, shoulder pain, acute cervical spine injury). The mock patient or athlete is an individual (typically a peer student or ACI) who has had no training to portray the injury or condition in a standardized and consistent fashion. A vast majority (94%) of the respondents reported using simulations at some point to evaluate clinical proficiencies. Simulations were used more than 50% of the time to evaluate clinical proficiencies by a little more than half of the respondents. No studies have been published on the use of simulations to evaluate clinical proficiencies, although some athletic training education literature does present simulations as a teaching and learning tool. For instance, one group8 concluded that videotaped simulations are useful in developing athletic training students' critical thinking during an injury evaluation. Students in a senior-level clinical laboratory course were provided with medical documentation on a specific injury (eg, glenoid labrum tear) from an actual patient. After reading through the documentation, one student developed and acted out a "script" regarding the injury, while a fellow student completed the evaluation of that injury. The evaluation was videotaped and was viewed by the class. The students then offered the diagnosis and differentials, including supporting rationales. They also provided the student who completed the evaluation on the mock patient or athlete with summative and formative feedback. Other authors9 have described how bleeding control, wound care, and blister care simulations can be used to challenge students. Using fake blood (ie, catsup), the authors discussed the benefits of the simulation, including minimizing the exposure to bloodborne pathogens that can occur in a real-time situation. Constructing quality simulations requires that they be inherently meaningful and at the appropriate level of "real life" for the student.10
The last method of clinical proficiency evaluation we investigated was the use of SPs. A standardized patient was defined as an individual who has undergone training to more formally portray an injury or illness in a consistent fashion to multiple students. More than half (57%) of the respondents reported using SPs. Of those, more than one third (36%) reported using them more than 50% of the time. This finding was unexpected. Although evaluations with SPs are used and reported in the medical and allied health literature (eg, medical education,11,12 nursing education,13,14 physical therapy education15), we appear to be the first to mention them in the athletic training education literature. However, a movement toward greater use of SPs in athletic training education does appear to be occurring. The fourth edition of the Athletic Training Educational Competencies1 stated that if actual patients are not available for assessment of the clinical proficiencies, then standardized or simulated patients or scenarios should be used to evaluate students.
Given the apparent resemblance of SPs to simulations, respondents likely confused these 2 evaluation methods, despite the definitions provided for both on the MCPEAT survey instrument. Again, a simulation involves a mock patient or athlete who has had no formal training in a case and is not expected to portray the case in a consistent fashion to multiple students.16 An SP encounter is different in that a case must be carefully developed and the individual must be trained to accurately and consistently portray that case.
A case template or uniform document is most often used in medical schools to develop the cases an SP will portray (eg, migraine headache due to domestic violence, hypertension, giving bad news in the form of a cancer diagnosis).16 Each SP case, optimally derived from a real-life condition, is developed by a team of individuals (eg, physician, faculty member, SP trainer). Once the case is developed, an SP is found or recruited who fits the age, sex, and physical characteristics needed for the case. That individual then undergoes individual or group training with an SP trainer (an individual who is experienced or trained to work with SPs). The formal training for a specific case can last anywhere from 30 minutes to more than 4 hours, depending on the characteristics and complexity of the case. Training an SP typically consists of the SP trainer verbally reviewing the content of the case (eg, SP name, social history, medical history) with the SP. The SP also reviews a script or written document that explains the case and how the SP should answer certain questions (eg, Have you had this condition before? Are you married?). Any physical findings that need to be portrayed, such as pain, fear, and anxiety, are practiced. For example, an SP who is being trained in an appendicitis case would be taught to display the proper characteristics of pain for that particular case. If the SP also is going to evaluate the student (eg, Did the student palpate the abdomen? Did the student ask your name?), then proper procedures for completing the written evaluation also are included in the training. Once the initial training is complete, the SP may return at a later date for a “tune-up” or short practice with the SP trainer just before the encounter with the student.
Substantial evidence exists in the medical literature that SPs are widely accepted to assess the clinical competence and performance of medical students.11,12 The author17 of a literature review commented on the realism of SP encounters. Based on research in which SPs were sent into physicians' offices unannounced, the conclusion was that well-trained SPs are difficult to differentiate from real patients.17 Over the past 30 years, SPs have been used in medical education to evaluate (and teach) students' clinical skills.18 The SPs are used in medical education to ensure that students accurately and realistically experience a variety of clinical situations before practicing them on actual patients. Recently, other allied health care professions, such as nursing and physical therapy, have begun to investigate the effect of SPs in their professional preparation programs. Ebbert and Connors13 described the implementation of SP experiences in their nursing curriculum; students agreed that SP experiences were realistic and that feedback from the SP was helpful. In an investigation with nursing students,14 SP encounters were compared with traditional teaching methods (lecture and laboratory practice with a model) to determine their effects on patient evaluation skills. Students exposed to SPs were more effective in identifying patient needs, performing clinical skills, and communicating with patients. In another study,15 physical therapy students were exposed to SPs after a 7-week module on diabetes. The instruction and SP experience improved first-year students' attitudes toward diabetes, which likely resulted in better patient care.
Certainly one of the benefits of SPs is that they are more available and convenient than traditional educational methods for teaching and evaluating students.11 Athletic training students (like medical, nursing, and physical therapy students) cannot reasonably be exposed to the plethora of injuries and conditions for which they will need to be prepared. As in medical clinical education, athletic training students' real-time clinical proficiency evaluation (and instruction) is limited by the timely occurrence of an injury or condition. For example, 40% of the respondents in this study disagreed or strongly disagreed that sufficient opportunities existed in the Nutritional Aspects of Injury and Illness and Psychosocial Intervention and Referral content areas. In contrast, the Orthopaedic Clinical Examination and Diagnosis, Therapeutic Modalities, Conditioning and Rehabilitative Exercise, and Risk Management and Injury Prevention content areas often provided opportunities for real-time evaluations. The SPs certainly could provide students with enhanced experiences regarding the Nutritional Aspects of Injury and Illness and Psychosocial Intervention and Referral content areas. Without authentic patient encounters, proper development and evaluation of students' clinical judgment and confidence are at risk.
Barriers to Clinical Proficiency Evaluation
It appears from our data that no one barrier primarily hindered real-time clinical proficiency evaluation. Respondents either strongly agreed or agreed (78%) that a barrier to real-time clinical proficiency evaluation was that the actual occurrence of an injury or condition does not conveniently coincide with the evaluation timetable associated with that particular clinical proficiency (eg, student needs to perform a knee evaluation, but a knee injury did not occur while the student was in the clinical education setting). Half the respondents (51%) disagreed or strongly disagreed that numbers of ACIs were insufficient to spend adequate time with students who needed to complete clinical proficiency evaluations. This finding indicates that although some ATEPs appear to have a sufficient number of ACIs, the timely occurrence of an injury or condition continues to be a barrier to real-time clinical proficiency evaluation, regardless of the number of ACIs. It is interesting that no difference was noted regarding barriers to real-time clinical proficiency evaluation relative to the population of the town in which the ATEP was located. We assumed that ATEPs housed in towns with larger populations would report more real-time clinical proficiency evaluations due to the likelihood of having more clinical education sites.
The ACI's willingness or availability to complete real-time clinical proficiency evaluations particularly seems to affect the incidence of this method of evaluation. More than half of the respondents indicated that patient or athlete health care is a priority over student clinical education. This finding supports the position of Weidner and Henning19 that it may be increasingly difficult for today's collegiate AT to find adequate time to accept extra responsibility for teaching and evaluating athletic training students' clinical proficiencies. The general trend is toward increased workloads to provide medical care coverage for expanding sport seasons and off-season conditioning, practice, and competition schedules, with fewer resources and more pressures. All of these issues are exacerbated by the unsupportive bureaucracy of collegiate athletics.20 Greater responsibility for the teaching, supervising, and assessing of students may often be unrealistic. Similar to what has occurred in nursing,21 athletic training clinical instructors are encountering role strain when balancing the needs of the athlete or patient and the needs of the student. In this situation, accountability to the patient takes precedence.22
IMPLICATIONS
Because few clinical proficiency evaluations (and likely instruction) occur in real time, we wonder if ATEPs can realistically accomplish what has been prescribed for them. Are athletic training students truly becoming clinically proficient for entry-level employment? The ATEPs must take a disciplined approach to clinical proficiency instruction and evaluation. Certainly we can learn much from our nation's medical schools and their decades of experience regarding the use of SPs. A limiting factor for ATEPs, however, will be the resources (primarily personnel) to meet the requirements of taking this approach. Realistically, ATEPs will need to take a creative and modified approach. Perhaps more SP encounters can be used to expose students to realistic clinical situations: SPs can be used in teaching clinical skills as well as in evaluating them.
Our study revealed that ACI role strain seems to be a central barrier to real-time clinical proficiency evaluation, one that cannot be solved simply by providing already strained ACIs with additional compensation; the challenges seem enormous. Perhaps real-time clinical proficiency evaluation could be improved through education of ACIs: many opportunities for real-time evaluation may be missed because ACIs do not recognize its value and, consequently, do not take advantage of real-time opportunities when they do occur. Only 16% of the respondents tracked how clinical proficiencies are being evaluated. With such a low percentage, the results need to be interpreted carefully. We expect, however, that respondents have a general idea as to how students are being evaluated through their communication with students and ACIs. In addition to tracking specific methods of clinical proficiency evaluation, it would also behoove ATEPs to quantify who is evaluating clinical proficiencies. Are proficiencies more often evaluated by clinical staff ACIs or by teaching faculty ACIs? This information could assist the ATEP in taking an informed and systematic approach to clinical proficiency evaluation. Certainly, in an effort to align classroom and laboratory instruction with clinical experiences, readily available injury data could be used to help determine the clinical placements of athletic training students. For example, a student enrolled in an upper extremity evaluation course could be placed in a specific clinical assignment in which acute upper extremity injuries are more likely to occur (eg, softball).
CONCLUSIONS
Athletic training students' clinical proficiencies were being evaluated primarily via simulations. Orthopaedic clinical examination and diagnosis, therapeutic modalities, conditioning and rehabilitative exercise, and risk management are the content areas most likely to be evaluated by real-time methods. The collegiate or high school athletic training room, collegiate athletic practice, and high school athletic practice are the primary settings for real-time clinical proficiency evaluations. Barriers such as the timing of injury occurrence and the ACI's willingness or availability to complete real-time evaluations and experiences seem to affect the incidence of real-time clinical proficiency evaluations. For athletic training students to become clinically proficient for entry-level employment, it seems imperative that ATEPs incorporate SPs or take a disciplined approach to using simulation in clinical proficiency instruction and evaluation. We recommend the following regarding further research:
Ask ACIs to complete the MCPEAT instrument used for this investigation. This research focused on the perceptions of the PD and/or other individual who is primarily responsible for the oversight of clinical proficiency evaluation in the ATEP. The same MCPEAT instrument should be completed by ACIs to determine their perceptions of the methods being used in the evaluation of clinical proficiencies. This perspective may identify other barriers to real-time, simulation, or SP evaluations.
Determine the reliability and validity of the various methods of clinical proficiency evaluation to predict professional competency.
Explore the effect of simulations and SPs on athletic training students' confidence and communication skills.
Identify which factors and barriers determine when the different methods of clinical proficiency evaluation (real-time, simulation, SP) are being used.
Acknowledgments
The Great Lakes Athletic Trainers' Association provided funding for this study.
Author notes
Stacy E. Walker, PhD, ATC; Thomas G. Weidner, PhD, ATC, FNATA; and Kirk J. Armstrong, EdD, ATC, contributed to conception and design; acquisition and analysis and interpretation of the data; and drafting, critical revision, and final approval of the article.