Context: The U.S. News & World Report college ranking system is used to describe the best graduate programs in the country. Rankings of graduate programs are based solely on perceived ratings of quality by directors and/or deans. Athletic training is not listed by U.S. News & World Report; however, the Commission on Accreditation of Athletic Training Education (CAATE) reports key metrics, such as the Board of Certification pass rate and program graduation rate, that could be used to create rankings.
Objective: To evaluate and rank CAATE-accredited professional athletic training (PAT) programs using 2 models: (1) perceived rating of academic quality by program directors (PDs) and (2) CAATE outcome data.
Design: Cross-sectional study with survey and retrospective data.
Setting: Web-based survey.
Patients or Other Participants: One hundred fifty-five PDs and 230 CAATE-accredited PAT programs.
Main Outcome Measure(s): The perceived rating survey for the PDs resembled the U.S. News & World Report system, using a 5-point Likert scale to assess the academic quality of each program. For the CAATE outcome data, we used publicly available information for each PAT program on the CAATE website. We ranked all PAT programs using the data from each model. A Cohen κ was performed to explore the agreement between the PDs’ perceived rankings and the CAATE outcome data rankings.
Results: No agreement was found between the perceived peer assessment and CAATE outcome data rankings (κ = −0.003, P = .401).
Conclusions: The PDs’ perceptions did not align with objective data reported by the CAATE. The lack of agreement between the 2 ranking systems highlights concerns about using the U.S. News & World Report system for graduate health programs. We suggest exploring a more robust and comprehensive formula, including overall pass rate and graduation rate, to identify top-ranked programs in athletic training.
Key Points
Athletic training is one of the few health care professions that does not have a ranking system to identify top educational programs.
The U.S. News & World Report methodology was not an accurate model for ranking professional athletic training programs.
A comprehensive formula that includes CAATE outcome data combined with public perceptions should be considered to effectively rank athletic training programs.
INTRODUCTION
In recent decades, the profession of athletic training has made educational advancements through the skills that are taught, the concepts that are learned, and the expectations that are held. Modern-day athletic training has changed dramatically from the internship or apprenticeship models that required a set number of hours of practice to earn the necessary credentials. Today, students must complete a robust, structured curriculum with diverse clinical experiences and skills outlined in the Commission on Accreditation of Athletic Training Education (CAATE) standards to be eligible for certification as an athletic trainer.1–5 The educational advancements in athletic training also require the delivery of professional training at the postbaccalaureate level, which has increased the academic requirements needed to develop skilled, well-rounded students who are successful in their future careers.4–6
With the changing landscape come different admission and recruiting mechanisms. A variety of factors can influence graduate students when selecting an institution that best fits their academic needs. Students interested in athletic training tend to explore a program’s Board of Certification (BOC) examination pass rate, its accreditation status, and the clinical sites offered.7,8 The CAATE website, as of March 2022, stated that “the CAATE recognizes the importance for students to select a program that provides the best opportunity to pass the exam and enter into professional practice.”9 The recruiting process for many professional athletic training (PAT) programs focuses on promoting aspects of the program such as BOC examination preparation methods and outcomes.7 Students of professional postbaccalaureate programs have shared that the BOC examination pass rate for a PAT program became a central tenet when they considered applying for admission.7 Additionally, institutions with successful BOC examination pass rates and accreditation status may have the ability to attract competitive students and ultimately improve program sustainability.8
Historically, the BOC examination first-time pass rate was introduced by the CAATE as a bright-line assessment measure in the 2012 CAATE Standards.10 The integration was in response to requirements from the Council for Higher Education Accreditation, the organization that recognizes the CAATE as an accreditor.11 Specifically, the 2012 CAATE Standard 11 stated that programs must report their 3-year aggregate measures of program success, and Standard 13 stated that programs that do not have a 3-year aggregate BOC examination first-time pass rate of 70% must develop an action plan for correction.10 Similar to the 2012 CAATE Standards, the 2020 CAATE Standards 6 and 7 required programs to meet or exceed a 3-year aggregate 70% first-time BOC examination pass rate, with the potential to lose their accreditation if that goal was not met.11,12 Recent literature has noted the concerns, discrimination, and limitations that a bright-line rule continues to have on accredited higher education institutions and programs.13 In response, the CAATE Board of Commissioners announced on March 1, 2022, that it would immediately vacate Standard 6 (2020 CAATE Standards) and Standard 11 (2012 CAATE Standards), as well as the first-time pass rate accreditation action algorithm.12 The change tasked the CAATE with identifying outcome measures that promote quality assurance and quality improvement specific to each program’s goals.
A program’s potential to develop an individual who can pass the BOC examination and begin working and earning as a certified athletic trainer is critical to the new applicant. In other graduate-level health care programs, students have remarked on other factors of interest, such as the national ranking of the program.14–16 A ranking system can play an important role in recruiting students and demonstrating the success of previous students and faculty.6,15,17 Currently, the CAATE categorizes programs as seeking initial accreditation, initial accreditation, or continuing accreditation for those that uphold the required program standards; programs lacking evidence of meeting the standards may be placed on administrative probation or undergo voluntary withdrawal of accreditation. The current method fails to separate and spotlight the more successful programs in athletic training education.
The concept of ranking education programs in athletic training is not new; it was first discussed and published by Voll et al6 in 1999. In that article, the authors explained how a ranking process could be valuable not only to the academic components of a program but also to many areas of the broader athletic training community.6 For example, university administrators can use a ranking to recruit faculty. Faculty can then use the ranking to compare their program with others and to recruit new students. Rankings can also play a key part in a student’s decision-making process when choosing a graduate institution, as well as highlight areas of uniqueness for students to identify potential interests.6 When assessing academic quality and ranking the success of PAT education programs, both quantitative and qualitative factors will be critical in analyzing, classifying, and determining the top programs. Ranking professional education programs can be extremely influential for an institution, as observed across a variety of other academic disciplines, including business schools, medical schools, law schools, and health education programs.14,15,18,19 Both quantitative criteria (eg, test scores, grade point averages, pass rates) and qualitative measures (eg, university diversity, research interests, faculty) have been identified and used to determine the academic quality of university programs.18,20,21
A multitude of professions across the world, including most health care fields, have a unique ranking system to recognize the top institutions and programs in their area of study, with the most common being the U.S. News & World Report system.14,18–22 Although PAT programs are evolving to the graduate level, athletic training lacks an educational ranking system specific to the discipline.6 Based on peer professional programs, creating a ranking system that evaluates individual program values and academic quality to identify premier programs could improve recruitment efforts in athletic training by aiding students in finding the best fit for their educational experience.6 Because no ranking system currently exists, the first step is to explore the perceived and outcome data rankings of programs. Therefore, the purpose of this study was to evaluate and rank PAT programs accredited by the CAATE using 2 different models and to compare the 2 ranking outcomes. The first model explored perceived ratings by PAT program directors (PDs) of the academic quality of PAT programs, whereas the second model used CAATE outcome data to evaluate and objectively rank the PAT programs.
METHODS
Study Design
We used a cross-sectional study design to explore the research aims. The setting for this descriptive study included both an online survey (Qualtrics) to assess the first model and a retrospective database analysis of publicly available information to assess the second model.23 The University of South Carolina Institutional Review Board deemed this study exempt for the PD survey; the CAATE outcome data did not require review or approval because of the public availability of the data. We adhered to the STROBE checklist24 (version 4) for reporting cross-sectional and observational studies. The Figure illustrates the concurrent model assessment procedures for the study.
Flow diagram. Abbreviations: CAATE, Commission on Accreditation of Athletic Training Education; N/A, no CAATE data reported between the years 2018 and 2021; PAT, professional athletic training; PD, program director.
Participants
For this study, the research team gathered and compiled a list of current PDs for all PAT programs regardless of status, using data listed on the CAATE website in April 2022. The list included the PD email, institution name, and program information, such as degree type and CAATE standing. An email explaining the study and steps for data collection was sent to PDs of CAATE-accredited PAT programs that were classified as “active,” “seeking accreditation,” or on “probation.” Exclusion criteria included any professional bachelor’s program with a pending degree change and any professional master’s program that was voluntarily withdrawing accreditation. Postprofessional athletic training programs, including residency, doctoral, and fellowship programs, were also excluded from the study. In total, 230 athletic training programs and their respective PDs were recruited and included in the study.
Instruments
Model 1: Perceived Peer Assessment
To assess academic quality, we used the Peer Assessment of Professional Athletic Training Programs tool. The tool was adapted from U.S. News & World Report and followed the same template as the Peer Assessment of Doctoral Programs in Physical Therapy, a tool used to assess physical therapy doctoral programs. The tool was obtained via email to the senior author from the chief data strategist of U.S. News & World Report (Robert J. Morse, MBA, written communication, September 2021). The tool included a list of all 230 PAT programs as of April 2022 by institution, organized alphabetically by state. The tool (Table 1) asked the PD to complete a 1-item rating assessment for each institution to assess each program’s academic quality using a 5-point Likert scale (5 = outstanding, 4 = strong, 3 = good, 2 = adequate, 1 = marginal, x = no answer). After completing the survey ranking, the PDs were asked to explain what specific factors they considered for each institution they rated as outstanding or marginal. Because the methodology for the Best Health Schools rankings uses the same format annually (a 1-to-5 scale based on peer reputation surveys), we replicated the U.S. News & World Report methodology for this study. However, we note that validity and reliability information from U.S. News & World Report for this tool is not publicly accessible.
Model 2: CAATE Outcome Data
The second assessment model used publicly available data from the CAATE website. From the available data for the 2018 through 2021 academic years, the research team gathered and analyzed specific outcomes based on the standard changes that went into effect on March 1, 2022. These changes included the immediate removal of Standard 6 (2020 Standards) and Standard 11 (2012 Standards), as well as the first-time pass rate accreditation action algorithm.12 In light of this change, more emphasis is now placed on Standard 5 (2020 Standards), including program graduation rate, program retention rate, and the pass rate on the BOC examination. Accordingly, the research team gathered and analyzed these specific data to determine program rankings using a factor code formula. The formula incorporated the first-time BOC examination pass rate, the overall BOC examination pass rate, and the graduation rate. We created the factor code percentage score using the following formula: [(Number of Students Who Took the BOC Exam − Number of Students Who Graduated) + First-Time BOC Pass Rate Percentage + Overall BOC Pass Rate Percentage] / 200.
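As a minimal illustration, the factor code calculation could be scripted as follows. The function name, the example inputs, and the rescaling of the bracketed quotient to a 0-to-100 score are our assumptions for presentation; they are not part of the CAATE data or the formula text above.

```python
def factor_code(num_took_boc, num_graduated, first_time_pass_pct, overall_pass_pct):
    """Factor code percentage score for one program (3-year aggregate inputs)."""
    raw = (num_took_boc - num_graduated) + first_time_pass_pct + overall_pass_pct
    # Dividing by 200 follows the formula above; rescaling to a 0-100 score is
    # an assumption so that 100% pass rates with no graduate-examinee gap yield 100.
    return raw / 200 * 100

# Hypothetical inputs for illustration only.
print(factor_code(20, 20, 100, 100))  # 100.0
print(factor_code(25, 15, 30, 80))    # 60.0
```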
Procedures
Model 1: Perceived Peer Assessment
After the tool was adapted to athletic training, it was distributed via email to the 230 PDs of PAT programs at the time of the study. The reason PDs were solicited for the peer assessment was to align with the U.S. News & World Report Peer Assessment of Doctoral Programs in Physical Therapy methodology, which sends the survey to deans and administrators for the health programs at the institutions. The responses were anonymous, with no demographic information gathered on the PD. The survey remained open for 8 weeks with reminders sent every Tuesday for incomplete responses. Data were collected between April 4 and May 23, 2022. Of the 230 PDs recruited, 155 PDs completed a portion of the tool (67% access rate) and had responses used for analysis.
Model 2: CAATE Outcome Data
The second research aim was assessed by gathering and downloading the CAATE program outcomes from the 2018 through 2021 academic years. We calculated 3-year aggregates for the number of students who graduated, the number of students who took the BOC examination, the percentage of students who passed the BOC examination on the first attempt, and the percentage of students who passed the BOC examination regardless of the number of attempts. In the March 2023 CAATE Town Hall Update,25 it was noted that the average number of students admitted per graduate program had increased from 7.7 to 9.08. Using the number of students who graduated, we classified PAT programs as either small (0–30 graduates over 3 years) or large (31 or more graduates over 3 years). The program classification was based on this information, assuming that larger programs average more than 30 graduates over 3 years, whereas smaller programs tend to have 30 or fewer graduates in the same period. After aggregating the data by institution, the research team input the data into the factor code formula to determine the factor code used for ranking. Of the 230 programs, 199 PAT programs had data for all categories, which allowed the factor code to be analyzed.
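A minimal sketch of the aggregation and size-classification steps is shown below. The file name and column names are hypothetical placeholders, not the CAATE export format.

```python
import pandas as pd

# Hypothetical file and column names; the CAATE export format may differ.
cols = ["institution", "year", "graduates", "boc_takers",
        "first_time_pass_pct", "overall_pass_pct"]
df = pd.read_csv("caate_outcomes_2018_2021.csv", usecols=cols)

# Aggregate the 2018-2021 academic years per institution: student counts are
# summed, and pass-rate percentages are averaged across the reporting years.
agg = df.groupby("institution").agg(
    graduates=("graduates", "sum"),
    boc_takers=("boc_takers", "sum"),
    first_time_pass_pct=("first_time_pass_pct", "mean"),
    overall_pass_pct=("overall_pass_pct", "mean"),
)

# Classify program size: 0-30 graduates over 3 years = small, 31 or more = large.
agg["size"] = agg["graduates"].apply(lambda n: "small" if n <= 30 else "large")
```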
Data Analysis
Model 1: Perceived Peer Assessment
Upon receiving the PDs’ responses, we compiled, coded, and evaluated the data to create rankings based on the perceived quality of PAT programs. The peer assessment data were analyzed by calculating a mean score and standard deviation (outstanding = 5.0, marginal = 1.0) for each program using all 155 collected responses. If a PD reported no answer because of unfamiliarity, that peer assessment was excluded for that institution and not considered in the overall average peer assessment score. The research team then used the average perceived peer assessment score to rank the 230 PAT programs, as each program received a peer assessment from the PDs. Each school received ratings from at least 75 PDs and was therefore included in the final analysis. The rankings were sorted in numerical order, with tied programs assigned the same rank. Finally, each PD had the opportunity to leave open-ended responses and share the qualities they identified when determining outstanding and marginal programs. The responses were downloaded, and the primary and senior investigators (S.E.B., Z.K.W.) coded the responses individually and then compared and consolidated their codes into common qualities for both outstanding and marginal responses.
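A sketch of this scoring and ranking step appears below, assuming a hypothetical long-format response file in which no-answer selections are stored as missing values; the file and column names are placeholders.

```python
import pandas as pd

# Hypothetical long-format table: one row per PD rating of a program,
# with "no answer" selections stored as missing values (NaN).
ratings = pd.read_csv("peer_assessment_responses.csv")  # columns: program, rating

# Mean, standard deviation, and number of usable ratings per program
# (NaN "no answer" responses are ignored automatically).
summary = ratings.groupby("program")["rating"].agg(["mean", "std", "count"])

# Require at least 75 PD ratings, then rank by mean score;
# method="min" assigns tied programs the same rank.
summary = summary[summary["count"] >= 75]
summary["rank"] = summary["mean"].rank(ascending=False, method="min").astype(int)
summary = summary.sort_values("rank")
```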
Model 2: CAATE Outcome Data
The research team used the factor code percentage to sort the PAT programs (high = 100%, low = 0%) and assign each program an individual ranking. A program that scored 100 was ranked 1, followed in numerical order of factor code until all programs were ranked. Data were then separated by program size (small and large programs) for an additional ranking evaluation. To achieve the primary purpose of the study, the 2 ranking models were compared based on each program’s ranking using reliability agreement tests. We calculated a Cohen κ coefficient to measure the agreement between the 2 instruments.
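The agreement analysis could be reproduced along the following lines. The rank files and column names are hypothetical, and this sketch uses scikit-learn’s κ implementation; it does not compute the P value reported in the Results.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical files holding one rank per program from each model.
peer = pd.read_csv("peer_assessment_ranks.csv")    # columns: institution, rank
factor = pd.read_csv("factor_code_ranks.csv")      # columns: institution, rank

merged = peer.merge(factor, on="institution", suffixes=("_peer", "_factor"))

# Cohen's kappa treats each rank as a categorical label and measures agreement
# between the two models beyond chance; values near 0 indicate no agreement.
kappa = cohen_kappa_score(merged["rank_peer"], merged["rank_factor"])
print(f"Cohen kappa = {kappa:.3f}")
```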
RESULTS
Perceived Peer Assessment
The PDs’ overall average rating of all 230 programs was 3.0 ± 0.6. The highest rating awarded was 4.16 out of 5, and the lowest was 1.25 out of 5. Of the top 10 programs by perceived peer assessment rating, 6 were considered large programs, 3 were considered small programs, and 1 program reported no CAATE data during the designated period. The open-ended responses were categorized into 10 qualities: visibility of faculty, faculty expertise, BOC examination pass rate, reputation and marketing, available resources, graduate placement, cohort size, preceptor feedback, didactic and clinical education, and graduate interaction. Tables 2 and 3 provide extracted statements describing what PDs considered outstanding and marginal attributes of a program.
CAATE Outcome Data
Of the 230 PAT programs, 113 (49.1%) were considered small, 86 (37.4%) were considered large, and 31 (13.5%) reported no CAATE data between the years 2018 and 2021. On average, programs graduated a total of 30 ± 18 students during the 3 years. The average first-time BOC examination pass rate percentage was 76% ± 16%, and the overall pass rate percentage was 92% ± 9%. Using the factor code, the average factor score was 84, with scores ranging from 12 to 100. Ten PAT programs received a perfect score (100% first-time pass rate, 100% overall pass rate, and 100% graduation rate), creating a 10-way tie for the top-ranked program according to CAATE outcome data. Of these 10 programs, all were considered small programs graduating between 1 and 30 students over 3 years, suggesting better CAATE outcome data for smaller programs.
Model Comparison
When exploring the rankings using the perceived peer assessment model and the CAATE outcome data model, no agreement was found between the ranking models (κ = −0.003, P = .401). In comparing the CAATE outcome data rankings with the results of the PD perception rankings, the top-rated program according to PDs (4.16) was ranked 95th in the outcome data rankings. The highest-ranked programs according to the factor code (10-way tie: 100% pass rate and 100% graduation rate) were ranked between 29th and 211th according to fellow PDs.
DISCUSSION
Program rankings can be extremely useful for institutions to identify areas of individual program achievement and pinpoint areas for necessary growth and improvement.6 Rankings can be used to assist students in selecting a program that best fits their academic and personal goals, while also allowing programs to market and recruit by highlighting unique program characteristics.6,14,18 To do this, the ranking system needs to be inclusive and encompass a variety of program qualities that can indicate program success. Our data suggest that ranking academic programs with peer reputation surveys, as U.S. News & World Report does for health science programs, may produce rankings that do not align with outcome data related to certification examination success and graduation rates.
Peer Assessment
In an article posted by U.S. News & World Report, the author26 outlined the criteria considered when creating the annual Best Colleges rankings. The article26 emphasized the dramatic changes in the ranking process from the first published edition in 1983 to the most recent edition at that time (2015). In the first edition, the only criterion considered was academic reputation, as reported by college presidents. In the most recent edition, several other factors were added, including graduation and retention rates, faculty resources, student selectivity, financial resources, graduation rate performance, and alumni giving.26
Despite that knowledge, when ranking individual departments, including health programs, U.S. News & World Report focuses on one facet alone: academic reputation.26 To do this, the organization sends a survey to all PDs in their respective categories and asks them to rate each program on a scale from 1 (marginal) to 5 (outstanding), much like the survey used in this study. The PDs are instructed to base their rating on their knowledge of the academic quality of each program. The survey instructs the participants to consider a variety of factors when determining program rank, including curriculum, the record of scholarship, and the quality of faculty and graduates. Specific instructions can be found in Table 1. Any program with which the PD is not familiar is identified by selecting no answer. With these results, U.S. News & World Report then takes the average score and creates a rank for each program based on the provided answers.
As seen in the results of our study, this process is an inaccurate way to rank and measure the academic quality of programs when compared with CAATE outcome data. Basing rankings purely on reputation can be extremely dangerous to the future of athletic training education. In the survey, the final 2 questions allowed the participants to provide the specific factors they considered when identifying programs as outstanding or marginal. Specific comments are outlined in Tables 2 and 3. When answering these questions, several participants took the opportunity to provide feedback and express concerns regarding this ranking methodology. Some PDs voiced concerns regarding the potential harm of this study and the risk of ranking programs purely on reputation or popularity. Interestingly, one program that was ranked in the top 10 based on PD perceptions had yet to graduate its first cohort from its PAT program (as of 2021). This finding suggests that some PDs’ perceptions and knowledge could have been based on an institution’s postprofessional athletic training program rather than its PAT program. We believe this specific example highlights the importance of using objective data to support rankings and academic quality, rather than relying only on perceptions from fellow PDs.
Ranking based on peer assessment alone fails to gather the full picture, including other variables that can indicate program success. Reputation and popularity should not be the only factors considered when identifying successful programs and institutions. Various other academic factors can play a major role in analyzing program quality and prestige, including graduation rate, retention rate, job placement rate, and BOC examination pass rates. In addition, several elements that are not captured by the CAATE can also contribute to the quality of a program, including program faculty and their reputation for research and publications. It is important to note that the Carnegie classification of an institution, as well as the hiring track (professional track or tenure track), may affect the likelihood that a faculty member engages in research. Some athletic trainers in academia are solely researchers and increase the prestige of the university but do not teach in the PAT program, whereas other programs may have faculty members who are solely responsible for instructional delivery, with no research requirement. We share these thoughts because research productivity may not align with the ranking of an athletic training program, although it does increase the visibility of the institution.
CAATE Outcome Data
When sorting and analyzing the objective CAATE and BOC data, the top 10–ranked schools all received a score of 100. All 10 programs were considered small, graduating fewer than 31 students in the designated 3-year window. Of these, the smallest program graduated 1 student in 3 years, and the largest top-rated program graduated 29 students in the same period. The top large program received a score of 99 and was ranked 13th overall. It can be argued that large programs may have more of a challenge obtaining high BOC examination pass rates and graduation rate percentages when compared with smaller programs.
In 2022, the CAATE announced the immediate removal of 2020 Standard 6 and 2012 Standard 11, otherwise known as the bright-line standard. This standard required programs to meet or exceed a 3-year aggregate 70% first-time BOC examination pass rate. With this removal, programs can shift their main priority from a single factor to analyzing their program more holistically through a set of outcome measures. In our data, several institutions benefited from taking a comprehensive look at their program rather than focusing only on first-time BOC examination pass rates. Programs that fail to meet the updated standards and requirements will be required to complete a program analysis and create an action report to identify areas of strength, weakness, and potential improvement.
The bright-line standard change has taken the emphasis off passing the BOC examination on the first attempt and shifted the focus to other components that can make a program successful.12 Interestingly, the BOC examination first-time attempt pass rate was 61.6% in 2020–2021, making it the lowest first-time pass rate year in the 2011 to 2021 historical BOC examination count data.27 However, the data captured during the 2018 to 2021 academic years were greatly impacted by the COVID-19 pandemic. The CAATE and the BOC both identified that BOC examination performance was down because of changes in teaching strategies, availability of clinical education, and specific institutional mandates to combat the pandemic.28
Amid this change, the CAATE Standards Committee was tasked with developing standards related to program-defined benchmarks for outcomes such as graduation rate, program retention rate, graduate placement rate, and pass rate on the BOC examination (Standard 5, 2020 Standards).12 With this shift, programs can analyze their outcomes holistically and pinpoint specific strengths and weaknesses. The factor code used in our study analyzes several program-defined benchmarks, including graduation rate, overall BOC examination pass rate, and first-time BOC examination pass rate. For example, a specific program reported only a 30% first-time pass rate on the BOC examination but reported an overall pass rate of 80%. When incorporating these data, as well as the number of students who graduated, into the factor code, the program was provided an overall score of 60. The case example demonstrates how the factor code can explore multiple facets of program outcomes while also identifying areas that need improvement.
Academic outcome data can often be seen as a measure of program success. However, when analyzing and ranking programs, a variety of outcomes should be considered. No single outcome fully encompasses overall program quality, so relying on one program feature alone does a disservice to PAT programs and prospective students. To accurately analyze and rank the academic success of these programs, several outcomes that can attest to program quality should be considered, including faculty expertise and knowledge, diversity rates, clinical experiences and opportunities, and location.
In our results, the lowest-ranked program according to the CAATE outcome data was a minority-serving institution. After further research, we identified a 2022 article by Harris and Eberman29 describing distinct achievement gaps on the BOC examination between White students and students who are part of historically marginalized groups. Harris and Eberman29 also noted that students who are unable to pass the BOC examination on the first attempt may experience a variety of financial, psychological, and professional challenges. Specific research should investigate the relationship between program success and program diversity rates to determine the influence of diversity in PAT programs for students of all socioeconomic, racial, and ethnic backgrounds.
Comparing Models
When comparing the 2 observed models, no agreement was found. Neither the method used by U.S. News & World Report nor the objective data alone was able to accurately assess programs. These methods account for program reputation and academic performance outcomes but fail to consider several other factors that contribute to the success of a program. Students find both factors important but also consider institutional qualities such as diversity rates, location, cost, extracurricular activities, accessibility, clinical opportunities, and several other characteristics to best match their personal and professional goals.7,8
Regardless of the chosen principles, rankings can have a significant effect on an organization by altering its actions to conform to the standards and expectations of the ranking methodology. Hasbrook and Loy15 found that program rankings and the perceived prestige of an institution can affect the types of students, funding, and faculty it attracts. Because of this, more prestigious programs tend to receive more funding and appeal to higher-quality students and faculty, leading to elevated success. Students are also more motivated to apply to and be accepted by prestigious programs to work alongside talented peers and faculty, which can provide a better overall education. In contrast, less prestigious programs tend to struggle with budget and personnel cuts because they are viewed as lower-quality programs.15 Institutions may feel pressure to acquire and maintain a satisfactory reputation because of the influence of previous poor rankings, funding sources reliant on success, the conformity of school activities to ranking criteria, and reactivity from outside sources, including alumni, faculty, donors, and students.22
Rankings can assist students in their college decision-making process.30 However, the ranking system must be multifactorial to gather a well-rounded understanding of the quality of the PAT program and institution. Based on these findings, it is apparent that no single facet can capture the full understanding of an institution. As reported by the Boston Globe in November 2022,31 Harvard and Yale law schools announced that they would no longer participate in the rankings conducted by U.S. News & World Report. The schools reported that they disagreed with the methodology used to reach these rankings and explained that it does not account for the core values of the legal profession. John Manning, the dean of Harvard Law School, stated that the rankings “emphasize characteristics that potentially mislead those who rely on them.”31 Another article32 in the Washington Post reported that 3 students from the University of Southern California sued the university over falsified data provided to U.S. News & World Report, resulting in inaccurate rankings. These recent developments make apparent that rankings have significant leverage over institutions and the way they format their programs.31,32 It is therefore important to find a ranking methodology that highlights various qualities and allows students to identify institutions that will best match their personal, academic, and professional goals.
Limitations and Future Research
We have attempted to provide an understanding of what is necessary to accurately assess and rank PAT programs. However, there were some limitations. Throughout the data collection process, the research team was contacted by numerous individuals expressing concerns about the PD survey and the general ranking process. Several PDs questioned the purpose and implications of this study and therefore declined participation. Some stated that they did not have the time to review all program outcomes and therefore did not believe they could accurately rate each institution. However, it is important to note that the survey followed the same template as the Peer Assessment of Doctoral Programs in Physical Therapy created by U.S. News & World Report and used to rank physical therapy programs. To address this concern, future studies should consider providing CAATE data for PDs to review and analyze before completing the survey to increase understanding and awareness of other programs.
Another limitation of the study was the natural bias that occurs with ranking programs.33 Because PDs were asked to rate their own programs along with all others, some degree of bias in the scoring process was unavoidable. Although this is an unavoidable limitation, the presence of PD bias underscores the need for an objective, predetermined ranking system. To mitigate potential negative effects, the authors removed institution names from this publication to protect the reputation of all programs regardless of their perceived or outcome data rankings.
We were also made aware that some institutions may have reported numbers to the CAATE from 2018 to 2021 that included students from both the PAT and undergraduate athletic training programs. This information could alter the overall results and outcomes reported throughout this time period. There is no way to sort or identify the number of students from each program through these overlapping years, but we may see substantial shifts in program sizes, as well as in CAATE outcome data rankings and factor code scoring, once all programs are fully reporting data from their PAT students. In addition, the CAATE outcome data will change annually with the reporting of new cohort data. The authors recommend that readers visit the CAATE website for up-to-date outcome data per institution.
Finally, the research team created the factor code used for the study and therefore determined which factors defined program quality and how those factors were weighted in each program ranking. Although this is one way to conduct such a study, a factor code could be created and implemented in numerous ways. Future research should examine how to construct the factor code to determine the formula that yields the most accurate rankings. For this study, we chose to exclude institutional graduate placement rate from our factor code formula because several factors, including professional goals, location and setting preference and availability, financial situations, and personal life factors, could play a role in the choice of employment after graduation. However, it is important to acknowledge programs that have high placement rates and encourage students to find athletic training jobs after graduation. In the future, the CAATE should consider surveying students and including their responses about why they chose not to pursue a career in the profession after certification to better capture a program’s graduate placement rate. Additional research should also analyze program success to determine trends and commonalities among higher- and lower-ranked programs. Identifying elements of prosperous programs and drawing connections can help guide future rankings toward understanding the elements of educating successful athletic trainers.
CONCLUSIONS
When analyzing PAT programs, no specific ranking methodology is currently being used. Perceptions of PAT program academic quality by PDs did not align with objective data reported to the CAATE. The lack of agreement between the 2 ranking systems highlights concerns about using the U.S. News & World Report system for graduate health programs. We suggest exploring a more robust and comprehensive formula including overall pass rate and graduation rate to identify top-ranked programs in athletic training.