Context: Academic programs rely on outcomes assessments to determine if changes in the curriculum are necessary.

Objective: To examine the overall satisfaction levels of graduates (2005–2006) of National Athletic Trainers' Association–accredited postprofessional athletic training education programs as related to the 2002 Standards and Guidelines for Development and Implementation of NATA-Accredited Post-Professional Graduate Athletic Training Education Programs.

Design: Original survey instrument and demographic questionnaire.

Setting: Online survey instrument.

Patients or Other Participants: Of 211 survey recipients, 123 returned surveys (58.29% response rate).

Main Outcome Measure(s): Demographic information and satisfaction levels in 10 standard areas (depth of learning, breadth of learning, critical thinking, instructor availability, theoretic basis, writing skills, scholarly growth, community return, leadership, and overall program satisfaction) were obtained. Satisfaction scores were categorized into 10 percentage brackets (eg, 80%–89%) for each standard area.

Results: No differences were noted in any of the standard satisfaction areas for time off from school. However, graduates who required more than the allotted amount of time to complete their degree were less satisfied in the areas of depth of learning (P = .027), breadth of learning (P = .001), instructor availability (P = .005), writing (P = .022), and overall program satisfaction (P = .016).

Conclusions: Graduates were generally satisfied across all areas of their didactic curriculum. However, satisfaction levels were affected if graduates required more than the allotted amount of time to complete their degrees.

Key Points

• In 10 standard satisfaction areas, we found no differences between males and females, between graduates of 1-year and 2-year programs, and between those who did and did not take time off between completing the bachelor's degree and entering the master's program.

• Graduates who required more than the allotted amount of time to complete their degrees were less satisfied than those who completed their degrees on time in the areas of depth of learning, breadth of learning, teacher availability, writing skills, and overall program satisfaction.

The National Athletic Trainers' Association (NATA) approved its first graduate-level athletic training education program in 1972. Since then, athletic training education has grown vastly, spearheaded by the NATA Education Council, which was formed in 1994 from an educational task force created by the NATA Board of Directors.1 In 1996, the educational reform focus in athletic training shifted to graduate education. The NATA Professional Education Committee stated that curricular approval would be given only to those graduate programs incorporating research and scientific inquiry.2 

In June 1998, the NATA Board of Certification discontinued graduate education as a route to certification. Therefore, only those students who had successfully completed the requirements to take the Board of Certification examination were accepted for admission into graduate programs. Realizing that this policy would exclude any students who wished to obtain an advanced degree in athletic training but who held a bachelor's degree in another field, the Graduate Education Committee of the NATA Education Council was formed in 1997 to distinguish the standards and requirements of entry-level master's degree programs and postcertification programs.2 Establishing standard requirements for graduate education was important because this was the first time a distinction was made between an entry-level master's degree program and a postprofessional master's degree program.

The Graduate Education Committee composed the first Standards and Guidelines for Development and Implementation of NATA-Accredited Post-Professional Graduate Athletic Training Education Programs document. Most recently, in May 2002, the Graduate Education Committee (now the Post-Professional Education Committee)3 released a revised edition of the Standards and Guidelines for Post-Certification Graduate Athletic Training Education Programs, which all NATA-accredited graduate curriculums are required to follow.

At the time of this study, 12 postprofessional athletic training education programs (PATEPs) in the United States were accredited by the NATA. These programs represent 9 states: Arizona, California, Indiana (2 programs), Michigan, North Carolina, Oregon, Pennsylvania (2 programs), Tennessee, and Virginia (2 programs).4 The mission of these programs is to expand the depth and breadth of the applied, experimental, and propositional knowledge and skills of entry-level certified athletic trainers.3 Each program's mission, goals, and objectives must demonstrate the intent to provide instruction in advanced skills and knowledge; increase the student's critical thinking and writing skills; enhance the ability to function in clinical, teaching, administrative, or research environments; and prepare these students for leadership roles within the field.5,6 Perhaps most importantly, programs must provide evidence that their students are meeting the program's goals and objectives.5 

Program directors, external assessors conducting site reviews, and associated institutional committees assess graduate programs using a number of different methods, including evaluation forms, feedback from site visitations, student achievement records, graduate employment settings, student publications, and overall classroom performance data.3 Both existing students and graduates of each education program are a crucial source of information and feedback for an educator or program director regarding the condition of the program. Therefore, recommendations and concerns identified by these individuals should be closely examined. Each program gathers curricular satisfaction evaluations completed by its students; however, the results are not disseminated to the broader educational community. A summation of overall satisfaction with PATEPs has not yet been compiled. Therefore, limited information is available to support the quality of these education programs.

Voll et al7 argued that a ranking system in some graduate programs could be valuable to an institution, potentially lending itself to increased faculty and financial resources once it gained prestige and recognition within the field. However, no authors to date have published ratings of the graduate programs or specifically addressed the quality of learning at each institution. A reason for this could be that within each of the PATEPs lie various points of distinctiveness that represent the strengths and attributes of that program. These areas can vary in specific academic courses or in the research, clinical, or teaching components (or both). Program strength and excellence are best displayed when the institution is free to determine its own objectives and to experiment in educational methods within the framework of its respective authority and responsibilities.8 Further support is provided by Seegmiller,6 whose survey results demonstrated that respondents believed programs should not be forced to teach a prescribed, curricular package of information but rather that each program should be permitted to express its own institutional autonomy.

Related allied health disciplines that also sponsor professional education programs (eg, physical therapy, nursing, and occupational health) have been exploring similar questions within their respective settings.9–13 The physical therapy profession has drawn the conclusion that instructional behaviors may need to change in order to meet the varying needs of its students along with the educational standards that are being altered within the field.9 Nursing is shifting from an emphasis on clinical experience to focus more on didactic knowledge.12 Nursing students are conveying notable dissatisfaction with this move, which may be furthering their financial and academic stresses. Thus, formal research should be conducted within athletic training to improve the quality and overall satisfaction of the students and professionals participating in graduate-level programs, especially with regard to the most recent update to the Standards and Guidelines.3 

Despite ongoing reform in athletic training education, little formal research has been published on graduate students' assessments of overall program satisfaction.14 No objective measure currently exists to gauge how students view their programs, aside from the exit interview conducted by each program director, and any information that may be collected stays within each program. We have no evidence demonstrating whether students are satisfied with their choice to pursue an advanced degree in athletic training or whether academic programs are meeting the expectations and desires of their students. Therefore, the purpose of our study was to examine the overall satisfaction levels of recent graduates (2005–2006) of NATA-accredited PATEPs as related to the 2002 Standards and Guidelines.3 We hypothesized that the 2005 and 2006 graduates from NATA-accredited PATEPs would be more than 80% satisfied with every aspect of their respective graduate programs with relation to the 2002 Standards and Guidelines.3 Additionally, we hypothesized that satisfaction levels would not differ by sex but that graduates of 2-year programs would be more satisfied in certain areas.

Participants

Participants included 62 females (age  =  25.93 ± 2.19 years) and 61 males (age  =  24.76 ± 1.20 years). All volunteers were recent graduates (May 2005 through August 2006) of one of the 12 (as of May 2006) NATA-accredited PATEPs (Tables 1 and 2). These specific graduating classes were included in the population because they represent students who had entered their programs when the 2002 Standards and Guidelines3 were implemented. The total population comprised 231 graduates; however, the e-mail addresses for 20 of these graduates were either not accurate or unavailable. Therefore, the overall number of survey recipients was 211.

Table 1. Respondent Demographic Data

Table 2. Respondents' Frequency Data

Consent for participation and release of results was assumed upon the voluntary completion and submission of the survey by the participants, and anonymity was assured to all participants. This investigation was approved by the Human Investigation Committee within the College.

Instrumentation

We constructed an online survey instrument using Inquisite 6.01 Corporate Survey Builder (Catapult Systems, Austin, TX) to gather demographic and satisfaction data from the respondents. The electronic survey was developed and implemented in order to both reduce mailing costs and encourage participation in an uncomplicated manner. The questions were all derived from the 2002 Standards and Guidelines3 and were based on standard areas outlined within. The standard areas we examined were (1) depth of learning, (2) breadth of learning, (3) critical thinking, (4) instructor availability, (5) theoretical basis of learning, (6) writing skills, (7) scholarly growth, (8) desire to disseminate knowledge back into the community, and (9) preparation for leadership roles. We also asked the participants to gauge overall program satisfaction, for a total of 10 categorical areas. These questions were a component of a larger survey that also collected data on graduate research experience and clinical satisfaction. The survey instrument was constructed after consultation with various experts in the field of athletic training and graduate education and in conjunction with related literature.

Content and overall style of the survey were reviewed by the aforementioned experts for face and content validity. Online survey experts were contacted for review and were able to provide feedback on the overall question layout, in addition to making suggestions for ways to improve the appearance of the survey. The survey was then piloted with recent graduate athletic training education students (n  =  11) to test reliability through a test-retest procedure. Each of the 11 pilot participants took the survey on 2 separate occasions 4 weeks apart, and their data were not included in the study. Survey instrument reliability measures ranged from r  =  0.602 to r  =  0.971. This range was considered acceptable based on the types of questions posed to the recipients and their respective answers.
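To make the test-retest procedure concrete, the sketch below computes one Pearson r per survey item across the 2 administrations, the form in which the reliability coefficients above (r = 0.602 to r = 0.971) would be obtained. The pilot responses here are simulated for illustration; they are not the study's data.

```python
# Hypothetical sketch of the test-retest reliability check described above.
# The pilot responses are simulated; the study's actual data are not shown.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# 11 pilot participants x 10 items; values are ordinal bracket codes 1-10.
trial1 = rng.integers(1, 11, size=(11, 10))
# Second administration 4 weeks later: similar answers with small drift.
trial2 = np.clip(trial1 + rng.integers(-1, 2, size=(11, 10)), 1, 10)

# One Pearson r per survey item across the 2 administrations.
for item in range(trial1.shape[1]):
    r, _ = pearsonr(trial1[:, item], trial2[:, item])
    print(f"Item {item + 1}: test-retest r = {r:.3f}")
```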

Survey questions addressed basic demographics (eg, age and sex) as well as more content-specific items assessing student satisfaction with program components, graduate assistantships, clinical experience, and overall research exposure. Several survey questions were used solely by other researchers as part of a larger study employing this instrument; therefore, results for some demographic and other questions are not reported here. Closed-ended questions were incorporated into the survey instrument to elicit the most accurate responses possible. The closed-ended questions were drafted in a Likert scale format with 10 scale choices; however, the choices reflected quantitative, numerical responses (0%–10% satisfied, 11%–20%, 21%–30%, etc) rather than the more traditional qualitative words (extremely satisfied, dissatisfied, etc). A sample survey question for the “desire to disseminate knowledge back into the community” standard is provided in Table 3.

Table 3. Sample Survey Instrument Question

We determined that more than 80% was an appropriate threshold for satisfaction, as it equates to the more traditional score of 4 of 5 (or satisfied) on a standard Likert scale. A score of more than 80% was represented by an answer choice in either of the 2 highest ranges (81%–90% or 91%–100%) for each of the 10 questions. Likert scales are the most widely accepted form of attitude assessment, and we developed a 10-category Likert scale model because the reliability of a scale increases as the number of scaling points increases.15 We used 10 choices to coincide with the desired satisfaction scale and because an even number of answer choices forced the respondent to express a directional attitude, given the lack of a “middle-ground” choice.15,16 
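To illustrate this scoring rule, the minimal sketch below codes the 10 brackets as ordinal values and counts a respondent as more than 80% satisfied only when one of the top 2 brackets is selected. The bracket coding and the example responses are assumptions for illustration, not the study's data.

```python
# Hypothetical sketch of the satisfaction-threshold rule described above.
# Bracket codes and responses are invented for illustration only.

# Map ordinal choice codes (1-10) to the percentage brackets on the survey.
BRACKETS = {1: "0%-10%", 2: "11%-20%", 3: "21%-30%", 4: "31%-40%",
            5: "41%-50%", 6: "51%-60%", 7: "61%-70%", 8: "71%-80%",
            9: "81%-90%", 10: "91%-100%"}

def more_than_80_percent_satisfied(choice: int) -> bool:
    """A respondent counts as '>80% satisfied' in the top 2 brackets only."""
    return choice >= 9

# Example responses for one standard area (assumed data):
responses = [9, 10, 8, 9, 7, 10, 9, 6, 10, 9]
n_satisfied = sum(more_than_80_percent_satisfied(c) for c in responses)
print(f"{n_satisfied} of {len(responses)} respondents were >80% satisfied "
      f"({100 * n_satisfied / len(responses):.1f}%)")
```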

Procedures

A listing of all graduates of the 12 programs from May 2005 through August 2006 was obtained from the administrative offices of the NATA Post-Professional Education Review Committee. Simultaneously, a list of recent graduates was obtained from the program directors of all 12 programs as a cross-reference to ensure that no student was omitted. These individuals were then contacted via e-mail using addresses obtained from the NATA online member directory database. If an e-mail address was not listed in this database, we made other attempts to obtain the individual's address through public Internet search engines.

Each graduate received an e-mail letter describing the overall purpose and importance of the research study and the estimated time to complete the survey; the letter included the electronic link to the survey instrument and a request for participation. The e-mail also provided contact information for a researcher in case the graduate had comments or questions that concerned either the research study or the survey instrument.

Upon completion of the survey (indicated by clicking Submit), the information was automatically sent to the university database system. Individual responses were generated in Microsoft Excel format (version 2003; Microsoft Corp, Redmond, WA) and then matched with a file coding system to maintain confidentiality. At the end of the survey, all participants were given the option to request the survey results, along with the opportunity to enter a drawing for a chance to win 1 of 50 $5 gift certificates to various vendors. A follow-up e-mail was then sent once per week for 4 weeks after the initial e-mail to thank those who had already participated in the study and to remind those who had not yet responded. At least 2 reminder e-mails16 in addition to a monetary incentive of $2 to $5 generally increase the overall response rate.15 

Data Analysis

Upon receipt of the participants' responses, the data were compiled and analyzed to determine statistical trends and associations. We used SPSS for Windows (version 14.0; SPSS Inc, Chicago, IL) to calculate the statistical components. Descriptive statistics were gathered and analyzed for each individual question on the survey. Power analyses conducted for overall satisfaction indicated that 200 participants per group would be required to achieve 80% power; length of program was associated with an effect size of 0.23, which is considered low and equates to approximately 30% power. Demographic survey questions were analyzed using both frequency and descriptive statistics. Separate analyses of variance were used to determine any differences related to sex and length of program in the satisfaction areas. We calculated independent-samples t tests to determine group differences in satisfaction with regard to time between undergraduate degree completion and graduate school enrollment and (additional) time taken to complete graduate degree requirements. Because group sizes were unequal for both of these comparisons, the Levene test for equality of variances was used to determine whether the equal-variance or unequal-variance t statistic was appropriate. Bonferroni adjustments were not performed because of the innate differences among the standard areas and a lack of significant values. Statistical significance was set a priori at P < .05.
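To make these comparisons concrete, a minimal sketch follows using Python in place of SPSS. The data frame is simulated (the column names and simulation proportions are illustrative assumptions loosely based on the group sizes reported below), and the sketch runs the Levene test, the independent-samples t test, a one-way analysis of variance, and a power calculation for the 0.23 effect size.

```python
# Sketch of the group comparisons described above (a Python stand-in for the
# SPSS analyses; the data here are simulated, not the study's responses).
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "overall_satisfaction": rng.integers(1, 11, size=123),  # bracket codes 1-10
    "finished_on_time": rng.choice([0, 1], size=123, p=[0.16, 0.84]),
    "program_length": rng.choice([1, 2], size=123, p=[0.42, 0.58]),
})

on_time = df.loc[df["finished_on_time"] == 1, "overall_satisfaction"]
delayed = df.loc[df["finished_on_time"] == 0, "overall_satisfaction"]

# Levene test for equality of variances (the groups are unequal in size).
_, levene_p = stats.levene(on_time, delayed)

# Pooled-variance t test if variances are equal; Welch t test otherwise.
t_stat, t_p = stats.ttest_ind(on_time, delayed, equal_var=levene_p > .05)
print(f"Completion time: t = {t_stat:.3f}, P = {t_p:.3f}")

# Separate one-way ANOVA for length of program (1-year vs 2-year graduates).
groups = [g["overall_satisfaction"].to_numpy()
          for _, g in df.groupby("program_length")]
f_stat, f_p = stats.f_oneway(*groups)
print(f"Program length: F = {f_stat:.3f}, P = {f_p:.3f}")

# Power achieved for a small effect (d = 0.23) with ~60 graduates per group.
power = TTestIndPower().power(effect_size=0.23, nobs1=60, ratio=1.0, alpha=.05)
print(f"Approximate power: {power:.2f}")  # roughly 0.2-0.3, as the text notes
```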

Results

The overall number of survey recipients was 211, and 123 participants responded to the survey, yielding a 58.29% response rate. Descriptive statistics (mean ± SD) for all standard areas are presented in Table 4. Separate analyses of variance revealed no differences between the sexes or between graduates of 1-year and 2-year programs in any of the 10 standard areas (Figures 1 and 2). Independent t tests demonstrated no differences in any of the 10 standard satisfaction areas for time off from school (more than 6 months) between attaining a bachelor's degree and entering the master's program (Figure 3).

Figure 1. Program length differences in curricular satisfaction.

Figure 2. Sex differences in curricular satisfaction.

Figure 3. Time off between programs and differences in curricular satisfaction.

Table 4. Satisfaction Values for Standard Areas

Independent t tests identified several differences for time taken to complete graduate degree requirements with regard to satisfaction in the 10 standard areas. Compared with graduates who completed their requirements on time, those who required more than the allotted amount of time to complete their degree were less satisfied in the areas of depth of learning (t  =  2.367, P  =  .027), breadth of learning (t  =  3.451, P  =  .001), teacher availability (t  =  3.138, P  =  .005), writing (t  =  2.467, P  =  .022), and overall program satisfaction (t  =  2.625, P  =  .016). However, no differences were noted in the areas of critical thinking, theoretic basis, scholarly growth, responsibility for community return, or leadership (Figure 4).

Figure 4. Degree completion time and differences in curricular satisfaction.

Discussion

We hypothesized that the 2005 and 2006 graduates from NATA-accredited PATEPs would be more than 80% satisfied with every aspect of their respective graduate program as it relates to the 2002 Standards and Guidelines3 for graduate education. However, none of the 10 standard areas had mean satisfaction levels of 80% or higher. The 3 areas with the highest mean satisfaction ratings were critical thinking, overall curricular satisfaction, and depth of learning. The 3 areas with the lowest mean satisfaction ratings were breadth of learning, desire to return and disseminate knowledge into the community, and theoretic basis of learning. Among respondents who were more than 80% satisfied, the 3 standard areas with the highest, and comparable, numbers of respondents were critical thinking (n = 87, 70.0%), scholarly growth (n = 86, 69.9%), and overall curricular satisfaction (n = 86, 69.9%). The 3 areas with the smallest numbers of respondents who were more than 80% satisfied were breadth of learning (n = 53, 43.1%), leadership (n = 70, 56.9%), and theoretic basis of learning (n = 73, 59.4%).

Previous researchers12 reported that 86% of graduates were satisfied with their nursing education (38% were very satisfied, 48% were somewhat satisfied). Yet these investigators used a traditional 5-point descriptive Likert scale, as opposed to the 10-point numeric scale we used, so drawing direct comparisons between the studies is difficult.

The mission of PATEPs is “to expand the depth and breadth of the applied, experimental, and propositional knowledge and skills of entry-level certified athletic trainers, expand the athletic training body of knowledge, and to disseminate new knowledge into the discipline.”3 Interestingly, the depth-of-learning standard produced one of the highest overall mean satisfaction scores, yet respondents were least satisfied with breadth of learning. This finding is not necessarily surprising because it coincides with one of the foci of the Standards and Guidelines3 known as “points of distinctiveness.” The Standards and Guidelines allow each program freedom to vary its subject matter in order to cater to the strengths of its faculty and resources. This freedom contrasts markedly with undergraduate curricular exposure, in which students are required to prove competency in a breadth of educational areas set forth by the Board of Certification. Therapeutic modalities, pharmacology, and risk management are among the areas in which graduates of Commission on Accreditation of Athletic Training Education–accredited undergraduate programs are required to develop entry-level proficiency. When compared with undergraduate experiences, a lack of breadth seems to exist at the graduate degree level, as no standard mandates that specific areas of study must be covered in the curriculum.

The autonomy that graduate programs are given reflects a dramatic shift in emphasis to promoting diversity of curricular content and clinical experiences.17 So while one program may focus on lower extremity injury prevention programs, another may specialize in developing the athletic training educator. Sauers and Parsons18 suggested that this directed focus could be making way for implementation of specialty certifications or residency or fellowship programs. Students could then be provided with the opportunity to gain even more specialized knowledge in a certain domain through additional coursework and clinical practice and could receive additional certifications for advanced training.

We also hypothesized that graduates of 2-year programs would be more satisfied in the 10 standard areas of a PATEP than would graduates of 1-year programs. However, our results did not support this hypothesis: although 2-year graduates reported nominally greater satisfaction than 1-year graduates, the differences were not statistically significant in any of the 10 standards. A total of 58% (n = 71) of the respondents were from 2-year programs, whereas the other 42% (n = 52) were graduates of 1-year programs. This difference in group size was relatively proportional to the number of 1-year and 2-year PATEPs at the time of the survey: 9 (75%) 2-year PATEPs and 3 (25%) 1-year PATEPs. We theorized that a longer program would provide more time for students to gain additional didactic and clinical knowledge, to complete research requirements, and to develop professionally, as well as offer more opportunities to think critically and to delve into advanced subject matter. Generally speaking, the more exposure one has to a subject, the more knowledge and experience can be gained in that area. Although length of program is among the factors students consider when selecting a graduate program, it does not necessarily affect graduates' satisfaction levels.14 

Perhaps more important than the length (quantity) of the program is the quality of the program. No authors to date have specifically addressed the quality and ranking of athletic training education programs at any level.7 In a recent survey6 of athletic training educators, respondents agreed that the greatest contributors to program quality were (1) program curriculum; (2) adequate faculty, staff, and administrative support; (3) evaluation; (4) clinical experience; and (5) research. Program evaluation through student satisfaction feedback is just one example of how the quality of a graduate program can be assessed. Typically, student assessments are also combined with evaluations from peers, faculty members, and external assessors (ie, site visitors). Yet student satisfaction as a form of self-assessment and evaluation should be a critical and necessary component for any program.

Students have various reasons for selecting a 1-year or a 2-year program; some reasons, such as acceptance into a program, are factors they cannot control. Time and cost of education are 2 of the more common reasons for selecting a 1-year program; however, a 1-year program is not the best choice for every student. Accelerated learning of both basic and advanced discipline concepts appears to be easier for those students who have received extensive undergraduate preparation in the basic sciences.17 Graduates of a nursing doctoral program had the option of academic-year courses or summer-only courses.19 Those graduates who selected the longer, academic-year coursework reported job placement in more research venues than those graduates who selected summer-only courses. Additionally, nursing doctoral students enrolled in a longer time frame of coursework tended to show more scholarly productivity. Students who take a longer amount of time to complete coursework may have more exposure to research and critical thinking application. Wilkerson et al17 concurred that the goals of these PATEPs are to expand the body of research and clinical decision-making skills, as they will ultimately lead to the development of new knowledge within our field.

Students of nursing education at both the associate and baccalaureate levels offered conflicting views regarding length of program as it relates to depth of learning. Some students questioned the amount and depth of clinical practice opportunities and, therefore, perceived a disconnection between didactic and clinical practice.12 Others felt that they were experiencing “information overload.” Yet while the attitudes of those students support a longer quantitative learning experience, other students from this same population felt that their program contained too much “busy work.”12 Although this example refers specifically to nursing programs at the undergraduate level, it is important to understand that nursing education is a professional education program that, like graduate athletic training, contains a structured, clinical experience. No apparent consensus exists as to which length of program option is the most effective, and the best choice likely depends on the individual student. Peer and Rakich1 proposed that the best way to ensure programs were able to provide quality education was through standardization via the accreditation process. Thus, ultimately it becomes the responsibility of the Post-Professional Education Review Committee and the related Standards and Guidelines3 to ensure that students are receiving the same quality of education, regardless of the focus or length of the program. No matter the overall length, programs must be able to develop points of distinctiveness relating to short-term and long-term goals and objectives. Programs must also be able to demonstrate that a plan exists for meeting these goals and that measurable outcomes result from this plan. Length of program should not be a factor, provided the goals and objectives of the program are met.

We expected that curricular program satisfaction would not differ by sex, and our results support this hypothesis. Although male and female graduates' satisfaction in the defined areas did not differ significantly, females reported greater satisfaction percentages in 5 areas: critical thinking (Δscore  =  11%), theoretic basis (Δscore  =  23%), writing (Δscore  =  4%), scholarly growth (Δscore  =  42%), and the desire to return knowledge to the community (Δscore  =  35%). Males reported greater satisfaction in 4 areas: depth of learning (Δscore  =  12%), breadth of learning (Δscore  =  9%), instructor availability (Δscore  =  36%), and leadership (Δscore  =  10%). The sex demographics of students enrolled in PATEPs are shifting, but program satisfaction does not appear to be affected. Our results indicate that the curriculum is having a similar effect on both males and females. In a clinical education satisfaction study10 of physical therapy students, the authors also hypothesized that no sex differences would exist because the previous literature had never supported any differences. They actually found an interaction between sex and phase of the clinical cycle (first versus fourth or fifth clinical rotation); however, they were unable to explain why these differences were present. The researchers suggested that additional studies be conducted in an attempt to explain satisfaction differences by sex.

We hypothesized that graduates who took more than 6 months between their undergraduate and graduate courses of study would report higher satisfaction scores for all 10 standard areas than those who entered their graduate programs immediately after attaining their bachelor's degrees. However, our results did not support this hypothesis. A total of 27 (22.0%) of the respondents reported taking more than 6 months off from school after earning their bachelor's degrees. Fifteen of those graduates took less than 1 year off, whereas the other 12 took more than 1 year off. One explanation for the lack of differences between the 2 groups could be that the time off from school was too short to produce any disparities in curricular satisfaction. Respondents were not asked to specify their reasons for taking time away from school, but potential reasons could include educational burnout, need for employment experience, lack of desire to earn a master's degree, and rejection from the program of choice. Also, students often complete their undergraduate requirements in either August or December; thus, their entrance into graduate school would be delayed merely because most schools accept applicants in the early spring for programs that begin in June or August. Students who take time off (nontraditional students) may be more prepared and focused to handle the rigors of graduate school after a short respite from coursework. Nontraditional students have proven to be more flexible than traditional students in adapting to a new clinical environment; as a result, self-confidence develops sooner.20 Self-motivation and positive personal feedback are among the other feelings reported by nontraditional nursing students.20 

Finally, we hypothesized that graduates who were able to complete their degrees in the allotted amount of time would report higher satisfaction scores on all standards compared with those who needed an extension (or additional semesters) to complete their degree requirements. Approximately 16% (n  =  20) of the survey respondents reported needing additional time to complete their degree requirements. The average additional amount of time these students required was 9.60 ± 7.29 months. Among these 20 participants, we observed no apparent differences in sex, graduation year, length of program, or site of clinical assistantship. The average number of credit hours taken was 41, average grade point average was 3.61, and mean Graduate Record Examinations score (verbal plus quantitative) was 1038. However, a difference may be associated with the type of research conducted. Fourteen of the 20 participants (70%) needing additional time to complete their degrees reported completing a thesis, whereas the remaining 6 (30%) chose the research project route. Overall, 70% (n  =  87) completed a thesis, and the rest either participated in a research project (n  =  34) or reported other (n  =  2).

Limited research exists to support the claim that completing graduation requirements within the proposed allotment of time contributes to program satisfaction. Martin and Buxton21 discussed the flexibility that education programs ought to consider in order to meet each student's individual needs. Specifically, advisement sessions, clinical experiences, and classes may all need to be offered in the evenings or on weekends in order to accommodate nontraditional students and students who need additional assistance. We emphasize that taking more time to complete a degree or developing a more flexible clinical experience does not necessarily mean that programs should be forced to lower their academic standards and expectations.

Conclusions

Our purpose was to examine the overall satisfaction levels of recent graduates (2005–2006) of NATA-accredited PATEPs as they relate to the 2002 Standards and Guidelines3 for graduate education. We concluded that graduates were not 80% satisfied across all the areas of their graduate education, as related to their didactic curricula; however, results in all 10 areas were within the defined satisfaction range. As discussed, every program contains points of distinctiveness that emphasize the faculty and resources available to that institution and that give programs more independence to create unique and yet fulfilling experiences for students. Yet as a result of this autonomy, it is difficult to assess satisfaction accurately across programs because every program is different.

We acknowledge that certain limitations were present in this research. We were unable to obtain accurate contact information for every member of the population; therefore, the sample size was reduced because not every graduate could be contacted. Although the numbers by sex and length of program in our study were nearly equal, these values do not necessarily represent the population of graduates from 2005 and 2006 accurately, as most of the population was female and most of the programs were 2 years in length. Further, we attempted to generalize results based on only 2 years' worth of graduates; including additional graduating classes would be beneficial. We also could not be assured that every program was represented proportionately in the results, so we may be lacking critical information from some of the accredited programs. For example, a 50% response rate from a larger program could represent 15 students, whereas a 50% response rate from a smaller program may represent only 3 or 4 graduates.

In an attempt to explain the wide variation in reliability, we consulted the pilot data and found that 1 participant answered 0%–10% satisfaction for several questions during the first pilot trial and 81%–90% satisfaction for the same questions during the second trial. It is likely that this volunteer alone affected the reliability data; therefore, the instrument's reliability values should be considered a limitation of this study. Finally, because we developed the survey instrument, we cannot guarantee that the closed-ended questions encompassed every possible view or opinion a respondent might have. Further, the environment in which the survey was completed and the time taken to complete it could not be controlled.

Future authors should begin to examine the points of distinctiveness for each program, including whether students are able to accurately identify what these areas are and whether the students' perceptions of their specialized areas correlate with the specialized areas as defined by their program directors. Any associated differences between the 1-year and 2-year program options should be investigated, especially with regard to the depth in learning perceived by students. It would also be interesting to administer this same survey instrument in 5 years, after programs have had the opportunity to further develop their points of distinctiveness.

Seven years have passed since the adoption of the most recent Standards and Guidelines,3 and it is time for a review to determine whether revisions are needed. Researchers should focus on both program directors and graduate students to see if and where any alterations should be made. A survey of PATEP program directors should reveal their opinions on any shortcomings in these guidelines and produce recommendations for amendments. It would also be interesting to investigate whether PATEP students are familiarized with the Standards and Guidelines3 during their graduate education and made aware of the objectives and didactic goals of their programs.

References

1. Peer KS, Rakich JS. Accreditation and continuous quality improvement in athletic training education. J Athl Train. 2000;35(2):188–193.
2. Delforge GD, Behnke RS. The history and evolution of athletic training education in the United States. J Athl Train. 1999;34(1):53–61.
3. National Athletic Trainers' Association Education Council. Standards and guidelines for post-certification graduate athletic training education programs, 2002 ed.
4. National Athletic Trainers' Association Education Council. NATA accredited post-certification graduate athletic training education programs.
5. Knight KL. Graduate education evolves. NATA News. January 2002:37.
6. Seegmiller JG. Perceptions of quality for graduate athletic training education. J Athl Train. 2006;41(4):415–421.
7. Voll CA, Goodwin JE, Pitney WA. Athletic training education programs: to rank or not to rank? J Athl Train. 1999;34(1):48–52.
8. Council on Postsecondary Accreditation. Provisions and Procedures for Becoming Recognized as an Accrediting Agency for Postsecondary Educational Institutions or Programs. Washington, DC: Council on Postsecondary Accreditation; 1975:1–3.
9. Jarski RW, Kulig K, Olsen RE. Clinical teaching in physical therapy: student and teacher perceptions. Phys Ther. 1990;70(3):173–178.
10. Stith J, Butterfield WH, Strube MJ, Deusinger SS, Gillespie DF. Personal, interpersonal, and organizational influences on student satisfaction with clinical education. Phys Ther. 1998;78(6):635–645.
11. Hermann MM. The relationship between graduate preparation and clinical teaching in nursing. J Nurs Educ. 1997;36(7):317–322.
12. Norman L, Buerhaus PI, Donelan K, McCloskey B, Dittus R. Nursing students assess nursing education. J Prof Nurs. 2005;21(3):150–158.
13. Ribak J, Notzer N, Drezne E. Evaluation of an integrative masters program in occupational health at the Tel Aviv University Medical School. Safety Sci. 1995;20(2–3):343–347.
14. Ingersoll CA, Gieck JH. Issues and outcomes in graduate education. Paper presented at: Athletic Training Educators' Conference; January 21–23, 2005; Montgomery, TX.
15. Wall CR, DeHaven MJ, Oeffinger KC. Survey methodology for the uninitiated. J Fam Pract. 2002;51(6):573.
16. Turocy PS. Survey research in athletic training: the scientific method of development and implementation. J Athl Train. 2002;37(suppl 4):S174–S179.
17. Wilkerson GB, Colston MA, Bogdanowicz BT. Distinctions between athletic training education programs at the undergraduate and graduate levels. Athl Train Educ J. 2006;1(April–Dec):38–40.
18. Sauers EL, Parsons JT. Graduate athletic training education: academic versus health professions education. Paper presented at: Athletic Training Educators' Conference; January 21–23, 2005; Montgomery, TX.
19. Sakalys JA, Stember ML, Magilvy JK. Nursing doctoral program evaluation: alumni outcomes. J Prof Nurs. 2001;17(2):87–95.
20. Sedlak CA. Differences in critical thinking of nontraditional and traditional nursing students. Nurse Educ. 1999;24(6):38–44.
21. Martin M, Buxton B. The 21st-century college student: implications for athletic training education programs. J Athl Train. 1997;32(1):52–54.

Author notes

Kevin J. Henry, MSEd, ATC; Bonnie L. Van Lunen, PhD, ATC; Brian Udermann, PhD, ATC, FACSM; and James A. Oñate, PhD, ATC, contributed to conception and design; acquisition and analysis and interpretation of the data; and drafting, critical revision, and final approval of the article.