Context: Accrediting bodies and universities increasingly require evidence of student learning within courses and programs. Within athletic training, programmatic assessment has been a source of angst for program directors. While there are many ways to assess educational programs, this article introduces 1 systematic approach.

Objective: This article describes the steps necessary to create an assessment plan that meets the needs of the accrediting body, the program, and the athletic training students.

Background: Assessment helps determine whether the program's goals and objectives are meeting the athletic training students' needs. Program review cannot be accomplished in a helpful manner unless the assessment plan is systematic, planned, and ongoing.

Recommendation(s): Effective and systematic assessment plans provide a framework for program evaluation, modification, and improvement.

Conclusion(s): Assessment should be an ongoing process that creates opportunities for active learning. Clinical education needs to be included in the overall programmatic assessment, as those courses provide application of didactic learning.

Changes in higher education practices during the past 25 years have led to an increased focus on student outcomes assessment.1  In athletic training education, these changes have become particularly important as part of the program's annual review. Assessment is imperative for accreditation purposes, crucial for demonstrating that students are truly learning what an instructor intends, a valuable tool for updating and/or adapting courses and programs, and a justification for resources to maintain or improve programs.2,3  It is defined as the systematic collection and analysis of information to improve student learning,4  which, for many programs, is essential to help reach goals and objectives that are national exam dependent. While didactic course outcomes are easily assessable through exams, graded rubrics, and other direct appraisals, systematic assessment of clinical education may prove more difficult, as demonstrated improvement is often hard for instructors to track in the clinical setting. For example, traditional exams do not assess demonstrated professionalism or the quality of patient interactions. While many instructors and program administrators discuss student progress during clinical site visits, these conversations may not be enough to determine whether a preceptor is providing an adequate learning environment. For these reasons, among others, it is imperative for clinical education instructors and program administrators to create a well-planned, systematic assessment protocol that adequately and comprehensively assesses both didactic and clinical education.

Clinical Education in Assessment

In simple terms, clinical education is the student experiencing the world of work at a site outside the traditional academic setting of a classroom.4  In the health care professions, this often presents as providing hands-on patient care in a hospital, clinic, or other health care facility. Clinical education offers high-impact practice, not just classroom scenarios, so the student must use higher-level decision-making skills through a final capstone experience or, in some cases, an ongoing curricular experience.5  These clinical education opportunities, whether a capstone or an ongoing experience, are oriented toward professional preparation: the student is immersed in the patient-care setting under the guidance of a preceptor, truly experiencing the professional duties on a daily basis.

One of the most beneficial outcomes of clinical education is the potential for authentic assessments through realistic scenarios, true-to-life evaluations, and experiences that cannot occur in the didactic courses. For example, the student may be able to follow the step-by-step patient progression from initial injury or illness diagnosis through the rehabilitation process, with all steps in between. The students might also have the opportunity to demonstrate comprehensive learning in their major through a culminating performance assessment that measures the student's ability to perform a task, such as the Clinical Integration Proficiencies found in the National Athletic Trainers' Association (NATA) Athletic Training Education Competencies.6  Not only is student knowledge being tested, but there is also an opportunity to assess patient-care skills, professionalism, and the ability to think critically in a potentially fast-paced environment.7 

Components of an Assessment Plan

An assessment plan allows the program to quantify effectiveness by evaluating specific areas of focus. This allows the program to demonstrate that its graduates are meeting desired learning goals and objectives. An effective programmatic assessment plan prevents the collection of random data that are of no use to the program or not used for a distinct purpose (ie, program or course improvement). The main components of the assessment plan include the mission, goals, outcomes, and indicators (Table). Each of these components contributes an essential factor to the overall assessment plan and is intricately involved in the program's functionality.4 

Table. Assessment Buzzwords

Initially, expectations for graduates must be defined. This is followed by the creation of smaller goals for them to accomplish throughout the program. In athletic training education, 1 way to create this plan is to incorporate learning over time, in which students are expected to learn a concept didactically, build that knowledge throughout the entire program, and transfer the knowledge and skills to a clinical setting. Learning over time denotes that there are different levels at which a student may demonstrate competency. For instance, when first assessed, students may be at a basic level and then become more proficient as they move throughout the curriculum. The assessment plan takes into consideration the maturation that should be taking place longitudinally, cognitively, and at a psychomotor level during the student's development.8,9  By defining expectations for the students, the mission of the program can be associated with collected data.

The mission is the purpose of the program and includes the perceived value for students, its main functions, and the stakeholders.10  It is important that the program's mission contributes to or supports the department, college/school, and university's missions as well.11  An applicable assessment plan is going to flow from and communicate the institution's various mission statements.4,11,12 

Once the mission is defined, program goals can be created. These are more specific than the mission, but still somewhat broad and long term. They should include the major roles of the program and are often closely related to the professional standards of practice. For example, 1 athletic training program goal may be that students demonstrate proficiency in creating therapeutic interventions for orthopaedic injuries.

Outcomes are more specific activities that are directed toward specific goals.13  For academic units, these are the desired student learning outcomes (SLOs). The SLOs are often referred to as objectives in the syllabus. Student learning outcomes must be linked to the program goals and mission. For example, if we continue with the program goal outlined above, the specific outcome might be that the students create a therapeutic intervention program for a specific injury as a project for class, with a minimum rubric score of 75%. Individual programs have the autonomy to determine the level of achievement for their students to be considered at a level of competence,1 with a score equivalent to a C often used.

Indicators are the measurable activities that quantify these SLOs.1  When creating indicators, there are 2 questions to consider: (a) What are the criteria for success? and (b) How will you know if the SLO has been achieved? Devising activities that address outcomes can be easy; devising objective, measurable activities that reflect effectiveness can be more difficult. An example of a poorly written indicator would be, “The student will be able to learn the correct way to use a stethoscope.” Having a student learn is a good thing, but how can it be measured? A better written indicator would be, “The student will demonstrate how to correctly use a stethoscope for auscultation of the heart and lungs.” This could be measured through a practical demonstration with the skill ranked on whether the correct sites were used and/or feedback from the model if a standardized patient was used.13 

Creating an assessment plan can be a daunting task, but some difficulties can be avoided. For example, it is important to keep in mind that not all drafts of an assessment plan will be perfect. Rather than waiting for perfection, it is better to start the process, review the plan annually for potential areas of improvement, and modify as warranted.14,15  Another potential issue is taking measurements that are not related to goals or collecting so many indicators that the results become overwhelming.4  Keep in mind that there is no requirement to assess every SLO or every program goal each year. Focus can be placed on different areas in various years or semesters, and some ongoing assessments (eg, alumni surveys) can also occur. This should not be a 1-person task; enlisting others will help develop an assessment plan that is beneficial to all faculty, administrators, and students in the program.4,11,15 

How to Create and Implement an Assessment Plan

Identify Program Standards

Program standards are the building blocks for all other aspects of the assessment process. When identifying program standards, begin with the NATA Athletic Training Education Competencies, which describe the knowledge, skills, and clinical abilities to be mastered by students.6  State boards of medicine and department or college standards should also be considered during the identification process.1,16 

Create Program Goals

The program goals describe broad learning goals and concepts of what the program expects the students to learn. These goals are best expressed in general terms, such as professional decision making or communication skills, rather than more specific terms, such as skill in differential diagnosis or the ability to speak to coaches and parents. These need to be created at the programmatic level, but should align with the department, college, and university's goals and objectives. It is important to remember that program goals should include contributions from both clinical and didactic portions of the curriculum. Clinical components may be found in the outcomes of multiple program goals.

Identify Student Learning Outcomes for Clinical Education

When creating the SLOs for syllabi, good outcomes are: (a) learner centered, (b) key to the course's mission, (c) meaningful for faculty and students, (d) representative of a range of thinking skills, and (e) measurable.2  Best practice dictates the use of Bloom's taxonomy (Figure 1).5,8  When using Bloom's taxonomy, keep in mind that the base of the triangle is the lowest order of demonstrating knowledge (remembering–knowledge), while moving toward the apex requires higher-order thinking skills to demonstrate competency (creating–synthesis).2  When identifying SLOs, the closer to the apex, the greater the cognitive abilities required. Using Bloom's taxonomy helps ensure the outcomes are objective, measurable, and incorporate learning over time. As athletic training educators, a goal should be to develop critical thinkers. Early in the program, students will be expected to recall and understand information they have been taught. Later, through practice and time on task, these students will develop more complex strategies and the ability to break down information, put ideas to work, and judge the value of evidence based on definite criteria.4  An objective for a first-year student may be, “The student will define what each part of HOPS [history, observation, palpation, special tests] means during an ankle evaluation,” whereas a second-year student's objective may be, “The student will perform an ankle evaluation using evidence-based practice.”

Figure 1. Bloom's taxonomy.32 

Distinguish Authentic Assessment Assignments

Assignments must be created within the didactic and the clinical portions of the educational program, with those for clinical education having their own characteristics. Students are not necessarily going to be writing papers or handing in assignments; therefore, clinical education incorporates creative ways of assessing student learning and progress.

Collecting the type of evidence a program wants requires the use of direct and indirect assessment methods. In direct assessment, students demonstrate knowledge and skills on some type of instrument, such as an objective test or a graded rubric for an essay, presentation, or practical exam.17  While common in most didactic courses, direct assessment does not always lend itself to the clinical setting. However, a properly formulated rubric, whether a simple introductory checklist of skills or a more holistic rubric that provides more robust feedback, can allow a preceptor to systematically and directly assess the student, and the data from these rubrics should be shared with the students to enhance learning.7,18  Indirect assessment asks students to reflect on their learning rather than provide a demonstration, via exit surveys, interviews, journals, portfolios, or alumni surveys. Formalized indirect assessments become important in clinical education because the strengths and weaknesses of a program are tied to these assessments.

A combined use of direct and indirect assessment methods will benefit clinical education.19  A direct assessment technique in the clinical setting is the demonstration of clinical skill proficiency via a rubric or checklist. In addition, preceptors can be asked to give feedback on the affective, cognitive, and/or psychomotor skills the student used in the clinical setting, rather than the ones performed in the classroom. Indirect assessment techniques could include an experiential journaling activity log or the creation of a list of skills learned and demonstrated. Students may also be asked to evaluate their clinical site or the preceptor's ability to interact and provide an effective learning environment. Programs also need to demonstrate that what is being taught in the classroom is being applied appropriately by students in a real-world situation.5  Upon graduation, surveys of both the graduate and the employer can ask how well the program prepared the student for the workforce.

Create Assessment Methods

Developing assessment methodology and tools can be time consuming upfront but, if done correctly, can provide valuable information for program improvement. In a clinical setting, there are several types of data that can be collected, including assessments for student performance, preceptor effectiveness, and site quality. The assessment method is the combination of tools used to collect these data for analysis and program/course improvement.11 

For clinical education assessment, types of data collection can include: (a) student self-evaluations, (b) preceptor evaluations of the student, (c) student evaluations of the preceptor, (d) evaluations of the site, (e) alumni evaluations, and (f) employer evaluations.4  Data can be collected in the form of rubrics, open-ended questions, and/or testimonials on paper, electronically, or by interview. Many learning management systems have features that allow instructors to create, deploy, and collect these assessments, and the resulting data can be analyzed to determine if the mission, goals, and objectives are being met, leading to improvement within the program itself.11 
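As one illustration of how such exported data might be summarized, the following is a minimal Python sketch that aggregates rubric scores per SLO and reports the proportion of students reaching a hypothetical 75% cutoff; the SLO names, sample scores, and file layout are assumptions for illustration only, not a required format.

```python
import csv
import io
from collections import defaultdict

# Hypothetical rubric export: one row per student per SLO, with the rubric
# score earned and the points possible. A real export from a learning
# management system would be read from a file instead of this embedded sample.
SAMPLE_EXPORT = """student_id,slo,score,possible
s01,Therapeutic intervention design,80,100
s02,Therapeutic intervention design,70,100
s03,Therapeutic intervention design,92,100
s01,Ankle evaluation,18,20
s02,Ankle evaluation,14,20
s03,Ankle evaluation,16,20
"""

CUTOFF = 0.75  # example benchmark: 75% of rubric points


def summarize_slo_attainment(rows):
    """Return {slo: proportion of students scoring at or above the cutoff}."""
    met = defaultdict(int)
    total = defaultdict(int)
    for row in rows:
        pct = float(row["score"]) / float(row["possible"])
        total[row["slo"]] += 1
        if pct >= CUTOFF:
            met[row["slo"]] += 1
    return {slo: met[slo] / total[slo] for slo in total}


if __name__ == "__main__":
    reader = csv.DictReader(io.StringIO(SAMPLE_EXPORT))
    for slo, proportion in sorted(summarize_slo_attainment(reader).items()):
        print(f"{slo}: {proportion:.0%} of students met the {CUTOFF:.0%} rubric cutoff")
```

A summary of this kind could feed directly into the data report and benchmark review described in the next sections.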

When constructing the assessment methods, a pilot plan should be implemented.2  Even if the data collection method was borrowed or modified from another source, it should be piloted prior to full-scale use due to variations within programs. The pilot's purpose is to make sure the information collected from the assessment method is valid3,7,16  and provides feedback in a format usable by the program and/or instructors.2  Oftentimes, the assessment questions and instructions that were clear to the writer confuse the preceptor or the student. It is important to actively seek out feedback and not rely on individuals to volunteer the information.2  Giving specific response choices to the user, rather than open-ended short-answer questions, is also important, as people are more likely to complete a survey if they do not have to write lengthy responses. However, allowing the option for short-answer questions is acceptable for those who like to expand on their answers.2 

Using Data for Program Improvement

An assessment plan is not effective unless it is used as intended. This requires having an overall program assessment plan that follows an assessment loop (Figure 2).9,20  The assessment loop consists of several steps. Once the first 3 steps in the assessment process (ie, identifying the program standards, creating/updating the program goals, and creating/updating the SLOs) are completed, the next step is to run the data report. It is pointless to create an assessment plan and collect the data if the data are not analyzed or interpreted.20  Benchmarks (standards or points of reference against which performance may be compared or assessed) should be set prior to this analysis.1  For example, a benchmark for clinical education could be a 75% satisfaction or positive score for each Likert-scaled item on the preceptor evaluation of the student. When setting up the assessment plan, know what is considered a reasonable benchmark. Each program must decide upon and set its own benchmarks, and there are philosophical considerations: Does it look better to reach easily met benchmarks, or should the benchmarks be attainable, but not at 100%?

Figure 2. Assessment loop.
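To make the benchmark example above concrete, the following is a minimal sketch that checks each Likert-scaled item on the preceptor evaluation of the student against a 75% positive-response benchmark, assuming ratings of 1 to 5 with 4 and 5 counted as positive; the item names and ratings are illustrative assumptions, not a prescribed instrument.

```python
# Hypothetical Likert data: item -> list of preceptor ratings (1-5) for one cohort.
evaluations = {
    "Demonstrates professionalism": [5, 4, 4, 3, 5, 4],
    "Communicates with patients":   [4, 5, 3, 3, 4, 2],
    "Applies clinical skills":      [5, 5, 4, 4, 4, 5],
}

BENCHMARK = 0.75   # 75% of responses should be positive
POSITIVE = {4, 5}  # assumed definition of a "positive" rating

for item, ratings in evaluations.items():
    positive_rate = sum(r in POSITIVE for r in ratings) / len(ratings)
    status = "meets" if positive_rate >= BENCHMARK else "below"
    print(f"{item}: {positive_rate:.0%} positive ({status} the {BENCHMARK:.0%} benchmark)")
```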

The next step in the process, data review, is often the most overlooked, but must be completed so that an assessment report can be written. This is an opportunity to look at the program as a whole, use benchmarks to determine what is being done well, and decide what areas can be improved upon and how.20,21  Remediation plans based on outcomes may or may not be necessary.20  Potential assessment plan modification can occur in any area, whether it is the types of assessments being used, the rubrics, or even the entire program assessment plan,22  but should be based on multiple trends/patterns, not on a single deficiency.20  Programs should also have a group of individuals who review the collected data20,23  to identify problem areas, create a remediation plan to effect change, and enact improvements at whatever level (eg, instructor, evaluation tool, program goal) necessary.15,20 

The assessment loop helps evaluate a program, but in order for it to be effective, the loop must be closed.1,2,20  Questions to consider include: (a) Are results being reported and used? (b) What changes are going to be made in the program as a result of the outcomes measurement? and (c) Are the data being used regularly to assess program effectiveness?2  Assessment data are used to identify patterns of strength and weakness and can link internal processes for continuous improvement, program review, budgeting and planning, teaching and learning improvement, and assessment improvement.4,11,15,20,24,25  This is useful in determining how or when to modify the curriculum, individual courses, or assignments to improve student learning. For example, a program may identify a demonstrated pattern of students scoring low in therapeutic modalities throughout the curriculum and a concurrent need to alter the course or offer additional learning opportunities. On the other hand, sometimes the assessment tool itself needs to be modified; if the benchmark criterion is set too low and all students demonstrate proficiency, perhaps the benchmark should be raised.
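As a minimal sketch of how such a multi-semester pattern might be flagged before acting on it, the following example assumes per-semester mean scores by content area stored in a simple dictionary; the content areas, scores, benchmark, and flagging rule are illustrative assumptions, in keeping with the recommendation to act on trends rather than a single deficiency.

```python
# Hypothetical mean scores (0-100) by content area across consecutive semesters.
scores_by_area = {
    "Therapeutic modalities": [68, 71, 66, 69],
    "Orthopaedic evaluation": [82, 85, 84, 88],
    "Emergency care":         [79, 81, 83, 80],
}

THRESHOLD = 75      # assumed benchmark for a content-area mean score
MIN_SEMESTERS = 3   # flag only persistent patterns, not a single low semester

for area, scores in scores_by_area.items():
    low_semesters = sum(score < THRESHOLD for score in scores)
    if low_semesters >= MIN_SEMESTERS:
        print(f"Review '{area}': below benchmark in {low_semesters} of {len(scores)} semesters")
```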

In many cases, the most important reason for closing the loop is to respond to various audiences, including faculty, departmental personnel, students, and external regional and professional accreditors.4,25  By analyzing, reporting, and sharing results, a program is communicating how the information collected will be used for improvement. An effective way to share information is by holding assessment days, where program administrators and faculty meet to discuss the assessment data. Within this meeting, best practices can be shared, areas needing improvement identified, and a plan to address these issues and enhance the overall program conceived.

Incorporating Clinical Education into Assessment

Often neglected, clinical education offers a multitude of opportunities for assessment throughout the curriculum. Some ways to incorporate clinical education into programmatic assessment include:

  • written goals and descriptive journals kept by students throughout the course of the program,

  • weekly reports and descriptions of patient contact hours to offer insight on the student's growth and confidence over time at each site,

  • professional dispositions of the student written by the preceptor,

  • preceptor's assessment of student skill acquisition and clinical application,

  • final reports by 1 or both parties to summarize experiences, and

  • a cumulative portfolio showing the value of the experiences to demonstrate learning over time.5 

Assessment is also imperative for program growth. Clinical education provides insights as to whether or not programmatic goals and objectives are being adequately addressed in the didactic courses so that the clinical expectations are representative of what the students have learned. Clinical education allows preceptors to evaluate students' professional skills and dispositions and gauge how well they apply skills learned in the didactic setting to real patients. For example, if a skill is taught didactically with classmates, it is important to know if that student can transfer that skill to a patient in the field.

Clinical education assessment ensures that students are having authentic experiences that meet the NATA Athletic Training Education Competencies6  and satisfy accreditation standards. The sequence of learning starts in the classroom, but the skills, refinement, and more advanced learning will take place in the clinical portion of education. Creation of a clinical education assessment plan will link the didactic knowledge with the acquisition of greater skills and proficiency in the clinical settings. Use of the assessment plan will ensure the needs of the students are being met in both the didactic and clinical courses to demonstrate the cognitive and psychomotor skills necessary for the entry-level athletic trainer.

References

1. Martin M, Vale D. Developing a program-assessment plan. Athl Ther Today. 2005;10(5):40-42.
2. Cartwright R, Weiner K, Streamer-Veneruso S. Student Learning Outcomes Assessment Handbook. Montgomery County, MD: Montgomery College; 2007. Available at: www.clark.edu/tlc/outcome_assessment/documents/StudentLearningOutcomesAssessmentHandbook/pdf. Accessed November 11, 2014.
3. Schilling JF. Quality of instruments used to assess competencies in athletic training. Athl Train Ed J. 2012;7(4):187-197.
4. Palomba CA, Banta TW. Assessment Essentials: Planning, Implementing, Improving. San Francisco, CA: Jossey-Bass; 1999.
5. Greater Expectations Project on Accreditation and Assessment. Criteria for Recognizing “Good Practice” in Assessing Liberal Education as Collaborative and Integrative. Washington, DC: Association of American Colleges and Universities; 2002. Available at: http://www.aacu-edu.org/gex/paa/assessment.cfm. Accessed November 11, 2014.
6. National Athletic Trainers' Association. Athletic Training Education Competencies. 5th ed. National Athletic Trainers' Association Web site. 2015.
7. Thompson GA, Moss R, Applegate B. Using performance assessments to determine competence in clinical athletic training education: how valid are our assessments? Athl Train Ed J. 2014;9(3):135-141.
8. Criteria for recognizing “good practice” in assessing liberal education. Association of American Colleges and Universities Web site. Available at: http://www.aacu-edu.org/paa/assessment.cfm. Accessed November 11, 2014.
9. Core principles of effective assessment. Australian Universities Teaching Committee Web site. 2014.
10. How to Write a Program Mission Statement. University of Connecticut Web site. Available at: assessment.ucon.edu/docs/HowToWriteMission.pdf. Accessed November 5, 2014.
11. Huba ME, Freed JE. Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Needham Heights, MA: Allyn and Bacon; 2000.
12. American Productivity and Quality Center. Measuring Institutional Performance Outcomes: Consortium Benchmarking Study Best-in-Class Report. Houston, TX: American Productivity and Quality Center; 1999.
13. Kahanov L, Eberman LE. Defining outcomes and creating assessment tools for AT education, part 1. Int J Athl Ther Train. 2010;15(6):41-44.
14. American Association for Higher Education. Nine Principles of Good Practice for Assessing Student Learning. Sterling, VA: Stylus Publishing, LLC; 1996.
15. Banta TW. Characteristics of effective outcomes assessment: foundations and examples. In: TW Banta and Associates, eds. Building a Scholarship of Assessment. San Francisco, CA: Jossey-Bass; 2002:261-283.
16. Eberman LE, Kahanov L. Defining outcomes and creating assessment tools for AT education, part 2. Int J Athl Ther Train. 2011;16(1):37-41.
17. Ewell P. CHEA Workshop on Accreditation and Student Learning Outcomes. Council for Higher Education Accreditation Web site. Available at: http://www.chea.org/pdf/workshop_outcomes_ewell_02.pdf. Accessed November 11, 2014.
18. Stevens DD, Levi AJ. Introduction to Rubrics. Sterling, VA: Stylus Publishing, LLC; 2005.
19. Steen LA. Assessing assessment. In: Gold B, Keith SZ, Marion WA, eds. Assessment Practices in Undergraduate Mathematics. Washington, DC: Mathematical Association of America; 1999:1-6.
20. Eberman LE, Kahanov L, Williams RB. Defining outcomes and creating assessment tools for AT education, part 3. Int J Athl Ther Train. 2011;16(2):27-32.
21. Suskie L. Fair assessment practices: giving students equitable opportunities to demonstrate learning. AAHE Bull. 2000;52(9):7-9.
22. Eder DJ. Installing authentic assessment: putting assessment in its place. Southern Illinois University Edwardsville Web site. Available at: http://www.siue.edu/∼deder/assess/denver0.html. Accessed November 11, 2014.
23. Council of Regional Accrediting Commissions. Regional Accreditation and Student Learning: A Guide for Institutions and Evaluators. Southern Association of Colleges and Schools Commission on Colleges Web site. 2014.
24. Association of American Colleges and Universities. Our Students' Best Work: A Framework for Accountability Worthy of Our Mission. Washington, DC: Association of American Colleges and Universities; 2008.
25. Driscoll A, Cordero de Noriega D. Taking Ownership of Accreditation: Assessment Processes that Promote Institutional Improvement and Faculty Engagement. Sterling, VA: Stylus; 2006.
26. Assessment primer: goals, objectives and outcomes. University of Connecticut Web site. Available at: assessment.ucon.edu/primer/goals1.html. Accessed November 5, 2014.
27. What is the Difference Between Course Objectives and Learning Outcomes? San Francisco State University Web site. 2014.
28. How to Write Program Objective/Outcomes. University of Connecticut Web site. 2014.
29. Standard. Merriam-Webster Web site. Available at: http://www.merriam-webster.com/dictionary/standard. Accessed January 8, 2015.
30. Benchmark. Merriam-Webster Web site. Available at: http://www.merriam-webster.com/dictionary/benchmark. Accessed January 8, 2015.
31. Learning Over Time. University of Delaware Web site. 2014.
32. Anderson LW, Krathwohl DR, eds. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Boston, MA: Allyn and Bacon. University of Wisconsin Web site. Available at: http://www.uwsp.edu/education/lwilson/curric/newtaxonomy.htm. Accessed November 11, 2014.

Author notes

Dr Moffit is currently Director for the Master of Science in Athletic Training Program in the Sport Science and Physical Education Department at Idaho State University. Please address all correspondence to Dani M. Moffit, PhD, LAT, ATC, Sport Science and Physical Education, Idaho State University, 921 South 8th Avenue, Stop 8105, Pocatello, ID 83209-8105. [email protected].