Almost 15 years ago, the first epidemiology studies1 based on National Collegiate Athletic Association (NCAA) injury-surveillance data were published in the Journal of Athletic Training. In these reports, researchers detailed the injury rates and trends that occurred in intercollegiate athletics. The purpose of these reports was to provide the basis for athletic trainers to advance the recognition, rehabilitation, and prevention of injuries and illnesses related to athletic participation. A robust understanding of the foundational aspects of epidemiology is necessary to read, critique, and incorporate epidemiologic evidence into clinical practice, education, and research. The purpose of this commentary is to provide a brief overview of how to consume and critique epidemiology research, including (1) a review of basic concepts, (2) how to critically appraise the methods and results of epidemiology research, and (3) how to apply epidemiology research to clinical practice, education, and research.
Epidemiology is the study of disease distributions and patterns within a particular population of interest.2 That population of interest is defined by certain characteristics; in research, we call them the inclusion and exclusion or selection criteria. In clinical practice, the population of interest may be defined by sport, age, level of play, sex, or other activity. After identifying our population of interest, we need to understand the typical disease, injury, or behavioral patterns in that population. This information helps clinicians determine what should be done to reduce the risk of acquiring a disease, reduce transmissibility, and plan interventions if the disease is acquired. In athletic training and sports medicine, epidemiology has played a critical role in guiding clinicians, patients, and other stakeholders in their decisions through systematic injury surveillance in a variety of professional settings.
The National Collegiate Athletic Association (NCAA) has maintained some form of injury surveillance since 1982, beginning with the NCAA Injury Surveillance System.3,4 Over time, this program has evolved through a series of adaptations and advancements to reach its current state, which is now known as the NCAA Injury Surveillance Program (NCAA-ISP). The NCAA-ISP works to capture data related to injury incidence and sport exposure among athletes participating in all sports and at all levels that are sponsored by the NCAA. Athletic trainers (ATs) at participating schools provide deidentified injury data to the NCAA-ISP via electronic medical record systems. These data are used by various stakeholders and researchers to generate research-driven programs, policies, rules, and education aimed at preventing, mitigating, and treating sports injuries.
In 2009, the NCAA partnered with the Datalys Center, an independent nonprofit organization that acts as a management system and clearinghouse for data in the NCAA-ISP. The Datalys Center currently manages several epidemiology programs, including the NCAA-ISP; the High School National Athletic Treatment, Injury and Outcomes Network Surveillance program; the High School Reporting Information Online; the Concussion Assessment Research and Education Project; and the Consortium for Catastrophic Injury Monitoring in Sport. As part of managing these programs, the Datalys Center provides participating institutions with data and disseminates important epidemiology research via peer-reviewed publications and scientific conferences. Data from the NCAA-ISP, as well as the other programs, provide metrics for sport participation, sport-related injury and illness incidence, and injury and illness frequencies and patterns. This information is then used by practicing ATs, researchers, and other stakeholders, including administrators and governing bodies, to guide decision making with the goal of improving sports safety. Descriptive epidemiology findings from the NCAA-ISP have been used by researchers to develop hypotheses for risk-factor identification that can then be studied using etiologic research designs. For example, to improve safety, research from the NCAA-ISP data has been used to guide rule changes in multiple sports. The NCAA has also partnered with the National Athletic Trainers' Association to ensure that findings from the NCAA-ISP are readily available to ATs to drive clinical practice, research, and education. The purpose of this commentary is to provide a brief overview of consuming and critiquing epidemiology research, including (1) a review of basic concepts, (2) how to critically appraise the methods and results of epidemiology research, and (3) how to apply epidemiology research findings to clinical practice, education, and research.
A BRIEF PRIMER OF EPIDEMIOLOGY
To understand sports-related epidemiology research, a working knowledge of the most commonly used terms, especially as related to the Datalys Center reports in the Journal of Athletic Training, is necessary. The text, Table, and Figures in this document describe some of the most commonly used terms in epidemiology research. (This list is not exhaustive.) From this point, we will focus more on the terms used in sports-injury epidemiology research:
Determinant: Any factor or exposure that can increase or decrease the chance of having the target condition. The term risk factor describes a determinant that increases the chance of injury.5 For the purpose of this paper, we use risk factor to refer to a determinant that can cause injury.
Exposure: Contact or experience with any determinant (risk factor) that increases the chance of experiencing the event (being injured). In athletics, exposure can mean “exposure to a risk factor” and “participating in a competition or practice.”2 Although these appear to be 2 different definitions, they share the same underlying meaning—contact with something, whether it is sport participation or another risk factor, that increases the individual's chance of being injured.
Incident: A new episode of illness or injury. Other terms that may be used include event, condition, outcome, or result. In the context of sports, injury is a common incident that is tracked.
CRITIQUING THE METHODS OF EPIDEMIOLOGY STUDIES
Perspective
When critiquing the methods of epidemiology research, one should consider several key questions. First, we identify whether the study was prospective or retrospective. In a retrospective study, only an association between the risk factor and the injury can be drawn. A retrospective study takes a current snapshot of individuals and looks back at what they may have been exposed to in the past. Using retrospective studies, researchers help to identify and describe potential risk factors. In a prospective study, a cause-and-effect relationship (causality) can be drawn between the exposure to the risk factor and the injury. A prospective study starts with a current snapshot of individuals and their exposure to risk factors (a baseline) and then follows them into the future. In prospective studies, researchers help to predict the influence of a risk factor on developing an injury. Data from the NCAA-ISP are used for retrospective analyses; epidemiologic researchers using this database and others have driven studies and subsequent rule changes to increase sports safety. For example, in football, kickoff returns account for many concussions. A rule change to move the kickoff line from the 35-yard to the 40-yard line was proposed. Football teams in the Ivy League experienced an approximate 80% reduction in kickoff concussions after the rule change.6 This is an example of using epidemiologic evidence to evaluate the effect of rule changes.
External Validity (Generalizability)
External validity is the extent to which findings from a study can be generalized to the population of interest from which the study sample was drawn. As when critiquing any research, it is important to consider the population of interest being studied. This helps determine the population to which the results can be generalized.7 For example, 280 schools in Divisions I, II, and III contributed injury data to the NCAA-ISP during the 2018–2019 academic year. Although not all schools in the NCAA contributed to the database, those that did provided an extensive sample of injuries that occurred in NCAA sports; however, not all injuries, mechanisms, and return-to-play timelines apply to every athlete. It is important to evaluate the selection criteria within a specific study to ensure that the study population is adequately defined. Furthermore, epidemiology studies tend to be large—including thousands or even hundreds of thousands of individuals or data points. If a sample was drawn to represent the overall population of interest, it would be important to determine if the sample was representative of the population for which inferences (logical explanations and predictions about the population) will be made.5
Internal Validity
Internal validity is the extent to which a cause-and-effect relationship can be drawn between a determinant (risk factor) and an outcome (injury).8 To assess internal validity, we identify sources of potential bias. Bias is a threat to internal validity; however, the presence of a potential bias does not necessarily mean that internal validity was compromised. Furthermore, not all bias is necessarily bad; it may simply exist, and the reader must judge the seriousness of each potential threat to internal validity.9
How Exposures Were Measured
In epidemiology studies, we want to ensure that exposures are defined clearly and are being measured or assessed the same way in all individuals. If the exposures are not measured using the same methods, bias or error could be introduced. For example, in athletics, the methods for capturing participation exposure have varied widely. Some injury-surveillance programs, such as the NCAA-ISP, count participation exposures as “participating in 1 school-sanctioned practice or competition,” regardless of whether the person plays.3 In other words, even if the individual suited up and was on the bench, the person still experienced a participation exposure. In the context of the NCAA-ISP, 1 student-athlete participating in 1 school-sanctioned practice or competition is considered 1 athlete-exposure, regardless of the time spent participating in that session or event.
In other injury-surveillance databases, participation exposure may be measured differently.10 For example, participation might be measured in 15- or 30-minute blocks or by the quarter, half, or other playing period. Any of these participation measurement blocks can be valid. Shorter blocks more accurately represent actual exposure but demand far more resources for successful data capture. Longer time blocks may be less accurate, but they are easier to track efficiently and may be more sustainable for long-term collection and across many institutions or programs. Within a single injury-surveillance database, it is important that the participation exposure be clearly defined and the collection method be consistent. The measurement of all exposures should be clearly defined to minimize the risk of bias.
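To make the contrast between exposure definitions concrete, the short sketch below tallies one hypothetical practice session in two ways: as athlete-exposures and as minutes of participation. The roster and minutes are invented for illustration and do not come from any surveillance program.

```python
# Hypothetical practice roster: (athlete, minutes of actual participation)
session = [("A", 90), ("B", 90), ("C", 45), ("D", 10), ("E", 0)]  # E dressed but never played

# Athlete-exposure (AE) denominator, as in the NCAA-ISP: every athlete in the
# session counts as 1 AE, regardless of minutes played (even the athlete on the bench).
athlete_exposures = len(session)                                  # 5 AEs

# Time-based denominator, as in some other surveillance programs: sum the minutes
# (or 15- to 30-minute blocks) each athlete actually participated.
participation_minutes = sum(minutes for _, minutes in session)    # 235 minutes

print(athlete_exposures, participation_minutes)
```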
Investigator Blinding
As with other study designs, researchers need to determine if there was blinding of the investigators who were measuring either the exposures or the presence of injury. To minimize the risk of bias, investigators who assess the injury status of the individual should be blind to the exposure status. Similarly, the investigators who measure the exposures should be blind to the outcome status of the participants.
How the Outcomes Were Measured
As with exposures, the outcomes of interest (the injury events) need to be well defined. In sports medicine, defining injury can be difficult, and there can be disagreement about how certain injuries are defined. Within the NCAA-ISP, an injury is defined as having “occurred during participation in an organized intercollegiate or interscholastic athletic practice or competition and required medical attention by the team certified AT and/or the team physician.”3
Injury Severity
Although many injury-severity grading scales are based on tissue damage, severity can also be based on the time lost from participation. The way that time lost from participation is defined and documented must be consistently applied. For the NCAA-ISP, time loss is defined as “the time between the original injury and return to participation at a level that would allow competition participation.”3 Historically, time-loss injuries have been defined as those that restrict participation for at least 24 hours. Injuries that do not meet this criterion are considered non–time-loss injuries.4 Furthermore, within the NCAA-ISP, time loss for certain injuries is often also explored using specific temporal increments, such as 1 to 6 days, 7 to 21 days, and >21 days (>3 weeks).11,12 This allows researchers and clinicians to infer injury severity based on how much time was actually lost due to injury.
CRITIQUING THE RESULTS OF EPIDEMIOLOGY STUDIES
Descriptive Epidemiology Measures
Descriptive epidemiology is used to describe rates of injuries or disease. Investigators describe the frequencies and patterns observed but do not propose hypotheses about the causes (risk factors) of the frequencies and patterns. The NCAA-ISP reports are also descriptive epidemiology reports. In these reports, injuries that occur during athletic participation are described based on levels of exposure (event type, competition level, and season segment) and injury characteristics (time loss, body part, diagnosis, and activity at the time of injury). The first Journal of Athletic Training special issue on sports epidemiology, which was published in 2007, included a summary of the descriptive injury rates for individual sports.1 Although the authors did not seek to determine the direct causes of the injury trends identified, many studies since then have built on the evidence to prospectively link specific risk factors to injury.
Several terms that are used to describe injury patterns and trends are defined in the Table and illustrated in Figure 1. These terms are used to define an injury in a specific population of interest.
Figure 1. Representation of important terms in descriptive epidemiology. A, Case. B, Prevalence. Twenty of 100 individuals in the population are injured, so the prevalence of injury is 20 of 100, or 20%. C, Incidence. Ten new incidents (depicted in the black rectangle) occur over 1 week, so the prevalence of injury is 30 of 100, or 30%; the incidence rate is 10 new incidents per 100 individuals per week, or 1 injury per 10 person-weeks.
A case is an individual with the injury of interest. Prevalence is the number of current cases in a given population.2 An incident is an injury event that occurs. Incidence is a frequency count of new injuries in a given timeframe. An incidence rate describes the number of events that occur per participant during a given timeframe (see the Equation below).2 The denominator includes a person-time element (ie, athlete-exposure) as a unit of incidence density, which is necessary to provide context to the injury frequency. The incidence rate describes the injury for a single population of interest. It can be extrapolated as the probability of an athlete experiencing the event (when in similar circumstances).
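The original Equation is not reproduced here; in its standard form, consistent with the definitions above and with the per-1000 athlete-exposure scaling used in NCAA-ISP reports, the incidence rate is

```latex
\text{incidence rate} = \frac{\text{number of new injuries}}{\text{number of athlete-exposures (person-time at risk)}} \times 1000,
```

expressed as injuries per 1000 athlete-exposures.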
To interpret the incidence rate, the numerator—the actual number of injuries—must be read in the context of the incidence density. We can use a simple example to illustrate the necessity of the incidence density. A fictional soccer team (n = 25 players) and basketball team (n = 10 players) both competed for 4 weeks, participating 6 days per week (24 sessions or exposures). Over that time, each team incurred 5 new ankle sprains; therefore, the raw incidence of ankle sprains for each team equals 5.
Without knowing anything else, it would appear that the problem of ankle sprains in soccer and basketball was equal. However, by including the incidence density, we see that the injury problem between teams was different. For the soccer team, 25 players participated in 24 sessions; the incidence density was 600 athlete-exposures. In contrast, for the basketball team, only 10 players participated in 24 sessions; the incidence density was 240 athlete-exposures. The incidence rate of ankle sprains for the soccer team was 5 sprains per 600 athlete-exposures, or 0.0083 sprains per athlete-exposure (8.3 sprains per 1000 athlete-exposures), whereas the basketball team had an incidence rate of 5 sprains per 240 athlete-exposures, or 0.0208 sprains per athlete-exposure (20.8 sprains per 1000 athlete-exposures).
Including a measure of incidence density provides context to the injury problem and allows for comparisons between groups. In this fictional example, the basketball team has a greater injury problem, and resources might be allocated appropriately.
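As a quick check of the arithmetic above, the same calculation can be written in a few lines of code (a minimal sketch; the function name and the fictional team values are illustrative and are not drawn from NCAA-ISP data):

```python
def incidence_rate_per_1000(injuries: int, athletes: int, sessions: int) -> float:
    """Incidence rate per 1000 athlete-exposures (AEs).

    One AE = one athlete participating in one practice or competition,
    regardless of time spent in that session (the NCAA-ISP convention).
    """
    athlete_exposures = athletes * sessions        # incidence density (denominator)
    return injuries / athlete_exposures * 1000     # injuries per 1000 AEs


# Fictional teams from the example: 5 ankle sprains each over 24 sessions.
print(incidence_rate_per_1000(injuries=5, athletes=25, sessions=24))  # soccer: ~8.3
print(incidence_rate_per_1000(injuries=5, athletes=10, sessions=24))  # basketball: ~20.8
```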
Analytic Epidemiology Measures
Analytic epidemiology is used to determine cause and effect, establish relationships between exposures and disease, or compare groups with different levels of exposure. Several mathematical techniques are used to determine these associations. In Figures 2 and 3, we describe and illustrate a few of the more common methods for analyzing and interpreting the relationships between risk factor exposure and specific injuries. The rate ratio (RR) and odds ratio (OR) are 2 analytic techniques used to establish a relationship between the risk factor and the injury of interest.
Figure 2. A prospective cohort model in analytic epidemiology. A, Representation of individuals exposed and not exposed to a risk factor. B, Model for prospective exposure-to-injury data. C, Contingency table setup for dichotomous data. D, Equation for calculating a rate ratio (relative risk).
Figure 3. The case-control model in analytic epidemiology. A, Representation of the number of cases with the target condition who were exposed or not exposed to a risk factor. B, Representation of the controls (those without the target condition) who were exposed or not exposed to a risk factor. C, Contingency table for comparing the exposure rate between the cases in A and the controls in B. D, Equation for calculating the odds ratio to determine the relationship between exposure and the target condition.
Risk
Risk is the mathematical representation of the chance of an injury occurring based on exposure to a risk factor (a determinant).
Relative Risk
Relative risk (RR) is a measure that compares the cumulative injury rate in 1 group with that in another group based on exposure status (Figure 2).2 It indicates the strength of association between a risk factor and an event. Commonly studied factors in sports injury epidemiology are sex, practice versus competition, sport, and body part. The RR is most appropriately used in prospective cohort studies in which risk factors are present before the injury happens. When a cohort model is used, exposure or no exposure to a risk factor is studied prospectively, and individuals are tracked until some experience the event of interest (eg, some are injured and some are not). The RR is a direct comparison of the incidence rate of an exposed group and the incidence rate of an unexposed group. When the exposure is studied prospectively, we can draw a causal link between the exposure and the disease event.
Odds
Odds are the ratio of events to nonevents. A classic example is the odds of selecting the queen of hearts from a deck of cards. Odds are the ratio of the probability of the event occurring (ie, 1/52) to the probability of the event not occurring (ie, 51/52). The odds of drawing the queen of hearts are therefore (1/52)/(51/52) = 1/51 ≈ 0.02; that is, about 2 successful draws for every 100 unsuccessful draws.
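Expressed generally (a standard formulation rather than one reproduced from the original figures), if p is the probability of the event, then

```latex
\text{odds} = \frac{p}{1 - p}, \qquad \text{for the card example: } \frac{1/52}{51/52} = \frac{1}{51} \approx 0.02 .
```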
Odds Ratio
The odds ratio is the comparison of odds of an event in an exposed group to odds in an unexposed group (Figure 3).2 It is a measure of the strength of association between the exposure and the event. However, it is an indirect measure only, and a causal link—ie, the exposure caused the event—cannot be drawn. The OR is most appropriate when a retrospective case-control study design is used because individuals already have the condition of interest. We can use ORs to assess the differences between those who have the condition of interest and those who do not and determine if there is a strong reason to believe that a previous exposure was linked to having a particular condition. Given that the exposure is studied retrospectively, we cannot draw a causal link between the exposure and the injury event—only the strength of the association. Therefore, we have to be careful in phrasing results based on ORs. With ORs, it is inappropriate to state unequivocally that the determinant caused an increased chance of the event because causality cannot be established via a retrospective study. It should only be stated that an increased association was present between the determinant and the event or that individuals who had experienced the event had an increased chance of also being exposed. Odds ratios are an indirect measure of risk and are used to determine the strength of those associations.
Calculating RR and OR
As described previously, the RR and the OR are 2 mathematical techniques for determining the strength of the relationship between exposure to a risk factor and an injury. Next, we describe the meaning of the RR and OR. In Figures 2 and 3, we detail how these calculations are derived.
Mathematically, the RR is a comparison between the incidence rate of a group exposed to a risk factor and the incidence rate of an unexposed group. To compare the incidence rates of 2 populations of interest, we can use the RR. The RR is a direct comparison of incidence rates, typically between 2 groups that are similar with the exception of a single factor (the determinant). The RR calculation allows for a measurement of the strength of an association between a determinant and an event (ie, injury or disease; Figure 2D). If the ratio of the 2 groups approaches 1, the value of equivalency, then the injury rates of the 2 groups do not differ, and the strength of the association with the risk factor is likely negligible.
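In the 2 × 2 contingency-table notation commonly used for such figures (an assumed layout, because Figure 2C is not reproduced here: a = exposed and injured, b = exposed and uninjured, c = unexposed and injured, d = unexposed and uninjured), the calculation takes the standard form

```latex
RR = \frac{\text{incidence in exposed}}{\text{incidence in unexposed}} = \frac{a/(a+b)}{c/(c+d)} .
```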
The OR is a comparison of the expected (individuals who were injured were exposed to the risk factor, and those who remained uninjured were not exposed) with the unexpected (individuals who were injured were not exposed to the risk factor, and those who remained uninjured were exposed). With ORs, we can anticipate that if a strong association exists between a risk factor and an injury, the frequency of the expected will be greater than the frequency of the unexpected (Figure 3D). As noted, given that ORs are used for retrospective data, we can state only that an association or relationship exists between the risk factor and injury. As with RRs, the value of equivalency for ORs is 1.
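Using the same assumed cell labels as above, the OR reduces to the familiar cross-product ratio, in which a × d counts the expected pairings and b × c counts the unexpected pairings:

```latex
OR = \frac{a/c}{b/d} = \frac{a \times d}{b \times c} .
```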
Finally, a measure of variability is needed to describe the precision of the RR or the OR. Most commonly in sports injury research, a 95% CI is used to express the variability. Although it is outside the scope of this paper to detail CI calculations, a brief definition and explanation of how to interpret CIs are useful. The 95% CI is a measure of variability around the calculated RR or OR estimate. It is the range of values in which the truth most likely exists.13 Narrow CIs indicate high precision, and wide CIs indicate less precision. Estimates with narrow CIs indicate we can be more confident that the calculated ratio is very precise, and we would not expect to see wide variations in the strength of the associations. Estimates with wide CIs reflect more uncertainty about the strength of the risk factor-injury relationship. For CIs that encompass 1, the value of equivalency, it is even more difficult to determine the strength of relationship, and it is possible that no association at all exists between risk factor and injury. In Figures 4 and 5, forest plots are used to illustrate the relationship estimate (RR and OR) and 95% CIs (variability).
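Although the CI calculations themselves are outside the scope of this paper, readers who want to connect the forest plots to the underlying arithmetic may find one commonly used large-sample approximation (not part of the original reports) helpful; for the OR, with the cell labels above,

```latex
95\%\ \text{CI} = \exp\!\left(\ln(OR) \pm 1.96\,\sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}}\right),
```

with an analogous expression for the RR; if the resulting interval includes 1, the data are consistent with no association.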
Figure 4. The calculated rate ratio, or relative risk, with 95% CI is plotted on a forest plot based on the data from Figure 2. The rate ratio is the incidence rate in individuals exposed versus the incidence rate in individuals not exposed to a risk factor. The rate ratio indicates a 5-fold increase in risk of injury as a result of being exposed to the risk factor, with a CI that does not encompass 1, which is the value of equivalency for ratios. The calculation of the CI is not presented here.
Figure 5. The calculated odds ratio with 95% CI is shown on a forest plot based on the data from Figure 3. The odds ratio is the odds of exposure in cases versus the odds of exposure in controls. The odds ratio indicates an 11-fold increased chance of previous exposure to the risk factor for the injured group, with a CI that does not encompass 1, which is the value of equivalency for ratios. The calculation of the CI is not presented here.
APPLICABILITY
Epidemiology for the AT-Researcher
The clinical scientific method14 is the systematic process of moving from observation to intervention. This process starts with the systematic observation and description of the injuries and diseases encountered by ATs. By detailing the frequency and patterns of these conditions through descriptive epidemiology, hypotheses can be developed to explain the factors that might contribute to their development (analytic epidemiology). These determinants can also be evaluated prospectively through analytic epidemiology as risk factors for those athletes who develop these conditions. By explaining the strength of associations between determinants and injury, more robust intervention strategies can be developed to target these determinants and potentially change the patterns and frequencies of these conditions in the future. In this way, the clinical scientific method helps to develop athletic training's body of knowledge for the recognition, rehabilitation, and prevention of athletic conditions and provides the framework for the 5 sources of clinical evidence we use to inform clinical decisions: epidemiology, diagnosis, etiology, prognosis, and therapy.
The Datalys Center reports play a critical role in the clinical scientific method by supplying updates from practicing ATs about the frequency and patterns of conditions encountered. These descriptive epidemiology reports provide the foundation for the distribution and patterns of conditions (injuries or illnesses) sustained by athletes in specific populations (from preadolescent to collegiate, ages 5–23 years), specific sports and activities, and specific times (eg, competition, practice, and off-season). These data then afford researchers the opportunity to develop and refine hypotheses associated with the biopsychosocial determinants of these conditions and thus explore new pathways that may lead to more robust intervention strategies for the treatment and prevention of these conditions.
Key questions to ask from the research perspective when reading the Datalys Center reports include the following:
For a given population, what are the most commonly reported injuries?
Are the injury patterns and frequencies similar to findings in settings other than those in the Datalys Center reports?
Based on their patterns and frequencies, what are the associated economic and societal burdens of these injuries?
What are the current hypotheses about the determinants for these injuries?
Have intervention strategies already been developed based on these determinants to reduce the frequency of these injuries? Do they appear to be effective?
If the current intervention strategies do not appear to be effective in reducing the patterns and frequencies of these injuries, what other determinants need to be explored?
Are the current intervention strategies directly linked to validated determinants through the clinical scientific method?
Epidemiology for the AT-Clinician
The Datalys Center reports allow practicing ATs to directly compare the trends from their own clinical practices with what other practicing ATs in similar situations are describing.15 Appreciating the national trends across age groups, sports, and institutions can help practicing ATs better prepare for what to expect in their own clinical settings. Perhaps the patterns from the individual practice match what has been described in the descriptive epidemiology reports. If so, the injury patterns and frequency can most likely be predicted and thus planned for in the future. When an individual AT's patterns and frequencies of injuries differ from what has been described in the descriptive epidemiology report, there may be opportunities to disseminate practice-based evidence through clinical Contributions to the Available Sources of Evidence reports,16 furthering the body of knowledge.
Beyond appreciating the patterns and frequencies of injuries based on national trends, the descriptive epidemiology reports allow practicing ATs to raise relevant questions about their clinical practices, which can then be used to guide them in seeking other necessary research evidence. Knowing the national trends in injury patterns and frequencies can help clinicians prepare themselves to recognize the more common conditions through the clinical evaluation and diagnosis process. For example, ankle sprains are among the most common injuries in men's soccer during both practices and competitions. This epidemiologic evidence can help practicing ATs ensure that they are up to date on the key evidence for recognizing ankle sprains in this population (eg, most common mechanisms, type, and severity). Using the research evidence in this way, clinicians can then also recognize the features that do not fit these injuries to avoid diagnostic blinders. In addition, practicing ATs can examine the published evidence related to the most common clinical problems and sequelae an athlete might report with an ankle sprain (eg, increased pain, episodes of the ankle “giving way,” and fear and avoidance).17,18 Practicing ATs can use descriptive epidemiology reports in conjunction with their own experiences to cultivate a spirit of inquiry19 about the best available evidence to guide clinical decisions related to the recognition, rehabilitation, and prevention of injuries they are likely to encounter.15
Key questions to ask from the practicing AT's perspective are the following:
Does the topic of this epidemiology report relate to the population and injuries I encounter in my practice?
What are the most common injuries reported? Have I encountered them before in my own practice? Are the patterns and frequencies of my encounters similar to those in the epidemiology report?
Based on the patterns and frequencies of injuries described in the report, what is the typical pattern of presentation of these conditions? Do I know the most important defining and discriminating features for these conditions? What other conditions may present similarly?
In my own practice, do I have an idea of the key determinants for developing these conditions? From what I see in the epidemiology report, do I need to update myself on the best available published research evidence associated with potential determinants in my patient population?
What is the typical timeframe for athletes to return to sport after developing these conditions? What have I seen in my own practice? How does the typical timeline associated with my clinical decisions compare with the best available published timelines?
What intervention strategies do I use for these conditions to treat athletes and return them safely to play? How do my intervention strategies and patient outcomes compare with the best available published evidence?
What are the most important prevention strategies for this condition based on the best available evidence? Do I have prevention strategies in place to reduce the likelihood of an athlete developing this condition? Are there economic or social considerations that I need to consider in my patient population when implementing these strategies?
Epidemiology for the AT-Educator
Evidence from epidemiologic studies allows for contextualization of injury information. Clinical practice in athletic training is highly dependent on our preparedness to recognize, treat, and prevent athletic injuries and illnesses. Athletic injury or illness epidemiology plays a critical role in this preparedness as a source of evidence for our decision making. Although epidemiologic studies can be somewhat overwhelming to read, epidemiology concepts are embedded in our clinical minds and the way we communicate with coaches, parents, athletes, and students.
Teaching Epidemiology Frequency in Athletic Training Education
A common expression used by seasoned clinicians and preceptors is “When you hear hoofbeats in the night, think horses, not zebras.” It is a statement meant to focus us first on the conditions we are most likely to encounter. From an epidemiology perspective, this statement highlights the importance of understanding frequency. We need to keep in mind not only how many injuries a clinician is likely to encounter in a particular setting but also who develops them (person: particular sports, age groups, and sexes), when they develop them (time: competition, practice, off-season, and point in season), and where they develop them (location).
Teaching Epidemiology Patterns in Athletic Training Education
“If it looks like a duck, quacks like a duck, and walks like a duck, it's probably a duck” is a common expression that we use in clinical practice to help students learn the process of diagnosis. It captures what a condition looks and sounds like and how it behaves. Epidemiology provides the practicing clinician with a framework for describing these conditions. In this type of description, we seek to clarify the patterns in the observations.
Athletic training's body of knowledge is ever growing.20 We have recently updated our educational standards to reflect a broader population of those who seek to be physically active across their lifespan. Along with the expansion of the population with whom we work comes an expanded recognition of more conditions, the patterns of presentation, and effective intervention strategies for injury prevention or the restoration of health in those we serve. Epidemiology reports help to highlight the most common conditions so that instructors can focus their efforts on presenting the most relevant information to professional students. Through an epidemiology lens, instructors can streamline the “need-to-know” information based on the patterns and frequencies of presentations across different populations and settings.
Key questions to ask when using epidemiology reports in athletic training education include the following:
What are the most important conditions to cover in the curriculum based on the patterns and frequencies of injuries encountered by ATs?
What are the most important key features for recognizing these conditions?
What are the conditions that may present similarly in pattern and frequency and should be considered in the differential diagnosis?
What are the key biopsychosocial determinants for these conditions that I need to help my professional students comprehend and identify?
What are the key recommendations for managing these conditions based on their biopsychosocial determinants?
Which intervention strategies are most effective for altering the patterns and frequency of injuries?
From a clinical education perspective, these reports provide a foundation for students who are preparing to meet the demands of their clinical rotations. These reports can enhance students' readiness to engage in particular clinical settings by helping them to gauge the most likely conditions (frequency) they will encounter.15 From these conditions, they can explore the research evidence related to the key diagnostic features they need to know in order to recognize these conditions (patterns). This process also affords the opportunity to explore determinants for a condition's development, the important timelines associated with them (eg, return to play), and the most current and relevant intervention strategies for managing them.
IMPORTANT COLLABORATIONS
The Datalys Center's descriptive epidemiology reports represent an outstanding collaboration among 3 groups. Practicing ATs are responsible for capturing and reporting the data that are analyzed by athletic training researchers. The epidemiology trends then provide other athletic training researchers the opportunity to advance athletic training's body of knowledge regarding the recognition, rehabilitation, and prevention of these conditions.20 The insights gained from these reports combined with professional education will help athletic training educators better prepare athletic training students to remain current regarding injury or illness trends related to physical activity and athletic participation.
SUMMARY
Epidemiology is a systematic method of studying disease trends in a specified population of interest. Epidemiologic evidence guides much of the clinical practice, education, and research within the profession of athletic training, allowing for the advancement of the recognition, rehabilitation, and prevention of athletic injuries and illnesses. Athletic training clinicians, educators, and researchers use such data in different ways, and yet, together they are able to develop a robust understanding of the patterns and frequencies of injuries encountered, raise important questions to drive further research and study, and educate our next generation of clinicians on the importance of evidence-based clinical practice. Without clinicians, educators, and researchers who have a solid understanding of the importance and effect of descriptive epidemiology, we would only be able to fulfill parts of the clinical scientific method necessary to advance the body of knowledge that drives the athletic training profession. As such, each of these roles is a necessary piece of the puzzle, and collaboration is imperative for creating and using research evidence that progresses the profession of athletic training.
The following key points address the importance and use of athletic training epidemiology:
Epidemiology is the study of disease trends in a population of interest. As it pertains to athletic training, epidemiology provides an understanding of the incidence and trends of injuries and illnesses among athletes.
The ability to understand and evaluate epidemiologic studies, including the strengths and limitations of study designs and measures, is critical to ATs' ability to use the findings to benefit clinical practice, research, and education.
Athletic training researchers should consider using epidemiologic data to guide research projects, advancing the body of evidence for the profession through the clinical scientific method, and ultimately providing clinicians with effective, evidence-based interventions to use in clinical practice.
Descriptive epidemiology helps athletic training clinicians garner an understanding of injury trends in their specific area of practice, better preparing them for what to expect in a specific clinical setting and thereby improving their clinical practice.
By including epidemiology in their curriculum, athletic training educators can prepare students for success in using evidence to guide clinical practice in their various clinical experiences and beyond.
Epidemiology is critical to the athletic training profession and plays an important role in advancing the research evidence available for clinical decision making and improving clinical practice.
Knowing the extent to which epidemiology plays a multifaceted role in athletic training, this primer serves as a resource on how best to read, critique, and interpret epidemiologic research in athletic training. This will be a guide for the Datalys Center reports from the NCAA-ISP published in the Journal of Athletic Training. We strongly encourage readers to familiarize themselves with the key concepts related to how the patterns and frequencies of injuries and conditions are captured and can be used by practicing ATs. Take time to understand how researchers use such data to develop questions and hypotheses that further our understanding of specific injuries and conditions. Think like an educator and use this information to help distinguish the horses from the zebras and know what to look for based on the available evidence, in addition to the case presentation and context. Descriptive epidemiology plays a critical role in the clinical scientific method for advancing the research evidence and clinical decision making. Through careful observation and description of the patterns and frequencies of injuries we encounter as ATs, we can continue to build the robust body of knowledge that guides our profession and works to better serve the public. These reports represent the collaborative efforts of, by, and for ATs.