Context: Researchers have identified high exposure to game conditions, low back dysfunction, and poor endurance of the core musculature as strong predictors for the occurrence of sprains and strains among collegiate football players.
Objective: To refine a previously developed injury-prediction model through analysis of 3 consecutive seasons of data.
Setting: National Collegiate Athletic Association Division I Football Championship Subdivision football program.
Patients or Other Participants: For 3 consecutive years, all 152 team members (age = 19.7 ± 1.5 years, height = 1.84 ± 0.08 m, mass = 101.08 ± 19.28 kg) presented for a mandatory physical examination on the day before initiation of preseason practice sessions.
Main Outcome Measure(s): Associations between preseason measurements and the subsequent occurrence of a core or lower extremity sprain or strain were established for 256 player-seasons of data. We used receiver operating characteristic analysis to identify optimal cut points for dichotomous categorizations of cases as high risk or low risk. Both logistic regression and Cox regression analyses were used to identify a multivariable injury-prediction model with optimal discriminatory power.
Results: Exceptionally good discrimination between injured and uninjured cases was found for a 3-factor prediction model that included ≥1 game as a starter, Oswestry Disability Index score ≥4, and poor wall-sit–hold performance. The existence of at least 2 of the 3 risk factors demonstrated 56% sensitivity, 80% specificity, an odds ratio of 5.28 (90% confidence interval = 3.31, 8.44), and a hazard ratio of 2.97 (90% confidence interval = 2.14, 4.12).
Conclusions: High exposure to game conditions was the dominant injury risk factor for collegiate football players, but a surprisingly mild degree of low back dysfunction and poor core-muscle endurance appeared to be important modifiable risk factors that should be identified and addressed before participation. A 3-factor prediction model that includes 2 modifiable injury risk factors can be used to identify collegiate football players who might benefit from targeted risk-reduction interventions.
Key Points
- A mild degree of low back dysfunction and a suboptimal level of core-muscle endurance appeared to be important injury risk factors that should be identified and addressed.
- High exposure to game conditions was a dominant injury risk factor.
- The combination of high exposure to game conditions with a potentially modifiable risk factor was associated with a substantially increased risk of core or lower extremity sprain or strain.
Injury prevention is mentioned in virtually every definition of sports medicine, but very little research evidence is available to support specific procedures for reduction of injury risk. A 4-step model to guide sports injury-prevention research and practice was introduced more than 20 years ago by van Mechelen et al.1 The model subsequently was modified to incorporate additional concepts,2,3 but very little progress has been made beyond the initial step of documenting injury incidences for various populations.4,5 Risk factors for some specific types of injury have been identified, but little information in the literature has supported specific screening procedures to identify individual athletes who possess elevated injury risk.6–8 The relative lack of evidence for the effectiveness of specific interventions for reducing injury incidence may be explained by the highly injury-specific and sport-specific nature of many risk factors9 and the cumulative effects, and possibly interactive effects, of multiple risk factors in creating injury susceptibility.3,10–13
Injury prevention is typically categorized as a clinical-practice domain that is distinct from injury rehabilitation, but some overlap exists. A previously sustained injury is a well-established risk factor for subsequent injury, which often may be attributable to suboptimal clinical management.14,15 Furthermore, intrinsic injury risk factors may affect the rate at which an athlete's functional capabilities are restored after an injury. An individual's capacity to tolerate the external loads imposed by sport-related activities largely depends on tissue stiffness,11 which is potentially modifiable through training-induced adaptations in neuromuscular function. Furthermore, injury-induced neural inhibition of muscle function can produce subtle and persistent performance deficiencies among highly active elite athletes.16 Most injuries do not completely remove athletes from participation,15 which may result in an unrecognized, persistent increase in injury susceptibility.
A clinical prediction model can provide a quantitative estimate of the likelihood that an individual who possesses a particular combination of factors will ultimately develop a particular condition or experience an adverse event at some time.17 The combination of simple core-muscle–endurance test results, survey responses, anthropometric measurements, and recorded exposures to game conditions has been shown to differentiate the preseason profiles of collegiate football players who subsequently sustained core or lower extremity sprains or strains from players who did not, which was represented quantitatively by an odds ratio (OR).8 The maximum time that static body positions can be maintained against gravity has been reported to provide highly reliable measurements of core-muscle endurance.18 Wilkerson et al8 administered 4 tests in the same sequence: (1) back-extension hold, (2) 60° trunk-flexion hold, (3) side-bridge hold, and (4) bilateral wall-sit hold. Surveys that were originally designed to quantify joint function to document treatment outcome can be modified for use as discriminative instruments before injury occurrence.19 Researchers8 have suggested that well-validated outcome survey instruments can undergo minor modifications to obtain preparticipation joint function scores that have value for injury prediction. Self-perception of the preparticipation functional status of the lower back, knees, and ankles and feet has been quantified by 3 surveys with well-established psychometric properties: (1) the Oswestry Disability Index (ODI),20,21 (2) the International Knee Documentation Committee Subjective Knee Form,22 and (3) the sports component of the Foot and Ankle Ability Measure.23
Wilkerson et al8 observed that the odds for occurrence of a core or lower extremity sprain or strain over 1 football season were 16 times greater for players who had at least 3 of the following characteristics: (1) trunk-flexion hold time equal to or less than 161 seconds, (2) bilateral wall-sit–hold time equal to or less than 88 seconds, (3) ODI score equal to or greater than 6, and (4) starting in 3 or more games or playing in all 11 games. With game exposure removed from the analysis, the odds for injury among players with at least 2 of the 3 potentially modifiable risk factors were 4 times greater than the odds for players with 0 or 1 factor. In subsequent years, the core-muscle–endurance tests were modified to increase their difficulty and thereby shorten the time required for their administration. Every modification of testing procedures resulted in improved efficiency of administration without loss of predictive power. Two subsequent single-season analyses confirmed the validity of the original multifactor model, but the results also demonstrated that the model could be simplified without substantial loss of predictive power (G.B.W., unpublished data, 2011, 2012). Therefore, the purpose of our study was to analyze 3 consecutive seasons of combined data for preseason status, game exposures, and injury occurrences to derive a refined model for prediction of core or lower extremity sprain or strain during participation in collegiate football.
The prospective cohort study design included all members of a National Collegiate Athletic Association Division I Football Championship Subdivision football team who were present for a preparticipation examination immediately before the initiation of preseason practice sessions in 2009 (n = 83), 2010 (n = 88), and 2011 (n = 85). The cohort consisted of 152 individuals (age = 19.7 ± 1.5 years, height = 1.84 ± 0.08 m, mass = 101.08 ± 19.28 kg) who were members of the team for the duration of a given season, which yielded 256 player-seasons of data over the 3 consecutive seasons (17 208 practice session and game exposures). Players who participated in more than 1 season were treated as separate cases for each season, which is a widely accepted practice for such multiyear studies.24–27 Among the 152 players who contributed data to the analysis, 33 participated in all 3 seasons, 38 participated in 2 seasons, and 81 participated in 1 season. Information acquired at the preparticipation examination included responses to 3 previously identified surveys for quantification of joint-specific function.8 All injuries that resulted from participation in practice sessions, conditioning sessions, or games were documented from the start of the preseason practice period until the end of the season. An injury was operationally defined as a core or lower extremity sprain or strain that required the attention of an athletic trainer and that limited football participation to any extent for at least 1 day after its occurrence. Fractures, dislocations, contusions, lacerations, abrasions, and overuse syndromes were excluded to limit the analysis to injuries that were most likely to result from an insufficient neuromuscular response to dynamic loading of muscles and joints. All participants provided written informed consent, and the study was approved by the Institutional Review Board of the University of Tennessee at Chattanooga.
Variations of 3 core-muscle–endurance tests that involved maintenance of a specified postural position for as long as possible (trunk-flexion hold, wall-sit hold, back-extension hold) were administered each year. To simultaneously accelerate the process of test administration and improve sensitivity for detection of injury risk, the tests were modified during the 3-year period. For example, the trunk-flexion hold at 60° (sitting position) was performed the first year with the knees in 90° of flexion, the elbows in full flexion, and the upper extremities in 90° of abduction. For the second year, the upper extremities were maintained in an elevated overhead position that corresponded to the 60° position of the trunk. The result was a reduction in average hold duration from 141 seconds to 110 seconds. Administration of the test was further accelerated the third year by maintaining the knees in a fully extended position, which decreased the average hold duration to 75 seconds. Whereas the test modifications accomplished the goal of faster test administration, the relative contribution of the trunk-flexion hold to the predictive power of multifactor prediction models decreased.
For the first year, athletes performed the wall-sit–hold test with body mass equally distributed between the lower extremities and with the knees and hips maintained in 90° of flexion (Figure 1). For the second year, we devised a unilateral test that athletes performed with the nonsupporting lower leg crossed on top of the thigh of the supporting extremity in a figure-4 position; each extremity was tested separately. The test was improved further the third year by having athletes slightly lift the foot to remove all body-mass support. The result was a 65% reduction in average test duration from 79 seconds the first year to 28 seconds the third year, while maintaining the discriminative power of the test (ie, OR > 2). Test-retest reliability for the unilateral foot-lift version of the wall-sit hold (average of right and left extremity values) has been assessed in a convenience sample of 14 players who performed the test twice within a 48-hour interval, which demonstrated an intraclass correlation coefficient (2,1) of 0.85 and a standard error of measurement value of 3.5 seconds (K. Miyazaki, MS, ATC, unpublished data, 2011).
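The reported reliability statistics can be reproduced from raw test-retest data. The sketch below is a hypothetical illustration (the hold times are randomly generated, not the study's measurements): it computes ICC(2,1) from a two-way ANOVA decomposition and derives the standard error of measurement as SD × √(1 − ICC):

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure.
    data: n_subjects x k_trials array of measurements."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between trials
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical wall-sit-hold times (s) for 14 players tested twice, 48 h apart
rng = np.random.default_rng(0)
session1 = rng.normal(28, 9, size=14)
session2 = session1 + rng.normal(0, 3, size=14)  # small day-to-day error
data = np.column_stack([session1, session2])

icc = icc_2_1(data)
sem = data.std(ddof=1) * np.sqrt(1 - icc)  # standard error of measurement (s)
```

With real paired measurements in place of the simulated ones, this procedure yields values directly comparable to the reported intraclass correlation coefficient and standard error of measurement.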
Receiver operating characteristic (ROC) analysis was used to identify cut points for preseason posture-hold test results, survey-derived joint-function scores, anthropometric measurements, and subsequent exposure to game conditions during each season. The initial single-season prediction model established greater likelihood of injury for players who had at least 3 of 4 risk factors, which included a high level of exposure to game conditions, suboptimal low back function, and poor performance on 1 or both of 2 different tests of core-muscle endurance (ie, trunk-flexion hold and wall-sit hold).8 The following year, a single-season analysis yielded a 3-factor model that eliminated the modified trunk-flexion hold and produced the identical OR derived from the original 4-factor model. In addition, the definition of a high level of game exposure was simplified from starter status for 3 or more games and playing in all 11 games to starter status for 1 or more games. The more complex operational definition of starter originally was chosen on the basis of its slightly larger observed effect as measured by OR estimates (OR = 8.66 versus OR = 7.65). Subsequent analyses demonstrated that either definition of starter status provided a reasonably comparable indication of the effect of high-level exposure to game conditions, so we adopted the simpler method to designate starter status.
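The cut-point identification described above amounts to choosing, for each candidate variable, the threshold that best separates injured from uninjured cases on the ROC curve. A minimal sketch using Youden's J statistic (sensitivity + specificity − 1), with fabricated hold times and injury labels for illustration only:

```python
import numpy as np

def youden_cut_point(values, injured, low_is_risk=True):
    """Return the cut point that maximizes Youden's J = sensitivity + specificity - 1.
    low_is_risk=True flags values at or below the cut point as high risk
    (eg, short posture-hold times)."""
    values = np.asarray(values, dtype=float)
    injured = np.asarray(injured, dtype=bool)
    best_j, best_cut = -np.inf, None
    for cut in np.unique(values):
        flagged = values <= cut if low_is_risk else values >= cut
        sensitivity = flagged[injured].mean()       # injured correctly flagged
        specificity = (~flagged)[~injured].mean()   # uninjured correctly cleared
        j = sensitivity + specificity - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Fabricated wall-sit-hold times (s): injured players tend to fatigue sooner
holds = [18, 22, 25, 26, 30, 33, 35, 40, 44, 50]
injured = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
cut, j = youden_cut_point(holds, injured)
```

For a variable in which higher values indicate risk (eg, an ODI score), the comparison direction is reversed with low_is_risk=False.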
To validate the predictive power of the 3 risk factors that were identified by both of the single-season analyses, we combined and analyzed data for 3 consecutive seasons. Other dichotomized variables that had been measured in a consistent manner each year were also assessed for predictive value by separate cross-tabulation analyses. Cut points for dichotomization of each variable were determined by ROC analysis of the 3-season combined dataset, with the exception of the trunk-flexion hold and wall-sit hold. Given that technique changes dramatically reduced average test duration for the core-muscle–endurance tests from year to year, we used the ROC-derived cut point for a given testing procedure for each successive year to classify cases as high risk or low risk. Cross-tabulation analysis was performed to calculate the OR for each predictor variable. Logistic regression analysis was used to assess the relative contributions of predictor variables to the discriminatory power of a multivariable model, and a confidence interval (CI) function was created to assess both the magnitude and precision of OR values for the multivariable model. Predictor variables retained by the logistic regression analysis were entered into a Cox regression analysis to model the instantaneous probability for injury occurrence across the course of a football season (ie, cumulative hazard). We used IBM SPSS Statistics (version 21; IBM Corporation, Armonk, NY) to analyze the data.
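The ORs and 90% CIs reported throughout the study follow from 2 × 2 cross-tabulations via the standard log-odds-ratio formula. The sketch below uses a common Wald-interval construction; the cell counts are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.645):
    """OR and Wald CI from a 2x2 table:
        a = high risk & injured,   b = high risk & uninjured
        c = low risk & injured,    d = low risk & uninjured
    z = 1.645 gives a two-sided 90% CI."""
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log)
    upper = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lower, upper

# Invented counts: 60 of 100 high-risk player-seasons injured
# versus 43 of 156 low-risk player-seasons injured
or_, lo, hi = odds_ratio_ci(60, 40, 43, 113)
```

A 90% CI lower limit above 1.0 indicates that the association is unlikely to be attributable to random error at that confidence level, which is how the precision of the point estimates is interpreted in the text.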
A total of 132 core or lower extremity sprains and strains were sustained by 82 of 152 individual players during 17 208 player-exposures (7.7 per 1000 player-exposures). Among 71 players who participated in either 2 or 3 seasons, only 19 were injured during more than 1 season, and only 2 of 33 players who participated in all 3 seasons sustained injuries during each season. Over the 3-season study period, 5 players sustained 3 different injuries and 19 players sustained 2 different injuries within the same season. Among the 256 player-seasons, at least 1 injury was sustained during a given season in 103 instances (ie, 103 injured cases).
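As an arithmetic check, the reported incidence rate follows directly from the injury and exposure counts given above:

```python
injuries = 132
player_exposures = 17208  # practice-session and game exposures over 3 seasons

# Incidence rate per 1000 player-exposures
rate = injuries / player_exposures * 1000
# rate ≈ 7.7 injuries per 1000 player-exposures
```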
The results of separate analyses of the 3-season aggregated dataset for 7 predictor variables are presented in Table 1. The results of logistic regression and Cox regression analyses demonstrated that inclusion of the trunk-flexion hold in a 4-factor model was not superior to the simpler 3-factor model (Table 2). The 3-factor logistic regression model was associated strongly with the dichotomous outcome (χ²₃ = 43.64, P < .001), and the Hosmer-Lemeshow goodness-of-fit test demonstrated an exceptional level of agreement between observed and predicted values (χ²₆ = 1.43, P = .96). A Cox regression model that included the same 3 factors was also associated strongly with the outcome variable (χ²₃ = 36.72, P < .001). Subsequent ROC analysis demonstrated that the existence of at least 2 of the 3 risk factors (starter ≥1 game, ODI score ≥4, and wall-sit hold ≤ cut point specific to test version) provided exceptionally good discrimination between injured and uninjured cases (Table 3). A CI function graph that illustrates the magnitude and precision of the 3-factor prediction model OR value is presented in Figure 2. Follow-up analysis of injury-incidence graphs for various combinations of risk factors did not demonstrate evidence of interactions among factors (Figure 3), and the addition of interaction terms to the logistic regression analysis did not demonstrate an effect for any combination of factors (starter × ODI: P = .68; starter × wall-sit hold: P = .87; ODI × wall-sit hold: P = .53). A progressive increase in injury incidence was clearly associated with an increase in the number of risk factors (Table 4, Figure 4). The cumulative hazard predicted by the Cox regression equation for both levels of each factor (adjusted for the effects of the other 2 factors in the 3-factor model) is depicted in Figure 5.
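The "at least 2 of the 3 risk factors" decision rule is a simple count over the dichotomized factors. The sketch below (with fabricated factor flags, not the study's data) shows how sensitivity and specificity for that rule are obtained:

```python
import numpy as np

def two_of_three(starter, odi_high, wall_sit_poor):
    """Flag a case as high risk when at least 2 of the 3 dichotomized
    factors are present (starter >= 1 game, ODI >= 4, poor wall-sit hold)."""
    factors = np.column_stack([starter, odi_high, wall_sit_poor])
    return factors.sum(axis=1) >= 2

# Fabricated flags for 8 player-seasons
starter      = np.array([1, 1, 1, 0, 0, 1, 0, 0], dtype=bool)
odi_high     = np.array([1, 0, 1, 1, 0, 0, 0, 1], dtype=bool)
wall_sit_bad = np.array([0, 1, 1, 0, 0, 0, 1, 0], dtype=bool)
injured      = np.array([1, 1, 1, 0, 0, 0, 0, 1], dtype=bool)

high_risk = two_of_three(starter, odi_high, wall_sit_bad)
sensitivity = high_risk[injured].mean()      # injured flagged as high risk
specificity = (~high_risk)[~injured].mean()  # uninjured flagged as low risk
```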
The refined prediction model demonstrated a high degree of accuracy in discriminating injured cases from uninjured cases, which suggests that optimal function of the lumbar spine and fatigue resistance of the core musculature are important considerations for preventing core and lower extremity sprains and strains among collegiate football players. The dominant injury risk factor was clearly a high volume of exposure to game conditions, but the level of injury risk among both starters and nonstarters appeared to be increased substantially by either a relatively mild degree of low back dysfunction or a deficiency in core-muscle endurance. We observed that the 2 potentially modifiable factors had comparable magnitudes of effect on injury risk, but the combined factors elevated the level of injury risk beyond that observed for the existence of only 1 factor among both starters and nonstarters.
Sports medicine practitioners tend to focus on the mechanism by which an inciting event produces a pathologic condition, which is the final link in a chain of injury causation.3,10,11 If preventing a sport-related injury is possible, some objective means is needed to predict that an injury is likely to occur. Such information could be used to guide implementation of a specific intervention that is designed to improve the individual's capacity to prevent an injury by either avoiding or tolerating the transfer of energy from the external environment to body tissues.11,12 A prospective cohort study design provides the only feasible method to quantify the strength of associations between preparticipation characteristics and subsequent injury occurrence. Such exposure-outcome associations can provide evidence that strongly suggests a causative influence, which depends on avoiding systematic bias, minimizing random error, and an analysis that rules out the influence of possible confounding factors.3,13
The observed incidence rate of 7.7 core or lower extremity sprains and strains per 1000 player-exposures for our cohort was relatively close to the estimated national incidence rate of 6.1 per 1000 player-exposures for National Collegiate Athletic Association collegiate football.28 The high precision of the OR and hazard ratio point estimates (ie, narrow CIs) suggested that random error did not exert a major influence on the results. Given that starter status clearly had the greatest effect on injury risk, a high volume of exposure to game conditions represents a potentially important confounding factor that needed to be thoroughly assessed. The results of the logistic regression analysis and of the stratified analyses of starter versus nonstarter status suggested that the influence of the 2 potentially modifiable risk factors in the prediction model (ie, low back dysfunction and core-muscle fatigue) incrementally increased injury risk in a manner that was comparable for starters and nonstarters. Despite the lack of a confounding effect, the profound influence of starter status on injury risk made the incremental influence of 1 or more other risk factors a serious concern that should be addressed.
The preparticipation ODI score demonstrated poor sensitivity (41%) for identifying players who sustained an injury but good specificity (77%) for identifying players who avoided injury. Whereas many players who report a mild degree of low back dysfunction may avoid injury, athletes who report the absence of low back symptoms or functional limitations appear to be less susceptible to core and lower extremity sprains and strains. Given that the ODI survey items were developed to assess low back dysfunction in the general population, a survey instrument specifically designed for young competitive athletes may offer a more precise representation of the influence of low back symptoms on sport-specific performance capabilities29 and thereby provide greater discriminatory power. Our results suggested that therapeutic remediation of any low back symptoms should be a high priority before a football player is exposed to high-intensity practice drills and game conditions. Relatively minor or intermittent low back symptoms could be associated with subtle and persistent alterations in neuromuscular activation patterns that elevate injury risk.16
Rapid fatigue of the core musculature and low back dysfunction have been related to impaired neuromuscular control of the body's center of mass, inhibition of lower extremity muscles, and elevated risk for lower extremity injury.30–35 The unilateral wall-sit–hold test appeared to effectively identify athletes who experience rapid fatigue in muscles that are important for maintenance of lumbar spine, pelvis, hip, knee, and ankle positioning. The muscle-activation patterns required to maintain the unilateral wall-sit–hold position may relate to the ability to avoid excessive hip adduction and knee valgus during dynamic activities, which needs to be assessed using electromyographic analysis. Given that the test imposes simultaneous demands on the quadriceps and hamstrings, rapid fatigue in either muscle group might indicate a diminished ability to dynamically stabilize the knee joint. Further assessment of the reliability of wall-sit–hold duration measurements is also needed.
The reliability of exposure-outcome associations derived from cohort studies is highly dependent on the number of criterion-positive (eg, injured) cases. For example, inclusion of the trunk-flexion hold in the original 4-factor prediction model was based on an analysis of injury data for a cohort of 83 players. Subsequent single-season analyses did not replicate the predictive value of the trunk-flexion hold for the separate cohorts of 88 and 85 players, whereas the other 3 predictors consistently demonstrated strong predictive value from year to year. The broad operational definition of injury as any core or lower extremity sprain or strain ensured a relatively large number of injured cases, whereas a narrower definition might have yielded less reliable estimates of exposure-outcome associations. Much larger datasets are needed to generate reliable injury-prediction models for different age and sex groups, different sports, and specific types of injuries.
A limitation of this study was the operational definition of an injury as any core or lower extremity sprain or strain. For example, the estimated mass moment of inertia (MMOI) around a horizontal axis through the ankle has been identified as a risk factor for lateral ankle sprain.36 Our univariable analysis of MMOI revealed that a cut point of equal to or greater than 450 kg·m² was associated with an OR of 2.08 (90% CI lower limit = 1.13), but it did not contribute substantially to the power of the multivariable model for predicting core and lower extremity sprains and strains. A relatively large amount of upper body mass could elevate the risk for any lower extremity sprain or strain, but its influence may be greatest at the most distal joints. Furthermore, the exclusion of a variable from the final prediction model should not be interpreted as an indication that it completely lacks predictive value. In future research on ankle injuries, investigators might identify an interaction between estimated MMOI and some modifiable factor (such as postural balance deficiency, muscle weakness, or structural malalignment) that would further support a highly individualized approach to injury prevention.
Another limitation of this study was the possibility that important predictors of collegiate football injury risk were not included in our 3-season cumulative analysis. For example, computerized neurocognitive testing was not included as a standard component of the preparticipation assessment until the last year of the study period. Authors37 of a recently completed single-season univariable analysis suggested that neurocognitive reaction time was a strong predictor of lower extremity sprains and strains, but a larger dataset is needed to establish its importance in relation to the factors that have been confirmed as predictors of injury risk through this 3-season analysis. The anterior reach component of the Star Excursion Balance Test also has been identified recently as a strong predictor of ankle and knee injuries among collegiate football players.38 Much more research is needed to develop prediction models for specific injury types (eg, acute trauma, chronic instability, overuse syndrome) at specific locations (eg, joint, bone, muscle group) in specific populations (eg, age group, sex, sport).
Although a relatively broad operational definition of injury can preclude identification of risk factors that are specific to a given type of injury (eg, lateral ankle sprain), it may identify other risk factors that contribute to multiple types of injuries. Thus, a prediction model for an outcome that is broadly defined may provide greater clinical utility than one that is highly specific to a single type of injury. Furthermore, a complex mathematical model that requires data derived from multiple time-consuming test procedures is not likely to be used by most practicing clinicians. A clinical prediction guide can provide an individualized estimate of injury risk that is more accurate than a clinician's intuitive assessment, but practical considerations dictate that the guide's components must be easy to remember and simple to apply.17 Our multiyear effort to reduce the amount of time required to administer screening tests while attempting to maintain or improve the predictive power of their results yielded a clinical prediction guide that has fewer components than the one originally derived from analysis of a single season of data yet retains a high degree of prediction accuracy.
The results of our analysis supported simplification of the previously developed 4-factor prediction model to a 3-factor model that includes 2 modifiable injury risk factors. A relatively mild degree of low back dysfunction and a suboptimal level of core-muscle endurance appeared to be important injury risk factors that should be identified and addressed. Although exposure to game conditions is the dominant injury risk factor for collegiate football players, its combination with a potentially modifiable factor that adversely affects core function appeared to substantially increase the risk for a core or lower extremity sprain or strain.