Over the past 20 years, research on the training-load–injury relationship has grown exponentially. With the benefit of more data, our understanding of the training-performance puzzle has improved. What were we thinking 20 years ago, and how has our thinking changed over time?
Although early investigators attributed overuse injuries to excessive training loads, it has become clear that rapid spikes in training load, above what an athlete is accustomed to, explain (at least in part) a large proportion of injuries. In this respect, it appears that overuse injuries may arise from athletes being underprepared for the load they are about to perform. However, a question of interest to both athletic trainers (ATs) and researchers is why some athletes sustain injury at low training loads while others can tolerate much greater training loads. Higher chronic training loads, well-developed aerobic fitness, and lower body strength appear to moderate the training-injury relationship and provide a protective effect against spikes in load.
The training-performance puzzle is complex and dynamic—at any given time, multiple inputs to injury and performance exist. The challenge facing researchers is obtaining large enough longitudinal data sets to capture the time-varying nature of physiological and musculoskeletal capacities and training-load data to adequately inform injury-prevention efforts. The training-performance puzzle can be solved, but it will take collaboration between researchers and clinicians as well as an understanding that efficacy (ie, how training load affects performance and injury in an idealized or controlled setting) does not equate to effectiveness (ie, how training load affects performance and injury in the real-world setting, where many variables cannot be controlled).
Rapid increases in training load relative to load capacity have been associated with injury among athletes in multiple sports.
Many other factors (eg, age, previous injury, low chronic load, poor strength and aerobic fitness) in isolation, or in combination with spikes in training load, can also contribute to injury.
Consideration of these factors, along with the short- and longer-term responses to training load, provides an evidence-based approach for optimizing positive training adaptations in athletes.
Training load can be considered both in terms of the external stimulus applied (external load) and the physiological, psychological, or biomechanical response to the applied external load (internal load).1 Examples of external load include the weight lifted, distance run, and number and intensity of jumps, whereas examples of internal load include heart rate, blood lactate concentration, rating of perceived exertion, joint load, muscle load, and perceived tissue damage.2 External : internal load ratios have been proposed as measures of fatigue3 and (mal)adaptation.4
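To make the idea of an external : internal load ratio concrete, the following minimal sketch (with entirely hypothetical values and arbitrary units) divides an external measure (distance covered) by an internal measure (session rating of perceived exertion multiplied by duration); this is only one of several ways such a ratio could be constructed.

```python
# Hypothetical external:internal load ratio for a single running session.
# External load: distance covered. Internal load: session RPE x duration.
# All values and units are illustrative assumptions only.
distance_m = 6500          # external load: total distance (m)
session_rpe = 7            # rating of perceived exertion (CR-10 scale)
duration_min = 60          # session duration (min)

internal_load = session_rpe * duration_min   # arbitrary units
ratio = distance_m / internal_load

print(f"External:internal load ratio: {ratio:.1f} m per arbitrary unit")
# A falling ratio over time (less external output for the same internal cost)
# might flag fatigue, whereas a rising ratio might reflect positive adaptation.
```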
Although the concepts have been used in applied practice for decades, acute and chronic training loads have only recently been described in the literature.5 Acute training loads can be as short as 1 session, but in most sports, they are reported on a weekly basis. Chronic training loads represent the training performed over a longer period of time (eg, 3–6 weeks). In this respect, chronic training loads are analogous to a state of fitness and acute training loads are analogous to a state of fatigue.6 The ratio between acute and chronic training loads has been termed the acute : chronic workload ratio (ACWR).7
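As a concrete illustration of the calculation (using hypothetical weekly loads in arbitrary units, a 1-week acute window, and a 4-week chronic window expressed as a rolling average), the ACWR can be computed as follows:

```python
# Hypothetical weekly training loads (arbitrary units), oldest to most recent.
# A 1-week acute window and a 4-week chronic window are assumed here.
weekly_loads = [1500, 1600, 1700, 2400]

acute_load = weekly_loads[-1]                          # most recent week (fatigue)
chronic_load = sum(weekly_loads) / len(weekly_loads)   # 4-week rolling average (fitness)

acwr = acute_load / chronic_load
print(f"Acute: {acute_load}, Chronic: {chronic_load:.0f}, ACWR: {acwr:.2f}")
# With these illustrative numbers, the ACWR is approximately 1.33,
# indicating the most recent week exceeded the recent training average.
```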
Training load has been associated with performance,8–11 and various theoretical models have been proposed to explain the link between training load and injury.2,12,13 Training loads that are too high or too low may result in underperformance and increased injury risk. As such, the monitoring and management of training loads has become commonplace among high-performance athletes. One of the first studies8 to investigate the influence of training load on performance addressed elite skaters, runners, and cyclists. A 10-fold increase in training load was associated with a performance improvement of approximately 10%. However, the negative consequences associated with training are also thought to be dose related, and both high and low loads14–24 have been associated with a greater incidence of injury. Since this work was conducted, training-load research has grown exponentially.25
The quality of scientific research is typically evaluated using the Sackett hierarchy of evidence26 (Figure 1). Although different versions of the hierarchy exist,27 all have similar traits. First, all evidence is incomplete and continues to evolve. Second, not all scientific publications are created equal—editorials and letters to the editor are weaker than randomized controlled trials, meta-analyses, and systematic reviews. At present, the literature on training load and injury includes systematic reviews, cohort studies, case-control studies, cross-sectional surveys, and case studies. To date, no meta-analyses have been performed, and few randomized controlled trials have been published.28–30 Among the randomized controlled trials, additional training load (in the form of muscular strengthening, coordination, mobility, and flexibility) either reduced injury prevalence29,30 or improved performance.28 As such, a moderate-to-strong level of evidence has linked training load with injury,31–36 although several studies37,38 showed no association. Clearly, additional investigation is needed to better understand the interaction between training load and injury. Clinicians are advised to consider the strength of the evidence when evaluating the training-load literature.
A large volume of field-based research, coupled with statistical and technological developments, has advanced our understanding of how training affects performance and injury. As such, and as is the case with any scientific research, the training-load literature continues to evolve. With the benefit of hindsight and more data, so too have interpretations of the training-performance puzzle evolved. What were we thinking 20 years ago, and how has our thinking changed over time? This brief review summarizes the peer-reviewed research on training load, injury, and performance. In this article, I have reanalyzed early findings to provide a hypothetical model of how physical qualities and training load interact to influence injury risk. Finally, this review provides examples of how training load can influence the day-to-day practice of athletic trainers (ATs) and how these clinicians can use this information to support the athletes in their care.
THE PERFORMANCE PUZZLE
Physical Qualities and Injury
Researchers39–41 have demonstrated a relationship between physical qualities and injury risk; across a wide range of sports, athletes with poorer fitness were at greater risk of injury. For example, amateur team-sport athletes with poor maximal aerobic capacity (VO2max <42.8 mL·kg⁻¹·min⁻¹) were >6 times more likely to sustain an injury than those with higher VO2max (≥47.7 mL·kg⁻¹·min⁻¹).42 Athletes with poorer upper body strength and prolonged high-intensity running ability were 2 to 3 times more likely to sustain an injury than fitter and stronger athletes.43 Although these findings provided important information on the physical qualities that may protect against injury in specific sports, a limitation of these studies was that, in all cases, the physical qualities were measured in the preseason period before any training was performed. This limitation was acknowledged by the authors, yet this approach implied that these physical qualities were stable and did not change over time or with training. However, in practice, when clinicians identify poor physical qualities or less than satisfactory musculoskeletal screening results, an intervention is implemented to rectify the deficiency.
Planned and Actual Training Loads
Load management is one of the most common phrases used in professional sport. In its simplest form, load management involves the planning, prescription, and evaluation of training. Whether prescribing training loads to develop physical qualities (eg, strength, speed, or aerobic fitness) or skills, a plan is required to optimize adaptation. This plan is often based on a combination of art and science but is always based on achieving a specific outcome (improved performance, reduced fatigue, or reduced injury risk or a combination of all 3). Monitoring training loads allows clinicians to evaluate the prescribed training program against the plan. Training can be monitored by evaluating external (eg, global positioning system technology and inertial measurement sensors) and internal (eg, heart rate, blood lactate concentration, and rating of perceived exertion) loads. Figure 2 shows an example of how ATs can monitor planned and actual training loads.44 Using the session rating of perceived exertion to quantify internal training loads, ATs can determine when athletes have undertrained or overtrained relative to their plan. This approach is a starting point for ATs to maximize the positive and minimize the negative responses associated with training.
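A minimal sketch of this monitoring approach follows; the planned and completed sessions are hypothetical, and session load is quantified as the session rating of perceived exertion multiplied by duration (ie, session-RPE load).

```python
# Planned vs actual session-RPE load: load (arbitrary units) = RPE (CR-10) x minutes.
# The sessions below are hypothetical examples of a weekly training plan and the
# training the athlete actually completed.
planned_sessions = [(7, 60), (5, 45), (8, 75)]   # (planned RPE, planned minutes)
actual_sessions = [(8, 60), (5, 40), (9, 75)]    # (reported RPE, actual minutes)

planned_load = sum(rpe * minutes for rpe, minutes in planned_sessions)
actual_load = sum(rpe * minutes for rpe, minutes in actual_sessions)

difference_pct = 100 * (actual_load - planned_load) / planned_load
print(f"Planned: {planned_load}, Actual: {actual_load}, Difference: {difference_pct:+.1f}%")
# A large positive difference flags training above plan (possible overtraining);
# a large negative difference flags training below plan (possible undertraining).
```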
Training Load and Injury
Anderson et al45 were among the first to investigate the relationship between training load and injury. In a study of 12 female college basketball players, the authors found that greater training loads were associated with higher injury rates; the highest incidence of injury occurred in the first week of training, when training loads were greatest. In a separate study, the relationship between training load and injury was examined in rugby league players.46 First, the training load progressively increased during the preseason period and decreased throughout the competitive phase of the season. Second, a significant relationship (r = 0.86) was observed between changes in training load and injury; greater changes in training loads were associated with greater injury incidence (Figure 3A). These results suggested that the harder athletes trained, the more injuries they would sustain. The obvious challenge was to provide an adequate training stimulus to enhance physical fitness and performance without unduly increasing the risk of injury.
REEVALUATING OUR THINKING
Although researchers investigating univariate risk factors for injury (eg, physical qualities) and relationships between training load and injury have provided important insights into training, fitness, and injury, a limitation of these studies was that each tended to be interpreted in isolation. Each study provided an important piece of the training-performance puzzle, but many pieces of the puzzle were still absent. How did fitness, training load, and injury interact? How could athletes train to maximize physiological adaptations and minimize injury risk? Was there an optimal training load for athletes, and how could it be identified? To demonstrate how thinking about training load has evolved, a hypothetical reanalysis of the early study46 of the association between training load and injury in rugby league players is presented here (Figure 3). The first 3 months of the season represented the highest training load, with injury rates closely tracking the increases in training load. The training loads performed in the preseason were associated with a 2.3-fold greater injury risk than those performed in season. However, whether these high training loads explained the high injury rates in the preseason or the low injury rates in season was unclear. Could it be that the greater preseason training loads allowed players to better tolerate the in-season training loads?47,48 Or could it be that the low training loads performed in the off-season contributed to the preseason injury rates? Athletes are commonly prescribed postseason training programs, but not all athletes complete these programs.49 As such, hypothetically, at least some athletes will return to the preseason in a deconditioned state (Figure 3B). Coupled with the findings that (1) athletes with poor physical qualities were at greater risk of injury39–42 and (2) higher preseason training loads were associated with greater injury rates,46 it is also notable that players who completed <18 weeks of training before sustaining their first injury were approximately 9 times more likely to sustain a subsequent injury.42 Taken together, these results suggest that insufficient training loads during the off-season period may lead to poor fitness, which in turn may increase injury risk. In this respect, the off-season period represents a window of opportunity50 for developing physical qualities that allow athletes to withstand the training load, thereby decreasing the injury risk during the preseason and in-season periods.
As early as 1992, Kibler et al51 proposed that injury occurred due to the load exceeding the tissues' capacity to withstand the load. Therefore, overuse injuries were often attributed to overtraining. However, circa 2014, evidence of athletes sustaining injuries at low training loads emerged.52 This may explain why some athletes undergo a period of rehabilitation and then sustain a subsequent injury (again at low training loads; Figure 3C). These findings posed a challenge for researchers and clinicians. Why were injuries occurring at low training loads? Wasn't overtraining (and high training loads) responsible for overuse injuries?
Overuse or Underprepared?
To understand the paradox of athletes sustaining injuries at low absolute training loads, it is worth revisiting the factors that are thought to predispose an athlete to overuse injuries. In 1986, Micheli53 proposed several predisposing factors for overuse injuries in athletes: anatomic malalignment; muscle-tendon imbalances in strength, endurance, or flexibility; footwear; surface; and preexisting disease states. In addition, an inappropriate progression of the rate, intensity, and duration of training was suggested as contributing to these overuse conditions. The obvious question for clinicians was “What constitutes an inappropriate progression of the rate, intensity, and duration of training?”
For load capacity to improve, the applied load must be slightly greater than the athlete's current capacity. However, if the load applied is excessively greater than the athlete's current capacity, then the athlete is at an increased risk of injury. Progressive overload is one of the most well-known training load principles.
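As a simple numerical sketch of progressive overload (the starting load and the 10% weekly increment are assumptions chosen purely for illustration, not prescribed values), weekly load can be increased in small, systematic steps above the preceding week:

```python
# Hypothetical progressive-overload plan: each week's load is a small, fixed
# percentage above the previous week's. The starting load and 10% increment
# are illustrative assumptions only.
current_load = 1000.0     # arbitrary units
weekly_increase = 0.10    # 10% increase per week

plan = []
for week in range(1, 7):
    current_load *= 1 + weekly_increase
    plan.append(round(current_load))

print(plan)   # [1100, 1210, 1331, 1464, 1611, 1772]
```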
In 2014, Hulin et al52 first described the relationship between the ratio of acute (ie, short term, a surrogate measure of fatigue) to chronic (ie, longer term, a surrogate measure of fitness or physical capacity) training load, the ACWR (also described as training-stress balance), and injury. Before this study, rapid increases or sudden changes in training load were simply viewed in terms of week-to-week changes. Unfortunately, week-to-week changes in training load do not account for individual differences in capacity among athletes. Hulin et al52 showed that when acute loads were rapidly increased relative to chronic load, the risk of injury doubled. This was the first attempt to quantify changes in training load relative to an athlete's capacity and then determine the relationship with subsequent injury. The findings demonstrated that the load relative to what an athlete was prepared to handle was more strongly associated with overuse injury than high absolute training loads in isolation. In fact, high chronic training loads were associated with the lowest likelihood of injury.
Blanch and Gabbett7 investigated the relationship between the ACWR and subsequent injury in 3 professional sports: cricket,52 rugby league,54 and Australian football. A second-order polynomial curve was fit to the data, with 53% of the variance in injury likelihood explained by the ACWR. These findings have 2 important implications: (1) a rapid increase in training load explains a large number of overuse injuries, but (2) 47% of the variance in injury likelihood is still explained by factors other than training load. The authors proposed a model that used acute and chronic training loads to inform the decision-making process when returning athletes to sport.
In their article on 3 Australian sports, Blanch and Gabbett7 stated, “this representation is illustrative only and should only be considered a guide to how the acute and chronic loads of athletes can be manipulated to minimize the risk of injury” and “each type of loading for different sports will most likely require its own specific model.” Since the initial model was published, the ACWR injury model has been validated by >25 peer-reviewed studies of different sports from different research groups.55–80
Challenges in Capturing Chronic Load and Calculating the ACWR
As the training-load literature continues to evolve, controversies have arisen. First, as chronic load requires a longer period of time to develop, the ACWR cannot be calculated until an adequate amount of training (3 to 6 weeks) has been performed (and captured). Various mathematical and statistical limitations of the ACWR have been identified,81 and researchers82 have already proposed solutions to overcome some of these challenges (eg, an initial value problem). Other challenges (eg, sparse data sets) are more difficult to overcome, as (1) recent training-load–injury research has mostly been performed by practitioners working with single teams in the field (and not academics working in traditional research settings), (2) ATs work with sparse data (ranging from 1 athlete to 1 team) on a daily basis (ie, by definition, elite athletes are rare), and (3) competing teams typically do not share data. Applied solutions and collaboration between academics and clinicians will be required to truly bridge the gap between research and practice.
Second, 2 main approaches have been used to calculate the ACWR: rolling averages and exponentially weighted moving averages (EWMAs). Rolling averages treat every training load performed over a 4-week window equally—the load performed 28 days ago is considered to contribute to adaptation and injury risk equally to the load performed 1 day ago. Williams et al83 proposed the EWMA model to account for the decaying effect of training load over time. To date, the evidence is equivocal: 1 study55 showed greater sensitivity of training-load–injury risk models when the ACWR was derived using the EWMA (ie, the ACWR injury curve shifted to the left), whereas another79 demonstrated no difference between rolling averages and EWMAs in determining injury risk.
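The contrast between the two calculations can be sketched as follows, assuming hypothetical daily loads, a 28-day chronic window, and the commonly used decay factor lambda = 2 / (N + 1); this is a simplified illustration, not a reproduction of any published data set.

```python
# Rolling average vs exponentially weighted moving average (EWMA) chronic load.
# Daily loads are hypothetical; the decay factor lambda = 2 / (N + 1) with
# N = 28 days is an illustrative choice for the chronic window.
daily_loads = [300, 320, 0, 350, 400, 0, 380] * 4   # 28 hypothetical days, oldest first

# Rolling average: every day in the window contributes equally.
rolling_chronic = sum(daily_loads) / len(daily_loads)

# EWMA: recent days are weighted more heavily; older days decay toward zero.
lam = 2 / (28 + 1)
ewma_chronic = daily_loads[0]
for load in daily_loads[1:]:
    ewma_chronic = lam * load + (1 - lam) * ewma_chronic

print(f"Rolling-average chronic load: {rolling_chronic:.0f}")
print(f"EWMA chronic load:            {ewma_chronic:.0f}")
# The same daily data can therefore yield different chronic loads (and ACWRs)
# depending on whether recent sessions are weighted more heavily.
```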
Finally, traditional calculations of acute and chronic workloads have been mathematically coupled; that is, the chronic workload includes the current week's acute workload.84 Coupled chronic workloads generate spurious correlations with acute workloads, which have been hypothesized to lead to biased inferences.84 In contrast, uncoupled chronic workloads exclude the acute workload of the most recent week. The implication of coupling acute and chronic loads is that the increased injury risk observed with large ACWRs is simply due to a spurious correlation. However, if this was the case, then (1) the relationship between ACWR and injury for coupled and uncoupled data would be completely different, and (2) spikes in training load observed using an uncoupled ACWR would not be associated with an increased injury risk.
The ACWR-injury relationship has recently been compared using coupled and uncoupled calculations of the ACWR.85 When the values were expressed as percentile ranks, higher ACWRs were associated with an increased injury likelihood for both methods, and the injury risks using the coupled and uncoupled ACWRs did not differ. When expressed as absolute ACWRs, however, uncoupling acute and chronic loads shifted the ACWR-injury curve to the right. From a practical perspective, these findings suggest that coupling or uncoupling the ACWR makes little difference to the ACWR-injury relationship. Furthermore, these results are consistent with those of previous authors: higher ACWRs were associated with a greater risk, irrespective of whether the acute and chronic workloads were coupled or uncoupled.
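A minimal sketch of the two calculations, using the same hypothetical 4 weeks of load, shows why the absolute values differ even though the underlying spike is identical (the weekly loads are illustrative assumptions only).

```python
# Coupled vs uncoupled ACWR for the same hypothetical 4 weeks of training load
# (arbitrary units, oldest to most recent).
weekly_loads = [1500, 1600, 1700, 2400]
acute = weekly_loads[-1]

coupled_chronic = sum(weekly_loads) / 4          # chronic includes the current acute week
uncoupled_chronic = sum(weekly_loads[:-1]) / 3   # chronic excludes the current acute week

print(f"Coupled ACWR:   {acute / coupled_chronic:.2f}")     # ~1.33
print(f"Uncoupled ACWR: {acute / uncoupled_chronic:.2f}")    # ~1.50
# The same spike yields a larger absolute value when uncoupled, consistent with
# the rightward shift of the ACWR-injury curve described above.
```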
In summary, a growing body of work has demonstrated an association between training load and injury; however, many questions clearly remain to be answered if we are to further understand this complex relationship.
WHAT MISTAKES DO CLINICIANS MAKE WHEN INTERPRETING THE ACWR?
Several publications have addressed common mistakes that arise when interpreting ACWR data. First, given that the ACWR explains a little more than half of the variance in injury likelihood, other factors (eg, chronic load, biomechanics) obviously contribute to the injury risk. As such, the ACWR should never be viewed in isolation.25,86,87 Second, because the injury risk increased at an ACWR of approximately 1.5 to 2.0,6 several clinicians86 have treated this value as a threshold at which all training should cease. It is important to recognize that an ACWR of 1.5 is not a magic number—increased risk does not guarantee injury will occur.25 These results are consistent with those of others88 who demonstrated significant associations between musculoskeletal screening tests (eg, biomechanical or strength measurements) and injury. Although these associations may help us understand potential causative factors (acknowledging that association does not equal causation), in isolation, these tests lack the sensitivity and specificity to predict injury with sufficient accuracy.88 A similar line of thinking should be applied to the ACWR.
WHICH FACTORS SEPARATE ROBUST FROM FRAGILE ATHLETES?
If sudden changes in training load contribute to overuse injuries, perhaps the most interesting question is “Why do some athletes tolerate rapid spikes in load and others do not?” Which factors separate the robust from the fragile athletes? Evidence89 has shown that specific physical qualities moderate the relationship between training load and injury. For example, Malone et al62 demonstrated that, when exposed to rapid spikes in training load, players with a higher level of aerobic fitness were at lower injury risk than players with poorly developed aerobic qualities. Similar results have been noted for tests of speed, repeated-sprint ability, and lower body strength; when exposed to rapid changes in training load, players with better developed physical qualities had a lower risk of injury.64 It is likely that many moderators of the training-load–injury relationship are still to be uncovered. Equally, some moderators may be more important for some sports than others. For instance, strength might be considered a critical moderator of the training-load–injury relationship for an American football player, whereas aerobic fitness may be more important for a marathon runner. Further research is required to determine the potential myriad of moderating factors that differentiate robust from fragile athletes.
THE INTERACTION OF MULTIPLE VARIABLES
Increasing Capacity Involves More Than Increasing Load
Verhagen and Gabbett90 recently described the relationship among load, load capacity, health, and performance (Figure 4). Positive training adaptations occur when the load is gradually and systematically increased above an athlete's current load capacity. This would suggest that in order to increase load tolerance, all one has to do is safely progress the load above the current capacity. However, load capacity is also influenced by factors associated with health (eg, mood, stress, sleep quality). Therefore, the load that can be tolerated today may not be tolerable tomorrow.90 This indicates that in order to safely progress load, clinicians must also consider an athlete's health.90 When building load capacity, ATs may need to regress load before they can sensibly progress it.
Which Comes First: High Training Loads or the Robust Athlete?
As stated earlier, various physical qualities moderate the relationship between training load and injury. For example, tolerance to spikes in training load is moderated by aerobic fitness and lower body strength: athletes with well-developed physical qualities have a reduced risk of injury; given the same increase in load, athletes with poorly developed physical qualities have a greater risk of injury.64 Nonetheless, this presents somewhat of a chicken-or-egg problem. Which comes first: load or the ability to tolerate load? That is, the development of physical qualities (that protect against spikes in load) requires high training loads, but tolerating high training loads requires well-developed physical qualities. Presumably, structure-specific load capacity, which is associated with a degree of physical capacity (eg, aerobic fitness, strength), allows an individual to tolerate training load. In turn, application of the training load further develops these physical qualities, which eventually leads to sport-specific load capacity (Figure 5).91
Integrating Musculoskeletal Screening Results and Training-Load Data
Because of the complex and dynamic manner in which organisms behave92 and the multiple inputs to injury and performance, the reductionist approach used by most authors42,46 represents a limitation of previous research. Møller et al59 successfully integrated musculoskeletal screening results, training load, and injury data in handball players. Large changes in throwing load (>60% per week) were associated with a 2-fold increase in the shoulder injury rate (hazard ratio = 1.91). The effects of moderate (20% to 60% per week) and large (>60% per week) increases in training load were exacerbated in the presence of poor external-rotation strength and scapular dyskinesis. These findings provide insight into why some athletes may tolerate rapid changes in training load, whereas others cannot. Athletic trainers commonly use musculoskeletal screening and physiological test results in combination with longitudinal training-load data to inform decision making. Most of these decisions are evidence based but made in real time within the high-pressure constraints of elite sport. The challenge facing researchers is obtaining large enough longitudinal data sets to capture the time-varying nature of physiological and musculoskeletal capacities and training-load data to adequately inform injury-prevention efforts.93
IF TRAINING IS ABOUT MATHEMATICS, DON'T FORGET THE DENOMINATOR
Although the ACWR has become popular as a method for safely progressing and regressing training loads, optimal loading involves more than simply monitoring athletes. Furthermore, athlete monitoring involves more than capturing a single variable.94 With a large focus on the ACWR, it is important that clinicians remember the denominator in the equation—chronic load. Chronic load can be viewed as a surrogate measure of an athlete's fitness or physical capacity. Evidence47,48 suggests that athletes who completed a greater number of preseason sessions missed fewer in-season games due to injury. These results highlight the important role of an effective preseason program in minimizing the risk of in-season injuries. Similarly, others* observed that higher chronic loads were associated with a lower injury risk. Although training loads will need to be reduced in some cases to promote recovery, these findings indicate that restricting training loads on a regular basis to protect against overuse injury is unlikely to produce robust and resilient athletes. Athletes need to load in order to withstand load.
HOW CAN ATHLETIC TRAINERS USE TRAINING LOAD IN THEIR DAY-TO-DAY PRACTICE?
Athletes and clinicians have been involved in the training process in one form or another for centuries, yet training-load research is relatively new.25 Consequently, the influence of training load on performance and injury has not been exhaustively studied in every sport. Equally, even though rehabilitation programs for tendon,106–109 muscle,110 bone,111–113 and joint114,115 injuries have been proposed, the ideal sport-specific program that can be applied to all athletes (injured or healthy) does not exist. So, if the evidence on training load is incomplete, does this mean ATs can still use evidence-based practice? Clearly, the answer is yes because effective evidence-based practice not only relies on the use of the best available peer-reviewed research but also integrates clinical expertise and athlete values and expectations into the decision-making process (Figure 6).116,117
Among their many roles, ATs design rehabilitation programs for athletes, assist and monitor injured players as they progress toward recovery, and work as part of a multidisciplinary team to evaluate the health and condition of players. Early loading (ie, 2 days postinjury) results in faster recovery from muscle injuries than delayed loading (9 days postinjury).118 With this in mind, application of appropriate training load should be the cornerstone of medical care provided by the AT during the acute injury phase. The planning, prescription, and monitoring of training loads forms an important part of the protocol for returning athletes to full capacity.
Athletic trainers can use training loads to design appropriately staged rehabilitation programs in order to safely return athletes to competition after injury. An understanding of the athlete's current capacity, capacity required for the sport, and expected biological healing time allows ATs to plan rehabilitation programs that minimize spikes in load during the return-to-sport process.119 For example, when determining appropriate loading progressions for a specific sport (eg, baseball), the clinician must comprehensively assess the sport-specific (eg, muscular strength) and structure-specific (eg, scapular-control)120 capacities required to ensure that training loads (eg, pitch counts) are progressed on an individual basis. A myriad of factors (eg, age, tissue health, psychological stress, sleep, strength, aerobic fitness) can affect load tolerance, resulting in some athletes tolerating faster (or slower) progressions than others.
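As an illustrative sketch of this planning process (the starting load, the sport's chronic-load demand, the 4-week rolling window, and the cap on weekly progression are all assumptions chosen for demonstration), the number of weeks needed to rebuild chronic load without spiking can be estimated as follows:

```python
# Hypothetical return-to-sport progression: rebuild weekly load from the athlete's
# current level toward the chronic load demanded by the sport, capping each week
# at a modest ratio above the rolling average of recent weeks. All numbers are
# illustrative assumptions, not prescribed values.
current_weekly_load = 800.0   # arbitrary units at the start of rehabilitation
sport_demand = 2000.0         # approximate chronic load required for competition
max_weekly_ratio = 1.2        # cap on the weekly load relative to the rolling average

loads = [current_weekly_load]
while loads[-1] < sport_demand:
    recent = loads[-4:]                          # up to the last 4 completed weeks
    chronic = sum(recent) / len(recent)          # rolling-average chronic load
    next_load = min(chronic * max_weekly_ratio, sport_demand)
    loads.append(round(next_load, 1))

print(f"Weeks of progression required: {len(loads) - 1}")
print(loads)
```

Faster or slower progressions would follow from the athlete-specific factors listed above (eg, age, tissue health, sleep, strength).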
If the load applied is considerably greater than the load capacity, unfavorable symptoms (eg, pain) may result. During rehabilitation sessions, the internal load may be monitored to ensure that the athlete is tolerating the applied external load. This information can be used to progress or regress training loads during the session. Equally important, monitoring well-being symptoms (eg, soreness, pain) in the 24 to 48 hours after rehabilitation sessions, with the relevant window depending on the injured tissue type, will indicate when the athlete is ready to load again.
Perhaps the most important use of training-load data is in determining whether an athlete has performed adequate training to safely return to play.7 After injury, ATs must be mindful of both local tissue capacity and sport-specific capacity. Here, working with the performance, medical, and coaching staff as part of a multidisciplinary team is critical. Although the early stages of rehabilitation will be used to restore local tissue capacity, it is important that sport-specific capacity is also maintained during this period. If the sport-specific chronic load is allowed to decline during rehabilitation, the rehabilitation process may be prolonged, resulting in the athlete being underprepared for the acute demands of the sport and at risk of subsequent injury.49
SOLVING THE TRAINING-PERFORMANCE PUZZLE
Answering the complex performance and injury questions that arise in sport is akin to solving a jigsaw puzzle with an infinite number of pieces. As 1 piece is fit into the puzzle, another loose piece takes its place. The training-performance puzzle can be solved but requires collaboration between researchers and clinicians and an understanding that efficacy (ie, how training load affects performance and injury in an idealized or controlled setting) does not equate to effectiveness (ie, how training load affects performance and injury in the real-world setting where many variables cannot be controlled). It is incumbent on ATs to stay abreast of the training-performance literature, understand that research evidence has different levels of strength, and recognize that research will continue to evolve. Equally, researchers are encouraged to gain an appreciation of the day-to-day challenges facing ATs so that their research better informs broader strategies for managing injury risk.