The purpose of this 2-part commentary series is to explain why we believe our ability to control injury risk by manipulating training load (TL) in its current state is an illusion and why the foundations of this illusion are weak and unreliable. In part 1, we introduce the training process framework and contextualize the role of TL monitoring in the injury-prevention paradigm. In part 2, we describe the conceptual and methodologic pitfalls of previous studies that associated TL and injury in ways that limit their suitability for deriving practical recommendations. The first important step in the training process is developing the training program: the practitioner develops a strategy based on available evidence, professional knowledge, and experience. For decades, exercise strategies have been based on the fundamental training principles of overload and progression. Training-load monitoring allows the practitioner to determine whether athletes have completed training as planned and how they have coped with the physical stress. Training load and its associated metrics cannot provide a quantitative indication of whether particular load progressions will increase or decrease the injury risk, given the nature of previous studies (descriptive and, at best, predictive) and their methodologic weaknesses. The overreliance on TL has shifted attention away from the multifactorial nature of injury and the roles of other important contextual factors. We argue that no evidence supports the quantitative use of TL data to manipulate future training with the purpose of preventing injury. Therefore, determining “how much is too much” and how to properly manipulate and progress TL are currently subjective decisions based on generic training principles and our experience of adjusting training according to an individual athlete's response. Our message to practitioners is to stop seeking overly simplistic solutions to complex problems and instead embrace the risks and uncertainty inherent in the training process and injury prevention.

Injury prevention is a multimodal process in which various professionals should collaborate, contributing their expertise to reach this common goal. Improving the understanding of each professional's role within this interdisciplinary team facilitates the collaborative process. The interest in injury prevention for athletes and team sports is generally motivated by the negative effects injuries can have on performance and the associated costs incurred by sporting organizations.1,2  Furthermore, common sense dictates that athletic performance cannot be optimized when athletes are restricted by injury. No coach wishes any athlete under his or her care to become injured, but it is true that, to maximize performance, training must sometimes be close to the limit of athlete tolerance, eg, between functional and nonfunctional overreaching while avoiding overtraining.3  Athletes and coaches might deliberately accept an increased injury risk as a tradeoff for potential performance benefits.4  Less common, however, may be the acceptance of risk by team managers, supporters, and the media, which may raise the pressure on staff to minimize injury occurrence and effects while maximizing athlete availability and physical output.

The hypothesis of a relationship between sport injury and training load (TL) was proposed by Kibler et al5  more than 30 years ago. In recent years, TL-injury research has grown exponentially, particularly after 2 publications6,7  and subsequent editorials8,9  that contributed to catalyzing the interest in this area. Authors of these seminal papers and the subsequent research have pushed practitioners to take a more explicit role in the monitoring and management of TL for injury prevention. In fact, the high-performance sport industry has been quick to adopt and implement the metrics and recommendations from the expanding body of literature on the association between TL and injury. Though it is great to see early adoption of this concept, we argue that no strong evidence supports the quantitative use of TL data to manipulate future training for the purpose of preventing injury.

Some recommendations (eg, do not train too much too soon) make sense but are not derived from or supported by the literature, with associations being shown in different directions or not at all (see Part 2). In most cases, these recommendations are the outcome of selecting, from the diverse findings available, those that fit popular beliefs and comply with common sense principles. The predisposition of humans to fit information to currently held beliefs (confirmation bias), along with the recent plethora of investigators reporting any kind of association, has only solidified the illusion that we have more control over the occurrence of injury than we in fact do.10  Our human inclination to simplify complex problems has possibly been the driver for this current theory being so widely adopted so quickly.

In this commentary series, our goal was to explain why we believe our ability to influence injury risk by manipulating TL is largely an illusion and why the foundations of this illusion are weak and unreliable. In part 1, we introduce the training process framework, and we contextualize the role of TL monitoring in the injury-prevention paradigm by providing operational alternatives to the quantitative use of metrics derived from TL measures. In part 2, we describe the conceptual and methodologic pitfalls of previous studies that associated TL and injury in ways that limit their suitability for deriving practical recommendations.

Training Load and Its Role in Monitoring

Training load can be defined as the input variable that practitioners use to induce a training outcome.11 Training load includes all the exercise sessions that potentially elicit training effects (including competitions). About 15 years ago, the idea of internal and external load was introduced to explain the fundamental stimulus-response concept underpinning training12: the psychophysiological response experienced by the athlete during the training process is the stimulus for the biological and psychological adaptations (the training outcome). The external load is the physical work prescribed in the training program (the quality, quantity, and organization of the exercises selected)13,14 that ultimately induces a specific psychophysiological response (ie, internal load). The same framework has been extended to biomechanical components.15

No criterion standard measures of TL exist, but measures that may be more or less appropriate in relation to the context and the target are available.12 For example, proxy measures of external load are usually specific to the nature of the training undertaken. The external load in resistance training can be the external resistance lifted; however, we may be more interested in the work completed, the mechanical load experienced, or the velocity generated during lifting.16 In team sports, external load can be assessed as the total distance covered in specific speed bands or accelerations. Similarly, measures of internal load (ie, the responses to the external load), such as heart rate, can be appropriate and valid for endurance-based exercises, but they are less valid for quantifying the cardiovascular load during sprint-based or intermittent exercises. This applies to other proxies of internal load, such as blood lactate level (also not very feasible in practice) and rating of perceived exertion. Any measure has strengths and weaknesses that should be considered when selecting it and interpreting its results.
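To make one of the internal-load proxies mentioned above concrete, the minimal sketch below computes the session-RPE load described by Foster17 (session rating of perceived exertion multiplied by duration, expressed in arbitrary units). The sessions, durations, and RPE values are hypothetical and are included purely for illustration.

```python
# Minimal sketch of the session-RPE internal-load proxy (Foster, ref 17):
# session load (arbitrary units, AU) = rating of perceived exertion (CR-10) x duration (min).
# The sessions below are hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class Session:
    name: str
    duration_min: float   # how long the session lasted
    rpe_cr10: float       # athlete-reported perceived exertion (0-10 scale)

    @property
    def srpe_load_au(self) -> float:
        """Session-RPE load in arbitrary units (AU)."""
        return self.duration_min * self.rpe_cr10

week = [
    Session("intervals", 60, 7),
    Session("small-sided games", 75, 6),
    Session("gym", 45, 5),
]

weekly_load = sum(s.srpe_load_au for s in week)
print(f"Weekly session-RPE load: {weekly_load:.0f} AU")  # 420 + 450 + 225 = 1095 AU
```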

The training-process framework includes essential measurable components necessary for monitoring and controlling the whole training process: (1) the external load, (2) the internal load, and (3) the training outcome (Figure 1). Assessing these components allows the practitioner to understand whether the external load has induced the planned psychophysiological response (internal load) and whether that load has induced the expected adaptations (indirectly assessed by measuring the training outcome). Failure to meet the target responses can be used to provide feedback to modify the training plan (feedback loop).

Before considering a training program effective or ineffective, the practitioner needs to be sure that the athlete has completed the training as planned. This second level of control is carried out by setting standards (TL planned) and comparing the measured TL (TL completed) with those standards (Figure 2). If the standards are met, the training can proceed as planned; if the standards are not met, corrective actions may be needed after the reasons for the failure are determined. For example, a practitioner may have planned a certain number of sprints, repetitions, distance run above a threshold, or time above a percentage of maximal heart rate. Although these goals can be easily achieved by athletes completing highly controllable exercises (eg, gym, sprint sessions, generic interval training), it is more difficult to predict the individual responses to other forms of specific exercises, such as small-sided games in which the activity is spontaneous and can differ among players. Nevertheless, the practitioner usually has a target TL he or she aims for the athlete to reach with the proposed exercise, and hence, the monitoring of actual TL is crucial. Similarly, TL during competition cannot be manipulated, but it constitutes a training stimulus. Therefore, an estimate of the competition load should be included in TL planning. In essence, the main goal of TL monitoring is to verify whether athletes have been doing what was planned by the practitioner. An additional goal of monitoring is to evaluate how the athletes are coping with and tolerating the TL. For this, measures of TL based on perceptions, such as the rating of perceived exertion,17 together with other responses to internal load can provide useful information.
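A minimal sketch of this planned-versus-completed comparison follows. The sessions, load values, and the 10% tolerance used to trigger a review are hypothetical choices, not recommendations: the appropriate tolerance is a subjective, context-dependent decision made by the practitioner.

```python
# Minimal sketch of the "second level of control" described above: compare the
# completed TL with the planned TL and flag sessions whose deviation exceeds a
# practitioner-defined tolerance. All values and the 10% tolerance are hypothetical.

def flag_deviations(planned: dict[str, float],
                    completed: dict[str, float],
                    tolerance: float = 0.10) -> dict[str, float]:
    """Return relative deviations (completed vs planned) that exceed the tolerance."""
    flags = {}
    for session, target in planned.items():
        actual = completed.get(session, 0.0)
        deviation = (actual - target) / target
        if abs(deviation) > tolerance:
            flags[session] = deviation
    return flags

planned_week = {"Mon": 400, "Wed": 550, "Fri": 300}      # planned TL (AU)
completed_week = {"Mon": 410, "Wed": 420, "Fri": 360}    # measured TL (AU)

for session, dev in flag_deviations(planned_week, completed_week).items():
    print(f"{session}: completed load deviates {dev:+.0%} from plan -> review reasons")
```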

Even though the development of the training plan is not shown in Figure 1, it is an essential requirement for any training process (indeed, the process cannot progress without this step). The development of a training program is usually based on an understanding of the determinants (eg, limiting factors) of performance, which are the physiological systems that the practitioner tries to target when planning the training. In the context of injury prevention, these determinants are the factors related not to performance but to injury occurrence (Figure 3). These components—risk factors and causal mechanisms—must be considered during the planning phase.

When developing a training plan, a coach combines and uses (1) the evidence available, (2) professional knowledge, (3) her or his own experience, and (4) an understanding of the athlete's individual needs. This is no different from the process followed by any practitioner who is developing training or rehabilitation programs. The ability to make decisions based on these components is commonly referred to as evidence-based practice, which is one of the competencies specified by the National Athletic Trainers' Association,18 among other organizations. This process is clearly subjective and depends on the practitioner's level of knowledge, experience, and relationship with the athlete. This degree of subjectivity introduces uncertainty (ie, risk) into the training process, as we can never be certain that what we are doing is the best option. At present, this is unavoidable, and it is common in many professions. An example is the medical profession. Physicians make diagnostic and treatment decisions based on many different pieces of information (clinical history, physical examination results, imaging findings, blood test results, etc). The final diagnosis and resulting treatment recommendation are based on available evidence and knowledge, combined with the personal experiences, biases, and beliefs of the physician and his or her patients. Different physicians may make different decisions but demonstrate outcomes that are equally successful. Clinical professionals have learned to accept this uncertainty, even though it may still elicit an uncomfortable feeling of not having total control over a situation and its outcomes.

Of course, a simple solution to modeling TL and injury risk is very appealing, but injury prevention in high-performance athletes is a complex phenomenon and requires an equally complex consideration of the multiple factors that might contribute to injury.

Similar to a physician who uses all the information at his or her disposal to diagnose a condition and plan a treatment, diverse expert opinions from the multidisciplinary performance support staff will ensure that the best-informed decisions regarding training plans are being made.

Overload and progression are fundamental training principles that have been used for decades to develop training programs.19 Similar training principles are used and advocated in other exercise-related contexts, such as physical activity in special or clinical populations.20 Accordingly, training plans commonly adhere to the idea that progressing load too quickly can have a negative effect on load tolerance as a result of suboptimal adaptations, thereby increasing the risk of nonfunctional overreaching or overtraining.5 These 2 conditions have been suggested to increase the injury risk.5 However, more recently, a purportedly data-driven approach has been popularized: the use of new TL metrics to determine injury risk and guide remodulation of the prescribed TL (path B in Figure 3). Although such a simple approach would be ideal, unfortunately, it remains difficult for several reasons, including the fact that available TL measures have limitations in reflecting injury causes (eg, tissue- or structure-specific strength and mechanical load responses, ie, tissue- or structure-specific stress and strain).21,22 Additionally, significant conceptual and methodologic problems exist with the previously published studies on which this paradigm is based. To decrease the injury risk, a training program should focus on 2 main components: tissue-specific strength and tissue-specific stress and strain, as we23 recently presented in a conceptual framework (Figure 4). Training-load monitoring, instead, pertains to the implementation phase. In this phase, we measure TL and adopt corrective actions when the measures deviate from what was planned by more than a predefined amount (Figure 2). Measures of TL (absolute and relative) can also inform us about whether the load progression is following the plan.

Determining how to manipulate and progress TL (eg, avoiding too much too soon) is a subjective decision based on generic training principles and our own trial-and-error experience of adjusting training based on an athlete's response and load tolerance. Training-load monitoring is a systematic process for evaluating an athlete's TL exposure over time to inform the practitioner about the athlete's progression in a training phase or program. The practitioner can then use this information to provide criteria for manipulating the athlete's future training sessions and treatment or recovery approaches. It can be summarized and simplified in the following steps (a minimal sketch of this workflow follows the list):

  1. systematic measurement,

  2. results summary,

  3. objectively informed critical thinking and processing,

  4. informed decision making.
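The sketch below illustrates steps 1 and 2 of this workflow under simple assumptions: hypothetical daily loads, expressed in arbitrary units, are summarized per week and the week-to-week change is reported. Steps 3 and 4 (critical thinking and decision making) remain with the practitioner.

```python
# Minimal sketch of steps 1-2 of the monitoring workflow above: systematically
# recorded daily loads (step 1) are summarized per week (step 2) so the
# practitioner can judge whether the actual progression follows the plan.
# All numbers are hypothetical and expressed in arbitrary units (AU).

daily_loads_au = {
    "week 1": [400, 0, 450, 0, 380, 520, 0],
    "week 2": [420, 0, 480, 0, 400, 560, 0],
    "week 3": [300, 0, 350, 0, 310, 400, 0],  # eg, a planned unloading week
}

weekly_totals = {week: sum(days) for week, days in daily_loads_au.items()}

previous = None
for week, total in weekly_totals.items():
    change_txt = "-" if previous is None else f"{(total - previous) / previous:+.0%} vs previous week"
    print(f"{week}: {total} AU ({change_txt})")
    previous = total
```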

Part 2 of this viewpoint addresses the different conceptual and methodologic problems that arose when previous researchers recommended the use of TL metrics to establish injury risk and the manipulation of TL to reduce that risk (path B of Figure 3). These arguments are presented to (1) demonstrate to practitioners that the evidence and methods behind this approach are not as strong as perceived and (2) provide advice to authors who would like to address this topic in the future. However, an even more fundamental concern has, to date, been completely ignored in the TL-injury literature: the establishment of a clear causal path. Indeed, in the absence of an attempt to establish causation, practical recommendations regarding manipulation of the investigated prognostic factors are not possible.24 This limitation alone invalidates the practical use of TL measures to determine whether the TL progression is appropriate for reducing the injury risk.

Of Course, Association Is Not Prediction... but Neither Is Causation.

By suggesting manipulation of a prognostic factor (such as altering TL) to influence the likelihood of a future outcome (eg, injury), a cause-effect relationship is clearly assumed. This erroneous interpretation has been widespread in the practical and clinical settings in which practitioners believe that TL manipulation has been proven to reduce the injury risk.25,26 Articles published in scientific journals, such as “High training workloads alone do not cause sports injuries: how you get there is the real issue,”27 have no doubt perpetuated this belief and increased confusion. Unfortunately, these recommendations and associated metrics are now included in commercially available software and athlete management systems and used by international federations for developing TL management and injury-prevention guidelines.28 Authors24,29–31 of previous studies and editorials have acknowledged that association is not prediction, but none have recognized that neither of these reflects causation. Causation is required to suggest that manipulating 1 variable will influence another. Though no authors have yet tried to establish a causal link between TL and injuries, similar practical recommendations are provided, all of which imply causation. These researchers provide associations between TL and injury; however, descriptive associations can only be used for generating hypotheses. A few have even tried to develop predictive models, and it is generally thought that the first country or team able to accurately and consistently predict injuries is likely to attain a substantial competitive advantage. Still, the ability to predict an event does not necessarily shed light on the cause of the event.24 This does not detract from the role of prediction. Theoretically, it is possible to predict, but predictors are not necessarily the causes of the outcome, and therefore, manipulating them does not necessarily affect the predicted event. Nonetheless, when done properly, prediction may provide information that prompts the adoption of other strategies with a proven effect on the event. For example, a high value in a prognostic marker for a disease can trigger a series of additional interventions and further screenings. Unless the relationship is causal, we cannot expect much from manipulating the predictor or prognostic factor. Similarly, a TL metric may be found in the future to predict injury, but unless causation is specifically investigated, manipulating these metrics does not mean that we will alter the injury risk.24,32

The famous example of the spurious relationship between shark attacks and ice cream sales demonstrates this well. A strong relationship exists between the amount of ice cream sold and the number of shark attacks, so much so that it may be possible to predict the number of shark attacks using the amount of ice cream sold. The reason is a common cause: hot weather sends more people to the beach (and into the water) and also increases ice cream sales, so the association is confounded rather than causal. However, reducing or even banning the sale of ice cream would not influence the number of shark attacks. Instead, it would be necessary to activate other countermeasures, such as increasing coastal surveillance, providing warnings to surfers, and closing beaches. Based on the available evidence, suggesting manipulation of TL and derived metrics to reduce the risk of injury is the equivalent of banning ice cream sales to prevent shark attacks.
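The toy simulation below illustrates the same point: a shared cause (temperature, standing in for hot weather) drives both hypothetical ice cream sales and beach exposure, so the two outcome variables are strongly correlated, and each predicts the other, even though neither causes the other. All numbers are arbitrary.

```python
# Illustrative simulation of the ice cream / shark attack example: a common cause
# (temperature) makes the two variables strongly correlated, and therefore
# predictive of each other, without either causing the other. Numbers are arbitrary.

import random
random.seed(1)

days = []
for _ in range(200):
    temperature = random.uniform(10, 35)                      # shared cause
    ice_creams = 50 + 20 * temperature + random.gauss(0, 40)  # driven by temperature
    beachgoers = 30 + 15 * temperature + random.gauss(0, 30)  # driven by temperature
    shark_attacks = beachgoers * 0.001                        # driven by exposure, not ice cream
    days.append((ice_creams, shark_attacks))

def correlation(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

print(f"ice cream vs shark attacks: r = {correlation(days):.2f}")
# Banning ice cream changes nothing here: shark attacks depend on beach exposure only.
```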

Although experimental studies are the criterion standard for establishing cause-and-effect relationships, causal inferences can also be drawn from observational studies using extensive and well-established methods. However, no authors in the TL-injury literature have used these methods.33

Operational Proposal for Practitioners: Back to the Future

Considering both the lack of causal studies and the methodologic problems with the associative studies (outlined in part 2), the only solution is for practitioners to continue doing what they have done for decades: informing and developing training-plan progressions based on expert knowledge and best practice while individually adjusting each progression relative to the athlete's tolerance (eg, self-reports such as soreness scales), responses (eg, internal load measures such as heart rate), and training outcomes (eg, physical or performance targets such as countermovement-jump height). The framework presented in Figure 3, including the planning and implementation phases, can serve as a conceptual guide for developing a rational program that facilitates a systematic process. According to this framework, practitioners such as strength and conditioning coaches, athletic trainers, and physiotherapists should ideally work on the components causally related to injuries—structural or tissue strength and mechanical loading—to improve an athlete's tolerance to specific loading and performance requirements.

This concept already underpins rehabilitation: an anatomical structure is considered at full capacity if the athlete is able to perform functional movements at the required volume and frequency without exacerbating symptoms or accumulating damage beyond what can be repaired, which would result in tissue injury.34 There is no reason to think that this does not also apply in the injury-prevention space. Any training, rehabilitation, or injury-prevention plan should account for the adaptive characteristics of the different tissues, such as recovery time and response to load. This is the approach proposed by Glasgow et al,35 who suggested that optimal loading in the context of rehabilitation should include integration of the entire neuromusculoskeletal system and should (1) target the appropriate tissues; (2) ensure loading through the functional ranges; (3) balance compressive, tensile, and shear loading; (4) vary the magnitude, direction, duration, and intensity; (5) incorporate neural overload; (6) adapt to individual characteristics; and (7) be functional. Again, this is based not on one-size-fits-all metrics but on clinical reasoning and intuition informed by the individual's responses and needs.

Practitioners can develop training programs that address biomechanical alterations or limitations by modifying functional movement patterns that might be affecting load distribution through specific tissues, generating abnormal stresses and strains. Combining restoration of function to avoid abnormal tissue load distributions with exercise to reestablish structural integrity after an injury, for example, may help to reduce the risk of reinjury. Sustaining an injury increases the risk of a subsequent injury of any type, not just a recurrence of the original injury.36,37  This indicates that alterations from previous injuries might overload structures other than those directly involved in the initial injury.

Although current TL metrics cannot provide meaningful information on whether the load progression increases the injury risk by excessively loading a particular structure, they can assist in setting targets that an athlete must tolerate to successfully return to sport postinjury (when sufficient historical TL records are available). Training load is just another tool in the belt of a skilled practitioner. Information regarding TL should certainly be considered when planning programs for athletes, but it is by no means the only factor in successful rehabilitation or performance, and TL metrics should certainly never be used independently as a substitute for sound reasoning processes. The most appropriate progression is not determined by the metric but by expert knowledge, experience, and what is considered the best practice. A good practitioner should be using problem-solving skills to maximize the athlete's capabilities, making the athlete more robust so that he or she can better tolerate what is required to improve or maintain competitive performance.
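As one hedged illustration of this use of historical records, the sketch below derives a return-to-sport benchmark from hypothetical pre-injury weekly loads (using the median as a simple summary, an arbitrary choice) and expresses the currently tolerated load as a fraction of that benchmark; it says nothing about injury risk.

```python
# Minimal sketch of using historical TL records to set a return-to-sport target,
# as discussed above: the benchmark is the typical pre-injury weekly load the
# athlete will again need to tolerate, not an estimate of injury risk.
# Records, units (AU), and the use of the median are hypothetical choices.

from statistics import median

pre_injury_weekly_loads_au = [1700, 1850, 1780, 1920, 1810, 1760]  # historical records
target_au = median(pre_injury_weekly_loads_au)   # benchmark of typical competitive demands

current_week_au = 1200  # load tolerated in the present rehabilitation week
progress = current_week_au / target_au

print(f"Return-to-sport target: {target_au:.0f} AU per week")
print(f"Currently tolerating {progress:.0%} of the typical pre-injury weekly load")
```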

And... Look at the Bigger Picture: Downsize Your and Your Stakeholders' Expectations.

Another risk, in terms of stakeholder expectations, is implying that we have the answer to the problem of injury. Most experts agree that injuries are multifactorial in nature, but if we place too great an emphasis on 1 variable alone (such as TL monitoring and manipulation), we are unlikely to succeed in appreciably and consistently reducing the injury rate. By using oversimplified, unilateral techniques to suggest we have the answer, we risk both missing the opportunity to identify other factors that may contribute to injury and leaving stakeholders unimpressed when injury rates fail to decrease by the promised amounts. In recent years, experts have published several articles38,39 highlighting the complexity of injury prevention and emphasizing the need for more comprehensive approaches and multimodal interventions. The Translating Research into Injury Prevention Practice (TRIPP)40 framework proposes a 6-stage, evidence-based research process for injury control:

  1. injury surveillance (ie, establishing the extent of the problem),

  2. mechanisms of injury etiology (eg, risk factor identification),

  3. development of injury-prevention measures (ie, appropriate intervention selection),

  4. evaluation of injury-prevention interventions (ie, assessing efficacy),

  5. description of the intervention context (eg, translating efficacy into effectiveness),

  6. evaluation of injury-prevention interventions in a specific implementation context.

Many organizations have begun this process with surveillance systems in place (stage 1). Hulme et al41  stated that authors attempting to identify risk factors use somewhat of a black-box approach, meaning that stage 2 (identification and understanding of injury etiology) has not been properly addressed. This is especially true when no theoretical basis or explanation of the links between risk factors and the biological or mechanical causes of injuries exists. To understand mechanisms of injury, we need conceptual frameworks that provide causal assumptions, ie, hypotheses of causal structures that can be formally tested. Challenging these frameworks is the scientific process that leads to optimization and arrival at an acceptable etiological model. This is an unavoidable step for progressing to the other stages of the TRIPP model because an injury-prevention measure should be linked to the mechanism of injury if it is to have any chance of being effective. The framework referred to earlier (Figure 3) is an attempt to move us in this direction. However, it is important to highlight that injury prevention should target these suggested mechanisms (tissue or structure strength and mechanical loading) not only with training but ideally also with interventions for other contextual factors to have a comprehensive influence on these mechanisms. Using the socioecological model proposed by Bolling et al42  as a further example (Figure 5), all levels (country, association, sports, and athlete) can have causal effects on injury occurrence only if they influence the mechanisms of injuries (inner level), for which we offered a causal framework.23  Ideally, each level of context should be considered when developing injury-prevention programs, though this is not always possible.

As mentioned, acting on all of these contextual factors in a practical setting is difficult, but considering them nicely demonstrates that we should temper our expectations when discussing TL and its effect on injury because it is only 1 factor among many that can influence the injury risk. Educating stakeholders to accept risk and uncertainty, thereby reducing pressure on coaching and performance support staff, will probably be more effective than embracing overly simplistic views and will allow practitioners to work responsibly.

Researchers must be careful with the messages they popularize. For example, suggesting that the term overuse injuries be replaced with TL errors43 not only negates the multifactorial nature of injuries38,42 but also assigns too much importance to TL, failing to acknowledge the limitations of the available measures and obscuring other potential contextual factors. These types of statements can also generate conflicts between different personnel in the injury-prevention domain (for instance, between clinicians and coaches) and add unnecessary confusion.44

In the absence of strong and unbiased evidence, practitioners must be aware of the risks of relying on TL metrics to inform their decision making (see part 2). Information regarding TL should only complement the many other skills used to ensure that the athletes with whom practitioners work are prepared to train and compete at their best level. Until the science catches up, practitioners should continue to rely on established evidence-based strategies and common sense to guide their decision making, focusing on interventions that, at least theoretically, can influence the mechanisms of injury.

The role of all performance support staff is to provide the coach and the athlete with the information required to make an informed decision regarding training in the context of any given scenario. If athletes do “break” (and this is often outside our control), the situation is usually manageable, as the performance support staff have the expertise to “fix” them. All stakeholders should be aware of the uncertainty behind injury risk control and the training process. Some factors are simply not controllable.

Finally, athletic performance and injury prevention or management are not mutually exclusive and instead depend heavily on one another, with optimal performance always the main goal in a sport setting. As such, medical staff need to work closely with coaches and other performance support team members to help deliver what is needed to improve performance while trying to minimize the risk of injury.

1. Hägglund M, Waldén M, Magnusson H, Kristenson K, Bengtsson H, Ekstrand J. Injuries affect team performance negatively in professional football: an 11-year follow-up of the UEFA Champions League injury study. Br J Sports Med. 2013;47(12):738–742.
2. Hickey J, Shield AJ, Williams MD, Opar DA. The financial cost of hamstring strain injuries in the Australian Football League. Br J Sports Med. 2014;48(8):729–730.
3. Meeusen R, Duclos M, Foster C, et al. Prevention, diagnosis, and treatment of the overtraining syndrome: joint consensus statement of the European College of Sport Science and the American College of Sports Medicine. Med Sci Sports Exerc. 2013;45(1):186–205.
4. Drawer S, Fuller CW. Evaluating the level of injury in English professional football using a risk based assessment process. Br J Sports Med. 2002;36(6):446–451.
5. Kibler WB, Chandler TJ, Stracener ES. Musculoskeletal adaptations and injuries due to overtraining. Exerc Sport Sci Rev. 1992;20:99–126.
6. Hulin BT, Gabbett TJ, Blanch P, Chapman P, Bailey D, Orchard JW. Spikes in acute workload are associated with increased injury risk in elite cricket fast bowlers. Br J Sports Med. 2014;48(8):708–712.
7. Hulin BT, Gabbett TJ, Lawson DW, Caputi P, Sampson JA. The acute:chronic workload ratio predicts injury: high chronic workload may decrease injury risk in elite rugby league players. Br J Sports Med. 2016;50(4):231–236.
8. Blanch P, Gabbett TJ. Has the athlete trained enough to return to play safely? The acute:chronic workload ratio permits clinicians to quantify a player's risk of subsequent injury. Br J Sports Med. 2016;50(8):471–475.
9. Gabbett TJ. The training-injury prevention paradox: should athletes be training smarter and harder? Br J Sports Med. 2016;50(5):273–280.
10. Evans JSBT. Reasoning, biases and dual processes: the lasting impact of Wason (1960). Q J Exp Psychol (Hove). 2016;69(10):2076–2092.
11. Coutts AJ, Crowcroft S, Kempton T. Developing athlete monitoring systems: theoretical basis and practical applications. In: Kellmann M, Beckmann J, eds. Sport, Recovery, and Performance: Interdisciplinary Insights. Abingdon, Oxon: Routledge; 2018:19–32.
12. Impellizzeri FM, Marcora SM, Coutts AJ. Internal and external training load: 15 years on. Int J Sports Physiol Perform. 2019;14(2):270–273.
13. Impellizzeri FM, Rampinini E, Marcora SM. Physiological assessment of aerobic training in soccer. J Sports Sci. 2005;23(6):583–592.
14. McLaren SJ, Macpherson TW, Coutts AJ, Hurst C, Spears IR, Weston M. The relationships between internal and external measures of training load and intensity in team sports: a meta-analysis. Sports Med. 2018;48(3):641–658.
15. Vanrenterghem J, Nedergaard NJ, Robinson MA, Drust B. Training load monitoring in team sports: a novel framework separating physiological and biomechanical load-adaptation pathways. Sports Med. 2017;47(11):2135–2142.
16. Scott BR, Duthie GM, Thornton HR, Dascombe BJ. Training monitoring for resistance exercise: theory and applications. Sports Med. 2016;46(5):687–698.
17. Foster C. Monitoring training in athletes with reference to overtraining syndrome. Med Sci Sports Exerc. 1998;30(7):1164–1168.
18. National Athletic Trainers' Association. Athletic Training Education Competencies. 5th ed. Dallas, TX: National Athletic Trainers' Association; 2011.
19. Todd JS, Shurley JP, Todd TC. Thomas L. DeLorme and the science of progressive resistance exercise. J Strength Cond Res. 2012;26(11):2913–2923.
20. Neil-Sztramko SE, Medysky ME, Campbell KL, Bland KA, Winters-Stone KM. Attention to the principles of exercise training in exercise studies on prostate cancer survivors: a systematic review. BMC Cancer. 2019;19(1):321.
21. Ellison MA, Kenny M, Fulford J, Javadi A, Rice HM. Incorporating subject-specific geometry to compare metatarsal stress during running with different foot strike patterns. J Biomech. 2020;105:109792.
22. Verheul J, Nedergaard NJ, Vanrenterghem J, Robinson MA. Measuring biomechanical loads in team sports—from lab to field [published online ahead of print January 8, 2020]. Sci Med Football.
23. Kalkhoven J, Watsford M, Impellizzeri FM. A conceptual model and detailed framework for stress-related, strain-related, and overuse athletic injury. J Sci Med Sport. 2020;23(8):726–734.
24. Hernán MA, Hsu J, Healy B. A second chance to get causal inference right: a classification of data science tasks. Chance. 2019;32(1):42–49.
25. Weston M. Training load monitoring in elite English soccer: a comparison of practices and perceptions between coaches and practitioners. Sci Med Football. 2018;2(3):216–224.
26. Akenhead R, Nassis GP. Training load and player monitoring in high-level football: current practice and perceptions. Int J Sports Physiol Perform. 2016;11(5):587–593.
27. Gabbett TJ, Hulin BT, Blanch P, Whiteley R. High training workloads alone do not cause sports injuries: how you get there is the real issue. Br J Sports Med. 2016;50(8):444–445.
28. Soligard T, Schwellnus M, Alonso JM, et al. How much is too much? (Part 1) International Olympic Committee consensus statement on load in sport and risk of injury. Br J Sports Med. 2016;50(17):1030–1041.
29. Fanchini M, Rampinini E, Riggio M, Coutts AJ, Pecci C, McCall A. Despite association, the acute:chronic work load ratio does not predict non-contact injury in elite footballers. Sci Med Football. 2018;2(2):108–114.
30. Hulin BT, Gabbett TJ. Indeed association does not equal prediction: the never-ending search for the perfect acute:chronic workload ratio. Br J Sports Med. 2018;53(3):144–145.
31. McCall A, Fanchini M, Coutts AJ. Prediction: the modern-day sport-science and sports-medicine “Quest for the Holy Grail.” Int J Sports Physiol Perform. 2017;12(5):704–706.
32. Rothman KJ, Greenland S. Causation and causal inference in epidemiology. Am J Public Health. 2005;95(suppl 1):S144–S150.
33. Rothman KJ, Greenland S, Lash TL. Modern Epidemiology. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2012.
34. Cook JL, Docking SI. “Rehabilitation will increase the ‘capacity' of your... insert musculoskeletal tissue here...” Defining ‘tissue capacity': a core concept for clinicians. Br J Sports Med. 2015;49(23):1484–1485.
35. Glasgow P, Phillips N, Bleakley C. Optimal loading: key variables and mechanisms. Br J Sports Med. 2015;49(5):278–279.
36. Toohey LA, Drew MK, Fortington LV, Menaspa MJ, Finch CF, Cook JL. Comparison of subsequent injury categorisation (SIC) models and their application in a sporting population. Inj Epidemiol. 2019;6:9.
37. Toohey LA, Drew MK, Cook JL, Finch CF, Gaida JE. Is subsequent lower limb injury associated with previous injury? A systematic review and meta-analysis. Br J Sports Med. 2017;51(23):1670–1678.
38. Hulme A, Finch CF. From monocausality to systems thinking: a complementary and alternative conceptual approach for better understanding the development and prevention of sports injury. Inj Epidemiol. 2015;2(1):31.
39. Hulme A, Thompson J, Nielsen RO, Read GJM, Salmon PM. Towards a complex systems approach in sports injury research: simulating running-related injury development with agent-based modelling. Br J Sports Med. 2019;53(9):560–569.
40. Finch C. A new framework for research leading to sports injury prevention. J Sci Med Sport. 2006;9(1–2):3–9; discussion 10. doi:10.1016/j.jsams.2006.02.009.
41. Hulme A, Salmon PM, Nielsen RO, Read GJM, Finch CF. Closing Pandora's box: adapting a systems ergonomics methodology for better understanding the ecological complexity underpinning the development and prevention of running-related injury. Theor Issues Ergon Sci. 2017;18(4):338–359.
42. Bolling C, van Mechelen W, Pasman HR, Verhagen E. Context matters: revisiting the first step of the ‘sequence of prevention' of sports injuries. Sports Med. 2018;48(10):2227–2234.
43. Drew MK, Purdam C. Time to bin the term ‘overuse' injury: is ‘training load error' a more accurate term? Br J Sports Med. 2016;50(22):1423–1424.
44. Kalkhoven J, Coutts AJ, Impellizzeri FM. ‘Training load error' is not a more accurate term than ‘overuse' injury [published online ahead of print February 24, 2020]. Br J Sports Med.