Leisure-time running is one of the most popular forms of physical activity around the world. It can be practiced almost everywhere and requires mainly a pair of “appropriate” running shoes. However, the term appropriate is ambiguous, and the properties of running footwear have always generated hot debates among clinicians, coaches, and athletes, whatever the level of practice. As the main interface between the runner's foot and the ground, the shoe potentially plays an important role in managing repetitive external mechanical loads applied to the musculoskeletal system and, thus, in injury prevention. Consequently, over the last decades, running shoes have been prescribed based on matching shoe features to foot morphology. This strategy aligns with the popular belief that footwear is one of the main extrinsic factors influencing running-related injury risk. Despite a seemingly sound strategy for shoe prescription and constant progress in running-footwear technology, the injury rate remains high. Therefore, our aim in this narrative literature review is to clarify whether the prescription of appropriate footwear to prevent injury in running is evidence based, the result of logical fallacy, or just a myth. The literature presented in this review is based on a nonsystematic search of the MEDLINE database and focuses on work investigating the effect of shoe features on injury risk in runners. In addition, key elements for a proper understanding of the literature on running footwear and injury risk are addressed. In this literature review, we outline (1) the main risk factors and the mechanisms underlying the occurrence of running-related injury, (2) important methodologic considerations for generating high-level evidence, (3) the evidence regarding the influence of running-shoe features on injury risk, (4) future directions for research, and (5) final general recommendations.
Leisure-time running is among the most popular physical activities practiced around the world1 and has numerous health benefits.2 However, one of its main drawbacks is the high risk of developing a running-related injury.3,4 This is a serious concern, from both sports performance and public health perspectives, given that a running-related injury is the main reason to stop running training.5 The running shoe is at the interface between the runner and the environment and potentially plays an important role in injury prevention. The question of whether the prescription of “appropriate” footwear can prevent injury in leisure-time runners has always generated hot debates and has already been addressed by previous authors,6,7 who called for caution against overstating the benefits or harms of any shoe feature to runners.8 Indeed, experts have not reached consensus; different streams of thought regarding the effect of footwear on injury occurrence can be easily identified in the scientific literature.
Anthropologic evidence supports the general idea that humans adopted bipedal locomotion some 2 million years ago and evolved into effective “endurance” runners over time.9 Given that our ancestors evolved without modern (or with only minimal) footwear, one may assume that the most natural form of running is the forefoot-strike pattern observed in most modern barefoot runners.10 Yet rearfoot strike is also common in barefoot runners.11 Thus, it has been suggested that the current high incidence of running-related injuries reflects a mismatch between the mechanics with which humans evolved and adaptation to the modern environment.12 Although the mismatch theory provides good and rational arguments, some assumptions are questionable. Whether our ancestors really ran with a forefoot-strike pattern and whether their running habits were similar to the training patterns of modern runners (ie, running for hours over long distances or in bouts of a few minutes when hunting) is unknown. In addition, no information is available on injury rates at that time.
Another expert opinion13 is based on the idea that injury frequency is perceived as remaining constant despite the evolution of running footwear and that evidence of the influence of shoe technology on injury occurrence is limited. Thus, 2 new paradigms have been suggested: (1) the musculoskeletal system strives to stay in the preferred movement path for a given task and (2) the comfort filter, as perceived by the runner, is the most critical aspect for both injury prevention and running performance.13 Even though these are interesting ideas that are worth investigation, data on the incidence of running injuries from previous decades are sparse. Furthermore, comparisons between different time periods are limited by the differences among studies in designs, methods, populations, and injury definitions. Thus, the absence of evidence is the main pillar of these newly suggested paradigms. However, “absence of evidence is not evidence of absence,” and an argument stemming from ignorance is a fallacy in informal logic that could lead one to incorrectly conclude that minimalist shoes or the comfort paradigm is superior to a traditional shoe prescription for injury prevention.8
In this narrative literature review, we propose that any statement pertaining to the role that running footwear might play on injury risk should be evidence based. Hence, we aim to clarify what scientific evidence is available for clinicians and coaches to provide runners with advice on the choice of appropriate running shoes, with a view to reducing injury risk. In addition, we will address several important methodologic considerations for facilitating a proper understanding of study results and avoiding common pitfalls when interpreting and generalizing the findings. The literature presented in this review results from a nonsystematic search of the MEDLINE database performed in October 2019 and specifically focuses on the effect of shoe features on the injury risk in running.
THE CAUSES OF RUNNING-RELATED INJURY
A vast literature3,4 has documented the high incidence of running-related injuries over the last 40 years. Depending on the study design and the population investigated, the overall incidence proportion ranged between 18.2% and 92.4%,4 and the injury incidence density ranged between 2.5 and 33.0 injuries per 1000 hours of running.3 Overload injuries accounted for about 75% to 85% of injuries.14,15 Most running injuries affected the lower limbs and back regions16 and developed over time due to an imbalance between the repetitive loading of the musculoskeletal system and the tissue load capacity.17
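For readers comparing figures across studies, the 2 metrics cited above are computed differently: the percentage reflects the proportion of runners injured over the study period, whereas incidence density normalizes the injury count by running exposure. A minimal sketch of the arithmetic, using invented numbers rather than data from any cited study:

```python
# Two injury metrics commonly reported in running epidemiology.
# All numbers are hypothetical, for illustration only.

def incidence_proportion(injured_runners: int, runners_at_risk: int) -> float:
    """Percentage of runners sustaining at least 1 injury during follow-up."""
    return 100.0 * injured_runners / runners_at_risk

def incidence_density(injuries: int, exposure_hours: float) -> float:
    """Number of injuries per 1000 hours of running exposure."""
    return 1000.0 * injuries / exposure_hours

# Hypothetical cohort: 250 runners, 80 injured at least once,
# 95 injuries in total, 20 000 h of running logged.
print(incidence_proportion(80, 250))   # 32.0 (% of runners injured)
print(incidence_density(95, 20_000))   # 4.75 (injuries per 1000 h)
```

Because the 2 metrics have different denominators, studies with similar cohorts can legitimately report very different-looking numbers, which partly explains the wide ranges above.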
Much research has been directed at identifying potential risk factors for running-related injury, including demographics, lower limb anatomy, training behavior, and the type of running shoes used. Only a few factors have been consistently found to be related to injury risk, most notably previous running injury.18,19 However, even if these factors could help identify a subgroup of runners at greater or lesser injury risk, this would not imply that they are causally related to running-related injury.17,20 By themselves, these factors might be insufficient to trigger an injury.20 For example, being overweight or wearing a certain shoe type does not, per se, cause a running injury. Running itself is a necessary condition and, in fact, the only necessary one.
Obviously, running-related injuries have a complex multifactorial origin,20,21 but most are thought to be caused by training errors (eg, a sudden increase in training load), although the authors22,23 of recent systematic reviews were unable to identify any clear trends in the existing literature. Therefore, clinicians and coaches must be aware that even if certain factors are somehow related to running injury (ie, a significant association appears in a regression model) and may influence the relationship between running participation and injury risk, these factors might only be effect-measure modifiers when considered within an etiologic framework of running injury risk (Figure 1).20 In line with these considerations, a conceptual framework for the complex, multifactorial causes of running injuries has been suggested.17 This framework implies that a running-related injury occurs not because of footwear features per se but when a runner increases his or her running load such that, given the other risk factors (eg, footwear features), the load capacity of a body structure is exceeded. In conclusion, footwear does not cause injury but can modify the global training load a runner can tolerate before sustaining an injury.20
Abundant scientific literature has focused on the influence of footwear on running biomechanics and injury. Consequently, clinicians and coaches may be confused when reading the many study results, which are not always consistent and are frequently subject to overinterpretation. The reader should bear in mind that key methodologic features such as the study design, the population investigated, and the outcomes of interest will define the level of evidence, the generalizability of the results, and the scope of the conclusions supported by the study results, respectively. Thus, to facilitate understanding and critical appraisal of the current evidence on the relationship between running footwear and injury, Theisen et al7 presented a framework under the term Bermuda triangle, which referenced the types of studies conducted within the triangular relationship among running footwear, running biomechanics, and running-related injury (Figure 2).
The most extensively studied axis of the framework has been the influence of footwear on running biomechanics and, more specifically, the effect of certain shoe features on selected variables related to external ground reaction forces and body motion during running.24,25 These biomechanical studies are mostly of cross-sectional design (rare exceptions exist26), require a small number of healthy (ie, uninjured) participants, involve data collection at a defined time in the laboratory, and rely on the implicit assumption that the recorded running technique represents the runners' usual running style (ecological validity). One of the main limitations of this approach is that none of the differences observed between shoe conditions could be related to injury risk because injury was not the outcome of interest. As a result, any conclusion of that kind is speculative.
The second axis of the framework is related to the field of clinical biomechanics and focuses on the relationship between running biomechanics and injury by comparing recently injured runners with a healthy control group (Figure 2). The methods used are similar to those previously described, although designs can be cross-sectional,27 retrospective,28 or prospective.29 Also, groups are usually matched on personal characteristics thought to be related to injury risk (eg, age, sex, body mass, training status, running experience), so that any differences in running biomechanics could theoretically be associated with the presence of injury. However, the direction of the relationship cannot be determined via case-control and retrospective studies, which strongly limits the scope of the conclusions that can be drawn. Furthermore, many factors that are unmeasured (eg, previous injuries, other physical activities, fitness) or unknown are not necessarily balanced between the groups being studied and may influence the results.
The third axis of the framework is based on epidemiologic studies in which the main outcome of interest is running-related injury (Figure 2). These studies generally involve much greater numbers of participants (at least several hundred) and follow-up periods of several months in an observational study14,30 or a randomized trial.15,31,32 The latter design has the advantage of randomization, which allows for an equal distribution among study groups of all other factors that may influence injury risk. Observational cohort studies and randomized trials make it possible to study the long-term effects of personal characteristics, training behavior, or a given shoe type on running injury. They also offer a greater level of evidence than cross-sectional studies, which makes causal inference generally more plausible. Unfortunately, in the absence of biomechanical analyses, they do not provide any explanations about the underlying mechanisms of the risk factors identified.
Given the limitations of each of these study types, we could argue that a superior design would combine several methods. The ideal approach would be to monitor a large cohort of runners, analyzing their running technique in standard conditions as well as in their own habitual environment, and follow them over a sufficiently long period to assess their exposure to running and injuries sustained.7,33 Such studies have never been performed due to the numerous challenges that need to be overcome (ie, cost, human resources required, sample size, study duration), which explains why more pragmatic approaches have been preferred, even though the level of evidence or the generalizability of the findings is reduced.
Considering that the main objective of our literature review was to identify the scientific evidence regarding the effect of running shoes on injury risk, the next sections will mainly focus on research in which injury was the outcome.
THE EVIDENCE RELATING RUNNING FOOTWEAR AND INJURY RISK IN HEALTHY RUNNERS
Leisure-time runners usually pay a great deal of attention to selecting their running shoes. Indeed, investigators34 who addressed leisure-time runners' beliefs on running-related injuries observed that next to training factors and body limits, the runners largely attributed their injury risk to their shoes, which were thought to be the main extrinsic risk factor. Many of these strong beliefs probably stem from the marketing arguments put forward by the running-shoe industry. Over the past decades, various characteristics have been added to (and sometimes removed from) running footwear to influence biomechanics and indirectly prevent running injuries.35 According to the authors35 of a systematic review, the footwear characteristics studied in relation to running injuries were heel-to-toe drop, midsole thickness, minimalist index,36 innersole thickness, mass, midsole hardness, stability elements, and shoe age and usage. Unraveling the contribution of each shoe feature to running-related injury is extremely complex, given that shoe models often differ in many aspects and footwear is usually classified in 2 or 3 categories (eg, traditional, partial-minimalist, and full-minimalist shoes; neutral, stability, or motion-control shoes). In addition, shoe features are not consistently reported in the scientific literature.35 In this section, we present the current state of knowledge about the relationship between running-shoe features and injury risk and describe whether this relationship varies across different populations.
Shoe Prescription According to Foot Morphology
Overuse injuries in runners result from an imbalance between (1) training load and the body's regenerative capacity and (2) external and internal mechanical strains generated by running training.17 The rationale behind the popular shoe-prescription approach, previously termed the shoe-shop theory7 and essentially based on expert opinion,6,37 relied on the assumptions that running injuries were caused by excessive external ground reaction forces and excessive foot motion. Therefore, running shoes should be designed to reduce impact forces and attenuate excessive foot pronation during the stance phase. Based on foot morphology and mainly plantar shape, 3 main shoe types have emerged.38 Cushioned shoes have greater cushioning properties and are advised for runners with high-arched, rigid feet and reduced pronation. Stability shoes have some cushioning and motion control and are suited for runners with normal foot morphology. Finally, motion-control shoes have arch-support features, dual-density midsoles, or a rigid heel counter to limit rearfoot eversion. They are recommended for runners with flat feet who display excessive foot pronation and lower limb malalignment during the stance phase. A decade ago, there was still no scientific evidence to support the shoe-shop theory.6 To test whether that strategy led to a decrease in injury risk, 3 trials using the same methods were conducted in the US military services, whereby 7203 recruits were randomly assigned to the study groups.38 Those in the experimental group were provided with specific shoe types according to their plantar shape, as previously described. Those in the control group were assigned a stability shoe, irrespective of their plantar shape. 
A meta-analysis combining the results from the 3 trials showed no difference in injury incidence rates between the 2 groups for either men (global incidence rate ratio = 0.97; 95% confidence interval [CI] = 0.88, 1.06) or women (global incidence rate ratio = 0.97; 95% CI = 0.85, 1.08). Furthermore, injury rates did not differ across the 3 shoe types used, in those with either high-arched or low-arched feet. Hence, it seems that this shoe-prescription approach was not effective in reducing the injury risk in the context of military training. A similar approach was applied in a cohort of female runners, and again, the results did not support the approach, although the sample size was limited (n = 81) and the outcome was pain level (measured on a visual analog scale).32
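As a side note on how such pooled estimates are read: an incidence rate ratio compares injuries per unit of exposure between 2 groups, and its confidence interval is conventionally built on the log scale. The sketch below, using invented counts (not the data from these trials), shows the standard Poisson-based construction:

```python
import math

def rate_ratio_ci(events_a, time_a, events_b, time_b, z=1.96):
    """Incidence rate ratio (group A vs group B) with a 95% Wald CI.

    Assumes event counts are Poisson distributed, so the standard error
    of log(IRR) is sqrt(1/events_a + 1/events_b)."""
    irr = (events_a / time_a) / (events_b / time_b)
    se = math.sqrt(1.0 / events_a + 1.0 / events_b)
    return irr, irr * math.exp(-z * se), irr * math.exp(z * se)

# Invented example: 120 injuries over 50 000 h vs 130 injuries over 52 000 h.
irr, lo, hi = rate_ratio_ci(120, 50_000, 130, 52_000)
print(f"IRR = {irr:.2f} (95% CI = {lo:.2f}, {hi:.2f})")
# → IRR = 0.96 (95% CI = 0.75, 1.23)
```

A confidence interval that spans 1.00, as here and in the meta-analysis above, is read as no detectable difference in injury rates between the groups.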
So far, no evidence indicates that prescribing shoes according to foot morphology reduced the injury risk. However, this does not mean that individual shoe features such as motion-control and shock-absorption systems are irrelevant in the context of injury prevention. Consequently, an alternative approach is to separately investigate the effect of different shoe characteristics on injury risk.
The shock-absorption properties of footwear mainly result from the materials used in the sole (ie, the type, density, structure, and combination), as well as from the geometry of the shoe (ie, the midsole thickness and design of inserts). One of the most popular approaches has been to change the hardness of the shoe midsole.15,25 The rationale for promoting cushioning systems in running shoes is based on the assumptions that external impact forces are associated with injury risk, running on a hard surface is a cause of high-impact forces, cushioning material can reduce these impact forces, and the cushioning itself has no detrimental effect on injury risk.6 Whereas some scientific evidence suggested that external impact forces were associated with injury risk,29,39 the influences of shoe cushioning on impact-force characteristics were inconsistent,25,40,41 as opposed to, for example, running velocity40 or step rate.42 This does not exclude a role of shoe cushioning in injury risk, but the active mechanism is most likely not via external impact forces.
Only 3 studies have examined the association between cushioning properties and the risk of running-related injury. Researchers43 analyzed whether shock-absorbing insoles influenced the interruption of training due to lower limb injury among 1205 Air Force recruits during basic military training. Rates of lower limb injuries were similar across the 3 study groups (Sorbothane [Sorbothane, Inc, Kent, OH], Poron [Rogers Corp, Chandler, AZ], and non–shock-absorbing insoles); no support for the use of shock-absorbing insoles was found. A more recent trial15 investigated injury risk in 247 recreational runners randomly allocated to 1 of 2 groups, wearing either a standard running shoe with a soft midsole or a shoe with a harder midsole. The running shoes were prototypes specifically designed for the trial and identical in all aspects except for midsole hardness, with a 13% difference in shock-absorption properties. Both the participants and assessors were blinded to the shoe allocation. After a 5-month follow-up period, no association was observed between shoe cushioning and injury risk (hazard ratio [HR] = 0.92; 95% CI = 0.57, 1.48). A plausible explanation for these null results could be that the runners adapted their running technique to keep external impact forces constant44 and thereby mitigated the effect of the cushioning properties. Another possibility is that the difference in midsole hardness was too limited to reveal any effect of the cushioning properties. The same research team33 addressed this aspect in a study of 848 leisure-time runners. For this investigation, the difference in shock-absorption properties between the 2 shoe versions was 35%. The main finding was that the injury rate was greater in those runners who received the hard shoes (HR = 1.52; 95% CI = 1.07, 2.16).
Because popular belief suggests that heavier runners should use footwear with greater cushioning properties, the authors also investigated whether this association could be observed in both lighter and heavier runners. The stratified analysis according to body mass revealed that the greater risk in hard shoes was confined to lighter runners (HR = 1.80; 95% CI = 1.09, 2.98), whereas no such effect was apparent in heavier runners (HR = 1.23; 95% CI = 0.75, 2.03).45 In other words, for the first time, these results indicated a protective effect of shoe cushioning, but only for lighter runners. To what extent this protective effect of shoe cushioning is related to impact-force attenuation during running is currently being explored by the same team. To conclude, shoe cushioning may play a role in injury prevention, but this finding must be confirmed, and the populations that could benefit from greater cushioning, as well as the optimal level of cushioning, still need to be defined.
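The stratified analysis described above is an instance of probing effect-measure modification: the same data are split by a third variable (here, body mass) because a subgroup effect can be diluted in the pooled estimate. A toy sketch with invented counts and crude rate ratios (the trial itself reported Cox hazard ratios):

```python
# Hypothetical stratified analysis: an effect confined to one body-mass
# stratum is attenuated when the strata are pooled. Counts are invented.

def rate_ratio(events_a, time_a, events_b, time_b):
    """Crude incidence rate ratio (group A vs group B)."""
    return (events_a / time_a) / (events_b / time_b)

# (injuries, exposure in 1000 h) for hard- vs soft-shoe groups, per stratum.
strata = {
    "lighter runners": {"hard": (45, 10.0), "soft": (25, 10.0)},
    "heavier runners": {"hard": (33, 10.0), "soft": (30, 10.0)},
}

for name, s in strata.items():
    print(f"{name}: RR = {rate_ratio(*s['hard'], *s['soft']):.2f}")
# → lighter runners: RR = 1.80
# → heavier runners: RR = 1.10

# Pooling the strata masks the subgroup effect:
hard = [sum(x) for x in zip(*(s["hard"] for s in strata.values()))]
soft = [sum(x) for x in zip(*(s["soft"] for s in strata.values()))]
print(f"pooled: RR = {rate_ratio(*hard, *soft):.2f}")
# → pooled: RR = 1.42
```

This is why clinicians should look for prespecified stratified or interaction analyses rather than relying only on a whole-cohort estimate.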
Foot Morphology and Shoe Types
Another popular belief is that foot posture is linked to the risk of running-related injury to the lower extremity. The rationale is that a pronated foot posture and excessive foot eversion during the stance phase might compromise lower extremity alignment and increase the risk of certain running injuries. This idea was supported by a meta-analysis46 that showed a relationship between a pronated foot posture and the risk of developing medial tibial stress syndrome in different sports including running, although the effect size was small. In contrast, a large 1-year prospective observational study (DANORUN)30 of more than 900 runners demonstrated that foot pronation was not associated with injury risk in novice runners. Moreover, research47 revealed that different shoe orthoses for supporting the medial foot arch were not effective in limiting foot eversion during running and yielded only small and inconsistent within-subject effects. If foot posture is only weakly related to injury risk and if pronation-control features in running shoes have little influence on foot motion, it is legitimate to wonder whether this technology has any influence at all on injury risk.
A randomized trial31 with a 6-month follow-up was specifically designed to address this question, as well as to determine whether the association between motion-control technology and injury risk depended on the runner's foot posture. More than 400 regular recreational runners were recruited for the study. Based on their foot category, the participants were randomly allocated to 1 of 2 groups: 1 group received a pair of standard neutral shoes without any motion-control technology, whereas the other received a pair of motion-control shoes that had a dual-density midsole and an arch-supporting element in the medial midfoot. The primary analysis revealed that the group using the motion-control shoes had a smaller injury risk, regardless of foot type (HR = 0.55; 95% CI = 0.36, 0.85). When looking in detail at foot posture, the authors found that the positive effect was confined to those runners with pronated feet (n = 94; HR = 0.34; 95% CI = 0.13, 0.84). Equally interesting was that motion-control shoes were not harmful for those with neutral (n = 218; HR = 0.78; 95% CI = 0.44, 1.37) or supinated (n = 60; HR = 0.59; 95% CI = 0.20, 1.73) feet, although the sample size was too small to draw a definitive conclusion. This was the first study to provide some justification for the use of motion-control technology in cushioned running shoes. It is worth noting that the neutral shoes used in this trial had no motion-control technology at all and that many standard cushioned running shoes categorized as neutral or stability shoes did have features of that type. This may also explain the apparent contrast with the findings from the DANORUN study that used a neutral shoe (model Supernova Glide 3; Adidas AG, Herzogenaurach, Germany; this shoe included a medial arch support) for all their novice runners. To conclude, runners with pronated feet may be advised to avoid shoes that lack motion-control technology.
One of the most popular shoe features investigated recently has been the heel-to-toe drop (ie, the difference in stack height between the heel and forefoot). The influence of the heel-to-toe drop of standard cushioned running shoes was tested in a randomized trial48 with 6 months of follow-up among 553 leisure-time runners. Three versions of the same shoe model that differed only in heel-to-toe drop (10, 6, and 0 mm) were compared. Overall, the injury risk was not influenced by heel-to-toe drop in the whole cohort (HR = 1.30; 95% CI = 0.86, 1.98, and HR = 1.17; 95% CI = 0.76, 1.80, for the 6- and 0-mm versions, respectively, compared with the 10-mm version). However, the stratified analysis showed that in occasional runners (ie, weekly running for <6 months over the 12 months before the study), the injury risk was less among those using the 6- or 0-mm versions (HR = 0.48; 95% CI = 0.23, 0.98). Conversely, the injury risk was greater in regular runners who had received the low-drop versions (HR = 1.67; 95% CI = 1.07, 2.62). Given these secondary analyses, it seems safe to recommend low-drop footwear for occasional or inexperienced runners. In contrast, regular runners who received low-drop shoes appeared to be at greater risk than those using conventional shoes. Because the participants were required to use the study shoes for all their running sessions, one could speculate that the transition from their usual running shoes to the low-drop versions was not progressive enough and increased the injury risk in the regular runners.
In a 2003 Canadian prospective study, researchers49 collected data on running-shoe age and injuries. The authors observed an association, but the effect was in opposite directions for men and women. In women, wearing running shoes that were 4 to 6 months old was a risk factor for injury (relative risk [RR] = 1.74; 95% CI = 1.01, 2.98), whereas in men, running in shoes of that age was associated with fewer injuries (RR = 0.36; 95% CI = 0.15, 0.83). Therefore, no strong conclusion could be drawn. Indeed, this question deserves further attention because shoe degradation (after 200 mi [322 km] of use) induced modifications in the running pattern (ie, an increase in stance time) to maintain constant variables related to the external impact forces acting on the body.44 However, the adaptations to shoe use were not influenced by different cushioning technologies, which disqualifies shoe-cushioning technology as a critical aspect in shoe degradation. To conclude, no evidence-based recommendation on shoe age for preventing injury can be made at this stage.
Shoe Brands and Cost
A simple but pragmatic question might be whether any advice on the brand or cost of the running shoes would help prevent injury. In a retrospective study published in the 1980s, the authors50 observed that among injured runners with shin splints, a disproportionately high number wore Adidas shoes, whereas few used Nike (Nike, Inc, Beaverton, OR) shoes. Similarly, a disproportionately high number of runners with the iliotibial band syndrome wore New Balance (Boston, MA) shoes. The authors concluded that the prescription of the most appropriate running shoes is an important aspect of optimal treatment. Although this seems to make sense, it is a highly speculative interpretation, given that the study was not designed to support this conclusion. Twenty years later, the same team investigated whether this shoe-prescription strategy was effective.51 They compared the injury incidences between runners who had been advised on running shoes after a clinical assessment and those who had received only general advice. No difference was present between the groups, suggesting that this strategy did not work. In another retrospective investigation,52 more than 4000 participants filled out a questionnaire on training habits, injuries, and running shoes the week before a popular 16-km race (Grand Prix of Bern). The main findings were (1) injury incidences did not differ between the runners who preferred 1 of the 3 most popular running shoes and those using other shoes, (2) those who had no preference had significantly fewer injuries, and (3) expensive shoes were associated with a greater injury incidence.52 These results are interesting, yet caution is needed when interpreting these observations because the study was retrospective by design and poorly controlled for other important variables that could explain these observations. 
Researchers53 in another study focused on shoe cost and analyzed whether more expensive shoes (£70–£75 [US $94–$100]) improved plantar-pressure attenuation and perceived comfort over lower-priced models (£40–£45 [US $53–$60] and £60–£65 [US $80–$87]). Unfortunately, the association with injury risk was not examined. Nevertheless, the authors concluded that cushioning performance was not related to shoe cost and that the low- and medium-cost shoes provided the same subjective comfort as the high-cost models. In short, no association between shoe brand or cost and injury risk has been established.
Barefoot Running and Minimalist Shoes
The very first question that we should answer is whether runners should be advised to wear running shoes at all. The rationale is that running barefoot or in minimalist shoes is often54 but not always11 associated with adopting a midfoot or forefoot strike, simply because landing on the heel is uncomfortable. The foot-strike pattern is associated with other differences in running technique (eg, a greater step rate) that may attenuate external ground reaction forces and the physical load at the knee joint but also increase strain in the foot region and put greater load on the ankle-extensor complex.55 Consequently, reduced impact forces should logically decrease the physical load on the musculoskeletal system and carry a reduced injury risk. However, the redistribution of internal strain among different body parts should increase the injury risk in those specific areas that are suddenly overloaded.56
To the best of our knowledge, only 1 team56 addressed whether barefoot running could reduce injury occurrence. In a 1-year prospective study, they compared the injury incidence and rate between shod runners (n = 94) and those who completed at least 50% of their mileage barefoot (n = 107). Globally, the proportions of runners reporting musculoskeletal injuries did not differ between the shod and barefoot groups, but the total number of musculoskeletal injuries per runner was lower in the barefoot than in the shod runners (1.17 versus 1.66 injuries/runner). Yet the barefoot runners also logged lower weekly mileage. As a result, the injury-incidence density was slightly greater in barefoot runners, although the difference between the 2 groups was not statistically significant.
The introduction of minimalist shoes and claims that they could decrease injury risk have generated much controversy,57 although the current evidence is still weak. In a prospective study,58 participants who received a minimalist (FiveFingers; Vibram, Albizzate, Italy) or partial-minimalist (Nike Free) shoe were at greater risk of injury. Bone marrow edema was more common in runners after a 10-week period of transitioning from traditional to very minimalist running shoes, despite a lower training volume in the minimalist than in the traditional shoe group. The question remains whether the main reason for this greater injury risk was the minimalist shoe itself or the transition to the minimalist shoe.59 In a more recent trial60 with a 6-month follow-up period, 61 habitual rearfoot strikers received either conventional or minimalist shoes. Shoe type was not associated with injury risk, but an interaction between shoe type and body mass was observed: among runners with a body mass above 71.4 kg, injury became increasingly more likely with minimalist shoes than with conventional shoes as body mass increased. It should, however, be stressed that this study was underpowered (27 injuries) and thus inconclusive. Unfortunately, no authors have conducted a large randomized controlled trial to investigate the difference in injury risk between minimalist and conventional shoes after a transition period. Thus, no current evidence supports recommendations on the use of minimalist shoes in specific populations.
On October 12, 2019, Eliud Kipchoge became the first person in history to run the marathon distance of 42.195 km in under 2 hours. He wore the new Nike ZoomX Vaporfly Next%. Although this shoe is promoted as the “fastest running shoe” and is already on the market (recommended retail price: $250), the influence of its embedded new technology on injury risk in elite runners, as well as in other populations of runners, has not been evaluated. Indeed, the evidence on the role of running footwear in injury risk remains limited and fragmentary. Remarkably little research has been carried out relative to the size of the multi–billion-dollar running-footwear industry. For instance, the effects of shoe age and wear on injury risk have never been properly assessed. Therefore, the commonly cited maximal lifespan of 800 to 1000 km for running shoes is based on mere popular belief, simply following recommendations from the running-shoe sector. Comfort has been suggested to be an important factor,13 even though it has never been seriously studied either. Other footwear features—such as toe-box width, longitudinal bending stiffness, or weight, to name but a few—have never been explored with respect to injury. More research is required to systematically investigate individual or combined shoe features among large cohorts and runners with different profiles to allow for scientifically sound conclusions on matching shoe and runner. These studies should focus on prospective long-term follow-up of runners training under usual conditions. Running-related injury should be the main outcome of interest, whereas metrics related to running technique and biomechanics (eg, as provided by smart wearables) may provide insights into the underlying mechanisms.
Also, any radical change in running-shoe features will inevitably increase the injury risk.8 Further work is needed to provide evidence-based recommendations for transitioning to a new pair of shoes, taking into account the specific shoe features that differ from the previously used model (eg, cushioning, drop, motion control), as well as the amount of change (eg, from maximalist to barefoot or from standard cushioned shoes to partial-minimalist shoes). The current lack of knowledge about shoe transitioning can be circumvented to some degree by using several pairs of running shoes in parallel, depending on the surface (ie, road, forest, or mountain) or the purpose (ie, training or competition).14 A 5-month prospective trial14 involving about 250 recreational runners addressed whether regularly alternating between different pairs of running shoes might play a role in injury prevention. The injury risk was lower in those using different pairs of running shoes concomitantly than in those using a single pair (HR = 0.61; 95% CI = 0.39, 0.97). Of course, the optimal strategies for alternation in terms of frequency, shoe features, and context of use remain to be defined.
Finally, the mechanisms relating shoe characteristics and running injury may be specific to certain injury types. From a methodologic viewpoint, to record sufficient events of interest and achieve adequate statistical power, many more participants than usual need to be recruited to determine the relationship between a particular shoe feature and a specific running injury.
Overall, it is still too early to formulate evidence-based prescriptions regarding the choice of running-shoe features. Many statements and arguments advanced in favor of certain footwear characteristics are simplistic and not supported by scientific evidence. Nevertheless, authors of some epidemiologic studies have suggested that certain footwear characteristics may benefit particular subgroups of runners. It seems that a minimum of motion control in cushioned shoes, as provided in most standard models, may help reduce the risk of running injury, especially for runners with highly pronated feet. In addition, it appears safe to recommend low-drop footwear for occasional or inexperienced runners. Recent findings indicated that cushioning had a preventive effect, especially in light runners. Nonetheless, these results need to be confirmed before any shoe-prescription guidelines are scientifically justified. Furthermore, the underlying mechanisms of these few positive results are yet to be uncovered. Finally, the ultimate question of how much running training (eg, frequency, volume, running speed) in the presence of a given anatomical predisposition, running technique, and specific footwear can be tolerated without incurring injury remains unanswered.
In short, it is possible that the role of running-shoe technology in injury prevention has been largely overrated. Apart from these preliminary conclusions, it seems that some basic rules are still valid, such as relying on the subjective feeling of comfort when choosing a pair of running shoes, transitioning progressively and carefully into a new pair, and listening to your body when training. Along the same lines, it is probably a good idea to alternate between pairs of running shoes to avoid systematic mechanical overload and allow progressive transitioning to new shoes. The most important aspect of injury prevention may ultimately be athlete education that allows runners to develop their own optimal self-management strategies. Indeed, researchers tend to search for group effects, that is, differences between tested conditions that they can be confident about (ie, statistically significant). However, each runner is unique and may adapt in his or her own way to a given shoe type. These individual adaptations do not necessarily translate into significant group effects that would support any strong scientific claim. In that respect, science can provide general guidelines, but the final decision will always be an individual one and should preferably be based on correct and unbiased information. Although some will gladly accept a simple lie regarding the role of footwear in injury prevention, the truth is far more complex. Caution and common sense should be exercised when generalizing findings from research, as well as when considering simplistic explanations.