Context

Immediate Postconcussion Assessment and Cognitive Testing (ImPACT) is one of the most widely used computerized neurocognitive assessment batteries in athletics and serves as both a baseline and postinjury assessment. It has become increasingly popular to administer the ImPACT baseline test in an unsupervised remote environment; however, whether the lack of supervision affects the test-retest reliability is unknown.

Objective

To establish the minimal detectable change (MDC) of composite scores from the ImPACT test when administered to National Collegiate Athletic Association Division I student-athletes in an unsupervised remote environment before 2 consecutive athletic seasons.

Design

Cross-sectional study.

Setting

Participants were provided with a unique link and detailed written instructions on how to complete the ImPACT test at home.

Patients or Other Participants

Division I student-athletes.

Main Outcome Measure(s)

Remote baseline ImPACT results from the 2020–2021 and 2021–2022 athletic seasons were analyzed. The MDC was calculated at the 95%, 90%, and 80% CIs for each of the ImPACT composite scores as well as the average and SD.

Results

The MDC at the 95% CI was 18.6 for the verbal memory composite score, 24.44 for visual memory, 8.76 for visual motor speed, 0.14 for reaction time, and 6.13 for impulse control. One-way repeated-measures multivariate analysis of variance, repeated-measures analysis of variance, and Wilcoxon signed rank tests suggested no difference in the composite scores and impulse control between time points.

Conclusions

The ImPACT composite scores and impulse control did not change between the 2 remote testing time points when administered approximately 1 year apart. Our study suggests that the MDC serves as a clinician’s guide for evaluating changes in ImPACT baseline scores and in making clinical judgments on sport-related concussion when the test is administered at home.

Key Points

  • The minimal detectable change of Immediate Postconcussion Assessment and Cognitive Testing (ImPACT) is meant to serve as a simple guideline for determining if changes in baseline scores are due to natural fluctuations or cognitive deficits associated with concussion.

  • Scores on ImPACT were consistent across 2 remote testing time points.

The Centers for Disease Control and Prevention estimated that 300 000 concussions occur annually in the United States; however, this approximation included only concussions that resulted in a loss of consciousness. Researchers have since estimated that the average incidence of sport-related concussions (SRCs) is 4.13 per 10 000 athlete-exposures, with men’s ice hockey producing the highest concussion rate of 7.35 per 10 000 athlete-exposures.1  As SRCs are prevalent in the realm of athletics, reliable and simple assessment methods must be available to sports medicine staff. All athletes should be evaluated using a multifaceted battery (ie, cognition, postural control, and vision) to aid in critical decision-making; yet not all institutions are equipped to perform an extensive battery.2  A common method of concussion management is neurocognitive testing because these instruments are sensitive to even slight cognitive impairment and are often computerized to facilitate mass testing in athletics.3  The National Collegiate Athletic Association (NCAA) requires each student-athlete to receive at least 1 preseason neurocognitive baseline test, which helps to identify individual differences in attention, memory, concentration, reaction time, and impulse control.4  The baseline evaluation must be completed before athletes begin their season; without that reference, it is difficult to determine whether low neurocognitive test scores are due to individual differences or concussion-related cognitive deficits.5 

One of the more widely used computerized neurocognitive assessment tools is Immediate Postconcussion Assessment and Cognitive Testing (ImPACT).6  The ImPACT is designed to be administered as a preseason baseline and after an SRC to aid in clinical decision-making. Part of ImPACT’s popularity stems from the fact that it can be administered online, in either a controlled or uncontrolled environment, which alleviates the burden of in-person assessments by sports medicine staff. The test-retest reliability of ImPACT has been heavily debated, as some studies have shown the visual motor speed section of the examination to be unreliable across multiple testing periods,7,8  whereas others have found the visual motor speed section reliable across testing time points.9,10 

Best practice is to have a licensed neuropsychologist or credentialed ImPACT consultant (CIC) on staff to interpret ImPACT results. Unfortunately, consultation with these individuals is often too costly for most high school and even some collegiate institutions. In those cases, the burden of interpretation falls on the sports medicine team, such as a team physician or athletic trainer. Although these clinicians are the backbone and frontline of all sports teams, they may not have the proper training to accurately screen neurocognitive examination results.11  This could result in premature return to play and perhaps long-term cognitive deficits or premature removal causing athletes to miss competitions in which they could have participated. Thus, our aim was to define score changes that athletic trainers can use to determine if a difference in scores across testing periods is due to cognitive deficits or normal fluctuations in testing. The ImPACT only flags test results that fall outside the reliable change index (RCI) score, meaning that it may not automatically flag all significant score changes.6  Although athletes can mature and grow cognitively from year to year, a sudden drop in baseline ImPACT performance may be caused by a lack of effort from the athlete or lingering cognitive deficits from an SRC. A lack of effort or suboptimal testing environment that affects the baseline test could have numerous consequences if the athlete incurs a concussion.7  Considering the ongoing coronavirus disease 2019 pandemic, the burden of new responsibilities on sports medicine staff has been immense and resulted in a need to administer ImPACT at home. Without the supervision of sports medicine staff to ensure an adequate effort and optimal testing environment, score changes must be re-evaluated, as previous authors used in-person ImPACT assessments.7 

In this scenario, evaluating changes in scores by calculating the minimal detectable change (MDC) makes the most sense. The MDC values serve as the clinician’s guide to clinically relevant changes in performance across repeated tests.12  Increased understanding of meaningful changes in ImPACT scores will aid clinicians in determining if score changes are due to concussion-related decreases in neurologic function or natural fluctuations in performance.5  This information is important for the average clinician, as ImPACT only flags test results that fall outside the RCI score.6  Clinicians use their own discretion to interpret if these flagged scores are a significant decline from baseline. The RCI provides an estimate of the probability that the given test result is not the result of measurement error; however, no numeric interpretations or change scores have been provided by ImPACT. This places the responsibility for proper interpretation of the test scores on the clinician and limits the ability to make quick game-time decisions.

To address these concerns regarding neurocognitive testing for SRCs, our goal was to establish MDC values for each of the 5 ImPACT composite scores across 2 remote testing time points to serve as a relevant score change guide in the future. In addition, we wanted to identify any differences in average ImPACT composite scores between 2 remote testing time points, thereby showing if repeated remote baseline testing was as reliable as in-person testing. We hypothesized that ImPACT scores would remain consistent between remote testing time points, as the previous literature5,7  suggested consistency in scores between both remote and in-person testing.

Participants

A total of 172 NCAA Division I student-athletes (127 women and 45 men) participated in this study (Table 1). Their average age was 19.37 ± 1.24 years at the time of the first remote ImPACT administration and 20.25 ± 1.29 at the time of the second remote test. The time between tests was 302 ± 46.35 days. We recruited participants from the same athletic department if they had completed the NCAA-mandated preseason screening for 2 consecutive years. Successful completion of preseason screening was defined as a valid, uncontrolled remote ImPACT baseline assessment. Although we focused on baseline ImPACT measures, all participants also underwent vestibular and vision testing. Invalid scores were marked by ImPACT, reviewed by a CIC, and excluded from the study. The ImPACT marks results as invalid if test takers between 14 and 59 years of age receive an impulse control composite score of >30, a word memory learning percentage correct of <69%, a design memory learning percentage correct of <60%, or a Three Letters total letters correct score of <8. Exclusionary criteria were a self-reported history of attention-deficit/hyperactivity disorder (ADHD) and dyslexia. Originally, 195 participants were identified; however, 19 were excluded due to a self-reported ADHD diagnosis and 1 due to a self-reported dyslexia diagnosis. The exclusionary criteria were designed for consistency with earlier research on this topic, as both ADHD and dyslexia can affect ImPACT performance.13–15  Although not all authors cited dyslexia in their exclusionary criteria, individuals with a self-reported dyslexia diagnosis scored lower on all ImPACT cognitive measures.14  Their results also indicated the need for more specificity and diversity among the sample population that ImPACT used to create its norms, which we did not address in our investigation.
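The invalidity thresholds listed above can be summarized as a simple rule set. The sketch below encodes them for illustration only; the function and parameter names are hypothetical and are not part of the ImPACT software.

```python
# Minimal sketch of the ImPACT invalidity flags described above for test
# takers aged 14-59. Names are hypothetical, chosen for illustration.
def is_invalid(impulse_control: int,
               word_memory_pct: float,
               design_memory_pct: float,
               three_letters_correct: int) -> bool:
    """Return True if any score trips a validity flag."""
    return (impulse_control > 30
            or word_memory_pct < 69.0
            or design_memory_pct < 60.0
            or three_letters_correct < 8)

print(is_invalid(5, 95.0, 90.0, 12))   # typical valid baseline -> False
print(is_invalid(35, 95.0, 90.0, 12))  # impulse control > 30  -> True
```

Any single tripped flag marks the whole test invalid, which mirrors how flagged results were routed to a CIC for review in this study.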

Table 1.

Participants’ Characteristics at Both Testing Time Points


The largest cohorts of athletes participated in cheerleading (20.4%), cross-country or track and field (17.7%), soccer (10.3%), and swimming and diving (10.3%), with the remaining athletes participating in baseball, basketball, football, golf, softball, volleyball, and tennis (Table 1). This study was conducted in compliance with the University of Nevada, Reno, Institutional Review Board.

Immediate Postconcussion Assessment and Cognitive Testing

The ImPACT is a computer-based neuropsychological diagnostic tool traditionally used as both a baseline and postinjury concussion assessment and is one of the most widely used neurocognitive assessment batteries in athletics. The NCAA recommends administering a neurocognitive examination before the athletic season and again as a postinjury assessment if a health care provider, such as an athletic trainer, believes that an athlete might have sustained an SRC.

The examination highlights 5 areas of cognitive functioning proven to be impaired during a concussion: verbal memory, visual memory, processing speed, reaction time, and impulse control. The test takes just under 30 minutes and can be administered in both proctored and uncontrolled remote environments.5  The clinical report consists of 5 composite scores: verbal memory, visual memory, visual-motor speed, reaction time, and impulse control. In addition to the neurocognitive assessment, 22 self-reported postconcussion symptoms are measured on a 7-point Likert scale, and self-reports of relevant medical history are obtained.

Because of clinicians’ reliance on ImPACT to make an informed SRC diagnosis, the temporal stability of ImPACT has been assessed on numerous occasions with inconsistent results. Some researchers found good test-retest reliability,9,10,16  some found low to moderate test-retest reliability,13,17  and others found poor reliability in specific assessment categories.7,8,18 

Procedures

Participants were required to complete baseline ImPACT tests before 2 consecutive athletic seasons and before any preseason competition (average of 302 ± 46.35 days between tests). At both times, we provided participants with a unique testing link and instructions to complete the test in an uncontrolled remote environment. Per ImPACT administration guidelines, we also instructed participants to (1) find a quiet location for testing, (2) use a computer with internet access, (3) not have engaged in a strenuous workout in the previous 3 hours, (4) take the test in their native language, (5) maintain silence and stow mobile devices during the examination, (6) not ingest alcoholic beverages before testing, (7) have eaten recently, and (8) have slept ≥6 hours the night before. We asked the athletes to complete the initial demographics, background, and medical history sections before starting ImPACT, but they were not required to complete the additional demographics section.

Although we requested participants complete the test in their native language, some individuals were inconsistent in their language choices, completing 1 test in their native language and 1 in English. The mismatch in testing language was considered during participant recruitment, but ultimately the participants’ scores were included if their ImPACT scores were not marked invalid. The native languages selected at the time of the first test were English (n = 159), French (n = 5), Czech (n = 1), Portuguese (n = 1), Polish (n = 2), Spanish (n = 2), Russian (n = 1), and German (n = 1). For the second test, the native languages selected were English (n = 162), French (n = 4), Czech (n = 1), Polish (n = 1), Spanish (n = 2), Russian (n = 1), and German (n = 1). Increased English testing at time point 2 (T2) could reflect the participants’ attendance at an English-speaking university for ≥1 year, whereas at time point 1 (T1), they may have recently moved from a non–English-speaking country and perhaps did not feel comfortable taking the assessment in English.

Invalid ImPACT results were reviewed by a CIC. Invalid results are often caused by a lack of understanding of the instructions or a lack of effort.19  A total of 15 participants received an invalid ImPACT result during the data-collection window; all were retested ≥48 hours after their invalid test. Despite results marked valid by ImPACT, 3 participants were excluded from the study by a CIC, as their T1 outcomes reflected a lack of effort and were inconsistent with the results of their second test. Only valid ImPACT results of athletes who were required to retake the test were analyzed.

Statistical Analysis

All statistical analyses were performed using SPSS (IBM Corp). The ImPACT composite scores for verbal memory, visual memory, visual motor speed, reaction time, and impulse control were evaluated for both skewness and kurtosis. Visual memory, verbal memory, and visual motor speed composite scores were parametric and were analyzed using a 1-way repeated-measures multivariate analysis of variance (ANOVA). Reaction time scores were parametric but did not have the same scoring scale as visual memory, verbal memory, and visual motor speed; therefore, a 1-way repeated-measures ANOVA was performed. Impulse control scores were nonparametric and, as such, were evaluated separately using a Wilcoxon signed rank test. The α value was set to .05.
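The nonparametric step above can be illustrated with a small sketch. This is not the authors' analysis code (they used SPSS), and the paired impulse control scores below are hypothetical; it assumes SciPy's `stats.wilcoxon`.

```python
# Illustrative Wilcoxon signed rank test on paired impulse control scores,
# mirroring the nonparametric comparison described above. Hypothetical data.
from scipy import stats

# Hypothetical impulse control scores for the same athletes at T1 and T2.
impulse_t1 = [4, 3, 5, 6, 2, 4, 7, 3, 5, 4, 6, 2, 5, 3]
impulse_t2 = [5, 3, 4, 8, 3, 4, 6, 4, 5, 3, 7, 2, 4, 5]

# Paired, nonparametric comparison; zero differences are dropped by default.
result = stats.wilcoxon(impulse_t1, impulse_t2)

alpha = 0.05
print(f"statistic={result.statistic}, p={result.pvalue:.3f}")
print("no difference detected" if result.pvalue > alpha else "difference detected")
```

A signed rank test is used here because impulse control scores are bounded counts whose distribution departs from normality, as the skewness and kurtosis screen indicates.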

The MDC scores of each of the 5 ImPACT composite scores were examined via 95%, 90%, and 80% CIs. We chose the MDC for this study as it exemplifies the meaningful change in scores across 2 testing times or, in other words, the minimum change required for scores to be clinically significant. The CIs were selected based on the methods of prior authors.7,20,21  Following the methods of literature in the field, we calculated the MDC using the standard error of measurement (SEM), which estimates variability across individuals in a sampling cohort20,21 :

SEM = s√(1 − r)

MDC = z × SEM × √2

where s represents the SD, r represents the reliability coefficient, and z represents the z score associated with the chosen CI (1.96, 1.645, and 1.28 for the 95%, 90%, and 80% CIs, respectively).
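As a worked sketch of the MDC-from-SEM calculation (MDC = z × SEM × √2, with SEM = s√(1 − r)), the following uses hypothetical values for s and r; they are placeholders, not the study's estimates.

```python
# Sketch of the MDC computation from the SEM. The SD (s) and reliability
# coefficient (r) below are hypothetical placeholders, not study values.
import math

def mdc(s: float, r: float, z: float) -> float:
    """MDC = z * SEM * sqrt(2), where SEM = s * sqrt(1 - r)."""
    sem = s * math.sqrt(1.0 - r)
    return z * sem * math.sqrt(2.0)

# Two-sided z scores for the 95%, 90%, and 80% confidence levels.
Z = {"95%": 1.96, "90%": 1.645, "80%": 1.28}

s, r = 8.0, 0.70  # hypothetical SD and test-retest reliability
for level, z in Z.items():
    print(level, round(mdc(s, r, z), 2))  # 95% -> 12.15, 90% -> 10.19, 80% -> 7.93
```

The √2 factor accounts for measurement error being present in both the first and second administrations of the test.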

The ImPACT Composite Scores

The mean values for each of the ImPACT composite scores at each time point were as follows: 90.68 ± 8.14 at T1 and 92.1 ± 8.11 at T2 for verbal memory, 80.47 ± 11.89 at T1 and 81.94 ± 12.02 at T2 for visual memory, 41.18 ± 5.81 at T1 and 41.63 ± 5.86 at T2 for visual motor speed, 0.60 ± 0.07 at T1 and 0.61 ± 0.08 at T2 for reaction time, and 4.22 ± 2.89 at T1 and 4.54 ± 2.90 at T2 for impulse control. Mean and SD values are shown in Table 2.

Table 2.

Composite Scores and Minimal Detectable Change (MDC) Values at the 95% CI for the Composite Scores Compared With Results From Similar Literature and Immediate Postconcussion Assessment and Cognitive Testing Optimal Scores


One-Way Repeated-Measures Multivariate ANOVA

The multivariate model for verbal memory, visual memory, and visual motor speed composite scores was not significant (F3,169 = 2.12, P = .10). This indicates that the composite scores did not differ between the 2 testing time points.

Repeated-Measures ANOVA

Mean scores for the reaction time composite score were not different (F1,171 = 1.02, P = .31). This suggests no overall difference between the mean reaction times at the 2 testing time points.

Wilcoxon Signed Rank Test

Impulse control scores did not differ across the 2 testing periods (Z = –1.19, P = .23). The median impulse control score was 4.00 at both time points.

Minimal Detectable Change

For each ImPACT composite score, the MDC was calculated at 80%, 90%, and 95% CIs. The MDC values were MDC95 = 18.6, MDC90 = 15.66, and MDC80 = 12.17 for verbal memory; MDC95 = 24.44, MDC90 = 20.57, and MDC80 = 15.99 for visual memory; MDC95 = 8.76, MDC90 = 7.73, and MDC80 = 5.73 for visual motor speed; MDC95 = 0.14, MDC90 = 0.12, and MDC80 = 0.09 for reaction time; and MDC95 = 6.13, MDC90 = 5.16, and MDC80 = 4.01 for impulse control (Table 3).

Table 3.

Minimal Detectable Change (MDC) Values at 95%, 90%, and 80% CIs for Immediate Postconcussion Assessment and Cognitive Testing Composite Scores Over Both Assessment Periods


The purpose of our study was to determine the MDC of remotely administered ImPACT in Division I student-athletes across 2 consecutive athletic seasons. All 5 ImPACT composite scores were similar to those previously described in the literature at both remote testing time points.5,7  The MDC values at the 95% CI were similar to those in an earlier investigation; however, some discrepancies could be attributed to differences in population, statistical methods, and test administration.7  Minimal detectable change values are meant to serve as a clinician’s guide to relevant changes in performance for repeated tests. Increased understanding of meaningful changes in ImPACT scores will aid clinicians in determining if score changes are due to decreases in neurologic function as a consequence of concussion or natural fluctuations in performance. We hope to offer a clinician’s guide for interpreting remotely administered ImPACT baseline score changes, and we recommend that the MDC be used for future evaluations of ImPACT test-retest reliability. This research could be furthered by examining the differences in composite scores across various sports. In addition, with the ongoing coronavirus disease 2019 pandemic, remote ImPACT administration has become essential to the functioning of many athletic departments. Therefore, more exploration of the test-retest reliability of remote ImPACT is needed to confirm these findings.

Our ImPACT composite scores and SDs were similar to those in the literature5,7  and similar to optimal ImPACT scores (Table 2).6  The only area in which these values notably differed was the visual memory composite score: our participants’ average score was 80.47 versus previously reported values of 77.67 (Netzel et al5) and 76 (Mason et al7). Our participants’ scores were higher and closer to the optimal value (80). The differences in composite scores among these studies could be attributed to our larger sample size or differences in test administration.

The 1-way repeated-measures multivariate ANOVA, repeated-measures ANOVA, and Wilcoxon signed rank tests showed no differences in ImPACT composite scores across the 2 testing time points. Earlier authors examined in-person ImPACT administration7  or a combination of remote and in-person administration5  and likewise found no changes across testing time points. Our findings suggest that the remote ImPACT test can be administered at home with written instructions and yield consistent scores between testing time points. Both sets of researchers5,7  also examined ImPACT test results from 2 consecutive athletic seasons; combined with our results, this suggests that ImPACT is reliable across 2 athletic seasons, or approximately 1 calendar year.

The MDC results were similar to those of Mason et al,7  who used bootstrapping on a smaller cohort (n = 48) of mostly male (64%) NCAA Division I athletes. We found MDC values at the 95% CI to be 18.6 for verbal memory (Mason et al7  = 14.19), 24.44 for visual memory (Mason et al7  = 17.25), 8.76 for visual motor (Mason et al7  = 11.07), 0.14 for reaction time (Mason et al7  = 0.17), and 6.13 for impulse control (Mason et al7  = 7.38; Table 2). Our cohort differed notably from that of the earlier study, being larger (n = 172) and predominantly female (73.8%). In addition to differences in populations, we obtained data from unsupervised, remote ImPACT, whereas Mason et al7  had certified athletic trainers administer the test. We calculated the MDC using the SEM, matching previous MDC literature.21,22  Mason et al7  did not explicitly state how they calculated their MDC; thus, the methods may not be identical. In addition, the more compact range of composite scores of Mason et al7  may have reflected increased overall effort and an ideal, supervised testing environment. We instructed participants to find a quiet, distraction-free environment for their test, but whether they followed those directions is unknown. The differences in population and testing environments most likely account for the discrepancies in MDC values.

Due to clinicians’ reliance on ImPACT to make an informed SRC diagnosis, the temporal stability of the test has been assessed on numerous occasions with inconsistent results. Some researchers found good test-retest reliability,16  some found low to moderate test-retest reliability,13,17  and others found poor reliability in specific assessment categories.7,8,18  We did not aim to evaluate the effectiveness of ImPACT in terms of test-retest reliability. Given that scores at both time points were similar to those in comparable studies,5,7  our results suggest that ImPACT is reliable across 2 remote time points. Even so, it is best practice to conduct postinjury ImPACT assessments in person to help ensure an ideal testing environment and allow administrators to monitor the test takers’ symptoms, if needed.

Limitations

This investigation has limitations related to participants’ demographics. Sports such as football and soccer traditionally present a higher risk for concussion; however, they represented a small portion of our participants (2.3% and 10.5%, respectively). During the study period, the football and soccer teams completed in-person ImPACT evaluations via batch testing and thus could not be included. The sport with the largest representation was cheerleading (23.3%), which is typically classified as high risk for head impact. Also, the cohort was largely female (73.8%); nevertheless, because the composite scores and variability seen here were similar to those reported earlier, sex and ImPACT performance may not be correlated. Men perform slightly better on verbal working memory tasks, but our participants’ average verbal memory composite score was in the optimal range of 90 to 99 across testing time points, suggesting that sex effects were not prominent.23  We relied on self-reported ADHD and dyslexia diagnoses, and athletes with a diagnosis of ADHD or dyslexia might have been included if they did not disclose their diagnosis when completing ImPACT at either time point. This study did not exclude participants with a concussion history, but data from postinjury ImPACT were removed. An additional limitation was that not all participants used the same testing device across both testing time points. Some individuals used a mouse for their first test and a trackpad for the second test or vice versa, which may have influenced speed scores. As noted previously, whether participants took the examination in a distraction-free environment is unknown. Although invalid scores were excluded, it is possible that participants scored well enough to receive a valid score but did not put all their cognitive effort into taking the test.

Consistent with the existing literature, ImPACT composite scores did not differ between remote testing time points administered 302 ± 46.35 days apart, indicating that results were stable when testing was self-administered. As the MDC captures the meaningful change in repeated test scores, we suggest that medical professionals use the MDC as a guideline for evaluating changes in baseline ImPACT scores and making clinical judgments regarding SRCs.

1. Chandran A, Boltz AJ, Morris SN, et al. Epidemiology of concussions in National Collegiate Athletic Association (NCAA) sports: 2014/15–2018/19. Am J Sports Med. 2022;50(2):526–536.
2. Wallace J, Beidler E, Covassin T. Assessment and management of sport-related concussion teaching trends in athletic training programs. Athl Train Educ J. 2018;13(2):112–119.
3. Harmon KG, Drezner JA, Gammons M, et al. American Medical Society for Sports Medicine position statement: concussion in sport. Br J Sports Med. 2013;47(1):15–26.
4. Concussion safety protocol management. NCAA.
5. Netzel L, Moran R, Hopfe D, Salvatore AP, Brown W, Murray NG. Test-retest reliability of remote ImPACT administration. Arch Clin Neuropsychol. 2022;37(2):449–456.
6. Version 4 administration and interpretation manual. ImPACT Applications, Inc.
7. Mason SJ, Davidson BS, Lehto M, Ledreux A, Granholm AC, Gorgens KA. A cohort study of the temporal stability of ImPACT scores among NCAA Division I collegiate athletes: clinical implications of test-retest reliability for enhancing student-athlete safety. Arch Clin Neuropsychol. 2020;35(7):1131–1144.
8. Resch J, Driscoll A, McCaffrey N, et al. ImPact test-retest reliability: reliably unreliable? J Athl Train. 2013;48(4):506–511.
9. Elbin RJ, Schatz P, Covassin T. One-year test-retest reliability of the online version of ImPACT in high school athletes. Am J Sports Med. 2011;39(11):2319–2324.
10. Schatz P, Maerlender A. A two-factor theory for concussion assessment using ImPACT: memory and speed. Arch Clin Neuropsychol. 2013;28(8):791–797.
11. Covassin T, Elbin RJ, Stiller-Ostrowski JL, Kontos AP. Immediate post-concussion assessment and cognitive testing (ImPACT) practices of sports medicine professionals. J Athl Train. 2009;44(6):639–644.
12. Huang S-L, Hsieh C-L, Wu R-M, Tai C-H, Lin C-H, Lu W-S. Minimal detectable change of the timed “Up & Go” test and the Dynamic Gait Index in people with Parkinson disease. Phys Ther. 2011;91(1):114–121.
13. Alsalaheen B, Stockdale K, Pechumer D, Broglio SP. Validity of the immediate post concussion assessment and cognitive testing (ImPACT). Sports Med. 2016;46(10):1487–1501.
14. Johnson E, Pardini J, Sandel N, Lovell M. Do athletes with dyslexia differ at baseline and/or at concussion post-injury assessment on a computer-based test battery? Arch Clin Neuropsychol. 2014;29(6):590–591.
15. Stokes M, Zynda AJ, Chung J, Silver C, Cullum M, Miller S. Do learning disorders impact clinical measures following concussion? Neurology. 2020;95(20 suppl 1):S16–S17.
16. Broglio SP, Katz BP, Zhao S, McCrea M, McAllister T; CARE Consortium Investigators. Test-retest reliability and interpretation of common concussion assessment tools: findings from the NCAA-DoD CARE Consortium. Sports Med. 2018;48(5):1255–1268.
17. Houston MN, Van Pelt KL, D’Lauro C, et al. Test-retest reliability of concussion baseline assessments in United States Service Academy Cadets: a report from the National Collegiate Athletic Association (NCAA)–Department of Defense (DoD) CARE Consortium. J Int Neuropsychol Soc. 2021;27(1):23–34.
18. Bruce J, Echemendia R, Meeuwisse W, Comper P, Sisco A. 1 year test-retest reliability of ImPACT in professional ice hockey players. Clin Neuropsychol. 2014;28(1):14–25.
19. Bailey CM, Echemendia RJ, Arnett PA. The impact of motivation on neuropsychological performance in sports-related mild traumatic brain injury. J Int Neuropsychol Soc. 2006;12(4):475–484.
20. Howell DR, Seehusen CN, Wingerson MJ, Wilson JC, Lynall RC, Lugade V. Reliability and minimal detectable change for a smartphone-based motor-cognitive assessment: implications for concussion management. J Appl Biomech. 2021;37(4):380–387.
21. Oldham JR, Difabio MS, Kaminski TW, Dewolf RM, Howell DR, Buckley TA. Efficacy of tandem gait to identify impaired postural control after concussion. Med Sci Sports Exerc. 2018;50(6):1162–1168.
22. Haley SM, Fragala-Pinkham MA. Interpreting change scores of tests and measures used in physical therapy. Phys Ther. 2006;86(5):735–743.
23. Zilles D, Lewandowski M, Vieker H, et al. Gender differences in verbal and visuospatial working memory performance and networks. Neuropsychobiology. 2016;73(1):52–63.