Research on the Advanced Placement (AP) program generally shows that students scoring 4s and 5s on AP exams outperform their non-AP peers in subsequent college courses. However, faculty and academic advisors often suggest that students with AP credit should repeat prerequisite courses in college before attempting advanced coursework. We compared grades of 20,409 students in 42 subsequent courses across three groups: students who used AP credit as a prerequisite, students who earned AP credit but repeated the prerequisite courses in college, and students without AP credit. Results with two-level cross-sectional multilevel modeling showed that AP students performed similarly in subsequent courses whether they chose to repeat prerequisites or not; both groups outperformed non-AP students with similar academic backgrounds.

Advanced Placement (AP) courses and exams have become a fixture in American high schools, with parents, educators, and policymakers increasingly encouraging students to enroll in the hope that participation in AP courses will improve their odds of first being admitted to, and eventually being successful in, college (Klopfenstein & Thomas, 2010). Early research on the AP program found encouraging correlations between AP participation and college success, but more recent studies have questioned those results (Sadler, 2010). In fact, students arriving at college with AP credit may be advised to repeat the equivalent introductory coursework (Sadler & Sonnert, 2010; 2018).

Approximately 8% of students begin college with at least some AP credits (Evans, 2019), and those thousands of students must make a choice: do they accept their AP credit and move ahead to more advanced courses to reduce their time to degree, as proponents of the AP program argue they are prepared to do (Klopfenstein, 2010)? Or do they heed the warnings of faculty members who believe students should repeat AP credit at the college level to gain greater depth of understanding (Hansen et al., 2006; National Research Council, 2002)? Students, and the academic advisors who guide them, need more than anecdotal information to make evidence-based decisions that promote academic success in college. Therefore, this study aimed to fill a gap in the existing literature on the AP program by providing evidence about whether students with AP credit are prepared to pursue advanced coursework in college or if they would be better served by repeating introductory courses at the collegiate level. We also provide a model for institutions to assess their own students' experiences related to AP credit, so that advisors can use reliable information when guiding students on these important decisions early in their college careers.

The College Board first offered Advanced Placement exams in the early 1950s in a collaboration between elite college preparatory schools and university faculty members, in which high schools designed their own curriculum to prepare students for a common exam graded by college professors (Lacy, 2010). Since its inception, the AP program has grown substantially in the number of subjects offered and in the availability of AP-designed courses in high schools (Ackerman et al., 2013). The high school graduating class of 2019 included 1.25 million students from over 22,000 high schools who took at least one AP exam out of the 38 exams available (College Board, n.d., AP program participation and performance data 2019).

One reason for this significant growth in AP participation is that parents, educators, and educational policymakers believe success in the AP program leads to success in college (Klopfenstein & Thomas, 2010). Indeed, the College Board's promotional materials claim that successful AP students earn higher GPAs and are more likely to graduate from college in four years than non-AP students (College Board, n.d., Benefits of AP). With the expectation that AP courses prepare students for college, federal and state governments have enacted policies (such as subsidizing examination fees for low-income students) to increase AP participation among high school students (U.S. Department of Education, 2006). At the collegiate level, legislation mandating the awarding of AP credit for scores of 3 or higher, as recommended by the College Board, has been enacted in 22 states (College Board, n.d., State and systemwide AP credit and placement policies). Most universities award credit for at least some scores on some exams, though individual policies vary greatly (Ackerman et al., 2013). However, whether an individual student earns credit depends on both their exam score and the policy of the institution (Evans, 2019). Some selective institutions have stopped granting credit for AP exam scores or have raised exam score thresholds so that only students who score 5s receive credit (Burkholder & Wieman, 2019; Drew, 2011). Furthermore, how students are advised to use or repeat pre-college credit (e.g., AP credit, dual credit) depends on their specific degree plans and career goals, so students who expect to graduate more quickly because of AP credit may be disappointed (Witkowsky et al., 2020).

Success in the AP program could benefit college students in one of two primary ways: reduced time to degree or higher grades (Klopfenstein, 2010); however, students must often choose between these two potential benefits. They can use AP credit to move ahead in the curriculum, potentially saving money, or they can repeat introductory courses, potentially earning higher grades given their prior exposure to the material. Empirical evidence supporting the claim that students use AP credit to reduce time to degree is limited in the literature. For example, Evans (2019) found that, except for Pell grant recipients who may be more motivated than their higher-income peers to save money by graduating more quickly, there was no statistically significant relationship between AP credits earned and reduced time to degree.

If students choose to repeat coursework instead of moving ahead, it may often be the case that an academic advisor or faculty member recommended that they do so (Sadler & Sonnert, 2010). University faculty members who recommend students repeat credit earned by AP exam or dual credit are often concerned that courses taken in high school are not truly equivalent to college-level courses (Hansen et al., 2006; Troutman et al., 2018). College courses are faster paced and cover content that is not included in typical AP courses (Conley, 2007; Eykamp, 2006). Beyond content coverage, faculty members argue that AP courses are focused on learning procedures rather than engaging deeply with the concepts of a discipline (Hansen et al., 2006; Wade et al., 2016). However, evaluating the position that AP students should repeat introductory courses before moving on to subsequent courses is difficult, because most AP research on college success compares students with and without AP credit; there is limited research comparing student outcomes depending on whether students with AP credit choose to use it or not (De Urquidi et al., 2015). There is even less research about the guidance provided by advisors to students who enter college with significant amounts of credit (Witkowsky et al., 2020).

Two studies investigated whether students should accept AP credit or repeat the course in college; while these studies provide helpful insights, neither was sufficient to answer the question broadly because both were limited to one subject. First, De Urquidi et al. (2015) studied the outcomes of AP calculus students who accepted all, partial, or no AP credit. They found that students generally benefitted from accepting all earned credit, although results varied somewhat by first mathematics course taken and SAT Math scores (De Urquidi et al., 2015). Second, Hansen et al. (2006) compared writing samples from three groups: students who used AP credit as the prerequisite for a sophomore-level English course, non-AP students who took a first-year composition course prior to the sophomore-level course, and students who earned AP credit and also took a composition course. They found no difference between the first two groups but evaluated the third group as superior to the others. Unfortunately, this study did not control for student background characteristics, such as high school grades or test scores; failing to include these common covariates of college grades may have confounded the results of the study. Methodological limitations such as these are common in much of the existing AP research (Sadler, 2010).

The College Board has conducted extensive research on the AP program, including validity studies and comparisons of AP and non-AP students in college illustrating that AP students earn higher grades and graduate at higher rates (Warne, 2017). However, scholars have identified methodological limitations of much of the College Board-sponsored research on Advanced Placement (Klopfenstein & Thomas, 2009; Sadler & Tai, 2007). The predominant concern is that most studies were simple comparisons of students who had or had not earned college credit based on their AP exam scores, ignoring other potentially confounding differences in student characteristics (Sadler & Tai, 2007). The problem with this type of analysis is that it ignores potential moderator or mediator variables that may account for both AP course-taking and differences in college performance. For example, AP courses are more prevalent in wealthier school districts, and students self-select into AP courses (Sadler & Tai, 2007); those who pass AP exams tend to be highly motivated, high-achieving students who are likely to succeed in college regardless of their AP participation (Sadler, 2010).

Warne's (2017) review of AP literature concluded that non-College Board researchers generally found positive results based on AP performance even after controlling for differences in student backgrounds; effect sizes, however, were much smaller than those reported in less methodologically rigorous research. For example, Ackerman et al. (2013) found that although students with AP exam scores of 4 or 5 had higher college GPAs and graduation rates than non-AP participants, students who scored mostly 3s (or lower) did not experience greater college success after controlling for high school GPA and SAT/ACT scores. Patterson and Ewing (2013) used propensity score matching to control for student background differences before comparing subsequent course grades of two groups of students: those who used AP credit to move directly into advanced coursework and non-AP students who took introductory courses at the collegiate level. Of the 10 AP exams included in the study, they found significant positive effects favoring students who used AP credit for only five (both Calculus exams, Physics C, Chemistry, and U.S. Government); the other five showed no significant differences between groups (Biology, Micro- and Macroeconomics, Psychology, and U.S. History). These results provided at least two important implications for future AP studies: 1) it is important to account for differences in student backgrounds outside of experiences with the AP program, and 2) there may be meaningful differences in subject area outcomes, so any analysis should account for variation in both courses and individuals.

Warne (2017) identified specific ways for scholars to advance the body of knowledge on Advanced Placement, among them: 1) using advanced methodologies, 2) attending to the variability in AP courses, and 3) asking new questions about policy and practice. In particular, Warne (2017) called for future AP research to use multilevel modeling to account for the nested nature of AP data, in which students are clustered in courses and schools or colleges and thus cannot be treated as independent of each other, as traditional multiple regression analyses require.

We concur with Warne's (2017) suggestions. Applying two-level cross-sectional multilevel modeling, we sought to determine whether students who earn AP credit achieve grades in subsequent-level courses that are at least as high as those of students who only take the prerequisite course in college. Furthermore, we extended the question to address whether AP students should repeat prerequisite courses to ensure success before moving on. Specific research questions include:

  • RQ1. Are AP students as well prepared to succeed in advanced college courses as non-AP students, even after controlling for high school grades and SAT scores?

  • RQ2. To what extent do outcomes for AP students vary across courses, and can course difficulty predict any of this variation?

  • RQ3. For students with AP credit, are outcomes in target courses better if they repeat the prerequisite course in college?

While little research has been conducted to explore the variation due to course characteristics, course difficulty level is a possible avenue for exploration (Wladis et al., 2017). In addition to addressing the first two avenues for future investigation identified by Warne (2017), this study addresses a new question about institutional policies and academic advising practice by considering the choices students with AP credit must make regarding whether to use their AP credit or repeat equivalent courses in college.

Data

We conducted a secondary analysis of existing institutional data retrieved from the University Registrar at a selective public research university in the Midwest. Students at this institution earn free elective credit for all AP exam scores of 3 or higher, as required by a state law passed in 2010, but the score required for credit equivalent to introductory courses varies by department; except for foreign language exams, students generally need exam scores of 4 or 5 to place into advanced courses. Data retrieval began with identifying the courses for which students could be granted credit based on AP exam scores. Those courses were used to develop a list of target courses, which were the subsequent courses in a sequence within the same department; for example, if students earned AP credit for General Biology I and II, they could use that credit to meet the prerequisite for Microbiology (a target course). Students in the data set were domestic undergraduates who completed one or more of the target courses for a grade between fall 2015 and spring 2018; this group included all degree-seeking students, whether they were beginning, transfer, or continuing students. International students were not included because relatively few international students at this institution have any AP credit. If students repeated a course, only the first attempt was included; if students completed multiple target courses, one course was randomly selected for the study, so no student was counted twice. The final data set included grades of 20,409 students from 42 courses in 13 departments.
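The sample construction described above (first attempts only, one randomly selected target course per student) can be sketched as follows; the record layout and function name are hypothetical, not drawn from the study's actual code:

```python
import random
from collections import defaultdict

def build_analytic_sample(records, seed=42):
    """Sketch of the sample construction. `records` are hypothetical
    (student_id, course_id, term_index, grade) tuples for target courses.
    Keep only each student's first attempt at a course, then randomly pick
    one target course per student so no student appears twice."""
    # Sorting by term means setdefault retains the earliest attempt.
    first_attempts = {}
    for student, course, term, grade in sorted(records, key=lambda r: r[2]):
        first_attempts.setdefault((student, course), (student, course, grade))

    by_student = defaultdict(list)
    for student, course, grade in first_attempts.values():
        by_student[student].append((course, grade))

    rng = random.Random(seed)  # seeded for reproducibility
    return {s: rng.choice(courses) for s, courses in by_student.items()}
```

A seeded random choice keeps the one-course-per-student selection reproducible across reruns of the analysis.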

Variables

Dependent Variable.

Following prior AP research (e.g., Patterson & Ewing, 2013), final grades earned in the target courses served as the dependent variable. Grades ranged from A+ to F and were converted to a numerical scale consistent with how the institution calculates GPA (4.0 for A+ and A grades, 3.7 for A-, 3.3 for B+, etc.).
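The conversion can be expressed as a simple lookup. The values through B+ are given in the text; the remaining steps are assumed to follow the conventional 4.0 scale:

```python
# Letter-grade-to-points mapping used for the dependent variable.
# A+/A = 4.0, A- = 3.7, and B+ = 3.3 come from the text; the values
# below B+ are assumptions based on the standard 4.0 scale.
GRADE_POINTS = {
    "A+": 4.0, "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "D-": 0.7,
    "F": 0.0,
}

def grade_to_points(letter: str) -> float:
    """Convert a letter grade to the numeric outcome scale."""
    return GRADE_POINTS[letter]
```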

Independent Variables.

The primary independent variables were two dichotomous variables indicating students' AP status: AP Only refers to students who earned AP credit for the prerequisite and enrolled directly in the target course, and AP Course refers to students who earned AP credit for the prerequisite but repeated it at the university prior to enrolling in the target course. Students with zeros on both variables comprised the Course Only reference group. To account for differences in student backgrounds (Sadler, 2010; Warne, 2017), we included several possible covariates when collecting the data. Core high school GPA and SAT total scores were included as student-level variables in addition to AP status to partially control for prior academic achievement. Core high school GPA is calculated by the institution's Office of Admissions and includes high school mathematics, English, laboratory science, foreign language, and social studies courses measured on a 4-point scale. Students in the sample reported both older and newer versions of SAT scores as well as ACT scores, so all test scores were converted to the most recent SAT scores using concordance tables published by the College Board (n.d., Concordance). One course-level variable, historical DFW rate, was included in the model. This represented the percentage of students who earned Ds, Fs, or Ws in any section of the course offered from fall 2015 to spring 2018 and provided an estimate of course difficulty.
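As a sketch, the historical DFW rate can be computed from enrollment records as below; the record layout is hypothetical, and we assume all D variants (D+, D, D-) count toward the rate:

```python
from collections import defaultdict

def dfw_rate(records):
    """Historical DFW rate per course: the share of enrollments ending in
    a D (any variant), F, or W across all sections and terms.
    `records` is an iterable of hypothetical (course_id, final_grade) pairs."""
    totals = defaultdict(int)
    dfw = defaultdict(int)
    for course, grade in records:
        totals[course] += 1
        # Assumption: D+, D, and D- all count as D grades.
        if grade.startswith("D") or grade in ("F", "W"):
            dfw[course] += 1
    return {c: dfw[c] / totals[c] for c in totals}
```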

We also considered including demographic variables, such as gender, underrepresented minority status (URM), whether students were Pell grant recipients, and if they identified as first-generation college students. When we added these variables to the final model, however, it failed to identify parameter estimates, which indicated they were likely not significant predictors. Additionally, the goal of the study was to provide evidence to inform student decisions regarding the choice to use or repeat AP credit. While academic advisors might reasonably suggest that students with higher grades and test scores could have more confidence in their ability to succeed in the next-level college course, they would not suggest that personal characteristics such as race or income should guide those decisions. We did find that among students with earned AP credit, female students and Pell grant recipients were slightly more likely than male students and non-Pell grant recipients to use their credit, after controlling for high school grades and SAT total scores. The effect sizes were quite small, however, and neither URM nor first-generation status significantly predicted the use of AP credit. Therefore, the estimates for the effects of using or repeating AP credit can be interpreted as average effects that apply to students from all demographic groups, after controlling for high school grades and SAT total scores.

Table 1 includes extensive course-level descriptive statistics; the means and standard deviations of grade distributions varied substantially by course, as did DFW rates, which ranged from 0.03 to 0.26. Approximately 75% of all students in the sample were in the Course Only group, but group distribution varied by course, with mathematics courses enrolling relatively high numbers of students in both AP groups. Of the students who had AP credit for prerequisite courses, nearly 80% chose to move directly into the target course; even in STEM disciplines, only 28% of students with AP credit chose to repeat prerequisite courses before moving on to more advanced coursework.

We conducted a series of analyses using two-level cross-sectional multilevel linear models (Raudenbush & Bryk, 2002) to examine the relationship between student AP status and grades in target courses after controlling for student- and course-level indicators using HLM7 software (Raudenbush et al., 2011). In the model, level 1 represents students and level 2 represents the target courses taken by the students; thus, the model captures the nested effect of AP status on students within target courses. The analysis began with an unconditional model to quantify variation in the target course grade across the 42 courses in the sample. The intraclass correlation indicated that 12.1% of the variance in target course grades was due to course-level characteristics, which supports multilevel modeling as an appropriate analytic method (Hedges & Hedberg, 2007).
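The intraclass correlation reported above is the between-course variance expressed as a share of total variance. The variance components below are hypothetical values chosen only to reproduce the reported 12.1%; the actual estimates are not given in the text:

```python
def intraclass_correlation(tau00: float, sigma2: float) -> float:
    """ICC for a two-level model: between-course variance (tau00) as a
    share of total variance (tau00 + within-course variance sigma2)."""
    return tau00 / (tau00 + sigma2)

# Hypothetical variance components that yield the reported ICC of 0.121.
icc = intraclass_correlation(tau00=0.121, sigma2=0.879)
```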

The first conditional model added the key variables indicating students' AP status categories. Initially all random effects were included to allow the effects of predictors to vary across courses, but the AP Course indicator was later fixed due to non-significant variation across courses. Subsequent conditional models added student characteristics at level 1 and course DFW rate at level 2. Restricted maximum likelihood estimation was used for all models; see below for the final level 1 model.

Level 1 Model:

Gradeij = β0j + β1j(HS GPAij) + β2j(SATij) + β3j(AP Onlyij) + β4j(AP Courseij) + rij
where i = 1, 2, …, n students in course j; j = 1, 2, …, 42 courses; and rij = a residual error term for student i in course j. In the level 1 model, all variables were centered on the group mean. High school grades and test scores varied considerably across courses, so centering on the group mean allowed us to interpret student-level parameters within a group; thus, we could focus on differences in AP status while controlling for measures of prior academic achievement within a specific course.
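Group-mean centering can be sketched as subtracting each course's mean from its students' values (a minimal illustration, not the HLM7 implementation):

```python
from collections import defaultdict

def group_mean_center(values, groups):
    """Center each student-level value on its course (group) mean, so that
    level 1 coefficients compare students within the same course."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for v, g in zip(values, groups):
        sums[g] += v
        counts[g] += 1
    means = {g: sums[g] / counts[g] for g in sums}
    # Deviation of each value from its own group's mean.
    return [v - means[g] for v, g in zip(values, groups)]
```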

Level 2 Model:

β0j = γ00 + γ01(DFWj) + u0j
β1j = γ10 + γ11(DFWj) + u1j
β2j = γ20 + γ21(DFWj) + u2j
β3j = γ30 + γ31(DFWj) + u3j
β4j = γ40
In the level 2 model (see above), the intercept, β0j, was the average grade in course j for the Course Only reference group; it was predicted by the grand mean, γ00 (average grade across all Course Only students in all courses), the DFW rate of course j, and random course-level error u0j. The four student-level predictors became part of the level 2 model, in which βpj represented the average effect of predictor p on grades in course j, plus the extent to which the DFW rate of course j moderated the effect of the given predictor. In the level 2 model, DFW rate was centered on the grand mean.

Full Model:

Gradeij = γ00 + γ01(DFWj) + γ10(HS GPAij) + γ11(DFWj)(HS GPAij) + γ20(SATij) + γ21(DFWj)(SATij) + γ30(AP Onlyij) + γ31(DFWj)(AP Onlyij) + γ40(AP Courseij) + u0j + u1j(HS GPAij) + u2j(SATij) + u3j(AP Onlyij) + rij
The full model (see above) includes several interaction terms that indicate the extent to which the effect of each level 1 predictor varies depending on the DFW rate of the course. We were especially interested in whether course difficulty interacted significantly with the effect of being in the AP Only group (i.e., γ31). The AP Course indicator showed no significant variation across courses, so there is no interaction effect included for that indicator.

All four student-level predictors (high school GPA, SAT total scores, and membership in either the AP Only or AP Course group) were positively correlated with grades in target courses and with each other. Membership in the Course Only (reference) group was negatively correlated with target course grades (r = -0.16, p < .01), high school GPA (r = -0.19, p < .01), and SAT total scores (r = -0.41, p < .01), indicating that students who had not earned AP credit tended to have lower course grades, high school GPAs, and SAT scores than students who had, though the correlations were only weak to moderate.

The results of the four models (see Table 2) address the three research questions. First, results indicate that students with AP credit were at least as well prepared as non-AP students for the target courses in the study; both the AP Only and AP Course predictors were associated with course grades over half a letter grade higher than the Course Only reference group (see Model 2). After controlling for high school GPA and SAT total scores, the advantage for each AP group was smaller but still significant (see Model 3). On average, the gap between AP and Course Only students was 0.41 for the AP Course group, t(19,725) = 13.39, p < 0.01, and 0.30 for the AP Only group, t(41) = 7.96, p < 0.01. This indicates that, even after controlling for two measures of prior academic preparation, both AP groups had average grades nearly half a letter grade higher than the Course Only group (e.g., a B+ for both AP groups compared to the non-AP student average grade of just below a B).

In the final model (Model 4) with the course-level explanatory variable of DFW rate, the main effect of AP remains significant. The 95% confidence interval for the effect of being in the AP Only group was 0.23 to 0.36; for the AP Course group, the 95% confidence interval was 0.35 to 0.47. These values indicate the performance difference in the target course between students who earned AP credit and those who did not.

Additionally, as mentioned above, there was significant variance in grades across courses for the AP Only group, but not for the AP Course group. This means the average gap of nearly half a letter grade between Course Only and AP Course students is a consistent estimate for all courses. However, the extent to which AP Only students differed from Course Only students did significantly vary across courses; course difficulty, as represented by DFW rate, was used to predict some of this variation. DFW rate was a significant predictor of grades on its own, and higher DFW rates increased the gap between the Course Only and AP Only groups, t(40) = -3.97, p < 0.01. The coefficient for this interaction effect was approximately 1.97; since the highest DFW rates were close to 25%, the total advantage for AP Only students in the most difficult classes was, on average, more than half a letter grade (see Model 4).
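To illustrate, the model-implied AP Only advantage can be written as a function of a course's DFW rate. The coefficients follow the reported estimates (main effect ≈ 0.30, interaction ≈ 1.97); the grand-mean DFW rate used for centering is not reported, so the value below is an assumption:

```python
def ap_only_advantage(dfw, gamma30=0.30, gamma31=1.97, mean_dfw=0.12):
    """Predicted grade advantage of AP Only over Course Only students in a
    course with the given DFW rate, using the grand-mean-centered DFW
    interaction from Model 4. gamma30 and gamma31 follow the reported
    results; mean_dfw is a hypothetical grand mean (not reported)."""
    return gamma30 + gamma31 * (dfw - mean_dfw)
```

Under these assumed values, a course at the 25% DFW rate mentioned above implies an advantage of roughly 0.56 grade points, consistent with the "more than half a letter grade" figure.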

Finally, the results also addressed our third research question: were outcomes for students with AP credit better if they chose to repeat the introductory course prior to taking the target course? On average, after controlling for measures of prior academic achievement, students with AP credit earned slightly higher grades in target courses if they repeated the prerequisite in college (AP Course group) than if they moved directly into the subsequent college course (AP Only group), but the difference was, on average, very small (0.11). However, since the effect of the AP Only indicator varied across courses, and significantly increased as course DFW rates increased, the difference in performance between the two AP groups could potentially be smaller or even reversed in more difficult courses.

To highlight this finding, Table 3 illustrates the actual differences observed between groups based on AP status across three courses in the current data set, one each with low, moderate, and high difficulty as estimated by historical DFW rate.

In the course with historically lower DFW rates, the Course Only students earned a 3.6 on average, so while both AP groups outperformed non-AP students, the difference was small, and the 4.0 grading scale seems to limit the extent to which they could do so. In the course with moderate difficulty, students in both AP groups earned approximately one letter grade higher than non-AP students, with students in the AP Course group earning the highest grades. However, in the course with historically high difficulty, students in the AP Only group outperformed students in the AP Course group, who repeated the prerequisite course at the institution prior to taking the difficult target course. These results indicate that at this institution, students who earn AP credit are, on average, prepared to succeed in subsequent-level courses whether they repeat the prerequisite course in college or not, even after controlling for two measures of prior academic preparation.

Students expect that success in AP exams will translate to success in college, reduced time to degree, or both (Klopfenstein & Thomas, 2010). Once they arrive at college, however, AP students are often advised to repeat introductory courses before attempting more advanced coursework (Sadler & Tai, 2007). Unfortunately, there is little evidence to guide students and academic advisors when making this choice (De Urquidi et al., 2015). The results of this study answer important questions for students and those who care about student success, yet also raise new questions for future study.

First, AP students in our study earned higher grades than their non-AP peers even after controlling for measures of prior academic preparation, which aligns with Warne's (2017) review of the AP literature. Whether or not they repeat an introductory course, AP students at this institution performed, on average, about half a letter grade better in subsequent courses than their non-AP peers with similar academic backgrounds. It is worth noting that pre-college differences between AP and non-AP students that could contribute to success in college courses (e.g., access to high-quality AP courses in high school) were not included in the model, so the differences observed between AP and non-AP students could still be somewhat biased. Another important consideration about the generalizability of the finding is the institutional context; most of the students in this study had to score 4 or 5 on their AP exams to be eligible for advanced courses. Therefore, any interpretation of the finding that AP students seem to be better prepared than non-AP students should consider that at this institution, “AP students” typically means students who earned scores of 4 or 5.

The second research question addressed the extent to which the answer to the first question about student performance varied across courses, and whether course difficulty could account for any of that variation. Given STEM faculty members' concerns about AP student preparation for advanced coursework in traditionally difficult STEM subjects (National Research Council, 2002) and the use of pre-college credit to meet prerequisites for advanced STEM courses (Troutman et al., 2018), it is notable that students in the AP Only group increased their advantage over students in the Course Only group in courses with higher DFW rates, which were concentrated in the STEM disciplines.

The third research question, regarding whether students with AP credit should repeat prerequisite courses or move ahead, is probably the most important finding for students who have earned AP credit and want to take additional coursework in the same subject area. In this case, students with AP credit were successful, on average, whether they chose to repeat prerequisite courses or move directly into advanced coursework. To some extent, the results show that repeating is a safer choice, since students in the AP Course group outperformed Course Only students by nearly half a letter grade regardless of the course. On average, students with AP credit earned slightly higher grades in subsequent courses if they repeated prerequisite courses in college first rather than moving directly into the subsequent course. The difference is quite small, though (approximately a tenth of a letter grade) and may not be worth the additional time and resources required to repeat a course. Also, target course outcomes for students who used their AP credit varied by course, and students who chose not to repeat prerequisites outperformed those who did so in some courses. At a minimum, this study suggests that it is generally not harmful for students to use credit earned via AP exams.

The results of this study illustrate that, whether students chose to accept their AP credit and move ahead or to repeat introductory courses at the collegiate level, both paths led to success in subsequent courses. This finding offers some implications for academic advising practice and institutional policy. First, when students begin college with a substantial amount of credit, they can be more challenging to advise because they may already have credit for the general education courses students typically take in their first year (Troutman et al., 2018). Academic advisors (both faculty and primary-role advisors) must consider multiple factors that could influence the decision to use or repeat AP credit, such as whether students want to reduce time to degree or maximize deep understanding of the material, whether they need to take certain courses in college due to future career goals such as attending medical school, or whether they feel prepared to succeed alongside students in advanced courses who have more experience in college (Witkowsky et al., 2020). Advisors should also consider academic preparation beyond AP participation and exam performance when recommending whether to accept AP credit, because, for example, higher SAT scores and high school GPAs were also correlated with higher grades in target courses. This study did not include AP exam score as a predictor, and in many courses, there was little or no variation in scores between the two AP groups (e.g., all physics courses required exam scores of 5 to earn credit). However, in the three mathematics courses that had large numbers of students who chose to repeat the prerequisite course before moving on, students in the AP Only group did have slightly higher AP exam scores than students in the AP Course group. This finding could indicate that students made decisions about using AP credit at least in part based on their exam scores.

Second, in some instances, student decisions about the use of AP credit are restricted by institutional policies that may not allow them to retake courses for which they have already earned credit by exam. At the institution where we carried out our study, departmental policy required students who earned AP credit for a foreign language, and who wanted to study the language in college, to register for the next course in the sequence. This sort of policy could be driven by concerns about pedagogy or equity. From a pedagogical standpoint, students who earn AP credit for a course and choose to repeat it can be challenging to teach alongside students with much more limited prior knowledge of the material (National Research Council, 2002). Allowing students with AP credit to repeat introductory courses can raise equity concerns as well, particularly in courses with norm-referenced grading policies, which assign grades based on performance relative to other students enrolled in the course. In such a course, is it fair for students taking calculus for the first time, for example, to have to compete with students who have earned a 5 on the AP exam that covered the same content? Grades have far-reaching and immediate consequences for students (such as admission to competitive majors in the sophomore year, as at this institution), and the results of this study show there is little difference in subsequent course grades based on the use of AP credit. Therefore, academic and faculty advisors should consider whether allowing, or even encouraging, successful AP students to repeat credit is fair to those students who did not have the opportunity to earn AP credit in high school. Additionally, we hope the results of this study will inspire academic advisors to investigate similar questions at their own institutions to gain a deeper understanding of when and whether it makes sense for students to use or repeat AP credit.

One potential limitation of this study is that, as with any statistical model, there are layers of complexity in the data that could not be included but may be important when making decisions at the individual student level. For example, prior studies (Ackerman et al., 2013; Sadler & Tai, 2007) showed outcome differences depending on exam scores, whereas we used a binary measure of whether the student had earned AP credit based on the institution's policies. Future research could incorporate differences in AP exam scores and other variables that might be relevant in analyzing college student outcomes.

Additionally, conducting studies like this one at a single institution may limit the generalizability of the findings beyond comparably selective large public universities (Sadler & Tai, 2007). Given the limited investigation of the effect of AP credit use on course grades, our first goal was to identify trends within a specific institutional context and to provide a model for future investigations by addressing some limitations pointed out in the current literature base (Warne, 2017). Conducting similar research across multiple institutions may increase the validity and generalizability of the current findings and is a possible direction for future research. However, because the courses for which students earn credit, and the policies for granting AP credit, both vary across institutions (Ackerman et al., 2013), the effect of AP credit use on college success could be highly contextual; it may be that there is no common effect of using AP credit that could be meaningfully interpreted without a deep understanding of institutional context. We urge future researchers to consider key attributes of sampled students and courses when applying this study's results to other institutions, including the nature of the institution (large, public, selective), the institution's policy of requiring scores of 4 or higher to earn credit for most non-foreign-language courses, and the fact that only domestic students were included in the sample. Any multi-institutional study would likely have to account for similar differences in local context to operationalize variables consistently across universities. Perhaps the ideal outcome is that future studies are conducted at other individual institutions to explore whether the nuances of local policies lead to different or similar outcomes, with other future studies conducted across multiple institutions to identify possible common effects of using AP credit.
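For readers wishing to replicate this approach at their own institution, the core analysis can be sketched as a two-level model with students (level 1) nested in courses (level 2). The sketch below, written in Python with the statsmodels library, fits a random-intercept model to simulated data; all variable names (`used_ap`, `hs_gpa_c`, `grade`) are hypothetical stand-ins for illustration, not the study's actual variables or data.

```python
# Hypothetical sketch: two-level model (students nested in courses)
# predicting subsequent-course grade from AP-credit use and prior
# achievement. Data are simulated; variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_courses, n_per_course = 12, 60
n = n_courses * n_per_course

course = np.repeat(np.arange(n_courses), n_per_course)
course_effect = rng.normal(0, 0.3, n_courses)[course]  # level-2 variation
used_ap = rng.integers(0, 2, n)                        # 1 = used AP credit
hs_gpa = rng.normal(3.5, 0.3, n)

df = pd.DataFrame({
    "course": course,
    "used_ap": used_ap,
    "hs_gpa_c": hs_gpa - hs_gpa.mean(),                # grand-mean centered
})
# Simulate grades with a "true" AP-use effect of 0.3 grade points.
df["grade"] = (2.8 + 0.3 * df["used_ap"] + 0.5 * df["hs_gpa_c"]
               + course_effect + rng.normal(0, 0.5, n))

# A random intercept for each course captures between-course grade variation.
result = smf.mixedlm("grade ~ used_ap + hs_gpa_c", df,
                     groups=df["course"]).fit()
print(result.params["used_ap"])  # estimate of the AP-use effect
```

Because the random intercept absorbs between-course grade differences, the coefficient on AP-credit use reflects within-course comparisons among students with similar prior achievement, which is the comparison of interest when advising individual students.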

Furthermore, although we were not able to incorporate demographic variables into the final model, it is natural to wonder whether AP effects are the same across all demographic groups. While this study did not specifically examine the factors that influenced use of AP credit, it is notable that neither URM status nor first-generation status significantly predicted the use of AP credit, whereas female students and Pell Grant recipients were slightly more likely to use AP credit than their counterparts, after controlling for high school grades and SAT total scores. This suggests that academic advisors are likely not discouraging these students from using their AP credit. Future research could focus on narrower populations, such as underrepresented minority students, to see whether estimates of the effect of using AP credit are consistent with those found in this study.

Finally, in this study, we focused on student attributes, including AP credit use, and course characteristics to explain variation in grades across courses. Literature suggests students might choose to repeat AP credit to strengthen their mastery of material or earn higher grades (Burkholder & Wieman, 2019; Sadler & Sonnert, 2010). In fact, academic advisors may recommend that students repeat courses depending on students' future career plans or the advisor's knowledge of the student's curriculum (Troutman et al., 2018; Witkowsky et al., 2020). The factors in a student's decision to use or repeat AP credit, and how academic advisors' suggestions influence those decisions, are still unknown; a follow-up study in which researchers explore how advisors assist individual students' decision-making around AP would be a valuable contribution to the literature on AP and college success.

This study addressed substantial gaps in the literature on the Advanced Placement program by including students who earn but do not use AP credit and by including course-level differences in our model. As increasing numbers of students participate in the AP program and arrive at college with AP credit, it is important to understand how those students fare in college courses so faculty and professional advisors can provide helpful guidance and develop institutional policies informed by student outcome data. Overall, the findings of this study indicate that most of the time, students and academic advisors should feel confident that it is safe for students who earn AP credit to use it and move directly into subsequent courses.

Ackerman, P. L., Kanfer, R., & Calderwood, C. (2013). High school advanced placement and student performance in college: STEM majors, non-STEM majors, and gender differences. Teachers College Record, 115(10), 1–43.

Burkholder, E. W., & Wieman, C. E. (2019). What do AP physics courses teach and the AP physics exam measure? Physical Review Physics Education Research, 15(2), 020117-1–020117-11.

College Board. (n.d.). AP program participation and performance data 2019.

College Board. (n.d.). State and systemwide AP credit and placement policies.

Conley, D. T. (2007). Redefining college readiness. Educational Policy Improvement Center.

De Urquidi, K., Verdin, D., Hoffmann, S., & Ohland, M. W. (2015). Outcomes of accepting or declining Advanced Placement calculus credit. In 2015 IEEE Frontiers in Education Conference (FIE) (pp. 1–6). IEEE.

Drew, C. (2011, January 7). Rethinking advanced placement. The New York Times.

Evans, B. J. (2019). How college students use Advanced Placement credit. American Educational Research Journal, 56(3), 925–954.

Eykamp, P. W. (2006). Using data mining to explore which students use Advanced Placement to reduce time to degree. New Directions for Institutional Research, 2006(131), 83–99.

Hansen, K., Reeve, S., Gonzalez, J., Sudweeks, R. R., Hatch, G. L., Esplin, P., & Bradshaw, W. S. (2006). Are Advanced Placement English and first-year college composition equivalent? A comparison of outcomes in the writing of three groups of sophomore college students. Research in the Teaching of English, 40(4), 461–501.

Hedges, L. V., & Hedberg, E. C. (2007). Intraclass correlation values for planning group-randomized trials in education. Educational Evaluation and Policy Analysis, 29(1), 60–87.

Klopfenstein, K. (2010). Does the Advanced Placement program save taxpayers money? The effect of AP participation on time to college graduation. In P. M. Sadler, G. Sonnert, R. H. Tai, & K. Klopfenstein (Eds.), AP: A critical examination of the Advanced Placement program (pp. 189–218). Harvard Education Press.

Klopfenstein, K., & Thomas, M. K. (2009). The link between advanced placement experience and early college success. Southern Economic Journal, 75, 873–891.

Klopfenstein, K., & Thomas, M. K. (2010). Advanced Placement participation: Evaluating the policies of states and colleges. In P. M. Sadler, G. Sonnert, R. H. Tai, & K. Klopfenstein (Eds.), AP: A critical examination of the Advanced Placement program (pp. 167–188). Harvard Education Press.

Lacy, T. (2010). Access, rigor, and revenue in the history of the Advanced Placement program. In P. M. Sadler, G. Sonnert, R. H. Tai, & K. Klopfenstein (Eds.), AP: A critical examination of the Advanced Placement program (pp. 17–48). Harvard Education Press.

National Research Council. (2002). Learning and understanding: Improving advanced study of mathematics and science in US high schools. National Academies Press.

Patterson, B. F., & Ewing, M. (2013). Validating the use of AP® exam scores for college course placement (Research Report 2013-2). College Board.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Sage.

Raudenbush, S. W., Bryk, A. S., Cheong, Y. F., Congdon, R. T., & du Toit, M. (2011). HLM 7: Hierarchical linear and nonlinear modeling. Scientific Software International.

Sadler, P. (2010). Advanced Placement in a changing educational landscape. In P. M. Sadler, G. Sonnert, R. H. Tai, & K. Klopfenstein (Eds.), AP: A critical examination of the Advanced Placement program (pp. 3–16). Harvard Education Press.

Sadler, P., & Sonnert, G. (2010). High school Advanced Placement and success in college coursework in the sciences. In P. M. Sadler, G. Sonnert, R. H. Tai, & K. Klopfenstein (Eds.), AP: A critical examination of the Advanced Placement program (pp. 119–138). Harvard Education Press.

Sadler, P., & Sonnert, G. (2018). The path to college calculus: The impact of high school mathematics coursework. Journal for Research in Mathematics Education, 49(3), 292–329.

Sadler, P. M., & Tai, R. H. (2007). Advanced Placement exam scores as a predictor of performance in introductory college biology, chemistry and physics courses. Science Educator, 16(2), 1–19.

Troutman, D., Hendrix-Soto, A., Creusere, M., & Mayer, E. (2018). Dual credit and success in college. University of Texas System.

U.S. Department of Education. (2006). Advanced Placement test fee program. Guide to U.S. Department of Education Programs.

Wade, C., Sonnert, G., Sadler, P., Hazari, Z., & Watson, C. (2016). A comparison of mathematics teachers' and professors' views on secondary preparation for tertiary calculus. Journal of Mathematics Education at Teachers College, 7(1), 7–16.

Warne, R. T. (2017). Research on the academic benefits of the advanced placement program: Taking stock and looking forward. SAGE Open, 7(1), 1–16.

Witkowsky, P., Starkey, K., Clayton, G., Garnar, M., & Andersen, A. (2020). Promises and realities: Academic advisors' perspectives of dual enrollment credit. NACADA Journal, 40(2), 63–73.

Wladis, C., Conway, K., & Hachey, A. C. (2017). Using course-level factors as predictors of online course outcomes: A multi-level analysis at a US urban community college. Studies in Higher Education, 42(1), 184–200.

Author notes

Sheila F. Hurt recently completed her Ph.D. in Educational Psychology and Research Methodology in the College of Education at Purdue University. After more than ten years as an academic advisor for exploratory students and two years in advising administration, she now serves as the Senior Program Director for the Boiler Success Team, a student success initiative housed in the Office of the Provost at Purdue University. She can be reached at sfhurt@purdue.edu.

Yukiko Maeda, Ph.D., is an Associate Professor of Educational Psychology and Research Methodology in the College of Education at Purdue University. Her research promotes understanding, establishing, and disseminating best practices in the use of data in educational research. She has expertise in meta-analysis and multilevel modeling. Her recent research contributions include understanding data use in school settings for decision-making and identifying and presenting remedies for methodological issues in data analysis. She can be reached at ymaeda@purdue.edu.