ABSTRACT

During a look-back analysis, auditors review prior-period evidence to understand estimation inaccuracies and assess the reliability of management's estimation process. We find that evidence specificity moderates the relation between the consistency of an estimation inaccuracy with management's incentives and auditors' reliability assessments. The direction of an estimation inaccuracy has no effect on auditors' reliability assessments when the prior-period evidence is less specific. When prior-period evidence is more specific, auditors report the highest (lowest) reliability assessments of management's estimation process when an estimation inaccuracy is inconsistent (consistent) with management's incentives. Auditors' low reliability assessments in the more specific, consistent condition, however, do not translate to high risk assessments. Instead, specificity has a main effect on auditors' risk assessments. A follow-up experiment, however, reveals an inverse relation between auditors' reliability and risk assessments when auditors are provided with procedures to address various levels of assessed misstatement risk.

I. INTRODUCTION

Audit planning is a critical step in the audit process. When planning the audit of complex estimates and other significant accounting estimates with high estimation uncertainty, auditors can perform a look-back analysis in which they compare the assumptions underlying the prior-period estimate to the subsequent realizations of those assumptions (Public Company Accounting Oversight Board [PCAOB] 2002, Auditing Standard [AS] 2502.27; PCAOB 2001, AS 2501.06e). Information gleaned through a look-back analysis is meant to aid auditors in their assessment of the reliability of management's estimation process and, in turn, their audit planning for the current period. This study examines how auditors' review of this prior-period audit evidence influences the current-period audit. Specifically, we investigate how the specificity of the prior-period evidence and the direction of an estimation inaccuracy identified during the review of this evidence affect auditors' perceptions of the reliability of management's estimation process and their subsequent risk assessments for the current audit.1

Audit standards prescribe a look-back analysis of prior-period assumptions related to complex estimates and other significant accounting estimates (e.g., warranty obligations, inventory obsolescence, estimates related to litigation) that were identified in the prior period as having high estimation uncertainty (PCAOB 2002, AS 2502.27; PCAOB 2001, AS 2501.06e; American Institute of Certified Public Accountants [AICPA] 2017a, AU-C Sec. 540 ¶9, ¶¶A38–A44; AICPA 2017b, AU-C Sec. 240 ¶32.b.ii). During a look-back analysis, auditors obtain new information regarding the accuracy of management's prior-period forward-looking assumptions. Armed with this new information, auditors can determine both the magnitude of an identified estimation inaccuracy and whether that estimation inaccuracy is directionally consistent with management's incentives. Auditors then reconsider the prior-period evidence provided by management to support these assumptions in light of this new information. Thus, this study strives to increase our understanding of how the evidence examined during a look-back analysis affects auditors' planning judgments. Further, as put forth in the auditing standards, the insight auditors gain into the reliability of management's estimation process through the look-back analysis should inform their risk assessments (PCAOB 2002, AS 2502.13; PCAOB 2001, AS 2501.05). As such, another purpose of this study is to examine the extent to which auditors' reliability assessments relate to their risk assessments and comply with the guidance provided in the auditing standards.

When planning and conducting an audit of complex estimates, auditors should consider the expertise and experience of individuals determining those estimates (PCAOB 2002, AS 2502.12; PCAOB 2001, AS 2501.06c). Additionally, they should consider the potential for bias in those estimates (PCAOB 2010c, AS 2810.27). These notions of expertise and potential for bias link closely to determinants of source credibility (i.e., competence and trustworthiness) examined in the persuasion literature (Berlo, Lemert, and Mertz 1969; Giffin 1967; Schweitzer and Ginsburg 1966; McCroskey 1966; Mercer 2005). The persuasion literature finds that perceived message quality has a positive effect on a message recipient's perceptions of the source's expertise (Slater and Rouner 1996).2 Greater specificity of detail is one factor that increases the perceived quality of a message (Rosenthal 1971; Nisbett and Ross 1980). This suggests that auditors' assessments of management's competence in deriving a complex estimate will increase with the specificity of the prior-period evidence supporting a given assumption. However, prior marketing (e.g., Friestad and Wright 1994) and accounting (e.g., Kadous, Koonce, and Towry 2005) research suggests that auditors are more likely to view management's use of specific detailed information in its support of a key assumption as a persuasion tactic if the estimation inaccuracy is consistent with management's incentives. Given the potential for bias in these estimates (PCAOB 2010c, AS 2810.27), we expect that auditors' assessments will be negatively impacted by any perceived persuasion attempt.

We predict that auditors' assessments of the reliability of management's estimation process will depend on an interaction between the specificity of prior-period evidence and the estimation inaccuracy's consistency with management's incentives. In particular, we predict that auditors will assess management's estimation process as the most reliable when the prior-period evidence is more specific and the estimation inaccuracy is inconsistent with management's incentives. However, when the prior-period evidence is more specific and the estimation inaccuracy is consistent with management's incentives, we predict that auditors will assess management's estimation process as the least reliable. Following Kadous et al. (2005), we do not predict that incentive-related information will affect auditors' reliability assessments when the prior-period evidence is less specific.

With respect to risk assessments, prior research finds that individuals, uncomfortable with uncertainty in a decision-making scenario, often seek to reduce doubt by conducting additional testing or seeking additional information (Dawes 1988; Lipshitz and Strauss 1997). Auditors are no exception. Cannon and Bedard (2017) find that auditors increase their risk assessments of and, thus, their planned audit work for complex estimates as estimation uncertainty increases. Given that estimation uncertainty and the reliability of the estimation process are inversely related, we predict that higher (lower) reliability assessments will lead to a lower (higher) assessed risk of material misstatement of the current-period estimate.

To test our predictions, we conducted an experiment with practicing audit seniors in which they assumed the role of an audit senior responsible for planning the audit of an impairment analysis of a trademark. Auditor participants were given a look-back analysis related to the trademark's estimated fair value. All were given year-to-date information related to the realization of the forward-looking assumptions one period hence and a sensitivity analysis of the impact of any prior-period estimation inaccuracy on the trademark's estimated fair value. Using a 2 × 2 full-factorial design, we manipulated the specificity of the prior-period evidence and the direction of the estimation inaccuracy revealed ex post. Half of the participants reviewed prior-period workpapers containing less specific evidence provided by management describing the company's plans to achieve its revenue growth rate assumption. The remaining participants reviewed more specific evidence that provided additional details regarding the company's plans, including a description of research that supported the feasibility of the company's approach to growing revenue. We manipulated the direction of the estimation inaccuracy at two levels. Half of the participants were told of an estimation inaccuracy that overstated the asset's fair value. We define this overstatement as being consistent with management's incentives at the time of the original estimate. For the remaining participants, the estimation inaccuracy led to an understated fair value, which was inconsistent with management's incentives at the time of the original estimate. The absolute value of the estimation inaccuracy was held constant across conditions.

As expected, we find that evidence specificity moderates the relation between the direction of the estimation inaccuracy and auditors' reliability assessments. The direction of the estimation inaccuracy has no effect on auditors' reliability assessments when the prior-period evidence is less specific. When prior-period evidence is more specific, though, auditors report the highest (lowest) reliability assessments of management's estimation process when the estimation inaccuracy is inconsistent (consistent) with management's incentives. However, we do not find support for our prediction that auditors' risk assessments follow directly from their reliability assessments. Instead, auditors' risk assessments are driven by the specificity of the prior-period evidence. Further analysis reveals that this occurs because auditors who provide the lowest reliability assessments (i.e., auditors in the more specific, consistent condition) do not provide the highest risk assessments.

A potential reason for low reliability assessments not mapping to high risk assessments could relate to the nature of uncertainty in deriving complex estimates. Some estimation processes are inherently unreliable (e.g., complex valuation methods, lack of observable inputs). For estimates with high estimation uncertainty, auditors may deem a large range of estimates to be reasonable (Bell and Griffin 2012). It can be difficult for auditors to challenge an estimate that falls within this reasonable range. Consequently, the risk of a material misstatement in a given estimate may actually decrease as the range of reasonable estimates increases, even though the reliability of the estimation process decreases as estimation uncertainty increases.

Another reason for the significant main effect of specificity on auditors' risk assessments could be auditors' insufficient valuation knowledge and resulting reluctance to challenge management's assumptions (Griffith, Hammersley, and Kadous 2015). More specific evidence likely exacerbates auditors' overreliance on management's assertions, given our finding that auditors' perceptions of management's competence increase with the specificity of management-provided evidence. To investigate this further, we conducted a second experiment in which auditor participants were given the more specific, consistent condition materials from the first experiment. Prior to providing their risk assessments, half of the participants were also provided with a list of procedures typically used to address both a high and a moderate risk assessment. The remaining participants received no procedural information. In the absence of procedural information, we observe an inverted-U relation between auditors' reliability and risk assessments. However, providing auditors with procedural information leads to a more inverse relation between reliability and risk assessments. Thus, providing information about how the audit could be adapted to address various risk assessments leads auditors to assess an estimate as higher risk when management's estimation process is deemed to have less reliability.

This study makes a significant contribution to the literature and practice. First, we investigate auditors' planning for an area of the audit that is difficult. Auditors attribute much of the difficulty in auditing complex estimates to the subjectivity and inherent uncertainty in the forward-looking assumptions underlying a complex estimate (Cannon and Bedard 2017). Given that appropriate audit planning is crucial for an effective audit (Mock and Wright 1993), it is important to understand how auditors facing this difficulty plan the audits of complex estimates. Interestingly, we find that lower reliability assessments of an estimation process do not always translate into higher risk assessments when planning the audit. Rather, auditors' risk assessments appear to increase as assessed reliability falls to moderate levels, then level out or decline as reliability decreases further with greater estimation uncertainty. This finding could relate to auditors' lack of valuation expertise leading to a reluctance to challenge management's estimation process (Griffith et al. 2015), especially as auditors' perception of management's competence increases. A complementary explanation is the difficulty of arguing against a single point estimate that falls within a large range of estimates deemed to be reasonable, given high estimation uncertainty.

Second, the look-back analysis provides a unique setting in which to investigate the manner in which auditors' review of prior-period evidence influences the current-period audit. Prior research demonstrates that auditors tend to over-rely on prior-period workpapers and conclusions at the expense of considering new evidence during the substantive testing phase of the audit (e.g., Joyce and Biddle 1981; Brazel, Agoglia, and Hatfield 2004). Still, we know very little about how prior-period evidence affects the planning phase of an audit. Our study provides new insight into this area. In particular, we find that greater specificity of prior-period evidence reexamined during this phase decreases auditors' risk assessments regardless of whether the new evidence obtained reveals an estimation inaccuracy that is consistent or inconsistent with management's incentives.3 Thus, there is the potential that management could strategically increase the specificity of the evidence provided to support an estimate in a given year to hedge against auditors' perceptions of estimation risk one period hence. We find, however, that enhancing auditors' understanding as to how the audit could be tailored to address various risk assessments leads to a more inverse relation between auditors' perceptions of the reliability of management's estimation process and their risk assessments.

Last, our study is one of the first to investigate the look-back analysis process. Audit standards require the use of look-back analyses for any significant accounting estimate, including complex estimates, where auditors identified a high estimation uncertainty in the prior-period audit (PCAOB 2002, AS 2502.27; PCAOB 2001, AS 2501.06e). Standards also require a look-back analysis of management assumptions when auditors are reviewing accounting estimates for bias that could indicate a risk of material misstatement due to fraud (AICPA 2017b, AU-C Sec. 240 ¶32.b.ii). Our findings inform researchers, standard setters, and practitioners about how evidence examined during a look-back analysis may affect audit planning for estimates with high estimation uncertainty, as well as auditors' consideration of fraud in a financial statement audit.

II. BACKGROUND, THEORY, AND HYPOTHESES DEVELOPMENT

When identifying and assessing the risk of material misstatement for accounting estimates, auditors should conduct a look-back analysis to obtain an understanding of management's estimation process and the data on which the estimate is based (AICPA 2017a, AU-C Sec. 540 ¶9, ¶¶A38–A44). In conducting a look-back analysis of a complex estimate, auditors compare the underlying assumptions of a fair value estimate made in the prior period to their subsequently realized values in the current period. Auditors then review the prior-period evidence provided by management to support those assumptions to more closely evaluate management's prior-period estimation process. This provides auditors with insight into the reliability of management's estimation process underlying the current-period estimate (PCAOB 2002, AS 2502.27; PCAOB 2001, AS 2501.06e). Auditors use these reliability assessments when assessing the risk that the current-period estimate is materially misstated (PCAOB 2002, AS 2502.13). Below, we develop and test theoretically motivated hypotheses regarding how the specificity of prior-period evidence supporting a key assumption and the accuracy of that assumption realized one period hence affect auditors' assessments of the reliability of management's estimation process and the risk of material misstatement in the current-period estimate.

Assessed Reliability of Management's Estimation Process

In the case of complex estimates, the reliability of management's estimation process is a key determinant of auditors' risk assessments and their planned responses to those risks (PCAOB 2002, AS 2502.13; PCAOB 2001, AS 2501.05). The standards note that auditors should consider the source's credibility when assessing the reliability of the estimation process (PCAOB 2002, AS 2502.12; PCAOB 2001, AS 2501.06c). Data regarding the credibility of a source can come from prior knowledge of the source, the source's credentials, or source-provided information (Slater and Rouner 1996). In the look-back analysis setting, auditors can gain information about management's credibility in relation to the estimation process from their reexamination of the prior-period evidence. Thus, we theorize about how a characteristic of this prior-period evidence interacts with information known in the current period to affect auditors' perceptions of management's estimation process.4

First, we consider the specificity of the prior-period evidence. There is research in both the persuasion and managerial accounting literatures indicating that message specificity can increase perceptions of the source's credibility. Slater and Rouner (1996) find that message quality increases the perceived expertise of the source, leading to increased message persuasion. The perceived quality of the message increases with greater detail included in the message, ceteris paribus (Rosenthal 1971; Nisbett and Ross 1980). In a managerial setting, Kadous et al. (2005) show that including quantified information, a form of specificity, in a project proposal increases the recipient's perceptions of the competence of the proposal preparer. Taken together, these findings suggest that in a look-back analysis setting, auditors' assessments of management's competence will increase with greater specificity of the prior-period evidence. However, we posit that under some conditions, it is unlikely that more specific evidence will translate to more positive assessments of management's estimation process, such as when auditors obtain new incentive-related information during the look-back analysis that makes them believe strategic persuasion is a main purpose of more specific evidence.

After comparing the underlying assumptions determined in the prior period to their subsequently realized values in the current period, auditors can determine whether an estimation inaccuracy is directionally consistent or inconsistent with management's incentives, which, ordinarily, are to overstate assets. Anderson, Kadous, and Koonce (2004) and Kadous et al. (2005) both examine how specificity, in the form of quantification, interacts with a source's incentives in preparing a message to affect a recipient's evaluations of the source's message. In an auditing setting, Anderson et al. (2004) examine the persuasiveness of management's explanation for an unexpected fluctuation in realized revenue in the presence of high or low incentives to manage earnings. In this scenario, the primary purpose of management's explanation is to justify a non-misstatement reason for the revenue-increasing fluctuation. Consequently, there is likely a significant role for management's incentives to play in influencing auditors' perceptions of management's explanation, regardless of whether the explanation is quantified. Consistent with this logic, Anderson et al. (2004) find that management's incentives, as opposed to quantification, are the only factor that reliably affects auditor-recipients' evaluations of the message.

In a managerial setting, however, Kadous et al. (2005) find that the specificity (i.e., quantification) of a proposal interacts with the preparer's incentives to jointly influence manager-recipients' perceptions of the preparer and the persuasiveness of the proposal.5 More specifically, they predict and find that management is most likely to accept a proposal when the information is quantified and the preparer's incentives are consistent with the firm's objectives. However, following Friestad and Wright (1994), Kadous et al. (2005) also posit and find that quantified information is more likely to be viewed as a persuasion attempt when the preparer's incentives are inconsistent with the firm's objectives. This perceived persuasion attempt, in turn, motivates management to critically analyze the message for disconfirming evidence. Consequently, Kadous et al. (2005) find that a perceived persuasion attempt negates the positive effect of message specificity on the proposal's persuasiveness.

In a look-back analysis setting, the evidence in question is management's rationale for its estimate of a future outcome. Management's support for its forward-looking assumptions can vary in terms of the amount of reason-based qualitatively and quantitatively rich evidence that is included. Further, the amount and direction of a difference in assumptions used and realized outcomes is unknown at the time this evidence is provided to the auditor. Thus, we argue that the evidence evaluated by participants in our setting is more analogous to the evidence examined in Kadous et al. (2005), rather than that examined in Anderson et al. (2004).6

During a look-back analysis, it may be more difficult for auditors to determine whether management was trying to justify an aggressive assumption when there are fewer details provided to support management's expectations. As such, consistent with Kadous et al. (2005), we do not expect that auditors' perceptions of management's estimation process necessarily will be affected by the incentive-related information obtained in the current period when the prior-period evidence is less specific. However, when the prior-period evidence is more specific, we expect incentive-related information to influence auditors' reliability assessments. On one hand, greater specificity can provide additional insight into management's reasoning, leading auditors to understand and accept management's estimate as reasonable. On the other hand, more specific evidence can be viewed as a persuasion attempt, particularly when that evidence is used to support an estimate that is consistent with management's incentives (Kadous et al. 2005). Therefore, holding the magnitude of the estimation inaccuracy constant, consistent with Kadous et al. (2005), we expect that auditors' perceptions of management competence will increase with the specificity of the prior-period evidence. However, we also expect that auditors will interpret increased specificity as an attempt by management to persuade them to accept an aggressive assumption when a prior-period estimation inaccuracy is consistent with management's incentives. Given that auditors must consider the potential for bias in these estimates (PCAOB 2010c, AS 2810.27), we expect that auditors' judgments will be negatively impacted by any perceived persuasion attempt. Consequently, we predict that auditors' reliability assessments will be lowest when the prior-period evidence is more specific and the prior-period forward-looking assumption is found to be inaccurate in a direction that overstates the fair value estimate. However, in the absence of a perceived persuasion attempt, we predict the highest reliability assessment when prior-period evidence is more specific and the prior-period forward-looking assumption is inaccurate in a direction that understates the fair value estimate. Stated formally:

H1:

Auditors' assessments of the reliability of management's estimation process will be highest (lowest) when prior-period evidence is more specific and the direction of the prior-period estimation inaccuracy is inconsistent (consistent) with management's incentives. When the evidence is less specific, auditors' reliability assessments will fall in the middle and will not differ based on the incentive-related information.

Auditors' Assessments of Misstatement Risk

When planning their audit, auditors follow a risk-based audit approach whereby they identify the risks of material misstatement for each significant account (PCAOB 2010a, AS 2110.59) and design an audit to specifically address those risks (PCAOB 2010b, AS 2301.08). The auditing standards stress that both the reliability of the estimation process (PCAOB 2002, AS 2502.24) and the risk of material misstatement associated with a specific estimate (PCAOB 2001, AS 2501.05) vary with the complexity and subjectivity involved in the estimation process. While auditors are to consider their assessments of the reliability of the estimation process when assessing the risk of misstatement (PCAOB 2002, AS 2502.13), the standards are somewhat vague as to what the relationship between auditors' reliability and risk assessments should look like.

Per the auditing standards, estimation uncertainty need not have opposing effects on auditors' reliability and risk assessments (i.e., it need not simultaneously lower reliability assessments and raise risk assessments). Estimation uncertainty, or lack of measurement precision, varies with the nature of the estimate, the estimation model, and the subjectivity involved in the estimation process (AICPA 2017a, AU-C 540.A4).7 For estimates with high estimation uncertainty, auditors may deem a large range of estimates to be reasonable (Bell and Griffin 2012). Consequently, estimation uncertainty decreases the reliability of the estimation process (PCAOB 2002, AS 2502.24). However, as long as management's estimate falls within that reasonable range, it is difficult to argue that it is unreasonable. Indeed, standards explain that “a difference between the outcome of an accounting estimate and the amount originally recognized or disclosed in the financial statements does not necessarily represent a misstatement of the financial statements; rather it could be an outcome of estimation uncertainty” (AICPA 2017a, AU-C 540.04). Thus, the risk of a material misstatement in a given estimate may decrease as the range of reasonable estimates (i.e., estimation uncertainty) increases. Following this logic, the relationship between auditors' reliability and risk assessments may not be inverse in the presence of estimation uncertainty.8

Psychology research indicates, however, that individuals, uncomfortable with uncertainty, tend to take action to reduce the uncertainty prior to decision making (Lipshitz and Strauss 1997). This can include conducting additional testing or seeking additional information (Dawes 1988). Cannon and Bedard (2017) find that auditors are no exception. In particular, they find that auditors' risk assessments are positively associated with estimation uncertainty, even for estimates with high estimation uncertainty. Given the positive relationship between risk assessments and planned audit work, this evidence suggests that auditors will cope with uncertainty in complex estimates by planning to gather more extensive substantive evidence.9

Discussions with practicing audit partners suggest that the relation between estimation uncertainty and auditors' risk assessments likely depends on the standardization of the estimation process. For estimates with a more standardized, less complex estimation process (e.g., bad debt expense), audit partners note that high estimation uncertainty does not necessarily equate to a significant risk. As a result, in these instances, auditors may not perceive there to be an inverse relationship between auditors' reliability and risk assessments. However, for those estimates lacking a generally accepted estimation method or model (i.e., most complex estimates), audit partners indicate that they are more likely to respond to higher levels of estimation uncertainty with higher assessments of misstatement risk. Such a response is consistent with the related findings in both the psychology and auditing literatures (Cannon and Bedard 2017; Lipshitz and Strauss 1997). Thus, while the auditing standards are vague as to the effect of estimation uncertainty on auditors' risk assessments, the psychology and auditing literatures present corroborative findings that the audit partners' observations also support. Consequently, we rely on this consistent evidence from audit partners and the prior literature to predict an inverse relation between auditors' reliability and risk assessments.

The look-back analysis provides useful evidence regarding the reliability of management's estimation process and, in turn, the estimation uncertainty in future estimates derived from the same process. Consistent with prior psychology and accounting research, we expect auditors to take action to address this uncertainty in the form of heightened risk assessments. Therefore, we expect that when planning the audit of a complex estimate, auditors will perceive a more (less) reliable estimation process as indicative of a lower (higher) risk of material misstatement. Consequently, we predict that the effect of prior-period evidence specificity and the direction of a subsequently revealed estimation inaccuracy on auditors' reliability assessments will flow through to auditors' assessments of the risk of a material misstatement in the current-period estimate. Stated formally:

H2:

Auditors' assessments of the risk of material misstatement will be lowest (highest) when prior-period evidence is more specific and the direction of the prior-period estimation inaccuracy is inconsistent (consistent) with management's incentives. When the evidence is less specific, auditors' risk assessments will fall in the middle and will not differ based on the incentive-related information.

It should be noted that there is considerable a priori tension in this hypothesis because an estimate derived from a less reliable process will not necessarily warrant a higher assessed misstatement risk. The reason for this is that some estimation processes are inherently unreliable. Thus, it is difficult for auditors to challenge the reasonableness of an estimate derived from a process that by its very nature has little to no potential to be reliable. If management's estimate is not unreasonable, then the auditing standards stipulate that no misstatement exists regardless of the subsequent realization. By contrast, if the estimation process has the potential to be reliable, but is not, due to factors such as lack of management competence, then unreliability should increase the assessed risk of misstatement. Following this logic, estimation uncertainty may be a boundary condition on the predicted inverse relationship between auditors' reliability and risk assessments.

III. RESEARCH METHOD

Participants

We test our hypotheses using a 2 × 2 full-factorial design. Practicing auditors were recruited in coordination with the Center for Audit Quality's Access to Audit Personnel Program from four Big 4 firms and four non-Big 4 firms. Consistent with prior research examining audit planning decisions (e.g., Houston 1999; Low 2004), we enlisted the participation of experienced audit seniors.10 Of the 117 auditors who completed the case, we excluded six participants due to lack of experience planning an audit. The remaining 111 participants averaged 49 months of audit experience, with 86 percent having experience auditing fair value estimates and 58 percent reporting experience auditing Level 3 assets and liabilities.

Procedure and Design

We distributed our experiment to auditor participants with the assistance of the Center for Audit Quality. Using an online instrument administered by Qualtrics, participants were asked to complete the case study in a single session. All participants assumed the role of an audit senior on the hypothetical audit of Vermont Cheeses, a company in the specialty food industry. Participants were responsible for planning the audit of the current period's impairment analysis of the Chatsworth Cheddar trademark. The case included the accounting standard that governs the accounting for trademarks and the applicable guidance for auditing estimates. We also provided participants with management's valuation model and the assumptions used to derive the fair value estimate of the trademark.11 As described in the case, the total royalties (i.e., cash flows) Vermont Cheeses receives related to the Chatsworth Cheddar trademark in any given year are driven by two factors: the number of locations selling the trademarked cheese and the revenues per location. Therefore, the key assumptions in management's discounted cash flow model are (1) the growth rate of stores selling the trademarked cheese, (2) the per-store revenue growth rate, and (3) the discount rate.
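The valuation mechanics described above can be sketched as a simple discounted cash flow over a finite horizon. The sketch below is illustrative only: the royalty rate, horizon, and all numeric inputs are hypothetical assumptions, not figures from the case materials.

```python
# Hypothetical sketch of the case's discounted cash flow logic: annual
# royalties depend on store count and per-store revenue, each growing at an
# assumed rate, discounted at an assumed rate. All inputs are illustrative.

def trademark_fair_value(stores, revenue_per_store, royalty_rate,
                         store_growth, revenue_growth, discount_rate,
                         years=10):
    """Present value of projected royalty cash flows over `years` periods."""
    value = 0.0
    for t in range(1, years + 1):
        projected_stores = stores * (1 + store_growth) ** t
        projected_revenue = revenue_per_store * (1 + revenue_growth) ** t
        royalties = projected_stores * projected_revenue * royalty_rate
        value += royalties / (1 + discount_rate) ** t
    return value

# A 1 percentage-point overstatement of the per-store revenue growth rate
# (0.05 versus 0.04) inflates the fair value estimate, holding all else fixed.
base = trademark_fair_value(100, 500_000.0, 0.05, 0.03, 0.04, 0.10)
inflated = trademark_fair_value(100, 500_000.0, 0.05, 0.03, 0.05, 0.10)
```

Comparing `base` and `inflated` illustrates why the direction of an inaccuracy in the growth rate assumption maps directly onto management's incentive to avoid an impairment.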

After participants reviewed the valuation model, they moved on to the look-back analysis. All participants received the prior year's workpapers containing the evidence provided as support for each assumption underlying the trademark's estimated fair value at that time, as well as actual year-to-date information for each of the three key assumptions. Only the actual year-to-date per-store revenue growth differed significantly from the assumption underlying the prior-period estimate. All participants received an explanation from management for this discrepancy that included information regarding a difference in the actual timing of both the new branding initiative and the price increase compared to original expectations. Participants also received information about an unexpected price change implemented by Chatsworth Cheddar's main competitor. In other words, all participants were provided with information that could lead them to attribute the inaccuracy to a deficiency in management's estimation process, an exogenous environmental shock, or some combination of both.12 Finally, all participants were provided with a sensitivity analysis detailing the effect of the estimation inaccuracy in the prior-period assumption on the trademark's estimated fair value.

We manipulated our two independent variables within the evidence related to the per-store revenue growth rate assumption. We manipulated our first independent variable, specificity of prior-period evidence, at two levels.13 Participants in our less specific condition reviewed evidence provided by management in the prior period that described the company's plans to achieve the average industry per-store revenue growth rate by increasing sales prices and customer demand. The other half of our participants reviewed more specific prior-period evidence that also included a discussion of research supporting the viability of the planned price increase, as well as details on the company's efforts to increase customer demand. See Appendix A for the manipulation of prior-period evidence specificity included in the case materials.

We also manipulated the direction of the estimation inaccuracy at two levels. Determination of whether the trademark is impaired hinges on the estimate of the fair value of the Chatsworth Cheddar trademark as it compares to its book value. In this impairment analysis setting, management ordinarily has an incentive to overstate the fair value of the trademark to avoid uncertainty regarding the impairment decision. Consequently, management should prefer to estimate a fair value that is far greater than (versus close to) the asset's book value. Thus, an estimation inaccuracy in a prior-period assumption that leads to a larger fair value estimate of the trademark can be characterized as consistent with management's incentives, and vice versa. As such, participants in the consistent (inconsistent) condition reviewed a look-back analysis that revealed that the per-store revenue growth rate was overstated (understated) by 1 percent.14 See Appendix B for the manipulation of estimation inaccuracy direction included in the look-back analysis workpaper.

After completing their review of the look-back analysis, the participants provided their assessments of the reliability of management's estimation process and the risk of material misstatement in the current-period estimate. We captured participants' assessments of the reliability of management's estimation process on a scale from 0 (not at all reliable) to 10 (completely reliable). Participants also assessed the risk of material misstatement in the current-period estimate on a scale from 0 (extremely low) to 10 (extremely high).

During the final part of the experiment, all participants answered post-experimental and demographic questions. We asked our participants to assess management's competence and trustworthiness, as well as to provide their assessments of the support provided by the prior-period evidence for the forward-looking assumption made by management in the prior period. These measures were intentionally captured after the reliability assessment and risk assessment to avoid priming participants to think about factors they may not normally consider when making these judgments.

IV. RESULTS

Comprehension Checks

To verify that participants understood how the key assumptions affected the fair value estimate, we asked them to indicate how an increase in each assumption would change the overall fair value of the trademark. Ninety-one percent of participants passed all three comprehension check questions. We include all participants in the analyses described below; however, the results of our hypothesis tests are qualitatively similar if we exclude those who failed the comprehension check questions.

Test of Hypotheses

Reliability of Management's Estimation Process

We test our hypothesis related to auditors' assessments of the reliability of management's estimation process using auditors' reliability assessments as the dependent variable, and the specificity of prior-period evidence and the direction of the estimation inaccuracy as the independent variables. We find that auditors from non-Big 4 firms assess management's estimation process as more reliable than auditors employed at Big 4 firms. However, the size of the audit firm in which participants are employed does not interact with either of our independent variables (all two-tailed p ≥ 0.458). Therefore, we control for auditors' employment in our ANCOVA model.15 Panel A of Table 1 provides the cell means and standard deviations of the dependent variable, Panel B details the ANCOVA model, Panel C provides the results of the planned contrasts, and Panel D presents simple main effects.

TABLE 1

Auditors' Assessments of the Reliability of Management's Estimation Process


H1 predicts that auditors for whom prior-period evidence is more specific and an estimation inaccuracy realized in the current period is inconsistent (consistent) with management's incentives will provide the highest (lowest) reliability assessments of management's estimation process. Additionally, H1 predicts that the reliability assessments of auditors reviewing less specific evidence will fall in the middle, but will not differ based on the direction of the estimation inaccuracy. In other words, we expect evidence specificity to moderate the relation between the consistency of an estimation inaccuracy with management's incentives and auditors' assessments of the reliability of management's estimation process.

We formally test our hypothesis using a linear contrast of cell means, assigning the more specific, inconsistent condition a contrast weight of +1, the more specific, consistent condition a contrast weight of −1, and the two less specific conditions contrast weights of zero (Buckless and Ravenscroft 1990).16 As reported in Table 1, Panel C, the observed pattern of reliability assessments is consistent with the predicted pattern (t106 = 1.92, one-tailed p = 0.029). In particular, in Panel A, auditors in the more specific, inconsistent condition provide the highest reliability assessments (mean = 7.73), while auditors in the more specific, consistent condition provide the lowest (mean = 7.17). The reliability assessments of auditors in the less specific conditions fall in the middle (means = 7.37 and 7.62 for the inconsistent and consistent conditions, respectively) and do not differ with the direction of the estimation inaccuracy (t106 = 0.77, two-tailed p = 0.441; Panel D). Further, the between-cells variance (i.e., residual) not captured by the planned contrast is insignificant (two-tailed p = 0.961), indicating that the contrast is a good fit. Our contrast explanation factor (k = 5.944) indicates that approximately six times more of the variance is attributable to the contrast than to other sources. Finally, specificity is a significant moderator of the relationship between the consistency of the estimation inaccuracy with management's incentives and auditors' reliability assessments (b = −0.93, SE = 0.47, one-tailed p = 0.030) after controlling for Big 4 employment.17 Thus, H1 is supported.18,19
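The planned-contrast statistic can be computed directly from cell means and the pooled within-cell variance. The sketch below uses simulated ratings (not the study's data) with the paper's weights of +1, −1, 0, 0; the cell sizes are hypothetical.

```python
import numpy as np

# Planned linear contrast across four cells (Buckless and Ravenscroft 1990).
# Simulated reliability ratings; weights are +1 (more specific/inconsistent),
# -1 (more specific/consistent), and 0 for both less specific cells.
rng = np.random.default_rng(0)
cells = {
    "spec_incon": rng.normal(7.7, 1.0, 28),
    "spec_con":   rng.normal(7.2, 1.0, 28),
    "less_incon": rng.normal(7.4, 1.0, 28),
    "less_con":   rng.normal(7.6, 1.0, 27),
}
weights = {"spec_incon": 1, "spec_con": -1, "less_incon": 0, "less_con": 0}

# Contrast value: weighted sum of cell means.
psi = sum(weights[c] * cells[c].mean() for c in cells)

# Pooled within-cell variance (MSE) on n_total - 4 degrees of freedom.
n_total = sum(len(v) for v in cells.values())
mse = sum(((v - v.mean()) ** 2).sum() for v in cells.values()) / (n_total - 4)

# Standard error of the contrast, then the t statistic.
se = np.sqrt(mse * sum(weights[c] ** 2 / len(cells[c]) for c in cells))
t_stat = psi / se
```

With these weights, a positive `t_stat` indicates the predicted ordering (highest reliability in the more specific, inconsistent cell; lowest in the more specific, consistent cell).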

We asked several post-experimental questions to glean further insight into the cognitive process underlying H1. These questions measured auditors' perceptions of management's credibility, the prior-period evidence, and the prior-period assumption. More specifically, we asked auditors to assess management's competence (0 = Not at all competent; 10 = Completely competent) and trustworthiness (0 = Not at all trustworthy; 10 = Completely trustworthy). Auditors also evaluated the amount of support provided by management for the prior-period assumption (0 = No support; 10 = Complete support) and rated the reasonableness of that prior-period assumption (0 = Not at all reasonable; 10 = Completely reasonable).

As predicted, untabulated results indicate that auditors perceive management competence as higher when prior-period evidence is more (versus less) specific (t106 = 1.66, one-tailed p = 0.022). However, we find no difference in auditors' assessments of management trustworthiness across conditions. Results do indicate, though, that auditors in the more specific, consistent condition deem the prior-period evidence to provide less support for management's revenue growth rate assumption (t106 = 1.65, one-tailed p = 0.053) and the assumption to be significantly less reasonable (t106 = 2.65, one-tailed p = 0.005), as compared to the more specific, inconsistent condition.20

When the prior-period evidence is less specific, auditors' perceptions of evidential support for the prior-period revenue growth rate (t106 = 0.56, two-tailed p = 0.458) and their reasonableness assessments of that assumption (t106 = 0.63, two-tailed p = 0.532) do not differ with the direction of the estimation inaccuracy. Further, auditors in the less specific conditions believe there to be moderately less support for the growth rate assumption (t106 = 1.58, one-tailed p = 0.059) and that assumption to be less reasonable (t106 = 1.65, one-tailed p = 0.051), compared to auditors in the more specific, inconsistent condition. Thus, consistent with our theory, auditors in the more specific, inconsistent condition appear to have the most positive perceptions of the management-provided prior-period evidence and growth rate assumption. On the other hand, when the prior-period evidence is less specific, auditors' perceptions of the support for the growth rate assumption in question do not differ from those auditors in the more specific, consistent condition (t106 = 0.31, two-tailed p = 0.755). Yet auditors in the more specific, consistent condition assess the prior-period revenue growth rate assumption as moderately less reasonable (t106 = 1.43, one-tailed p = 0.078), compared to auditors in the less specific conditions.

Risk of Material Misstatement in the Current-Period Audit

We predict in H2 that the effect of evidence specificity and the direction of the prior-period estimation inaccuracy on auditors' reliability assessments will flow through to their risk assessments. Therefore, we expect that auditors' assessments of the risk of material misstatement will be lowest (highest) when prior-period evidence is more specific and the direction of the prior-period estimation inaccuracy is inconsistent (consistent) with management's incentives. To test this, we run an ANCOVA model with auditors' risk assessments as the dependent variable, the specificity of prior-period evidence and the direction of the estimation inaccuracy as the independent variables, and Big 4 employment as our covariate. Panel A of Table 2 provides the cell means and standard deviations of the dependent variable, Panel B details the ANCOVA model, Panel C provides the results of the planned contrast, and Panel D presents simple main effects.

TABLE 2

Auditors' Assessments of the Risk of Material Misstatement


We test our second hypothesis using a linear contrast of cell means (Buckless and Ravenscroft 1990), assigning a contrast weight of −1 to the more specific, inconsistent condition, +1 to the more specific, consistent condition, and zero to the two less specific conditions. The means presented in Table 2, Panel A are not consistent with our expectations, and the planned contrast reported in Panel C does not support the hypothesis (t106 = 0.48; no one-tailed p-value is meaningful because the direction of the means is opposite to prediction). Thus, H2 is not supported. However, as reported in the ANCOVA model in Panel B, we find a significant main effect of specificity on auditors' risk assessments (F1,106 = 3.41, two-tailed p = 0.068). In particular, in Panel A, auditors receiving more specific prior-period evidence perceive a lower risk of material misstatement in the current-period estimate (mean = 5.36) than those receiving less specific evidence (mean = 6.04), regardless of the direction of the estimation inaccuracy.21

Additional Analyses

As noted in Figure 1, the interactive effect of our two variables of interest on auditors' reliability assessments does not flow through to auditors' risk assessments. Rather, we find that auditors reviewing more specific evidence during the look-back analysis assess the risk of material misstatement to be lower than those reviewing less specific evidence, regardless of the direction of an estimation inaccuracy. Further investigation reveals that the mapping from auditors' reliability assessments to their risk assessments appears to fall apart when prior-period evidence is more specific and the direction of the prior-period estimation inaccuracy is consistent with management's incentives. Auditors in this condition provide the lowest reliability assessments of management's estimation process, but, contrary to our expectations, they do not provide the highest risk assessments. Rather, their risk assessments do not differ from the risk assessments of auditors in the more specific, inconsistent condition (F1,106 = 0.24, two-tailed p = 0.629). Moreover, we find that where auditors' assessed risk of material misstatement negatively correlates with their reliability assessments in the more specific, inconsistent condition (r = −0.29, one-tailed p = 0.078), these measures are not significantly correlated in the more specific, consistent condition (r = −0.13, one-tailed p = 0.249).

FIGURE 1

Comparison of Auditors' Assessments of the Reliability of Management's Estimation Process and the Risk of Material Misstatement


One possible reason for this finding could be auditors' lack of sufficient valuation knowledge to engage in “the necessary critical analysis of management or specialist models” (Griffith et al. 2015, 835). Griffith et al. (2015) explain that this lack of knowledge, and related lack of confidence in their own ability to question management's estimate, makes auditors reluctant to challenge management. Further, given our finding that auditors' perceptions of management's competence are higher when prior-period evidence is more specific, regardless of the direction of an estimation inaccuracy in the assumption, auditors are arguably even more susceptible to overreliance on management's assertions as the specificity of the supporting evidence increases.

We investigate whether auditors' lack of knowledge and/or confidence in auditing complex estimates has an effect on this finding. Participants reported their confidence in their abilities to assess the reasonableness of inputs used in estimating fair values and inputs underlying the estimate of a trademark. Additionally, participants assessed their knowledge of discounted cash flow modeling. All three variables load on a single factor (eigenvalue = 2.19).22 An untabulated moderation analysis indicates that this factor does not moderate the above relationship (b = 0.01, SE = 0.14, two-tailed p = 0.964).23 In other words, auditors' self-assessed confidence and knowledge in auditing complex estimates do not appear to be driving our results.

Supplemental Experiment: Relation of Reliability and Risk Assessments

We conduct a second experiment with 32 experienced senior auditors to investigate further how auditors' reliability assessments translate into their risk assessments. Participants in this study were alumni of a large state university, averaged 44 months of audit experience, and all had experience planning an audit.24 Seventy-eight percent had experience auditing fair value estimates, and 44 percent reported experience auditing Level 3 assets and liabilities. Using a 2 × 1 between-subjects design, participants completed the materials from the more specific, consistent condition of the original experiment. We focus on this condition because it is the one in which auditors' low reliability assessments do not translate to high risk assessments. Immediately before assessing the risk of material misstatement in the current-year estimate, participants in the procedures condition received the procedures typically used to address a high-risk estimate and a moderate-risk estimate.25 The remaining half of our participants received no procedural information.

We test whether auditors' risk assessments are more responsive to their reliability assessments after being told what procedures are typically used to address high and moderately risky estimates. To do this, we first mean-center auditors' reliability assessments within each procedure condition to avoid collinearity problems (Aiken and West 1991). We then run a regression where auditors' risk assessments are the dependent variable, and the independent variables are auditors' reliability assessments and the procedure condition (coded as 0 = no procedures, 1 = procedures given). The formal regression equation is as follows:
\begin{equation}\tag{1}Risk\,{\it of}\,Material\,Misstatement\,Assessments = {b_0} + {b_1}Reliability\,Assessments + {b_2}Procedure\,Condition + {b_3}Reliability\,Assessments \times Procedure\,Condition \end{equation}
In this regression, b0 captures the average risk assessments across the procedure conditions at the mean reliability assessments. The average effect of auditors' reliability assessments on their risk assessments is captured by b1, while b2 reflects the difference between the risk assessments of those auditors who received procedures and those who did not. Finally, b3 captures the difference in the effect of auditors' reliability assessments on their risk assessments between the no procedures and procedures conditions. As reported in Panel A of Table 3, we find that auditors' risk assessments are moderately more responsive to their reliability assessments (i.e., their risk assessments increase as their reliability assessments decrease) in the presence of procedures (b3 = −0.79, one-tailed p = 0.061).26
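Equation (1) can be sketched with ordinary least squares on mean-centered reliability assessments. The data below are simulated for illustration only (the data-generating slopes are assumptions, not the study's estimates); mean-centering within each procedure condition follows Aiken and West (1991).

```python
import numpy as np

# Sketch of Equation (1): risk regressed on mean-centered reliability, a
# procedures indicator, and their interaction. Simulated data: by assumption,
# risk responds to (low) reliability only when procedures are provided.
rng = np.random.default_rng(1)
n = 32
procedures = np.repeat([0, 1], n // 2)          # 0 = no procedures, 1 = procedures
reliability = rng.normal(7.0, 1.2, n)
risk = 6.0 - 0.8 * procedures * (reliability - 7.0) + rng.normal(0, 0.5, n)

# Mean-center reliability within each procedure condition before interacting
# (Aiken and West 1991) to reduce collinearity with the interaction term.
rel_c = reliability.copy()
for g in (0, 1):
    rel_c[procedures == g] -= reliability[procedures == g].mean()

X = np.column_stack([np.ones(n), rel_c, procedures, rel_c * procedures])
b0, b1, b2, b3 = np.linalg.lstsq(X, risk, rcond=None)[0]
# A negative b3 indicates risk assessments are more responsive (more inverse)
# to reliability assessments when procedures are provided.
```

Under the simulated data-generating process, `b3` recovers the assumed interaction: risk falls as reliability rises only in the procedures condition.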
TABLE 3

Supplemental Experiment Regression Results

Additionally, we separately analyze the shape of the relation between auditors' reliability assessments and risk assessments for the procedures condition and the no procedures condition. Using auditors' risk assessments as the dependent variable and auditors' mean-centered reliability assessments as the independent variable, we estimate the regression below.
\begin{equation}\tag{2}Risk\,{\it of}\,Material\,Misstatement\,Assessments = {b_0} + {b_1}Reliability\,Assessments + {b_2}Reliability\,Assessment{s^2} \end{equation}
We are particularly interested in whether this relation is linear or curvilinear. In Panels B and C of Table 3, we report the results of the regression for the no procedures condition and the procedures condition, respectively. For the no procedures condition, the quadratic function of auditors' reliability assessments is negative and significant (two-tailed p = 0.004) and the linear function is not (two-tailed p = 0.520). This indicates an inverted-U relation between auditors' reliability and risk assessments in the absence of procedural information (see Figure 2, Panel A). However, when auditors are informed about the procedures used to address varying levels of risk, the linear function of auditors' reliability assessments is negative and significant (two-tailed p = 0.008) and the quadratic function is not (two-tailed p = 0.393). Thus, we have evidence that providing auditors with the procedures typically used to address high versus moderate risk assessments results in an inverse relation between auditors' reliability assessments and risk assessments (see Figure 2, Panel B).
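The linear-versus-curvilinear comparison in Equation (2) amounts to fitting both a linear and a quadratic term and inspecting which is reliably nonzero. The sketch below uses simulated data with an assumed inverted-U shape, mirroring the no procedures condition; none of the inputs are the study's data.

```python
import numpy as np

# Sketch of Equation (2): testing linear vs. quadratic shape of the
# reliability-risk relation within one condition. Simulated data only,
# built with an assumed inverted-U (negative quadratic) shape.
rng = np.random.default_rng(2)
rel = rng.normal(0.0, 1.5, 16)                        # mean-centered reliability
risk = 6.0 - 0.6 * rel ** 2 + rng.normal(0, 0.4, 16)  # inverted-U by construction

# Fit risk = b0 + b1*rel + b2*rel^2 by ordinary least squares.
X = np.column_stack([np.ones_like(rel), rel, rel ** 2])
b0, b1, b2 = np.linalg.lstsq(X, risk, rcond=None)[0]
# A significantly negative b2 with a small b1 indicates an inverted-U relation;
# a negative b1 with b2 near zero would indicate the inverse (linear) relation.
```

The same fit applied to the procedures condition would, per the paper's results, yield a negative linear term and an insignificant quadratic term.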

FIGURE 2

Relation of Auditors' Assessments of the Reliability of Management's Estimation Process and the Risk of Material Misstatement

Panel A: No Procedures Condition

Panel B: Procedures Condition

As discussed in the development of H2, one explanation for our findings in the no procedures condition is that auditors do not believe that the risk of a material misstatement always increases as the reliability of the estimation process decreases. For an estimate with high estimation uncertainty, the estimation process is less reliable, but it could also be more difficult for an auditor to argue that management's point estimate is unreasonable. As such, auditors' risk assessments may increase up to a point where there are moderate levels of complexity and subjectivity in the estimation process, and then level out or decline as their assessment of the reliability of the estimation process decreases further. However, when auditors are provided with procedures that could be used to address varying levels of risk, they are prompted to think about doing more work to test a given estimate as the risk increases. Auditors then naturally respond to their contemplation of performing this additional work by increasing their risk assessments, leading to the inverse relationship observed in the procedures condition.

A complementary explanation for our findings is that auditors believe a given estimate to be more complex when the evidence provided to support it is more specific. This perceived complexity, combined with auditors' lack of valuation knowledge (Griffith et al. 2015), makes auditors reluctant to challenge an estimation model for which management provides quantitatively and qualitatively rich support. Auditors then assess a lower risk of material misstatement because they do not know how to address the estimation uncertainty. However, when auditors are given guidance on addressing various levels of risk, they may be more willing to assess risk at a level commensurate with their true beliefs (i.e., inversely related to their reliability assessments).

While we cannot say whether either or both of these two alternative explanations are driving our findings, what this second experiment shows is that providing auditors with procedural information changes the relation between their reliability and risk assessments for estimates with high estimation uncertainty. However, it is important to note that auditing standards are not clear regarding the relationship between reliability and risk assessments. Thus, we encourage future research to explore this question.

V. SUMMARY AND DISCUSSION

In this study, we examine how the specificity of prior-period evidence examined during a look-back analysis, and the direction of an estimation inaccuracy revealed during this analysis, affect auditors' planning for the current-year audit of a complex estimate. We find that the specificity of prior-period evidence interacts with the direction of a subsequently revealed estimation inaccuracy to influence auditors' assessments of the reliability of management's estimation process. In particular, auditors provide the highest (lowest) reliability assessments when the direction of an estimation inaccuracy is inconsistent (consistent) with management's incentives and the prior-period evidence is more specific. Moderation analysis indicates that specificity moderates the relation between the consistency of an estimation inaccuracy with management's incentives and auditors' reliability assessments.

While auditors' assessments of the reliability of management's estimation process are meant to inform their assessments of the risk of a material misstatement (PCAOB 2001, AS 2501.06), we find that auditors' risk assessments are not always responsive to their reliability assessments. In particular, when the estimation inaccuracy is consistent with management's incentives, auditors reviewing more specific prior-period evidence assign the lowest reliability assessments, but do not provide the highest risk assessments. To further investigate this finding, we conduct a second experiment where we find that auditors' risk assessments in this scenario are more responsive to their reliability assessments when they are provided a list of typical audit procedures used to address high and moderate risk estimates. While the relation between reliability and risk assessments is inverse for auditors who are provided procedural information, there is evidence of a nonlinear relation in the absence of that procedural information.

We propose two complementary explanations for these findings on the relation between auditors' assessed reliability of management's estimation process and risk of material misstatement in the more specific, consistent condition, although we cannot determine whether either or both might be driving our findings. One explanation is that auditors do not believe that the risk of material misstatement always increases as the reliability of an estimation process decreases. Auditors' risk assessments may increase until some level of complexity and subjectivity in the estimation process is reached, then level out or decline because it becomes difficult to argue that management's estimate is unreasonable given high estimation uncertainty. When provided with procedural information, though, auditors are prompted to consider doing more work and naturally increase their risk assessments as a result. Another plausible explanation is that auditors lack confidence and/or knowledge about how to address an estimate derived from an unreliable estimation process. Receiving guidance about efforts that could be taken to address a high risk of misstatement increases their confidence and/or knowledge, allowing them to respond with a risk assessment more commensurate with their reliability assessment. We encourage future research to explore this open question.

In conclusion, we believe this study makes important contributions that are informative to practitioners, regulators, and researchers. We provide some of the first research examining the judgments auditors make during a look-back analysis, which is an important component of audit planning for any significant estimate that has high estimation uncertainty (PCAOB 2001, AS 2501.06e). Moreover, a look-back analysis is a unique setting to examine auditors' use of previously relied upon audit evidence in their planning for the current-period audit. Prior research has shown that auditors tend to over-rely on prior-period workpapers and conclusions during the substantive testing phase of the audit (e.g., Joyce and Biddle 1981; Brazel et al. 2004). However, little is known regarding the effect of prior-period evidence on current-period audit planning. Our study provides important insights into the effect of one characteristic of prior-period evidence on auditors' planning decisions. We also contribute to the audit planning research in our investigation of the link between various judgments made in the context of planning the audit of a complex estimate.

REFERENCES

Aiken, L. S., and S. G. West. 1991. Multiple Regression: Testing and Interpreting Interactions. Thousand Oaks, CA: Sage.
American Institute of Certified Public Accountants (AICPA). 2017a. AU-C Section 540: Auditing Accounting Estimates, Including Fair Value Accounting Estimates, and Related Disclosures. Washington, DC: AICPA.
American Institute of Certified Public Accountants (AICPA). 2017b. AU-C Section 240: Consideration of Fraud in a Financial Statement Audit. Washington, DC: AICPA.
Anderson, U., K. Kadous, and L. Koonce. 2004. The role of incentives to manage earnings and quantification in auditors' evaluations of management-provided information. Auditing: A Journal of Practice & Theory 23 (1): 11–27.
Backof, A. G., T. D. Carpenter, and J. Thayer. 2018. Auditing complex estimates: How do construal level and evidence formatting impact auditors' consideration of inconsistent evidence? Contemporary Accounting Research 35 (4): 1798–1815.
Bell, T. B., and J. B. Griffin. 2012. Commentary on auditing high-uncertainty fair value estimates. Auditing: A Journal of Practice & Theory 31 (1): 147–155.
Berlo, D., J. Lemert, and R. Mertz. 1969. Dimensions for evaluating the acceptability of message sources. Public Opinion Quarterly 33 (4): 563–576.
Brazel, J. F., C. P. Agoglia, and R. C. Hatfield. 2004. Electronic versus face-to-face review: The effects of alternative forms of review on auditors' performance. The Accounting Review 79 (4): 949–966.
Buckless, F. A., and S. P. Ravenscroft. 1990. Contrast coding: A refinement of ANOVA in behavioral analysis. The Accounting Review 65 (4): 933–945.
Cannon, N. H., and J. C. Bedard. 2017. Auditing challenging fair value measurements: Evidence from the field. The Accounting Review 92 (4): 81–114.
Christensen, B. E., S. M. Glover, and D. A. Wood. 2012. Extreme estimation uncertainty in fair value estimates: Implications for audit assurance. Auditing: A Journal of Practice & Theory 31 (1): 127–146.
Dawes, R. M. 1988. Rational Choice in an Uncertain World. New York, NY: Harcourt Brace Jovanovich.
Friestad, M., and P. Wright. 1994. The persuasion knowledge model: How people cope with persuasion attempts. Journal of Consumer Research 21 (1): 1–31.
Giffin, K. 1967. The contribution of studies of source credibility to a theory of interpersonal trust in the communication process. Psychological Bulletin 68 (2): 104–120.
Griffith, E. E., J. S. Hammersley, and K. Kadous. 2015. Audits of complex estimates as verification of management numbers: How institutional pressures shape practice. Contemporary Accounting Research 32 (3): 833–863.
Guggenmos, R. D., M. D. Piercey, and C. P. Agoglia. 2018. Custom contrast testing: Current trends and a new approach. The Accounting Review 93 (5): 223–244.
Hammersley, J. S., E. M. Bamber, and T. D. Carpenter. 2010. The influence of documentation specificity and priming on auditors' fraud risk assessments and evidence evaluation decisions. The Accounting Review 85 (2): 547–571.
Hayes, A. F. 2018. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: The Guilford Press.
Houston, R. W. 1999. The effects of fee pressure and client risk on audit seniors' time budget decisions. Auditing: A Journal of Practice & Theory 18 (2): 70–86.
Joyce, E. J., and G. C. Biddle. 1981. Anchoring and adjustment in probabilistic inference in auditing. Journal of Accounting Research 19 (1): 120–145.
Kadous, K., L. Koonce, and K. L. Towry. 2005. Quantification and persuasion in managerial judgement. Contemporary Accounting Research 22 (3): 643–686.
Lipshitz, R., and O. Strauss. 1997. Coping with uncertainty: A naturalistic decision-making analysis. Organizational Behavior and Human Decision Processes 69 (2): 149–163.
Low, K. Y. 2004. The effects of industry specialization on audit risk assessments and audit-planning decisions. The Accounting Review 79 (1): 201–219.
McCroskey, J. 1966. Scales for the measurement of ethos. Speech Monographs 33 (1): 65–72.
Mercer, M. 2005. The fleeting effects of disclosure forthcomingness on management's reporting credibility. The Accounting Review 80 (2): 723–744.
Mock, T. J., and A. Wright. 1993. An exploratory study of auditors' evidential planning judgments. Auditing: A Journal of Practice & Theory 12 (2): 39–61.
Nisbett, R., and L. Ross. 1980. Human Inferences: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice Hall.
Public Company Accounting Oversight Board (PCAOB). 2001. Auditing Accounting Estimates. PCAOB Interim Auditing Standard (AS) 2501. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2002. Auditing Fair Value Measurements and Disclosures. PCAOB Interim Auditing Standard (AS) 2502. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010a. Identifying and Assessing Risks of Material Misstatement. No. 2010-004 (August 5). Auditing Standard (AS) 2110. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010b. The Auditor's Responses to the Risks of Material Misstatement. No. 2010-004 (August 5). Auditing Standard (AS) 2301. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010c. Evaluating Audit Results. No. 2010-004 (August 5). Auditing Standard (AS) 2810. Washington, DC: PCAOB.
Rosenthal, P. I. 1971. Specificity, verifiability, and message credibility. Quarterly Journal of Speech 57 (4): 393–401.
Rowe, C., M. D. Shields, and J. G. Birnberg. 2012. Hardening soft accounting information: Games for planning organizational change. Accounting, Organizations and Society 37 (4): 260–279.
Schweitzer, D., and G. Ginsburg. 1966. Factors of communicator credibility. In Problems in Social Psychology: Selected Readings, edited by C. W. Backman and P. R. Secord, 94–102. New York, NY: McGraw-Hill.
Slater, M. D., and D. Rouner. 1996. How message evaluation and source attributes may influence credibility assessment and belief change. Journalism and Mass Communication Quarterly 73 (4): 974–991.
.

APPENDIX A

Prior-Period Evidence—Specificity Manipulation

APPENDIX B

Look-Back Analysis—Inaccuracy Direction Manipulation

1

While the accuracy of a complex estimate can only be known at the time the asset (liability) is sold (paid), a look-back analysis indicates the accuracy of the assumptions underlying a prior-period estimate. As such, we define estimation inaccuracy as a difference between an assumption used to derive the prior-period estimate and the subsequent realization of that assumption (e.g., the future revenue growth rate assumed in the prior-period estimate was 8 percent, but realized revenue growth for the current period is 7 percent). An estimation inaccuracy in an underlying assumption does not necessarily indicate that the resulting estimate is misstated; a misstatement is indicated only if the resulting estimate is deemed unreasonable. That said, a consistent bias in the sign of inaccuracies in forward-looking assumptions can indicate that the estimate is unreasonable.

2

A natural response to a high-quality argument is to judge its source as knowledgeable, authoritative, and competent (Slater and Rouner 1996). While Slater and Rouner (1996) find an effect of message quality evaluation on recipients' assessments of source expertise, they do not find an effect of message quality evaluation on source bias.

3

Rowe, Shields, and Birnberg's (2012) research examining individuals' or groups' work to “harden” or solidify soft accounting information (i.e., that which is subjective and lacks sufficient verifiability) provides potential insight into findings in our study. Our findings suggest that auditors' reliability assessments are consistent with the practical arguments game. That is, auditors are “open-minded but critical” (Rowe et al. 2012, 264) of prior-period evidence, particularly when an estimation inaccuracy is consistent with management's incentives. However, auditors' risk assessments are more in line with a faith game, whereby increased specificity in prior-period evidence increases auditors' perceptions of management's competence or “faith in the wisdom of experts” (Rowe et al. 2012, 263). Additional research could prove fruitful in furthering our understanding of the “hardening” process that impacts when and how evidence examined during the look-back analysis affects auditors' judgments.

4

Prior research has predominantly investigated the influence of source credibility on individuals' evaluation of, and reliance on, source-provided information (see Slater and Rouner 1996). By contrast, we investigate how characteristics of the information reviewed in a look-back analysis affect auditors' perceptions of the source's credibility.

5

The conflicting findings of Anderson et al. (2004) and Kadous et al. (2005) could stem from the two different contexts examined. Kadous et al. (2005) examine a general management scenario in which the purpose of the evidence in question is to provide the preparer's reasoning for a proposed action. Further, the evidence being evaluated in Kadous et al. (2005) consists of both benefits and costs of the proposed action, so quantification does not solely support the proposed action. This stands in contrast to management's explanation provided to auditors in Anderson et al. (2004), which justified a non-misstatement reason for a revenue-increasing fluctuation. Therefore, one could argue that there is likely more room for the characteristics of the message to influence the recipient's evaluation of the message in Kadous et al. (2005), given that the preparer's incentives may not appear to dominate the message itself.

6

In our setting, auditors reexamine evidence provided by management in the prior year as support for its estimate of the unknown future. In contrast, the evidence evaluated in Anderson et al. (2004) is produced by management to answer auditors' concerns about a significant fluctuation in revenues that has already occurred. We argue that this difference could lead auditors to weight management's incentives differently in our setting. Additionally, our construct of specificity is broader than quantification. We manipulate specificity by providing additional qualitative and quantitative evidence that provides further insight into the reasoning and logic behind management's derived assumptions. Given (1) the different purpose of the explanation being reviewed in Anderson et al.'s (2004) study, as compared to the evidence examined in a look-back analysis, and (2) the nature of our manipulation of specificity, we expect that there is more opportunity for evidence specificity to impact auditors' judgments in our setting compared to Anderson et al. (2004).

7

Estimation uncertainty has multiple sources, including process deficiencies, model complexities, input volatility and subjectivity, and macroeconomic risks (Christensen, Glover, and Wood 2012).

8

For instance, auditors' risk assessments may increase up to a point where an estimate has a moderate level of estimation uncertainty and, hence, a moderately reliable estimation process. However, as estimation uncertainty increases beyond this moderate point, auditors may perceive that the risk of misstatement either levels out or declines as the complexity and subjectivity involved in the process approach a maximum point.

9

Admittedly, auditors' business risk or reputation represents at least one possible reason to collect more evidence even if risk of material misstatement stays the same.

10

The appropriate Institutional Review Board reviewed the case materials and approved the use of human subjects in all experiments reported in this paper.

11

This case is adapted from the case used in Backof, Carpenter, and Thayer (2018). Two practicing audit partners and two practicing audit managers from different Big 4 accounting firms reviewed the current case materials for relevance and representativeness of actual audit workpapers. Neither the audit partners nor the audit managers participated in the actual experiment. All four practitioners indicated that the valuation model and supporting evidence used in our experimental instrument were representative of the information that management typically provides to auditors as part of their audit of the impairment analysis of an intangible asset. In addition, we incorporated feedback from the CAQ's Access to Audit Personnel Program Proposal Review Committee and the results from a pilot test of the case materials with 35 Big 4 audit seniors at a national training session in preparing the experimental materials used in this study.

12

When assessing the reliability of management's estimation process, auditors' determination of the extent to which inaccuracies in prior-period estimates arise due to internal or external factors is important (PCAOB 2010a, AS 2110.05). Inaccuracies attributable to deficiencies in management's process have greater potential to persist and, thus, should be incorporated into auditors' risk assessments for the current-period estimate. By providing all participants with information regarding an exogenous external shock, the design of our study works against us finding an effect of our manipulated factors on auditors' risk assessments.

13

Our manipulation of specificity is analogous to similar manipulations used in Hammersley, Bamber, and Carpenter (2010). Our less specific condition provides a general description of the most available sources of revenue growth, while Hammersley et al.'s (2010) summary condition describes the general sources of fraud risk (i.e., incentives, opportunities, and pressures) for a company. Our more specific condition includes a discussion of the specific evidence supporting the market's tolerance for increased prices and specific ways a company can garner increased customer demand, while Hammersley et al.'s (2010) specific condition details specific fraud risks that can be attributed to the general sources of fraud risk that the company faces.

14

As noted in Section I, we define an overstatement or understatement as being consistent or inconsistent with management's incentives at the time of the original estimation (i.e., the prior period).

15

This study is not designed to examine Big 4 versus non-Big 4 differences. However, we include this variable in our analysis to control for differences in the level of responses provided by our Big 4 auditor participants compared to our non-Big 4 auditor participants.

16

Guggenmos, Piercey, and Agoglia (2018) recommend this coding scheme to test an interaction where two lines slope upward and downward, respectively, from a central point. In addition to a significant contrast test, a comprehensive custom contrast analysis should also be supported by an insignificant between-cells residual test and a measure of effect size.
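A planned (custom) contrast of this sort can be computed directly from the cell means and the pooled ANOVA error term. The following is a minimal, library-free sketch of the general technique; the `contrast_test` helper, the weights, and the toy data are our own illustrations and are not the specific weights or data used in the paper.

```python
from statistics import mean

def contrast_test(groups, weights):
    """t statistic and df for a planned contrast across independent cells,
    using the pooled (equal-variance) ANOVA error term."""
    assert abs(sum(weights)) < 1e-9, "contrast weights must sum to zero"
    means = [mean(g) for g in groups]
    ns = [len(g) for g in groups]
    # pooled mean squared error across all cells (the ANOVA MSE)
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df = sum(ns) - len(groups)
    mse = ss_within / df
    psi = sum(w * m for w, m in zip(weights, means))            # contrast value
    se = (mse * sum(w * w / n for w, n in zip(weights, ns))) ** 0.5
    # refer psi/se to a t distribution with df degrees of freedom
    return psi / se, df

# Four experimental cells with illustrative (hypothetical) contrast weights:
cells = [[1, 2, 3], [3, 4, 5], [1, 2, 3], [3, 4, 5]]
t, df = contrast_test(cells, [-1, 1, -1, 1])
```

As the footnote notes, a complete custom contrast analysis would pair this test statistic with a between-cells residual test and an effect size measure.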

17

This moderation analysis was performed using the "process" macro for SPSS (Model 1; http://www.processmacro.org/index.html), as described in Hayes (2018).
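For readers without SPSS, the simple moderation model that PROCESS Model 1 estimates is an ordinary regression with an interaction term, Y = b0 + b1·X + b2·W + b3·(X·W), where a nonzero b3 indicates moderation. The stdlib-only sketch below is our own illustration, not the paper's analysis: the `ols` helper and the simulated data are hypothetical, and the point is simply that the interaction coefficient is recovered from data generated with a known moderation effect.

```python
import random

def ols(X, y):
    """Least-squares coefficients via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting (no external libraries)."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    c = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Simulate data with a true interaction (moderation) effect of 0.5
random.seed(1)
rows, y = [], []
for _ in range(500):
    x, w = random.gauss(0, 1), random.gauss(0, 1)
    rows.append([1.0, x, w, x * w])  # intercept, X, W, X*W
    y.append(1.0 + 0.3 * x - 0.2 * w + 0.5 * x * w + random.gauss(0, 0.1))
b0, b1, b2, b3 = ols(rows, y)
```

PROCESS additionally reports inference on b3 and conditional ("simple slope") effects of X at chosen values of W; the regression above captures only the core model.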

18

In the experiment's more specific, consistent condition, the sensitivity analysis provided in the experimental materials reveals that the cushion between the estimated fair value and the reported book value has been shrinking over the past three years. We gathered data from 33 additional participants to examine whether the presence of this trend impacted auditors' judgments. Participants were provided with an adaptation of the more specific, consistent condition materials, where the fluctuation in the prior years' cushion is more consistent with noise in the estimation process (i.e., small fluctuations in the difference between the recorded book value of the trademark and management's estimate each year), rather than a trend. Of these 33 participants, we excluded four who had no experience planning an audit. Compared to auditors in the original more specific, consistent condition, auditors in the more specific, consistent with noise condition do not have significantly different perceptions of management's competence (t(56) = 0.82, two-tailed p = 0.415), management's trustworthiness (t(56) = 0.17, two-tailed p = 0.869), the reliability of management's estimation process (t(56) = 0.36, two-tailed p = 0.720), or the risk of material misstatement (t(56) = 1.27, two-tailed p = 0.211). Thus, the results of our study do not appear to be impacted by the presence of a trend in the historical cushion.
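The between-condition comparisons reported in these footnotes are standard pooled-variance two-sample t tests. For reference, the statistic can be written in a few lines of stdlib Python; the `pooled_t` name and the toy data below are ours, for illustration only.

```python
from statistics import mean

def pooled_t(a, b):
    """Equal-variance two-sample t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    # pooled variance from the two within-group sums of squares
    sp2 = (sum((x - ma) ** 2 for x in a)
           + sum((x - mb) ** 2 for x in b)) / (na + nb - 2)
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return (ma - mb) / se, na + nb - 2

t, df = pooled_t([1, 2, 3], [2, 3, 4])
```

The reported df of 56 corresponds to two cells totaling 58 usable participants under this formula.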

19

Of the two factors examined in Anderson et al. (2004), only management incentives (not quantification) significantly affects auditors' evaluations of management-provided information. See footnote 6 for key differences in Anderson et al. (2004) and our study that likely contribute to the differing findings.

20

Kadous et al. (2005) posit and find that even though quantification, a form of specificity, increases perceptions of preparer competence, information recipients perceive a proposal that is both quantified and prepared with a potential incentive to mislead (e.g., the prior-period evidence examined in our more specific, consistent condition) as less persuasive. This is because such information is viewed as a persuasion tactic. Information recipients cope with this perceived persuasion tactic through increased critical analysis of the proposal, which leads to the proposal being evaluated as less persuasive. In our study, auditors' perceptions of lower reliability, support, and reasonable assessments in the more specific, consistent condition are consistent with auditors coping with a perceived persuasion tactic through enhanced critical analysis. Thus, although we do not find a decrease in perceived trustworthiness for auditors in the more specific, consistent condition, our results are consistent with Kadous et al. (2005).

21

We gathered data from 60 additional participants to isolate the impact of the specificity of the prior-period evidence on auditors' judgments. Half of the participants reviewed experimental materials that included less specific prior-period evidence, while the other half reviewed more specific prior-period evidence. Importantly, regardless of their assigned experimental condition, none of these additional participants reviewed case materials that provided any information related to the direction of the prior-period estimation inaccuracy. Of the 60 participants in these two no-look-back conditions, we excluded two who had no experience planning an audit. Absent any information regarding the comparison of prior-period assumptions to current year-to-date results, we find that evidence specificity alone does not impact auditors' perceptions of management's competence (t(56) = 0.28, two-tailed p = 0.783), their perceptions of management's trustworthiness (t(56) = 1.13, two-tailed p = 0.293), their evaluations of the reliability of management's estimation process (t(56) = 0.25, two-tailed p = 0.806), or their assessed risk of material misstatement (t(56) = 0.83, two-tailed p = 0.412). Thus, our study indicates that the specificity of the prior-period evidence alone does not impact auditors' judgments regarding the current-period audit of complex estimates. One possible explanation is that without the information found in the sensitivity analysis portion of the look-back analysis, it is difficult for participants to recognize whether small changes in assumptions would be material. If participants assumed that small changes would not be material, then they may not have been sufficiently sensitive to differences in specificity at the provided levels.

22

Confidence and knowledge do not differ by condition (all F ≤ 1.773, all p ≥ 0.186).

23

This moderation analysis was performed using the "process" macro for SPSS (Model 1), downloaded from Andrew Hayes' website (http://www.processmacro.org/index.html) and described in Hayes (2018).

24

We sent an email to 302 accounting alumni asking for their participation in this study. We received responses from 10.6 percent of those emailed.

25

Providing auditors with the typical procedures used to address high and moderate risk estimates can address issues arising from both a lack of confidence and a lack of knowledge in planning an audit of complex estimates. Auditors who may know how to address a high or moderate risk estimate, but who lack confidence in this procedural knowledge, are more likely to assess risk at a level commensurate with their actual beliefs when provided with procedural guidance that confirms their knowledge. On the other hand, auditors lacking knowledge regarding the appropriate procedures to address varying risk levels should also be more likely to assess risk at a level commensurate with their true beliefs when told how the audit firm typically addresses that risk assessment.

26

Untabulated analyses of the data confirm that auditors' assessments of the reliability of management's estimation process do not differ between conditions (mean = 6.25 in the no procedures condition versus mean = 6.31 in the procedures condition, t(30) = 0.112, p = 0.911). However, we find that auditors in the procedures condition assess a higher risk of material misstatement compared to those in the no procedures condition (mean = 5.56 versus mean = 4.50), with the difference being moderately significant (t(30) = 1.34, one-tailed p = 0.094). Further, we find that when auditors are told what procedures are typically performed to address varying levels of risk, auditors' subsequent risk assessments negatively correlate with their reliability assessments (r = −0.64, one-tailed p = 0.004). However, consistent with our original results, there is no such correlation in the no procedures condition (r = −0.05, two-tailed p = 0.847). Finally, these correlations are significantly different, with auditors' reliability assessments more negatively correlated with their risk assessments in the procedures condition (z = 1.776, one-tailed p = 0.038).
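Comparing two correlations from independent groups, as in the final test above, is conventionally done with Fisher's r-to-z transformation. The stdlib sketch below illustrates the technique; the per-condition sample size of 16 is our assumption for the example (the paper does not report cell sizes here), so the resulting z statistic need not match the reported 1.776 exactly.

```python
import math

def fisher_z(r):
    """Fisher r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """z test for the difference between two independent correlations,
    with a one-tailed normal p-value."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (fisher_z(r1) - fisher_z(r2)) / se
    p_one_tailed = 0.5 * math.erfc(abs(z) / math.sqrt(2))
    return z, p_one_tailed

# Correlations reported in the follow-up experiment; n = 16 per cell is assumed
z, p = compare_correlations(-0.64, 16, -0.05, 16)
```

With these assumed cell sizes the difference is significant at the one-tailed 5 percent level, consistent with the footnote's conclusion.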

Author notes

The authors appreciate the helpful comments of workshop participants at the 2016 Deloitte/Kansas University Auditing Symposium and The University of Texas at Austin. We also appreciate the helpful comments of Ashley Austin, Kendall Bowlin, Tina Carpenter, Vicky Hoffman, Steve Kachelmeier, Lisa Koonce, Ben Van Landuyt, Jennifer Winchel, Julia Yu, two anonymous reviewers, as well as the feedback from Mark Peecher on the development of the case materials. We gratefully acknowledge support from the Center for Audit Quality's Access to Audit Personnel Program and funding provided by the McIntire School of Commerce at the University of Virginia.

Our data were gathered through the Center for Audit Quality's Access to Audit Personnel Program. The views expressed in this article and its content are those of the authors alone and not those of the Center for Audit Quality.

Ann G. Backof and Roger D. Martin, University of Virginia, McIntire School of Commerce, Department of Accounting, Charlottesville, VA, USA; Jane Thayer, Georgia Institute of Technology, Scheller College of Business, Department of Accounting, Atlanta, GA, USA.

Editor's note: Accepted by Mark E. Peecher, under the Senior Editorship of Mark L. DeFond.