The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.
—George Bernard Shaw
I am delighted about the launch of this new journal and grateful for the opportunity to oversee the inaugural issue on behalf of the Financial Accounting and Reporting Section (FARS) of the American Accounting Association (AAA). The mission statement of the Journal of Financial Reporting follows:
The Journal of Financial Reporting (JFR) is open to research on a broad spectrum of financial reporting issues related to the production, dissemination, and analysis of information produced by a firm's financial accounting and reporting system. JFR welcomes research that employs empirical archival, analytical, and experimental methods, and especially encourages less traditional approaches such as field studies, small sample studies, and analysis of survey data. JFR also especially encourages “innovative” research, defined as research that examines a novel question or develops new theory or evidence that challenges current paradigms, or research that reconciles, confirms, or refutes currently mixed or questionable results. (emphasis added)
JFR takes a broad view of the types of research that will advance the field. We hope this fresh perspective encourages risk-taking and motivates researchers to pursue new and innovative questions or revisit questions with new methods and approaches. JFR is committed to publishing these types of studies and has also established well-articulated review processes and journal policies that support this mission.
The inaugural issue provides examples of the research that JFR “especially encourages.”1 The examples are useful to clarify the concepts of innovation and less common approaches and to establish that JFR maintains top-tier journal standards for execution quality. I describe how each example fits JFR's mission in the fifth section. This editorial begins with an overview of the objective of the journal (the second section), a more detailed discussion of the gaps in the literature it seeks to address (the third section), and the policies and structures of the journal that are meant to help achieve its mission (the fourth section).
JFR was created to satisfy the demand for additional journal space for top-quality scholarly research on financial reporting topics. Among business school disciplines, accounting researchers exhibit the lowest number of articles per faculty member in premier journals, averaging 0.023 over the period 1994–2003, compared with 0.046, 0.064, and 0.041 for finance, management, and marketing, respectively.2 Moreover, although financial reporting research dominates in terms of number of pages in the general interest accounting journals (except Accounting, Organizations and Society), the acceptance rate of financial reporting research conditional on submissions is the lowest among the accounting fields.3
Following a survey of FARS members that revealed support for a section journal, the FARS steering board assembled an ad hoc committee to create one. The Committee members were Mark Bradshaw, Ted Christensen, Marlene Plumlee, Darren Roulstone, Jennifer Wu Tucker, Alfred Wagenhofer, and I. The decision process can (but only in retrospect) be described as involving three phases: (1) an analysis of the current publication situation including identification of more desirable conditions and thus opportunities for a new journal to complement the existing landscape; (2) an appraisal of the likely reasons for gaps between the current situation and more desirable publication conditions; and (3) development of a sustainable journal structure that will enable JFR to fill these gaps and hence move the field toward the preferred conditions.
As part of the first task, the Committee identified two opportunities. First, the profession needs more innovative research, defined as studies that examine a novel question, or develop new theory or evidence that challenges current paradigms, or studies that reconcile, confirm, or refute currently mixed or questionable results. Second, the profession needs more research using nontraditional data collection methods such as field studies, small sample studies, and analysis of survey data. We are not the first to identify either opportunity. Accounting scholars have noted these problems, and the underrepresentation of innovative ideas and approaches has also been documented in other social science disciplines.4
The Committee's second task was to understand why innovative ideas and approaches are underrepresented in existing publication outlets. We concluded that these gaps result from reviewer norms, which in turn developed in large part due to professional career concerns. Reviewers can have professional reasons to push back against innovation, and they receive no benefit from encouraging it. A chicken-and-egg problem results. Researchers, particularly junior faculty, anticipate the difficulties of publishing a paper with a paradigm-shifting idea or one that presents results that conflict with established beliefs, and they stop working on innovative research.
The Committee's third task was to figure out how to structure a journal that could change the status quo. The Committee attempted to institutionalize changes to the publication process by (1) providing an outlet for new types of journal content, and (2) offering well-articulated guidance to reviewers meant to mitigate biases against innovative research. Admittedly, JFR's reviewer guidance is not conceptually unique, but the requested procedures are different from the “norm” for referee reports. As noted by one of the reviewers for a paper in this issue: “The charter for this journal (and its reviewers) is not really all that different from how things work at other journals, when they work” (emphasis in the original). He went on to say that JFR's process is a thoughtful articulation of how a good process should work and that he hopes it will have positive externalities for the field.
The preceding discussion makes the process sound more linear and dispassionate than it was. As the process evolved, there was a growing sense of frustration among the Committee members with how stagnant our research had become. At the same time, however, there was a growing sense of optimism that if part of the explanation was author (mis)perceptions that there are no outlets for innovative research, then a new journal that established policies meant to encourage innovation in ideas or approaches could turn the tide.5
ANALYSIS OF CURRENT CONDITIONS
The Committee's gut feeling was that the bulk of financial reporting research is stagnant. Innovation decreases as scholarly fields mature if the supply of high-quality studies outpaces journal space in top-tier outlets.6 Young faculty with fresh ideas feel pressured to work on projects with the best prospects for timely publication. Studies that use the most popular approaches and that incrementally extend a popular existing idea are perceived as the most publishable, and authors (rationally) pursue these paths at the expense of innovation. Over time, the concentrations in popular approaches and paradigms intensify. Absent any intervention, I expect the current concentrations to worsen as doctoral students pursue training only in popular topics and approaches.
I will emphasize this point with two well-worded statements by senior faculty who have similar and strong views on this issue. In his commentary on innovation in accounting research, Greg Waymire (2012, 1079; emphasis in the original) states:
When I speak with doctoral students and junior faculty members at various AAA events, I am struck by their willingness to defer work on research projects they believe to be interesting and important. The typical sentiment expressed is that working on projects outside the perceived mainstream is too “risky” and should be done only by faculty after getting tenure. The ultimate effect of this view is to encourage conformist thinking and intellectual stagnation.
Second, in my effort to find papers for this inaugural issue, I explained JFR's mission to a prominent researcher and asked if she was aware of any innovative working papers that were facing challenges at the established top-tier journals. She responded that she had discussed several interesting projects with a junior colleague but that they abandoned the projects because the projects would not be “palatable” to the journals that “count” for this junior colleague. “It's awfully hard to say we're in a scholarly profession when you see this sort of junk.” (Anonymous)
The next step was to articulate the specific gaps in the current literature, which feels stagnant. We cannot simply ask researchers to produce “risky” research; “risky” is not a description of the research per se. Research is risky only because it is challenging to publish (or “safe” because it is easy to publish). Articulating the specific types of research that will lead to innovation requires characterizing the ideal and comparing it to current conditions.
As a framework for assessing the ideal conditions, consider the stages in a research program that follows the scientific method, which is meant to lead to discovery and advancement in science (Figure 1).7 Starting at the top, but keeping in mind that the arrows from the bottom circle back up, the stages in the process are (1) make observations, or use existing knowledge to stimulate new ideas; (2) develop a theory, formally (using a model) or informally, that generates a hypothesis; (3) collect and analyze data to test the hypothesis and interpret the findings; (4) if the evidence supports the hypothesis, repeat the third stage to enhance reliability; and (5) if the evidence is inconsistent with the hypothesis, refine or recreate the hypothesis. Start again. An important takeaway from Figure 1 is that each stage involves multiple individual research projects.
I organize the discussion of the gaps and, importantly, why these gaps have come to exist, by method. Table 1 summarizes data on submission and acceptance rates by method of data collection and analysis for The Accounting Review as a representative outlet.
The low representation of analytical models is an obvious concern given the importance of new hypotheses to the research process. Submission rates of analytical research have remained fairly constant since 2009, but acceptance rates have decreased. Hamermesh (2013) describes a similar declining trend in economics starting in the decade 1983–1993 and continuing, and even accelerating, through 2011. His explanation for the decline is based on the observation that “thinkers reach their peak productivity at younger ages in areas requiring more mathematical thinking,” and that fewer young researchers are using analytical methods (Hamermesh 2013, 170). He suggests two reasons that young researchers avoid analytical modeling: (1) new generations of young researchers include more females, who are less likely to use analytical methods; and (2) analytical research is less likely to be coauthored, but increasing publication lags and greater career concerns increase incentives for young researchers to coauthor studies.
These explanations may partially explain the declining trend in analytical financial reporting research, but they do not explain why submissions have been stable since 2009 but acceptance rates have been low (and declining). One explanation is that reviewers underweight innovation. All models involve assumptions. JFR is willing to give weight to innovative ideas with less-than-perfect model assumptions.
Another plausible explanation for the decline in acceptance rates is that the standard for what constitutes a “contribution” has shifted toward studies with identifiable (and immediate) implications for practice, and this shift affects studies using analytical models more than empirical methods. I suspect (hope) that most accounting researchers agree that our research should have practical relevance. My favorite definition of what practical relevance means is from Kinney (1986, 339), who states that ultimately we want to answer the question: “Does how we as a firm or as a society account for things make a difference?” Answering this question, however, involves research from the entire chain in the process depicted in Figure 1, and the distance from the analytical model (and from basic or “pure” research that provides a foundation for hypotheses) to the ultimate communication to practitioners can be great.
JFR's mission is to be an outlet for scholarly research targeted at an academic audience and thus to publish research at all stages of the process that, when taken together, generates reliable evidence about issues of practical relevance. JFR does not intend to be an outlet for articles that aggregate and communicate research in a way that is most suited to practitioners' interests. While communicating to practitioners is an important (essential) element of a research program, JFR takes the position that there should be distinct publication outlets dedicated to practitioners. A journal that tries to serve both academic and practitioner audiences serves neither well. Manuscripts meant for academic researchers need to contain sufficient discussion of the study's methods to be useful to other researchers who want to evaluate and even replicate the analysis. JFR does not want authors to avoid rigorous discussion in an attempt to make the manuscript more attractive to practitioners. Practitioners deserve their own publications with manuscripts that effectively summarize the research in an accessible manner and build a context for understanding what researchers have learned about an issue of practical importance.8 That being said, the onus is on authors to explain how their analytical model can inform a question of practical relevance.
The heavy concentration in empirical archival methods shown in Table 1 is not surprising. Financial reports, the ultimate output of the financial reporting system, are observable and provide the appropriate data, often aggregated in machine-readable form, to address a broad range of important questions. While I do not claim to know the right mix of methods, the proportion of empirical archival work seems too high. More troubling, however, is the Committee's casual observation that the studies have converged to a safe and formulaic approach: Propose a hypothesis based on a verbal interpretation or small extension of an existing economic model; conduct an analysis claiming to provide causal evidence; and find evidence consistent with the hypothesis. This focus leaves a number of gaps in the empirical archival literature.
The first gap is research that rigorously documents or characterizes a phenomenon that cannot be explained by existing theory (i.e., a “puzzle”). The first part of Lundholm and Rogo (2016) is an excellent example of empirical archival research that is an essential element of an innovative research program (the “Observation” stage in Figure 1) but outside the typical mold of an empirical archival study. The second gap is research that tests a statistical, rather than economic, model. Ryan (2016) emphasizes the importance of such studies specifically in the context of the fundamental analysis literature.
I attribute both of these gaps to the increasing importance reviewers have placed on the pursuit of causal evidence over the last decade.9 Of course, accounting researchers should strive to provide causal evidence when conducting tests of economic hypotheses. It is a logical fallacy, however, to interpret this statement to mean that accounting researchers should only strive to test economic hypotheses. Reviewers who expect all empirical archival papers to be testing an economic hypothesis can challenge the paper's contribution, arguing that the evidence is merely “descriptive,” which has a pejorative connotation. Authors are often to blame for the misinterpretation, however, because they cater to anticipated reviewer preferences by promoting their evidence as causal when it is not.
The most significant gap in the literature is innovative and provocative ideas, and this gap can also be attributed to the recent focus on establishing causality. Authors rationally expect that reviewers focus on a paper's flaws and limitations. Hence, a study that uses a less-than-perfect experiment will face significant challenges given current reviewer norms. The safe strategy is to customize the question to the setting.
JFR is willing to give weight to innovative and forward-looking ideas when evaluating hypothesis tests with less-than-perfect experimental settings. Kennedy (2002) provides an interesting take on the compromises that social scientists must make when they work with imperfect experiments, which he calls “sinning in the basement.” In a reply to critical commentary, Kennedy (2002) states, “The main premise of my paper is that it is an unavoidable fact of life that applied econometricians will sin. Under this circumstance, the paper asks what rules should be taught to students to bound this sinning?” He provides ten commandments. The second commandment is “Thou shalt ask the right questions” with a corollary that “Thou shalt place relevance before mathematical elegance” (Kennedy 2002). This commandment reflects an important theme that requiring a “perfect” experiment can lead researchers to abandon interesting questions (and even common sense) for the sake of econometric purity.
JFR's philosophy of tolerance, however, does not imply lower standards of execution quality. Researchers must use the best possible methods conditional on the setting's limitations. In fact, authors will need to be more vigilant in several ways to overcome the inherent limitations of imperfect settings. Authors will likely need to conduct more sensitivity or robustness analysis than would otherwise be necessary and to present a complete picture of the data,10 for example by presenting confidence intervals and figures that illuminate the nuances of the findings (Basu 2012). Finally, authors need to carefully and clearly outline the limitations of the analysis and avoid inappropriate interpretation of the findings.
Another gap in the empirical archival literature is studies that combine empirical data analysis with a small modeling component or rigorous logical arguments proposing a new theory.11 These studies are challenging to navigate through the publication process. One explanation is that neither an analytical modeling reviewer nor an empirical archival reviewer is satisfied with the paper's incremental contribution. One result of this gap is poorly specified hypotheses, which in turn lead to threats to internal validity (Bertomeu et al. 2015). Not all hypotheses need to be generated by a mathematical model (Jensen 1982, 241–242). When verbal arguments are used, however, researchers need to recognize and communicate the key assumptions and limitations of the “theory.”
Research that contains inconsistent evidence is also underrepresented or ignored (Bamber, Christensen, and Gaver 2000). This gap is particularly troubling because conflicting evidence is the branch in Figure 1 that ultimately generates new hypotheses.12 A common reason for reviewers to challenge publication is that inconsistent evidence can reflect a lack of statistical power rather than an economically meaningful finding. Recent advances in data analysis techniques provide opportunities to enhance power, but accounting researchers tend to lag economics and finance in the adoption of new methods, such as structural modeling, that would allow researchers to provide more compelling evidence (Gow, Larcker, and Reiss 2015).
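The power concern can be made concrete with a small Monte Carlo sketch. This is my own illustration, not part of the editorial; the function name, effect size, and sample sizes are assumptions chosen purely for exposition. The point is that with a small true effect and a modest sample, a standard two-sample t-test will usually fail to reject, so an "insignificant" result says little about whether the effect exists:

```python
import numpy as np
from scipy import stats

def simulated_power(effect_size, n_per_group, alpha=0.05, n_sims=2000, seed=0):
    """Estimate the power of a two-sample t-test by simulation.

    effect_size is the standardized mean difference (Cohen's d) between
    the treated and control groups; power is the fraction of simulated
    samples in which the test rejects the null at level alpha.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)          # null group
        treated = rng.normal(effect_size, 1.0, n_per_group)  # shifted by the true effect
        _, p_value = stats.ttest_ind(treated, control)
        rejections += p_value < alpha
    return rejections / n_sims

# A small true effect (d = 0.2) with a modest sample is usually "insignificant" ...
low_power = simulated_power(effect_size=0.2, n_per_group=50)
# ... but the same effect is detected almost surely with a much larger sample.
high_power = simulated_power(effect_size=0.2, n_per_group=1000)
```

Under these assumed parameters, power with 50 observations per group falls well below conventional thresholds, while the identical effect is detected reliably with 1,000 per group, illustrating why conflicting or null results may reflect sample size rather than economics.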
Finally, robustness tests (sensitivity analyses) of empirical archival evidence, which JFR defines as a form of replication, are underrepresented. A common explanation for the lack of replication across disciplines is that journals want to publish “new” ideas.13 Since the mid-2000s, many of the major journals in econometrics, economics, finance, and accounting have instituted policies requiring some form of data/code availability. Data access is predicted to make replicating an analysis less costly, which will in turn motivate authors to attempt replications and hence increase the threat of detection. Making data more accessible, however, is not enough. Authors do not know ex ante whether the replication will confirm results—and thus face publication challenges—or whether the new data analysis will lead to potentially publishable new findings. Because of this publication risk, researchers not only need data access but also need outlets that commit to publish replications whatever the outcome. The American Economic Review claims to have taken the lead in 2005 by instituting a data and code-sharing policy and simultaneously encouraging submissions of replication/robustness studies.
JFR is committing to publish replications as part of its content. To ease the tension between the desire to allocate limited journal space to new findings and the scientific need for replication, JFR's structure allows for shorter articles and encourages electronic content. See a further discussion of the expectations for replications in the fifth section. JFR is not alone in this effort. The Journal of Finance has created a new section for replications and Corrigenda. The Critical Finance Review is promoting a series of issues dedicated to replicating specific influential empirical papers in financial economics. Behavioral Research in Accounting, another AAA section journal, has added a section called “Research Notes,” with the stated purpose of providing space for replications and studies with nonsignificant results.
Research using experimental methods suffers from many of the same challenges as analytical modeling and empirical archival research. Some experimental studies are close to “pure” research (e.g., human decision-making biases when dealing with quantitative data). These observations can inspire theories and hence testable hypotheses of practical relevance, but as in the case of analytical models, the immediate and direct connection to practice may not be obvious. In addition, studies that exactly replicate an experiment are rare, and studies that are similar to an existing study but find either consistent or conflicting evidence are underrepresented, facing the same challenges as replications of archival studies.
An observation specific to experimental research is that acceptance rates of research using experimental methods have actually increased in recent years. Given that experimental research would have been considered an underrepresented method in the 1990s, this trend suggests that the current situation can be reversed. In fact, I actively sought out an experimental study for this issue to set an example that JFR is open to all data collection methods. I identified several interesting prospects, but when I checked the status of the papers on the authors' CVs, all of them were already listed as R&R at an established journal. The bad news, however, is that submission rates have remained fairly constant. This trend suggests that wary doctoral students have not yet seen the increasing opportunities to publish studies using experimental methods. Perhaps as more students view experimental methods as “safe,” more will get the necessary training to properly use experimental methods, which will eventually lead to an increase in submissions.
All Other Methods
The AAA defines all other methods to include surveys, field studies, and case studies. Given that these approaches might be most appropriate for generating observations that inspire new theory as depicted in Figure 1, the low representation of all other methods is consistent with the earlier point that we do not observe provocative “puzzles.” JFR is not alone in recognizing this gap, and there are some promising signs. For example, Bloomfield, Nelson, and Soltes (2015) provide a framework for the use of alternative data collection methods in financial reporting research and a comprehensive discussion of examples and considerations in choosing one method over another, which is likely to encourage the use of these methods. Also, since the publication of “The Economic Implications of Corporate Financial Reporting” by Graham, Harvey, and Rajgopal (2005)—which is a Web of Science “highly cited” paper—authors increasingly view survey articles as publishable in the established top-tier journals, and casual observation suggests that well-executed surveys are now a more accepted data collection method than in the past.14
Not as much progress has been made in the use of field studies in financial reporting research. Part of the problem could be terminology. Field studies include (1) observation of existing conditions, and (2) controlled experiments in which the researcher manipulates conditions in a natural setting. Observational field studies can also be called case studies, which should not be confused with a “case.” A case is a tool used to convey knowledge, commonly in a classroom setting, by requiring readers to understand a phenomenon through active problem solving rather than through passive observation. A case study is an analysis meant to address a research question according to the scientific method in which the data are gathered in the field. Soltes (2014) is an excellent example of a recently published scholarly observational field study (case study). JFR is not intended to be an outlet for cases. JFR does, however, encourage case studies that are an important element of a scholarly research program.
There are practical explanations for the lack of financial reporting research that collects data using a controlled field experiment. It is impossible to manipulate a single firm's financial reports and evaluate variation in capital market reactions across treatment and control groups of traders. Laboratory markets can be used as an alternative, but financial reporting researchers have not followed economists in the increasing use of this approach.15 Elliott, Hobson, and White (2015) present an excellent example of a recently published experimental market study.16
Controlled field experiments involving multiple firms are challenging because the researcher must gain participation from enough firms to adequately control for firm-specific characteristics. Gaining sufficient participation can be difficult, however, because firms may be reluctant to participate in an experiment that is predicted to have capital market consequences. Financial reporting researchers can look for natural experiments, but natural experiments with randomized designs are rare.17 Hence, researchers must address the design issues that result because the experiment is only a quasi-natural experiment (e.g., selection bias). In addition, researchers are constrained to address questions that the data from the naturally occurring situation can answer, inhibiting innovation. Progress is possible, but researchers will have to be creative and actively seek opportunities, either privately or with the cooperation of regulators, to access this important source of data.
JFR's STRUCTURES AND POLICIES
After assessing the situation, the Committee decided that providing an outlet for innovative research in ideas and approaches is an important first step to motivate researchers, but it is not enough. The Committee recognized that JFR would have to structure the journal's procedures and content in a way that would change current reviewer norms and author mindsets. A summary of JFR's important structural features follows.
The Peer Review Process
Reviewers Are Asked to Consider the Paper's Merits in Addition to Its Limitations
Two forces can lead reviewers to challenge innovative studies. First, reviewers (and humans in general) have an inherent bias against paradigm-shifting ideas (Armstrong 1997). If a statement conflicts with our existing beliefs, we require more evidence to change our priors than we would to establish new beliefs. Thus, reviewers naturally push back harder against conflicting evidence. Second, reviewers (and junior reviewers in particular) are tempted to focus their attention on the flaws of a paper, perhaps in an effort to impress editors with a thorough review (Armstrong 1997). After writing a negative report, it is difficult to justify a “revise” decision, and the path of least resistance is to reject the paper.
To avoid this trap, JFR specifically asks that reviewers include in their report (1) a discussion of the paper's merits, (2) a discussion of the paper's limitations, and (3) an assessment of the feasibility of addressing the limitations. The Committee hopes that explicitly asking reviewers to identify merits will motivate reviewers to take note of innovation in ideas or approaches, which can help align their incentives with JFR's goal to publish innovative research. Requests (2) and (3) are meant to remind reviewers that it is acceptable for papers to have limitations. Keep in mind, however, that tolerating limitations does not imply that JFR will tolerate sloppy work or sacrifice execution quality.
Reviewers Do Not Make an Editorial Recommendation
Reviewers will be asked to provide an assessment of the paper, but not a recommendation about publication. JFR's coeditors are responsible for weighing the paper's merits against the limitations to ultimately make a reject/revise/accept decision, leaving the “risky” decision making to them. The literature on peer review suggests that reviewers, especially junior reviewers, often believe that acceptance of controversial or provocative papers will look weak to editors, and this career concern creates pressure to reject. The coeditors will value reviewers' separate identification of a manuscript's merits and limitations but will not ask for a publication recommendation.
The Committee's decision to increase the decision-making responsibilities of the editor is connected to its decision to have three coeditors.19 Having multiple editors is a practical decision given that we expect the editor's involvement in each paper to be more time consuming. More importantly, however, having multiple editors can facilitate innovation (Armstrong 1997, 77). Editors face a risk of making Type I and Type II errors. The individual cost of rejecting a “good” paper is low (i.e., who will know), but the cost of publishing a “bad” paper is high (Pencavel 2008). While editors may be aware of the bias against innovation that this outlook creates, it is nonetheless difficult to overcome. Having multiple editors allows for collaboration, not so that they can share the blame, but so that they can share information, which can reduce the sense of risk they are taking when they accept an innovative paper.
Commentaries

A novel feature of JFR's content is “Commentaries” for some published manuscripts. Commentaries, expected to be written by reviewers in many cases, can be used to provide (1) a balanced exposition for studies that present results or ideas that conflict with existing evidence or paradigms, (2) independent and objective insights about how the findings in an individual paper can be a meaningful contribution when viewed as part of the broader literature, and/or (3) emphasis on the limitations of a study. Commentaries are a valuable element of scholarly dialogue and numerous scholars across disciplines lament the decline in their use.20 The most generous explanation for the decline is that intensified review processes have removed the need. An unfortunate explanation is that their decline is consistent with the decline in innovative ideas that inspire debate. More sinister explanations are that the decline is the result of (1) impact factor management, because citations per page of original discovery research are higher than for commentaries, and (2) a desire by editors for insider citations, which they can more easily force on discovery research.
One might expect that JFR will eventually succumb to the publication pressures that have been used to explain the decline and that JFR will eliminate commentaries as well. I expect, however, that JFR's ongoing leadership will keep in mind that commentaries can increase the visibility of the original discovery research. Thus, the original articles are cited more than they otherwise would have been, which increases the overall impact of the journal. Unfortunately, this statement cannot be verified.
A second motivation for including Commentaries is that the opportunity to write a commentary can help align reviewers' incentives with JFR's mission to encourage innovation. Time constraints make it impossible for reviewers and editors to ensure that each paper is properly messaged; rejection is easier. Providing the opportunity to write a commentary serves two purposes: It decreases the reviewer's risk of supporting an innovative paper, and it publicly acknowledges the reviewer's contribution to the balanced messaging in the paper.
JFR will publish themed issues in addition to its regular issues each year. Themed issues are dedicated to studies that can help launch a new question or move the literature in an existing area forward. The themed issues will have a guest editor who is an expert in the topical area. In the start-up phase, the plan is to have one themed issue per year. The long-term goal is to publish themed issues as pressing topics and contributors are identified. The first themed issue, expected in Spring 2017, will be dedicated to research on standard setting, and Katherine Schipper will be the guest editor.21
Perspectives and Discussions
A third important feature is designated space for content called “Perspectives” and “Discussions.” Perspectives are thought-provoking articles about a new issue in financial reporting or suggested directions for future research. Discussions are summaries of lessons learned from existing research. Most “Perspectives” and “Discussions” will be solicited, but JFR is open to unsolicited submissions of both.
JFR will encourage the use of electronic content that is supplemental to the journal's published articles. The space, intended primarily for extensive sensitivity analysis and robustness tests of results, is useful for two reasons. First, as discussed previously, JFR will place weight on innovation and thus tolerate less-than-perfect experiments. One way to mitigate the inherent limitations of such studies is through extensive sensitivity analysis, and online appendices provide the space. Second, the availability of online space allows the editors and reviewers to reinterpret the ultimate question from whether to publish to how much space should be given to a paper (Armstrong 1997). As noted by Basu (2012), authors feel a need to “bloat” their papers to fit the standard length of papers observed in the existing journals. The result, at best, is that the paper is longer than needed to convey the paper's main findings and, at worst, is that the authors overinterpret the results in an attempt to support a larger contribution. Making online space available allows editors (and authors) to match a paper's length to its contribution.
THE INAUGURAL ISSUE ARTICLES
Before I discuss how the articles in this issue provide examples of the kind of research JFR seeks to encourage, I have a few general comments about the selections.
The examples do not illustrate the “broad spectrum of financial reporting issues related to the production, dissemination, and analysis of information produced by a firm's financial accounting and reporting system” (see the mission statement) that JFR will publish in future issues. I focused on finding high-quality examples of studies with innovative ideas and/or approaches. I was unable to diversify on both approach and topic, and I put less weight on the need to set an example of topic diversity.
While it is true that the authors in this issue are visible and recognizable researchers with established publication records, readers cannot and should not infer favoritism. Editorial “favoritism” implies that the publication standards were lowered to benefit these authors. The article selection should instead be viewed as the outcome of editorial “activism.” Despite public announcements about the new journal, I had only a small voluntary submission flow. I had to encourage authors to submit their studies, all of which were then subject to peer review.22
In fact, I am grateful to the authors who were willing to work through the publication process prior to the official start of submissions via AllenTrack and, more importantly, to publish in a new journal without an established reputation. I am also grateful to the authors of the commentaries included in this issue: Anderson and Hopkins (2016); Barron (2016); Hribar (2016); Lee (2016); Wagenhofer (2016); and Welker (2016). It is a tribute to the journal's mission that distinguished researchers in the field were willing to submit articles. In other words, rather than assuming that having recognized authors in this issue reflects editorial favoritism, readers should attribute this outcome to the willingness of the authors to support the new journal, which shows their confidence in the longer-term prospects for the journal's success.
All of the discovery research articles in this issue are accompanied by a “Commentary,” which will not necessarily be the case in ongoing issues. The authors took different perspectives on the tone/nature/depth of a commentary. The variety shows the various ways that commentaries can contribute to a scholarly dialogue; there is no formula.
The examples are presented in order of submission date.
“Asymmetric Reporting” by Christopher S. Armstrong, Daniel J. Taylor, and Robert E. Verrecchia
This analytical model is an example of a paradigm-shifting idea. Armstrong et al. (2016) construct a model that can generate an existing/known phenomenon—asymmetric reporting—that has heretofore been commonly assumed to be the outcome of a different source. Of course, the model depends on several assumptions and abstractions; thus, whether this theory can help explain observed asymmetric reporting, on average or for some subset of firms or in certain time periods, is yet to be determined. The authors, however, do not hide the assumptions that are essential to the model's predictions. As such, in addition to an innovative idea, this paper exemplifies guidance in JFR's tips for authors: “Given JFR's tolerance for compromises, it is essential that authors are transparent about the assumptions/limitations of the analysis, especially the results of sensitivity analysis.” This guidance applies to both analytical models and empirical work.
“Do Compustat Financial Statement Data Articulate?” by Ryan J. Casey, Feng Gao, Michael T. Kirschenheiter, Siyi Li, and Shailendra Pandit
This paper by Casey et al. (2016) illustrates two features of a good “methods” paper, which serve an essential role in the research process for a field like financial reporting in which researchers are forced to use imperfect proxies. First, the findings affect a high proportion of financial reporting researchers because they relate to Compustat data, which is a source for at least some data in most archival financial reporting studies. Second, it is not proposing a “shortcut” method for conducting research. The study provides a thorough analysis of the data so that a researcher facing a measurement choice can make an informed decision about the likelihood of errors and potential biases.
“A Re-Examination of the Cost of Capital Benefits from Higher-Quality Disclosures” by Frank Heflin, James R. Moon, Jr., and Dana Wallace
As the paper's title indicates, this analysis re-examines existing evidence on the relation between cost of capital and disclosure quality. Early studies documented an association, but a later study shows that the result is attenuated when controlling for the fundamental quality of the earnings being disclosed. Heflin et al. (2016) do not claim that the findings from any of the prior studies are wrong. Rather, they provide a thorough, well-executed analysis that attempts to reconcile mixed findings from studies that use different samples and settings and that measure theoretical constructs in different ways. Importantly, the paper ends with explanations for why the results did not reconcile, which provide a clear path for future researchers to investigate the nature of the relation between disclosure and cost of capital.
“Do Analyst Forecasts Vary Too Much?” by Russell J. Lundholm and Rafael Rogo
This paper provides an example of research that can stimulate innovation by carefully documenting and characterizing a previously unknown phenomenon. Lundholm and Rogo (2016) develop a method to measure an entirely new property of analyst forecasts, variance bound violations. Observations of violations can generate new theories about how analysts forecast and how traders process and use information. The authors conduct a rigorous empirical estimation of the variance bound violations. The estimation technique is an innovation on its own given that these authors are the first to attempt to estimate variance bound violations for analyst forecasts. The study includes carefully and thoroughly executed tests of the association between the estimated variance bound violations and analyst characteristics and between the estimated violations and stock price patterns. A reviewer could assert that the analysis does not establish causality, either about analysts' behaviors or traders' decisions. However, asking for this paper to provide causal evidence is unreasonable for two reasons. First, the paper would have to decrease the rigor of the discussion of the econometric estimation of the variance bound violations, which would significantly decrease the paper's contribution. Second, requiring causal evidence would delay publication of this innovative finding, which then delays the development of new theories and hypotheses (Figure 1, Stage 2) and the use of this measure as a new approach for causal testing of existing hypotheses (Figure 1, Stage 3).
“Experienced Financial Managers' Views of the Relationships among Self-Serving Attribution Bias, Overconfidence, and the Issuance of Management Forecasts: A Replication” by Robert Libby and Kristina M. Rennekamp
Two features of Libby and Rennekamp's (2016) article make it an excellent example of a replication.23 First, they improve on the sample selection but otherwise exactly replicate the original experiment (i.e., a survey administered to participants). Second, this example illustrates that replications should be concise. As noted in JFR's editorial policies, a replication study should provide:
A limited review of the essential features of the analysis being replicated: the research issue addressed, the contribution of the original article, and the key differences between the manuscript's analysis and the replicated study. The remainder of the paper only needs to provide a limited summary of the analysis that restates the central theory and hypotheses or research questions addressed in the replicated study. Authors should provide more detail about the sample, if using a new sample is the purpose of the replication, or about any new variables. Sufficient results should be presented to support conclusions drawn regarding the comparison of the results of the current paper to the replicated study.
Libby and Rennekamp (2016) thoroughly describe the selection process for the subject pool, which is the new aspect of the study; they provide a brief discussion of the survey instrument since that is not new; and they provide a concise summary of the results, placing emphasis on the one result that is significantly different.
Perspectives on the State of Fundamental Analysis Research
Peter Easton (2016) and Stephen Ryan (2016) were both given the topic for a “Perspective.” Fundamental analysis research is a foundation in financial reporting, yet such studies are a smaller part of the literature than they used to be. Peter and Stephen were given limited instructions: The article should be their views on the issue, not just a cataloguing of prior findings (i.e., not a survey of the literature). Having two authors write about the same topic illustrates JFR's desire to inspire scholarly dialogue. I knew each author would provide a thoughtful perspective, but I also expected their views to be different. Both authors fulfilled the goal, which is to suggest innovative paths for improving our understanding of the role of accounting information in fundamental analysis.
LOOKING TO THE FUTURE
JFR's editors (Mary Barth, Anne Beatty, and Rick Lambert), the FARS ad hoc Committee, the newly formed FARS Publications Committee, and I are optimistic that JFR can move the profession in a desired direction. JFR is not simply more pages. It provides more pages for certain types of research that are an important part of a scholarly dialogue but that are underrepresented in the current literature. This inaugural issue is dedicated to defining by example the types of studies that are necessary ingredients for progress. The idea of dedicating an issue to the very types of research that are purportedly not being done may seem fundamentally flawed. Fortunately, there are enough researchers who continue to pursue innovative projects despite the anticipated publication challenges. The Committee hopes that providing an outlet encouraging such research will change the expectations of authors and reviewers and ultimately increase the supply of innovative research.
JFR's desire for greater innovation is neither new nor unique to the financial reporting discipline. In fact, initial discussions about JFR were inspired by the goals and success of the Critical Finance Review. Within the field of financial reporting research, the established top-tier accounting journals are also victims of the naturally occurring concentration in formulaic research approaches and safe topics, and they have made efforts, as noted in this editorial, that are helping to reverse the cycle. The Committee hopes that the content and policy choices for JFR expand and strengthen their efforts.
While we are optimistic that JFR has instituted structures that will increase innovation, there are two contributing factors to the current situation of stagnation that JFR cannot hope to affect through its design, at least in the short run. The first is career concerns created by the promotion decision process. In the early days of the journal, regardless of the actual quality of the articles in JFR, a JFR publication will not “count” at universities that mechanically count numbers of publications in outlets on their list. Thus, JFR will not immediately benefit all junior scholars as a new publication outlet. However, JFR has a long-run view. In the start-up phase, JFR editors will maintain high standards for execution quality, and the FARS Publications Committee will actively promote JFR's high-quality content in order to provide the published papers with an opportunity to have a measurable impact. JFR will need senior faculty to invest in the new journal by submitting high-quality innovative studies that fit the journal's mission. The intent is to be a journal that “counts.”
The second contributing factor that JFR cannot directly address through its design is the lack of doctoral student training in certain approaches. Students pursue training in popular methods because they perceive there to be better publication prospects. The result is that the concentration in studies using the popular methods intensifies, not because studies using less popular approaches would be unacceptable, but because researchers do not conduct research using these methods or, if they do, the execution quality is low. JFR does not intend to publish poorly executed studies just because they use an underrepresented method. Sacrificing execution quality to send a signal of diversity is not in the long-run best interest of the journal. JFR's long-run view is that especially encouraging such research will motivate doctoral students and junior faculty to learn new methods and approaches.
I am sympathetic to the career concerns that are at least partially to blame for the current situation. I wish I could convince junior faculty that the pursuit of publication, rather than the pursuit of knowledge, is a short-term and myopic strategy.24 But just making that plea will not change the situation. JFR is putting these words into action by providing an outlet to publish innovative research.

—Catherine M. Schrand
University of Pennsylvania
Because JFR is a new journal with no submission flow, I identified most (not all) of the articles by scouring ssrn.com, google.com, and scholar.google.com. I then contacted the authors to inform them about the journal and asked them to submit the paper. The submitted manuscripts were subject to peer review by one or more external reviewers.
These percentages are based on Valacich, Fuller, Schneider, and Dennis (2006, Tables 3 and 5), excluding allocations of Management Science articles across fields. The journal list for each discipline follows: (1) Accounting: The Accounting Review, Journal of Accounting & Economics, Journal of Accounting Research; (2) Finance: The Journal of Finance, Journal of Financial Economics; (3) Management: Academy of Management Journal, Academy of Management Review, Administrative Science Quarterly, Strategic Management Journal; and (4) Marketing: Journal of Consumer Research, Journal of Marketing, Journal of Marketing Research. Some premier journals are missing from these lists. Valacich et al. (2006) note that the lists were constructed to make consistent comparisons with earlier studies.
There is a large literature within and outside of accounting that provides a similar message; this editorial cites only some of the relevant articles. I will quote extensively from Armstrong (1997), who provides a survey of studies on the consequences of peer review in the natural and social sciences. In summary, the analysis provides evidence (albeit dated) that supports the Committee's intuition about current conditions in the financial reporting field. Many of the messages are consistent with two articles written after the AAA's strategy retreat in 2011 on stagnant accounting research (i.e., Basu 2012; Waymire 2012) and with Moizer (2009).
This state of stagnation is not unique to financial reporting research. For business disciplines more generally, Armstrong (1997, 79) concludes that the reduced likelihood that important controversial findings will be published or delayed “is expected to become worse as the number of submissions increases. Paradoxically, then, the increase in the number of submissions is expected to lead to a decrease in the proportion of published papers with innovative findings … One would expect the most serious losses to occur for the leading journals.”
There are many such figures that illustrate the scientific method. The process is commonly depicted as a circle with no entry or exit point.
Examples of practitioner-oriented journals include Accounting Horizons, Financial Analysts Journal, and Journal of Applied Corporate Finance.
The increasing focus on causality in accounting research followed a similar trend in economics. For example, Angrist and Krueger (2001, 72) describe “a flowering of recent work” that uses instrumental variables, although these techniques had long been available. Accountants seem to have adopted the procedures with less scholarly debate about the limitations and hence more misunderstanding about the implications for inferences. Poor execution and inference in studies that purport to provide causal evidence is a significant problem in the literature (Bertomeu, Beyer, and Taylor 2015). This problem, however, is distinct from the problem I am discussing, which is how an overemphasis on causality can have detrimental effects on innovation.
This comment applies to studies that use experimental methods as well.
See Basu and Park (2014) for evidence of unusual p-value distributions in published accounting research, which suggests that authors feel enough pressure to produce positive results that they are willing to engage in what the authors generously call “selective” reporting of results, defined as “reporting of statistically significant results or searching across research designs for p-values for test variables below conventional significance levels.” Their findings sadly highlight the need for replication.
The importance of publishing replications has gained momentum recently due to several highly publicized cases of irreproducible findings in both the natural and social sciences including psychology, political science, and economics. Pashler and Wagenmakers (2012) call the situation in psychology a crisis of confidence.
I identified two papers using survey data that I wanted to include as examples in this inaugural issue, but both author groups believed they would be able to publish in an established journal that would count for them.
According to Noussair (2011), the number of published papers in the broadly defined area of experimental economics has increased 75 percent in 2006–2010 over 2001–2005, but most of the increase has been in specialized experimental journals rather than general interest economics journals. Experimental markets studies represent approximately one-fourth of the broad category.
Again, I contacted two author groups asking if they would submit a laboratory market study. Neither group was willing to submit because some of the co-authors are from economics departments and publication in an accounting journal, despite potentially greater impact, would not count for them.
A fairly pure example is the SEC's pilot test designed to evaluate Regulation SHO. The SEC selected every third stock in the Russell 3000 index ranked by volume to trade without short sale price tests. Numerous studies use data from this experiment to investigate questions related to the impact of short sale activity/restrictions on prices (e.g., Diether, Lee, and Werner 2009). The SEC also conducted a pilot program for XBRL initiation, but participation in the program was voluntary and rewarded with expedited processing for documents selected for review, thus the experiment is not as clean as the Regulation SHO experiment.
In the long run, editors will serve staggered three-year terms, renewable once.
The idea of themed issues is similar in spirit to the “Big Issues Initiative” proposed at the AAA strategic retreat in 2011, as reported in Waymire (2012, 1090): “It was suggested that once every two to three years, a standing AAA committee would identify a major issue where more research is needed. Then, two or three years hence, prizes would be awarded to best papers, possibly with publication to follow in an AAA journal, in much the same way that the Competitive Manuscript award winner is published in The Accounting Review.”
According to Laband and Piette (1994), such editorial activism is associated with higher overall impact of published articles rather than suggesting lower standards for certain authors. Based on an analysis of more than 1,000 articles in the top 28 economics journals in 1984, they conclude, “on balance their use of professional connections enables them (editors) to identify and ‘capture’ high-impact papers for publication” (Laband and Piette 1994, 194).
This manuscript is a replication of one element of a published article for which data were provided by James E. Hunton. See the Commentary by Spencer Anderson and Pat Hopkins (2016) for further discussion.
The effort to start the journal involved tremendous work by an ad hoc FARS Committee (Mark Bradshaw, Ted Christensen, Marlene Plumlee, Darren Roulstone, Jennifer Wu Tucker, and Alfred Wagenhofer). Many of my comments are the result of our discussions as a Committee. I appreciate the assistance of Chris Ittner, Brian Bushee, Chris Armstrong, and Rodrigo Verdi in providing preliminary reviews of articles (not ultimately solicited for submission) for this issue. I appreciate conversations with Greg Waymire and Shyam Sunder related to the journal's mission. I thank various AAA staff, especially David Boynton, Elizabeth Garrett, and Diane Hazard.