Objective

The purpose of this report is to provide a summary of the methods used and feedback from reviewers about the peer review process for the 2023 Association of Chiropractic Colleges Educational Conference and Research Agenda Conference (ACC-RAC).

Methods

After the peer review process was complete, the 2023 ACC-RAC peer review committee members were invited to provide feedback through an anonymous electronic form. The survey included a Likert scale to rate items about the peer review process and an option for open-ended comments.

Results

Of the 166 peer reviewers, 77 (46%) completed the survey. The reviewers represented 9 countries, with the greatest number from North America. The majority (95%) of respondents rated the process of peer review in topic groups as good to excellent, and the majority (92%) of respondents rated the overall 2023 peer review process as good to excellent. The critical comments that were submitted are addressed in this report.

Conclusion

Overall, peer reviewer satisfaction with the process used for the 2023 ACC-RAC was high. We will include information from this report as part of the continuous quality improvement of the peer review process, an important part of improving chiropractic education, research, and scholarly activities.

The Association of Chiropractic Colleges Educational Conference and Research Agenda Conference (ACC-RAC) provides a meeting for discussion and presentation of the shared educational, scholarly, and research-related needs of the chiropractic academic and research community.1-5 The goal of the scientific component of this conference is to provide a venue for scholarly activity through the process of peer review and presentations of completed educational, clinical, and basic science research to promote contributions to the peer-reviewed literature.

Peer review is an important process typically associated with manuscript submissions to scientific journals.6 

Peer review is the critical assessment of manuscripts submitted to journals by experts who are usually not part of the editorial staff. Because unbiased, independent, critical assessment is an intrinsic part of all scholarly work, including scientific research, peer review is an important extension of the scientific process.7 

When applied to scientific conferences, peer review can assist with the following:

  1. Selecting the submissions of the highest quality and relevance to the mission of the conference.

  2. Identifying significant flaws in submissions that may jeopardize the quality of the scientific conference.

  3. Improving the critical appraisal skills of peer reviewers.

Peer review is complex and involves many factors that must be implemented properly. Issues that need to be managed regarding abstract submissions include conflicts of interest, bias, ethical breaches, and scientific misconduct. Reviewers need to be recruited; they volunteer their time and receive no remuneration that might influence their decisions, which helps reduce bias and creates a fairer environment. A blinded review process also attempts to reduce bias; however, since humans are involved, unconscious biases may remain.

Reflection and review are part of the continuous quality improvement that keeps a conference's peer review process up-to-date and relevant. Over the years, the conference has improved based on the needs of chiropractic education. Obtaining feedback from the peer reviewers who participate in the process helps to inform and guide what happens with peer review for future conferences. Therefore, the purpose of this report is to provide a summary of the methods used for the peer review process for the 2023 ACC-RAC and feedback from reviewers. This information will be used to inform the peer review process for future conferences.

Peer Review Process

A call for peer reviewers was distributed to the global chiropractic academic and research community. This year, we used a batching process similar to that used by other large scientific conferences. To provide a fairer review process, we clustered abstracts into the following topics used for abstract submissions:

  • Education Research (research on teaching, learning, and education programs)

  • Clinical and Basic Science Research (experimental research, quantitative studies)

  • Public Health and Epidemiology Research

  • Clinical Case Studies (case reports, case series)

Peer reviewers were asked to choose which topic represented their area of expertise and were assigned to the corresponding cluster of abstracts. Thus, this approach assigned abstracts to reviewers who self-identified as being most experienced in a particular area.

After blinding the abstracts, batches of 20 to 30 abstracts were sent to the respective topic teams. Each reviewer was asked to score and make a recommendation for platform, poster, or reject for each abstract. For reject recommendations, they were asked to provide comments to support their decision. Reviewers were asked to recuse themselves if they were involved with the study or were aware of any bias.
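
To illustrate the workflow described above, the following is a minimal Python sketch (with hypothetical field names and a hypothetical batch size; it is not the committee's actual tooling) of how blinded abstracts could be grouped by topic, split into batches of roughly 20 to 30, and assigned to reviewers who self-selected that topic.

```python
from collections import defaultdict
from itertools import cycle

# Minimal sketch of the clustering, batching, and assignment steps described above.
# The field names ("id", "topic", "name") and the batch size are illustrative assumptions.

def make_batches(abstracts, batch_size=25):
    """Group blinded abstracts by topic, then split each topic into batches."""
    by_topic = defaultdict(list)
    for abstract in abstracts:
        by_topic[abstract["topic"]].append(abstract["id"])
    return {
        topic: [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]
        for topic, ids in by_topic.items()
    }

def assign_reviewers(batches, reviewers):
    """Give each reviewer one batch from the topic they self-selected (round-robin)."""
    assignments = {}
    batch_cycles = {topic: cycle(topic_batches) for topic, topic_batches in batches.items()}
    for reviewer in reviewers:
        topic = reviewer["topic"]
        if topic in batch_cycles:
            assignments[reviewer["name"]] = next(batch_cycles[topic])
    return assignments
```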

The reviewers were asked to rate abstracts for quality, relevance, and award suitability. Each abstract was reviewed by 10 or more reviewers. Once all review scores were collected, we compared data within the batch to assess reviewers' scoring trends and consider any outliers. From the collated review of the rating scores and comments, decisions were made for each topic category for platform or poster presentation. The review process is shown in Figure 1. A list of the titles and authors of accepted abstracts was sent to ACC headquarters for the ACC-RAC Planning Committee to include in the program. To be included on the program and in the conference proceedings, presenters of accepted abstracts were required to register with the ACC for the conference. All accepted abstracts that had registered presenters were included in the final publication of the 2023 proceedings.8 
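
Below is a hedged sketch, in the same spirit, of how review scores within a batch might be collated and outlying reviewer scoring trends flagged before decisions are made; the data structure and the z-score threshold are assumptions for illustration only, not the process the committee actually used.

```python
from collections import defaultdict
from statistics import mean, stdev

# Illustrative sketch only: collate scores and recommendations per abstract,
# and flag reviewers whose average score deviates strongly from the batch norm.

def collate_reviews(reviews):
    """Summarize scores and platform/poster/reject recommendations per abstract."""
    collated = {}
    for review in reviews:
        entry = collated.setdefault(review["abstract_id"], {"scores": [], "recommendations": []})
        entry["scores"].append(review["score"])
        entry["recommendations"].append(review["recommendation"])
    return {
        abstract_id: {
            "mean_score": mean(entry["scores"]),
            "n_reviews": len(entry["scores"]),
            "recommendations": entry["recommendations"],
        }
        for abstract_id, entry in collated.items()
    }

def flag_outlier_reviewers(reviews, z_threshold=2.0):
    """Return reviewers whose mean score is more than z_threshold SDs from the batch mean."""
    per_reviewer = defaultdict(list)
    for review in reviews:
        per_reviewer[review["reviewer"]].append(review["score"])
    reviewer_means = {name: mean(scores) for name, scores in per_reviewer.items()}
    if len(reviewer_means) < 2:
        return []
    overall = mean(reviewer_means.values())
    spread = stdev(reviewer_means.values())
    if spread == 0:
        return []
    return [name for name, m in reviewer_means.items() if abs(m - overall) / spread > z_threshold]
```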

Figure 1

- Flow of abstract submissions into topic areas; peer reviewers reviewed within these topic areas, and abstracts were selected for presentation based on the results of the reviews.

Peer Review Feedback

After completing the review process, the 2023 peer review committee members were invited to complete an anonymous electronic survey to provide feedback on the process. The survey consisted of questions rating their satisfaction and an option for comments, which were collected using SurveyMonkey (Momentive Global Inc, San Mateo, CA, USA).

The call for peer reviewers resulted in responses from 168 experts. Of these, 166 (99%) completed their reviews, representing 8 countries (Fig. 2) and a wide variety of affiliations, including 29 different chiropractic programs (Fig. 3). The names of the peer reviewers are listed in the published conference proceedings.8 

Figure 2

- Number of peer reviewers from various countries. The majority were from the United States.

Figure 3

- Peer review committee members' affiliations that they declared during peer reviewer registration.

Peer Review Feedback Results

Seventy-seven of the 166 peer review committee members (46%) responded to the survey, which is considered a good response rate. The majority (95%) of respondents rated the process of peer review in topic groups as good to excellent, and the majority (92%) of respondents rated the overall 2023 peer review process as good to excellent.

Constructive Comments

Survey comments from the peer reviewers were helpful and included suggestions for improving peer review, which could be implemented for future conferences. These recommendations were summarized into the following 3 areas:

  1. Provide training sessions for peer reviewers (in advance of doing the reviews).

  2. Provide an instruction guide for how to do peer review.

  3. Provide a more detailed rubric for grading abstracts.

When weighing these suggestions for future peer review processes, it is important to consider the following questions as they relate to resources and development. Are peer reviewers willing to go through extra training to do peer review? If provided, will peer reviewers read an instruction guide? Will peer reviewers resist using a more detailed rubric? We will implement improvements and observe the response.

Additional supportive comments were provided in an optional field. These comments supported the process that we used, and this feedback will be considered for replication for future conferences. The following comments are a sample of those provided:

  • “This format and categorization of the abstracts made the process efficient and enjoyable.”

  • “Appreciate the option for the open comments.”

  • “This process was comparable to my experience with other scientific venues. I enjoyed reviewing the abstracts, and reviewing went smoothly.”

  • “Being able to review the topics as a group was helpful in that it allowed me to focus on the abstracts in one area, instead of the grab-bag method of topics jumping all over the place.”

  • “The process was very easy this year. Excellent experience.”

  • “Thanks for your continued efforts to mitigate the angst of our making choices between so much good research!”

  • “I found the new format to be less time-consuming than the past method. Once I learned to navigate the spreadsheet, it was easy.”

  • “I like the ability to save the Excel spreadsheet and go back and work at a later date and change things as needed.”

  • “I found the spreadsheet format easy to use and flexible enough that I could make any comments I wanted to make.”

Critical Comments

Although a majority supported the current peer review process, other comments offered criticisms of the processes we used this year. These criticisms are important to consider because they give insight into how some peer reviewers are thinking. As we work to improve the peer review process, the criticisms will also be kept in mind. Included here are the criticisms with responses to address each concern.

Confusion About the Number of Reviews

One reviewer reported that a colleague said that he or she had "received triple the amount of abstracts to review." We reviewed our records, and nobody received a triple review assignment. Each person received only the number of reviews he or she signed up to do. Only 2 reviewers signed up for 2 batches. The number of abstracts was clearly described in the sign-up form as well as in the instructions to reviewers: "… each peer reviewer will judge a batch of approximately 20 abstracts within one topic domain." If a reviewer received more than the expected number and had a concern, he or she should have contacted the peer review chair.

Claim of Worst Reviewers

One reviewer claimed that “peer reviewers who choose to review case reports are the worst reviewers” and added, “the worst reviewers will pick case studies or other certain categories making it so that those categories get worse reviews and worse abstracts get accepted in those categories.” It is surprising to read such a strong opinion. None of our data support the statement that those who review cases are worse than other reviewers. The study design or topic should not dictate the quality of the peer reviewer. This year, we used teams of reviewers with as many as 20 reviewers per team, so it is unlikely that any one reviewer could strongly influence the outcome scores for an abstract category. Even if there were a “worst” reviewer, this should not have strongly influenced the scores.

Regarding the review process, it is prudent to consider that reviewers with the most experience in a particular topic or study design are likely the best suited to review those abstracts. Most would agree that it would be better to have people with experience reviewing in their area of expertise. It is suggested that the same should be applied to topics and types of studies—including case reports. For our peer review committee, those who self-selected to review case studies were the ones reviewing the case abstracts. Thus, we propose that the best people for the job were the ones reviewing those abstracts.

Including Case Reports

One comment stated, “Why are case reports allowed in the conference? They are not even ‘real' research.” There are many opinions about case studies. Some argue that case studies should not be allowed at conferences since they are not experimental research. Others argue conversely that there is not enough relevant and practical clinical content in the scientific presentations and that even more case studies are needed to provide context to the growing knowledge base. Certainly, case studies should never be confused with experimental research studies. However, case studies fulfill an important purpose.

For decades, case studies have been recognized as essential teaching and learning tools in health professions education. Also, cases are an important means of entry for chiropractic faculty and practitioners to present their scholarly work at conferences, thus building scholarly capacity for the profession. These experiences are particularly relevant for residents to learn the skills needed to present and publish. Case reports help authors develop critical thinking skills, evidence-based practice knowledge, and writing skills. Therefore, case studies, even though they are not research per se, are essential and beneficial to the growth of scholarship for the chiropractic profession, and they are included in this conference in both platform and poster formats.

Cases are only one set of presentations available. Many recognize and appreciate that the ACC-RAC has multiple tracks in the conference program that allow people to choose which sessions they attend. Thus, the hard-core researchers may choose to submit to and attend the experimental research tracks. In turn, health care providers who are interested in practical applications may choose to attend the case studies track. The case study track is like a grand rounds session that focuses on evidence-based care and is beneficial to clinical practitioners. We propose that the benefits of including case studies in the conference program outweigh perceived drawbacks. The number of case reports accepted for podium presentations is adjusted in a proportionate manner to the number of accepted research abstracts.

Expert in Many Areas

One respondent said, “Forcing each reviewer to only select a single topic group to which they belong was unhelpful and misdirected.” This reviewer also stated that they felt “pigeonholed” because they had “extensive background in multiple areas of expertise.” Since the survey was anonymous, we cannot argue against or support this respondent's claim. However, extensive background in multiple areas of expertise is uncommon. An expert is defined as a person who has comprehensive and authoritative knowledge of, or skill in, a particular area, and true experts have published in the topic area and are recognized by their peers as experts. It is quite rare to find one person who is a true expert across multiple topic areas. In the peer review process, we must consider the pool of volunteers on the peer review committee. Not everyone is an “expert in multiple areas.” To accommodate this, all reviewers were given the opportunity to select the topic area that they wanted to review, and anyone who wished could sign up to peer review for more than one topic area.

Academic Degrees

One reviewer suggested that “it would have been nice to know the academic level of the authors.” This is an interesting comment and touches on the concept of open versus blinded peer review. It is not clear how knowing author information, such as academic level, degrees, or affiliations, would improve the review process; if anything, it could bias the process and affect fairness. In open review, the names, institutions, locations, and sometimes degrees of the authors are included in the decision-making process. In a blinded review, the authors' information is not present; thus, the peer reviewer considers only the quality of the abstract itself.

The concept of considering authors' degrees in peer review is a bit of a challenge. How would knowing degrees affect the peer review results? Which degrees should be considered? Typically, more advanced study designs are done by a group of authors who have varying academic degrees. Studies led by doctoral students should be under the direct supervision of experienced professors who have advanced degrees. If we were to consider degrees, then the academic level of all authors should be considered, since all authors should have legitimately contributed to the study. Also, when considering degrees, we would need to be cautious about manipulation of author order (eg, putting a junior author first) as a maneuver to obtain a “new researcher” award or higher ratings based on academic level, which would be an ethical concern.

Also, just because someone may possess multiple degrees does not mean that their work is any better than someone with fewer degrees. Over the years, we have seen some very poorly written abstracts submitted by some high-profile authors with multiple degrees. Should we judge the abstract based on the popularity, status, or degrees of an author? This leads to the question “How much weight should author degrees be given in selection of abstracts for presentation at a conference?” For 2023, we used a blinded review process to help reduce bias and increase fairness. This is an interesting topic for analysis and consideration for future conferences.

More Critical Process

One comment stated, “This process doesn't appear to be identifying the difference between the higher quality studies. Suggest a more critical process.” This survey comment did not make it clear what was meant by “higher quality studies.” We interpreted this comment to mean that the concern was about separating the higher quality from the lower quality abstracts within a given topic or study design. We also assumed that this comment was only referring to “quality” and not to “study design” (eg, clinical trial vs descriptive report). To judge an abstract only on its study design and not the quality would be inappropriate, of course.

The primary purpose of the conference abstract peer review process was to reach a decision on whether the abstract should be categorized as accepted for platform, for poster, or for rejection. In the current peer review process, abstracts are limited to 195 words or fewer. Thus, we need to be very cautious about drawing any additional conclusions owing to the limited information in the abstracts. Doing an advanced critical review process within these parameters is therefore not feasible.

Could a more critical review process be created? Absolutely, but modifications to the process would be needed, and there would be more work for both reviewers and authors. To have reviewers complete a more critical review, a more robust representation of the study would be required; authors would need to submit a long abstract of 1000 to 2000 or more words, as is done for other conferences and as we have done with prior ACC conferences. The upside of the long abstract is that authors have more words to show the quality of their work, the reviews are much more thorough, and the selection is more accurate based on quality. The downside is that it takes more time for the authors to prepare a long abstract, and it is more work and time for reviewers to do their critical review, not to mention that the peer review chair hears much more complaining from authors and reviewers about length and time.

The question remains, should we go through a more complicated critical review process to simply select which abstracts are platform, poster, or rejection? This is an interesting question for a future discussion. To serve the purpose for the 2023 conference, which was for reviewers to inform the selection of platform, poster, or rejection, we used the short abstract review process.

Review Criteria

One respondent stated, “The biggest challenge for me is the absence of clear criteria.” For this year, we had criteria, but they were not tailored to specific study designs. Because of the great breadth of the abstract topics and study designs that are typically submitted, providing a more detailed rubric is nearly impossible. For example, a detailed rating of a systematic review would be different from a rating of a laboratory study of biomechanics. Thus, we must rely on the peer reviewers' expertise and knowledge of established study design reporting guidelines when they submit their scores and comments. The instructions to reviewers stated,

The purpose of the ACC-RAC peer review process is to select the highest quality abstracts for presentation. Please judge each abstract on its own merit. Please consider: completeness, the strength of contribution, scholarly rigor, evidence of results, and relevance to the conference. … There is no quota or a specific number of abstracts to be accepted or rejected, each abstract should be judged on its own merit.

We will try to assist reviewers in the future by providing clearer expectations and more detailed instructions that would assist them with their review of a wide variety of topics and research designs.

Peer Review Chair Report

I have had the honor of participating in the growth and development of the peer review process for this conference for many years, serving as the peer review chair for 20 of its 29 occurrences. Over the years, it has been heartening to see the research quality improve and to watch promising young researchers evolve into impressive senior scholars. In addition to attending ACC conferences, I continue to participate in other health professions and education conferences, such as the Association of American Medical Colleges, American Educational Research Association, and Innovations in Medical Education conferences. By attending or presenting at these conferences, I have reflected on how we could make the ACC-RAC an even better conference, and over the years I have proposed improved processes using an incremental change approach.

Much has changed since the first ACC conference that I attended in 1994. The initial ACC Educational Conference meeting and the late 1980s Education Congresses from which it evolved were limited to education research presentations, which were delivered in a short, single-track program.9  When I became peer review chair, I considered the feedback from the research chairs and input from conference attendees about how the conference could be improved. Based on their desire to include additional content, we broadened the call for abstracts to include clinical, basic sciences, public health research, and case reports. During the early years, I was also involved in proposing and implementing the expansion of the conference to include peer-reviewed workshops, tracks organized by attendee interests, and poster presentations. These improvements have broadened the impact that this conference has on those who attend.

Future Considerations for Peer Review

The quality of peer review for this conference has improved over the years and so have the presentations, but there is always room for improvement. The issues surrounding peer review are important and complex, so much so that there are entire conferences dedicated to the single topic of peer review. As long as there is a human factor involved in peer review, it will continue to be an art form with a range of opinions surrounding it. For now, peer review is considered the best method to select manuscripts for journals and abstracts for conferences. Review can be improved by gathering a larger cadre of experienced peer reviewers who can provide high-quality peer review. However, the only way to obtain experience is to participate; therefore, new scholars and novice reviewers need to be invited and included as well.

The quality of what is presented at a conference reflects the peer review process. Each year we see chiropractic scientific conferences grow and the presentations improve in quality. However, another factor to consider is how many high-quality submissions are received. The quality of the submissions determines the outcomes of the platform and poster presentations. No matter how good the peer reviewers and the peer review processes are, if the submissions are of low quality, the results of the conference will reflect this. Ultimately, the quality of the conference rests in the hands of the researchers and scholars who submit their abstracts.

As we look to preparing future chiropractic conferences, we consider what we will need to do to improve the peer review process. Can our peer review process be improved? I believe the answer is “yes”—with the caveat that it can improve when we all work together to make it better. Each year we work to improve the process, and we invite you to join us on this journey.

In closing this report, I thank all the volunteers on the 2023 ACC-RAC Peer Review Committee. Your contributions as peer reviewers and your feedback are essential to continuing to improve the science and scholarship within the chiropractic profession. Your work, input, and dedication are valued and appreciated.

References

1. Johnson C. What is the Association of Chiropractic Colleges Educational Conference and Research Agenda Conference? J Manipulative Physiol Ther. 2007;30(4):249-250.

2. Johnson C, Green B. The Association of Chiropractic Colleges Educational Conference and Research Agenda Conference: 17 years of scholarship and collaboration. J Manipulative Physiol Ther. 2010;33(3):165-166.

3. Johnson CD, Green BN. Association of Chiropractic Colleges Educational Conference and Research Agenda Conference 2015. J Chiropr Educ. 2016;30(1):42-47.

4. Johnson CD, Green BN. Association of Chiropractic Colleges Educational Conference and Research Agenda Conference 2014. J Chiropr Educ. 2015;29(1):49-55.

5. Johnson CD, Green BN. Association of Chiropractic Colleges Educational Conference and Research Agenda Conference 2012. J Chiropr Educ. 2012;26(2):188-191.

6. Johnson CD, Green BN. How to be an awesome peer reviewer. Chiropractic Educators Research Forum. 2020. Accessed June 12, 2023.

7. International Committee of Medical Journal Editors. Responsibilities in the submission and peer-review process. 2023. Accessed June 12, 2023.

8. Association of Chiropractic Colleges. Association of Chiropractic Colleges Educational Conference and Research Agenda Conference 2023: leadership in education. J Chiropr Educ. 2023;37(1):50-70.

9. Green BN, Jacobs GE, Johnson CD, Phillips RB. A history of the Journal of Chiropractic Education: twenty-five years of service, 1987-2011. J Chiropr Educ. 2011;25(2):169-181.

FUNDING AND CONFLICTS OF INTEREST

The author declares consulting income from the Association of Chiropractic Colleges.

Author notes

Claire Johnson (corresponding author) is president of Brighthall, Inc, and chair of the peer review committee for ACC-RAC 2023. She is also a professor at the National University of Health Sciences (200 East Roosevelt Rd, Lombard, IL 60149; [email protected]).