Background

Historically, trainees in undergraduate and graduate health professions education have relied on secondary resources, such as textbooks and lectures, for core learning activities. Recently, blogs and podcasts have entered mainstream use, especially among residents and educators. These low-cost, widely available resources have many characteristics of disruptive innovations and, if they continue to improve in quality, have the potential to reinvigorate health professions education. One potential limitation on further growth in the use of these resources is the lack of information about their quality and effectiveness.

Objective

To identify quality indicators for secondary resources that are described in the literature, which might be applicable to blogs and podcasts.

Methods

Using a blended research methodology, we performed a systematic literature review using MEDLINE, Embase, Web of Science, and ERIC to identify quality indicators for secondary resources. A qualitative analysis of these indicators organized them into themes and subthemes. Expert focus groups were convened to triangulate these findings and ensure that no relevant quality indicators were missed.

Results

The literature search returned 3752 articles, and quality indicators were extracted from 157 of them. The qualitative analysis produced 3 themes (credibility, content, and design), 13 subthemes, and 151 quality indicators.

Conclusions

The list of quality indicators resulting from our analysis can be used by stakeholders, including learners, educators, academic leaders, and blog/podcast producers. Further studies are underway to refine the list into a more structured and stratified form for use by these stakeholders.

Editor's Note: The online version of this article contains the sample search strategy for MEDLINE and the final list of quality indicators for blogs and podcasts.

Historically, learners in health professions education have relied mainly on secondary resources such as textbooks and lectures to acquire important medical knowledge. However, the incorporation of new knowledge into textbooks can take a long time.1 In contrast, lectures are dynamic and often more up to date, but are limited by the expertise of the speaker; in addition, learners must usually attend these in person at predefined times. Recently, blogs, podcasts, and other digital educational resources have been used to accelerate knowledge translation by providing timely, frequently updated resources that are available at users' convenience. As a result, their prevalence in health professions education has increased dramatically over the past decade.2–6

The emergence of blogs and podcasts in education can be viewed through the lens of Christensen's disruptive innovation model. Disruptive innovations introduce new products that are not of comparable quality to existing products but benefit from being simpler, more convenient, and cheaper for the user.7  Just as the papyrus leaf disrupted the traditions of oration and Gutenberg's printing press disrupted the reproduction of key religious texts, blogs and podcasts are poised to disrupt mass-produced textbooks and traditional lectures. Their affordability, accessibility, and timeliness have allowed them to gain a foothold in the traditional market of graduate medical education.7  However, in order for these disruptive forms to become sustainable innovations, they must improve in quality.7 

Currently, there are no standardized methods for measuring the quality of medical education blogs and podcasts. Quality tools and checklists developed for other types of secondary resources have enhanced their reporting and assessment standards: DISCERN8 and the Health on the Net Foundation Code of Conduct (HONcode)9 help the lay public identify high-quality health care websites, while the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA),10 the Cochrane Collaboration,11 and the Standards for Reporting Qualitative Research (SRQR)12 serve systematic reviews and qualitative research. These tools have proved beneficial in ensuring the quality of resources targeted at patients, systematic reviews, and qualitative research; medical learners, however, have no comparable guidance for the use of online resources. Determining quality indicators for health professions education blogs and podcasts could similarly lead to standards that benefit the many learners who use them.4,5

The lack of quality metrics has negative implications for the primary stakeholders: (1) health professions learners, who have no guidance to help discern the quality of these resources; (2) educators, who must rely on their preferences and gestalt to endorse resources; (3) academic leaders, who are unable to evaluate and credit digital educators in the promotions process; and (4) bloggers and podcasters, who have no standards to guide the production of these products. Measuring the quality of blogs and podcasts could support these groups in the use and development of these and other online educational resources.2,6

This review utilized a systematic literature search augmented by a thematic analysis strategy to determine which previously defined quality indicators for secondary resources may be applied to online health professions blogs and podcasts that are targeted toward learners. Focus groups of expert bloggers and podcasters were used to triangulate these results and further enhance the final list by identifying additional relevant indicators. These quality indicators may be able to identify superior blogs and podcasts for graduate learners and guide standards for the development of these online platforms.

Search Strategy

One of the investigators (Q.S.P.) conducted a systematic literature search, with the oversight of an expert librarian, using MEDLINE (OvidSP: 1950–February 2014), Embase (OvidSP: 1947–February 2014), Web of Science (ISI Web of Knowledge: 1899–February 2014), and ERIC (ProQuest: 1966–February 2014). The search strategy (provided as online supplemental material) was designed to find literature containing quality indicators for secondary resources and combined key words and controlled vocabulary surrounding the themes “secondary resource” and “quality.” The results were limited to English-language articles.
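The actual strategy appears in the online supplemental material; purely as an illustration of how such a strategy intersects the two themes, the sketch below composes a boolean query from hypothetical term sets (none of these terms is drawn from the strategy actually used).

```python
# Illustrative sketch only: the real search strategy is provided as online
# supplemental material, and every term set below is hypothetical.
secondary_resource_terms = ["textbook*", "podcast*", "blog*", "webcast*"]
quality_terms = ["quality", "standard*", "benchmark*"]

def or_block(terms):
    """Join the synonyms for one theme with OR."""
    return "(" + " OR ".join(terms) + ")"

# The two themes are intersected with AND, mirroring the described design of
# searching "secondary resource" terms against "quality" terms.
query = " AND ".join([or_block(secondary_resource_terms), or_block(quality_terms)])
print(query)
# (textbook* OR podcast* OR blog* OR webcast*) AND (quality OR standard* OR benchmark*)
```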

Inclusion/Exclusion Criteria

Duplicate articles were removed. Two investigators (B.T., T.M.C.) independently performed a title and abstract review. Articles deemed potentially relevant by both reviewers were included. A third investigator (Q.S.P.) arbitrated articles deemed relevant by only 1 reviewer.

Articles were reviewed in full if their title and abstract indicated that they described quality indicators for secondary resources. Articles were excluded if their abstract specified that the quality indicators had been developed purely to evaluate patient-oriented resources; such resources tend to address a very different set of needs than health professions education resources, which may ultimately lead to a different set of quality markers. The search was not restricted to health professions education articles.

Data Extraction

One of 3 investigators (B.T., Q.S.P., T.M.C.) independently reviewed the full text of the selected articles to extract the following information: year of publication, field of study, type of resource being evaluated, and any quality scores or indicators that were mentioned, listed, or described. We used a piloted form to ensure the consistency of the extracted material. Any concerns were discussed with the group of investigators in order to reach consensus.

Data Analysis

Two investigators (B.T., M.L.) discussed all quality indicators and classified each as relevant to blogs, podcasts, both, or neither. Disagreements were resolved by consensus, and interrater agreement between the reviewers was calculated. An auditor (T.M.C.) reviewed the excluded quality indicators to ensure the trustworthiness of the sorting. Quality indicators excluded in this process were not assessed further.
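The article does not specify the agreement statistic beyond “interrater agreement”; assuming simple percent agreement over the four relevance categories, a minimal sketch of the calculation (with invented ratings) looks like this:

```python
# Minimal sketch: percent agreement between two raters who sorted each
# quality indicator as relevant to blogs, podcasts, both, or neither.
# These ratings are invented for illustration.
rater1 = ["both", "blog", "neither", "podcast", "both"]
rater2 = ["both", "blog", "podcast", "podcast", "both"]

matches = sum(a == b for a, b in zip(rater1, rater2))
percent_agreement = 100 * matches / len(rater1)
print(f"{percent_agreement:.0f}% agreement")  # 80% on this toy data
```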

As in a previously published systematic review,13 we conducted an adjunctive thematic analysis to arrive at a final list of quality indicators for blogs and podcasts. Four investigators (B.T., M.L., Q.S.P., T.M.C.) performed a qualitative thematic analysis using a constant comparative method to generate themes until saturation was reached. We defined saturation as the point at which analysis of the extracted materials from the next 30 articles yielded no new themes or subthemes. Disagreements on terminology were resolved by consensus at meetings of the full research team. The first 30% of the quality indicators were coded redundantly, and interrater agreement was calculated.
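A minimal sketch of the saturation rule as defined above (stop when the next 30 articles contribute no theme or subtheme not already seen); the per-article theme sets here are hypothetical:

```python
# Sketch of the stated saturation rule: coding stops when a further batch
# of 30 articles yields no theme or subtheme that has not already emerged.
WINDOW = 30

def reached_saturation(next_articles, seen_themes):
    """next_articles: per-article sets of coded themes/subthemes."""
    for themes in next_articles[:WINDOW]:
        if themes - seen_themes:  # any theme not seen before?
            return False
    return True

# Toy usage: "aesthetics" is new, so saturation has not been reached.
seen = {"credibility", "content", "design", "transparency"}
print(reached_saturation([{"credibility"}, {"design", "aesthetics"}], seen))  # False
```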

Measures to Increase Trustworthiness of the Analysis: Audit, Triangulation, and Focus Groups

All investigators had access to the full texts of the included articles and audited the final list of quality indicators to ensure that no important themes were excluded. Any themes or quality indicators that a single investigator thought were represented in the literature but missing from the final list were added.

To triangulate the list of quality indicators and to discover any relevant to blogs and podcasts that were not previously described in the literature, 4 focus groups were convened with leading emergency medicine and critical care (EMCC) bloggers and podcasters. EMCC bloggers and podcasters were chosen because of their prevalence2 and the availability of a metric to quantify expertise: the Social Media Index, which has been shown to correlate with impact when applied to medical journals, was calculated for EMCC blogs and podcasts on January 18, 2014.14 Two investigators (M.L., W.K.M.) conducted the 4 focus groups with the primary bloggers and podcasters from the top 10 Social Media Index resources. The interviews consisted of open-ended questions regarding blog and podcast quality. We analyzed audio recordings and field notes from the focus groups and compared them with the list of quality indicators to triangulate our thematic analysis findings.

The Hamilton Integrated Research Ethics Board reviewed and approved the study methodology. Focus group participants provided written consent prior to participation. All efforts were made to adhere to guidelines set by the PRISMA statement10  and SRQR.12 

Literature Search

The literature search returned 3752 articles. The title and abstract review excluded 3585 articles, leaving 167 for full-text review. Of these, 10 articles were inaccessible through the libraries of the 5 investigators and interlibrary loans. Hence, 157 articles were included in the full manuscript review. The details of the literature search process are depicted in figure 1.
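These counts are internally consistent, as a quick arithmetic check confirms:

```python
# Quick consistency check of the article-flow counts reported above.
returned = 3752
excluded_on_title_abstract = 3585
full_text_review = returned - excluded_on_title_abstract
assert full_text_review == 167
included = full_text_review - 10  # 10 articles could not be accessed
assert included == 157
print(full_text_review, included)  # 167 157
```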

FIGURE 1
Flowchart Demonstrating the Process of Literature Search and Article Inclusion

Figure 2 shows the results of the qualitative analysis for quality indicators of secondary educational resources. Of the 1817 quality indicators extracted from the included articles, 1134 separate indicators were deemed relevant to blogs, podcasts, or both. The interrater agreement for this phase was 91%.

FIGURE 2
Qualitative Thematic Analysis That Resulted in Final List of 151 Quality Indicators (QIs) for Blogs and Podcasts

Thematic Analysis

Three main themes (credibility, content, and design) and 13 subthemes emerged in the thematic analysis (table). The 3 themes were divided into subthemes in order to optimize clarity and organization. The credibility theme consisted of quality indicators surrounding transparency, process, use of other resources, trustworthiness, and bias. The content theme addressed issues of professionalism, engagement, academic rigor, and orientation. Additionally, the design theme focused on the topics of aesthetics, interaction, functionality, and ease of use.

TABLE
Themes and Subthemes of Quality Indicators That Reached Consensus

A total of 151 quality indicators emerged across the 3 themes: credibility (53 quality indicators), content (44 quality indicators), and design (54 quality indicators). The final quality indicators are provided as online supplemental material. The interrater agreements were 91%, 90%, and 89% for the credibility, content, and design themes, respectively. The audit by the investigators identified 6 indicators that were believed to be missing. Furthermore, the expert focus groups, which were attended by 7 of the 10 invited bloggers and 8 of the 10 invited podcasters, identified 16 of the quality indicators.

The 151 quality indicators that emerged through the systematic review and qualitative analysis serve as a starting point for determining the quality of health professions education blogs and podcasts. In one early, innovative approach, bloggers and educators created a scoring instrument to select and highlight quality online resources specifically for graduate medical education.15 However, this instrument currently lacks validity evidence and does not use blog- or podcast-specific scoring criteria. We believe such scoring instruments may be enhanced by the findings of our study. Other stakeholders who may be interested in this work include content producers, who may lack guidance on how to improve their resources, and academic leaders, who are currently unable to assess their value.

Our results suggest that many quality indicators published in the broader literature are potentially relevant and worth considering in the assessment of online resources. We hope that these results will serve as a platform from which more pragmatic evaluation schemata may be derived (eg, a quality score, checklist, or toolbox). Such a tool could guide learners toward higher-quality resources, help teachers recommend resources to their students, help academic leaders evaluate digital scholarship, and help blog and podcast creators improve the quality of their educational products.

The increasing number2 and usage4,5 of social media–based educational resources, specifically blogs and podcasts, suggest that this seemingly disruptive innovation has the potential to become a sustaining innovation for health professions education.7 Some programs have already begun to embrace this movement6; however, we feel that a degree of quality assurance will need to be reached before these resources are adopted more broadly. This research provides educators with a transparent list of questions to consider in an effort to be more deliberate and thoughtful about assessing and creating superior resources. Ultimately, with more effective and high-quality content, we hope that blogs and podcasts will become valuable resources both in emerging teaching methods, such as the flipped classroom model,16 and in traditional educational curricula.

One limitation of our analysis is that the focus groups included only expert bloggers and podcasters from the EMCC community. We opted to use these experts to augment our literature review with the perspectives of leaders in the field, but an external assessment by educators from other fields would have been beneficial. A follow-up study will attempt to generate further validity evidence for our results in a broader population of health professions educators. Another limitation is that the final list of 151 quality indicators is likely too unwieldy for application by learners or teachers. A follow-up study will use a consensus-based method to reduce the list of quality indicators to the most essential items.

We used a blended methodology to conduct a systematic, qualitative analysis of the literature, and we developed the first comprehensive list of quality indicators for online educational resources, particularly blogs and podcasts. We believe our work will serve as a foundation for determining the quality of blogs and podcasts in health professions education. More work must be done to distill or stratify our list of 151 quality indicators into a more useful format. Ultimately, this list of quality indicators may be useful to stakeholders (eg, learners, educators, academic leaders, and blog/podcast producers) in setting standards and raising awareness about quality in this new world of digitized health professions education.

References

1. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–520.
2. Cadogan M, Thoma B, Chan TM, Lin M. Free open access meducation (FOAM): the rise of emergency medicine and critical care blogs and podcasts (2002–2013). Emerg Med J. 2014;31(e1):e76–e77.
3. Loeb S, Bayne CE, Frey C, Davies BJ, Averch TD, Woo HH, et al. Use of social media in urology: data from the American Urological Association (AUA). BJU Int. 2014;113(6):993–998.
4. Mallin M, Schlein S, Doctor S, Stroud S, Dawson M, Fix M. A survey of the current utilization of asynchronous education among emergency medicine residents in the United States. Acad Med. 2014;89(4):598–601.
5. Purdy E, Thoma B, Bednarczyk J, Migneault D, Sherbino J. The use of free online educational resources by Canadian emergency medicine residents and program directors. CJEM. 2015;17(2):101–106.
6. Scott KR, Hsu CH, Johnson NJ, Mamtani M, Conlon LW, DeRoos FJ. Integration of social media in emergency medicine residency curriculum. Ann Emerg Med. 2014;64(4):396–404.
7. Christensen CM, Raynor ME. The Innovator's Solution: Creating and Sustaining Successful Growth. Boston, MA: Harvard Business School Press; 2003.
8. Charnock D, Shepperd S, Needham G, Gann R. DISCERN: an instrument for judging the quality of written consumer health information on treatment choices. J Epidemiol Community Health. 1999;53(2):105–111.
9. Boyer C, Selby M, Scherrer JR, Appel RD. The Health On the Net code of conduct for medical and health websites. Comput Biol Med. 1998;28(5):603–610.
10. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–269.
11. Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions 4.2.6. Updated September 2006. In: The Cochrane Library, Issue 4, 2006. Chichester, UK: John Wiley & Sons Ltd; 2006. Accessed 2015.
12. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–1251.
13. Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: a systematic review of published definitions. Med Teach. 2010;32(8):631–637.
14. Thoma B, Sanders JL, Lin M, Paterson QS, Steeg J, Chan TM. The Social Media Index: measuring the impact of emergency medicine and critical care websites. West J Emerg Med. 2015;16(2):242–249.
15. Academic Life in Emergency Medicine. New AIR series: ALiEM approved instructional resources. Accessed 2015.
16. Prober CG, Khan S. Medical education reimagined: a call to action. Acad Med. 2013;88(10):1407–1410.

Author notes

Funding: The authors report no external funding source for this study.

Competing Interests

Conflict of interest: Michelle Lin is a deputy editor for DynaMed. Brent Thoma, Michelle Lin, Ken Milne, and Teresa Chan are unpaid editors and contributors to various medical education websites, blogs, and podcasts. Teresa Chan and Brent Thoma have received funding from the Royal College of Physicians and Surgeons of Canada for other graduate studies and other unaffiliated projects.

The results of this study were presented as a poster at the Research in Medical Education Association of American Medical Colleges Medical Education Meeting in Chicago, Illinois, November 6–7, 2014, and at the Canadian Association of Emergency Physicians conference in Edmonton, Alberta, Canada, May 30–June 3, 2015.

The authors would like to thank Catherine Boden, an expert librarian from the University of Saskatchewan, as well as our numerous focus group participants.
