Regulatory bodies of health and non-health professions around the world have developed a diverse array of mechanisms to ensure maintenance of competence of practitioners. Quality assurance of professionals' practices is crucial to the work of regulators, yet there are few examples of interprofessional or cross-jurisdictional comparisons of the approaches and mechanisms used to achieve this important objective. This review was undertaken using an indicative sampling method: to control for local cultural factors, all regulated health and non-health professions in a single jurisdiction (Ontario, Canada) were studied, while cross-jurisdictional comparison was facilitated through targeted study of large professions (such as medicine, pharmacy and teaching) in other English-language jurisdictions (such as California, USA; the United Kingdom and Australia). A total of 91 regulatory bodies were examined to identify trends, commonalities and differences related to approaches used for professional quality assurance and maintenance of competence assessment. A diverse array of approaches was identified, highlighting divergent ways of defining and measuring competency in the professions. Further comparative work examining this issue is required to help identify best and promising practices that can be shared among regulators from different jurisdictions and professions.
Background
Regulated professionals around the world are entrusted with the responsibility of providing their specialized services to members of the public with the highest possible degree of care and quality, often across careers that span several decades. Successful completion of formal training, entry-to-practice examinations, and in-service requirements is intended to ensure that these professionals are — at the start of this decades-long career — able to demonstrate baseline competencies. In the context of today's rapidly evolving knowledge and advancing technology, this emphasis on competence assessment at entry to practice is increasingly being called into question.1 This may be particularly important in the health professions: It has been stated that medical knowledge in the first part of the 21st century has a half-life of as little as five years.2 This suggests that not only is continuing professional development a necessity to ensure ongoing competency in professional practice — it is actually the longest, and most important, part of the educational process itself.3 This reality, within a context of accountability that mandates standardized mechanisms for measurement and public reporting to reassure the public regarding professionals' ongoing competence, has highlighted the importance of regulators' work in a significant way.
In an effort to reconcile these various forces, regulatory bodies around the world have developed different models and methods for continuing professional development and implemented diverse strategies for assessing ongoing competence of professionals engaged in practice. Regardless of the highly variable scopes of practice of different professions, or unique geopolitical or local-cultural contexts, the goal of these mandates is generally quite aligned: to ensure professionals are engaged in a process of life-long learning so that, at all stages of their careers, they continue to possess the knowledge, skills, and judgment necessary to competently practice their profession, thus ultimately ensuring protection of the public.4,5,6
Despite this common objective behind professional quality assurance, great variation exists in the systems in place across professions globally, and across jurisdictions within each profession. This may be attributed to the complex interplay of many factors, including differences in professional regulatory legislation, interpretation of emerging evidence, regulatory body resources and capacity, specific professional cultures and values, and inconsistent interpretations of what actually constitutes “professional competence.”7 Further, the absence of evidence regarding the actual value and impact of traditional continuing medical education (CME) models for maintenance of competency may raise questions about the legitimacy and effectiveness of regulators' work.7
In this complex environment, public confidence in the systems that are in place to achieve this objective is essential.8,9 This complexity, however, has led some regulators to avoid direct interprofessional or cross-jurisdictional comparisons to identify promising or best practices around quality assurance and competency assessment,10 arguing that apples-to-apples comparisons are not possible and apples-to-oranges comparisons are not helpful. While such comparisons may not produce immediately actionable outcomes, we believe there is value in such research to provide all stakeholders with a broader context for considering the quality improvements that may be possible or desirable in the area of professional quality assurance and competency assessment.
Objectives
The objective of this review was to characterize quality assurance and competency assessment systems (and their constituent components) that are being used across professions and jurisdictions with respect to:
Evidence to support efficacy.
Perceived and demonstrated benefits.
Perceived and demonstrated limitations.
Method
Given the potentially overwhelming scope and breadth of such research, a framing mechanism for indicative sampling was required to allow this work to proceed in a time- and cost-efficient manner. As a means of generating a broad picture of the diverse array of professional quality assurance systems that exist, this review included both health and non-health professions across an array of geographic regions. All professions (whether health or non-health) are concerned with ensuring the ongoing competency of those in practice. While the specific practice context may differ because of the nature of the profession (e.g., third-party oversight, remuneration models), all professions share the need to ensure their practitioners are competent and up to date in terms of their knowledge and skills. Non-health professions may have important lessons to share with the health professions; while the specific maintenance of competency activities they undertake may or may not be directly applicable, there may be value in understanding the philosophies undergirding their approaches, and these philosophies may be useful to consider within the health professions. Historically, health professions have learned much from other fields and industries (e.g., aviation safety literature has informed interprofessional collaboration research); with this research, the inclusion of non-health professions was deemed relevant to determine whether learning from these fields may also be valuable.
In order to control for local-cultural contexts, an in-depth examination of all regulated professions in one specific jurisdiction (Ontario, Canada) was selected to facilitate intra-jurisdictional and inter-professional comparisons (see Table 1). Jurisdiction-specific factors (including the regulatory culture of a jurisdiction and the legal frameworks within which regulators of all professions in that jurisdiction work) may influence the maintenance of competence assessment systems that evolve; examining all regulated professions within a single, well-developed jurisdiction such as Ontario therefore facilitates interprofessional comparisons while controlling somewhat for confounding factors such as local culture and local legislative imperatives. Due to limitations of the study team, only jurisdictions where English-language documents and English-speaking key informants were available were selected for inclusion.
Based on this framing, the scope of professions and geographic regions sampled for inclusion in this review included:
Twenty-six regulated health professions in Ontario, Canada.
Sixteen regulated non-health professions in Ontario, Canada.
A targeted scan of seven professions (medicine, nursing, pharmacy, dentistry, law, teaching, and engineering) in the following geographic regions: British Columbia, Canada; Massachusetts, USA; California, USA; England, UK; Qatar; Australia; and New Zealand.
A total of 42 regulated health and non-health professions in Ontario were included in this study; a total of 49 regulated health and non-health professions from seven other jurisdictions were also reviewed. Overall, this study reviewed policies and practices of 91 different regulatory bodies. Of these, 54 (26 from Ontario and 28 from other jurisdictions) were health professions, and 37 (16 from Ontario and 21 from other jurisdictions) were non-health professions.
Ontario (as a single jurisdiction) was selected due to the investigators' familiarity with its professions and processes, and because it has a robust system of documentation of professional regulation. The non-Ontario professions were selected because they are well-established, historical professions with a large member base and, consequently, a readily accessible document trail across diverse jurisdictions. These geographic areas were selected primarily for ease of information gathering, as all are English-speaking regions. British Columbia was selected to provide an alternative Canadian perspective to Ontario's systems.
An indicative review methodology was selected for this work given the large quantity and diverse quality of academic and grey literature sources available across the indicative sample of health and non-health professions outlined above. In an indicative review, the objective is not to undertake a statistically representative sampling process, but instead to identify major themes that are highlighted throughout the sample frame of interest. Grey literature — in particular websites of professional regulatory bodies and associations — was a major source of information for this review. These websites were combed for information regarding quality assurance program structure, position/philosophical statements of intent, program reports, presentations, data summaries and other pertinent documents. Manual searching of documents referenced in this grey literature was conducted. To supplement these sources, multiple focused MEDLINE and Scopus database searches were also conducted. All English-language publication types, including review articles and commentaries, were deemed relevant for inclusion in this review.
Findings and Discussion
This section is structured around the different approaches taken by health and non-health professions across different jurisdictions, with an emphasis on one jurisdiction (Ontario) in particular, as a means of controlling for local-cultural issues. Of the 91 regulatory bodies reviewed for this study, 42 (~46%) were from Ontario. For the purposes of numerical reporting, each regulatory body was equally weighted and no adjustment was made based on the size or number of registered practitioners governed by that regulatory body.
Continuing Education/Professional Development Requirements
Across all jurisdictions and professions examined, continuing education (CE) and continuing professional development (CPD) were explicitly identified as philosophically crucial to ongoing maintenance of professional competency. All 91 regulatory bodies reviewed for this study required practitioners to engage in some form of life-long learning as a necessary pre-condition to annual renewal of registration. There was, however, significant variation amongst regulatory bodies as to what constituted acceptable or appropriate activities, with a clear trend towards offering practitioners choices in identifying activities and learning methods most relevant to individual needs. Table 2 outlines the seven categories of CE/CPD that were identified through this review.
Within the pool of professions and jurisdictions reviewed for this study, 56% (51/91) of regulators required that at least some portion of activities be accredited or certified, with the intention of ensuring the quality of the activity itself. Seventy-five percent of regulatory bodies (68/91) mandated a minimum amount of activity participation, typically in the form of compulsory continuing education hours. Two specific methods for calculating continuing education credits were identified: 88% (60/68) focused on time spent on the activity (i.e., one contact hour = one CE credit), while 12% (8/68) used a more complex formula for calculating CE credit based on factors such as time commitment, activity type (interactive vs. didactic), and degree of assessment/outcome measurement. Of those 68 regulatory bodies requiring minimum mandatory continuing education, 15% (10/68) required 1–10 measured units/year, 30% (21/68) required 11–20 units/year, 40% (27/68) required 21–30 units/year, 10% (7/68) required 41–50 units/year and 5% (3/68) required greater than 60 units/year. The median required CE units/year across professions and regulatory bodies mandating CE as a precondition for annual renewal of registration was ~25.2 units/year. In some cases, individuals were required to accumulate a minimum number of CE units over a three-year or five-year cycle.
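The specific weighting formulas used by individual regulators were not published in the documents reviewed; purely as an illustrative sketch, a weighted credit calculation of the kind described above might take a form such as:

CE credits = contact hours × activity-type weight × assessment weight

where, hypothetically, an interactive workshop with a formal post-activity assessment might carry weights above 1.0, while passive attendance at a didactic lecture might carry weights of 1.0 or lower. Under the simpler time-based method, both weights are effectively fixed at 1.0, so credits equal contact hours.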
The strong emphasis on compulsory continuing education (especially in the health professions, regardless of jurisdiction) as an indicator of maintenance of competency is striking given the relatively weak evidence linking mandatory CE to maintenance of competency.12,13 The bulk of the literature related to the value and impact of continuing education or continuing professional development on maintenance of competency has been ambiguous at best.14,15 Despite the frequent regulatory practice of mandating a required minimum number of hours or credits of continuing education, used widely across professions and jurisdictions, there is virtually no evidence available to support this practice or to establish any correlation with positive, practice-related outcomes.16 Instead, within certain health professions at least, there is evidence that compulsory continuing education has little to no effect on professional behavioral change.17,18,19 At least one study has noted that CE does not improve performance in incompetent individuals.20 It is important, however, to note the limitations of these data: Although many meta-analyses and systematic reviews have been conducted, none have included funnel plots to determine whether publication bias may be present.21 Interestingly, one systematic review noted that studies were more likely to report a positive impact for CE or CPD when outcomes were measured at six months, whereas studies with negative findings generally measured outcomes at 12 or 18 months, suggesting the impact of CE may have poor retention over time.18 In addition, interpretation of evidence regarding the impact of CE/CPD is extremely complex due to the high variability of activity types in use and the extraordinary individual differences at work.
Overall, the main benefit of a traditional mandatory continuing education approach as an indicator of maintenance of competency appears to be its ease of use: Compared to other methods with greater complexity, resource demands or costs, this method often requires only the collection of professional members' declarations of compliance and periodic random audits to confirm those declarations. For these reasons, it appears to remain a method favored by regulatory bodies to demonstrate accountability to government and stakeholders, despite the lack of meaningful evidence supporting this approach.
Learning Portfolios
Many professions and regulatory bodies are evolving towards a CPD model in which professionals take greater personal responsibility for their own ongoing development. Documentation of this development has been identified as a specific concern and legitimate interest of regulatory bodies, and a diverse array of tools and mechanisms have been used.22 Variously labeled “professional portfolios,” “learning portfolios,” “professional development plans,” or other proprietary names, these tools vary in structure, content and focus. Virtually all tools reviewed in this study were structured around adult-learning theories23,24 involving a learning cycle beginning with self-assessment, followed by development of a personal learning plan to address identified goals or deficiencies, implementation of the plan, and reflection to evaluate outcomes of plan implementation. Diverse formats (online, paper-based, structured, unstructured) appear to be available in most jurisdictions and professions. Despite the ubiquity of these tools, no evidence or outcome analysis was found in any jurisdiction or profession supporting effectiveness or efficacy of this approach.25,26,27,28,29,30
While the idea of learning portfolios is built upon a sound theoretical foundation of adult and experiential learning, the self-reporting and self-disclosure inherent in the process make evaluation of impact challenging, if not impossible.26 While at least one study suggests that portfolio-based approaches are well accepted by professionals and demonstrate high content and face validity, their use as a reliable indicator of competency is not consistently evident due to the heterogeneity of portfolio designs and methods of assessment.30 A key challenge in this area relates to self-assessment capacity and honest self-appraisal by professionals. A number of studies have identified poor accuracy and validity of professionals' self-assessment skills when compared to external, objective, and standardized assessment methods, and have noted that this skill may be particularly underdeveloped among those who are in fact least competent.31,32,33,34 This calls into question the appropriateness — and ultimately the effectiveness — of using unguided self-assessment and learning portfolio designs for ensuring professional competency.35,36,37,38
Self-Assessment and Reflective Practice
Across all 91 regulatory bodies examined, self-assessment, reflective practice, and life-long learning were explicitly identified by the relevant regulatory authorities as crucial competencies for safe and effective professional practice. There were, however, significant differences in approaches and requirements for demonstration of self-assessment competencies: 20% (18/91) of regulatory bodies did not provide any discernible structure, system or tools to facilitate self-assessment, while 80% (73/91) made available to practitioners a diverse array of self-audit approaches. The most common supports provided by regulatory bodies were online multiple-choice and case-study examination questions based on peer-derived standards, with answer keys to facilitate self-reflection. Within the health professions across the jurisdictions examined, guided reflection questions designed to prompt recall and deconstruction of recent clinical experiences are widely utilized. Increasingly, trigger-video recall mechanisms are being utilized, in which practitioners access a video recording of a clinical simulation and then engage in structured reflection around the practitioner-patient interaction as a mechanism for self-assessment and quality improvement.
No evidence of the value or efficacy of these tools, or of reflective practice/self-assessment more broadly, was found within the specific regulatory bodies or professions examined.39,40 Within the health professions in particular, there is increasing evidence to challenge the notion that most practitioners actually engage in, or are capable of engaging in, authentic self-assessment.31,39 Adult learning theory related to CPD hinges on the first part of the cycle — self-assessment — and is based on the premise that adult practitioners are capable of self-identification of practice-related deficiencies or areas requiring improvement.24 As a result, the use of unguided/unfacilitated self-assessment as a springboard for ensuring professional competence has been called into question.
Several researchers have noted a structural misalignment within the CPD philosophy between practitioners' goals and broader professional or regulatory objectives, which may inhibit public declaration of learning gaps for fear that such disclosure may lead to punishment.41,42 Further, the one-year CPD cycle favored by most regulators to align with annual renewal of registration may not — psycho-educationally — provide sufficient time for true engagement in the learning and professional development process: A two-to-five year cycle for professional development and practice change may be more realistic for busy practitioners.27 Attempts by regulators to use self-assessment for both summative and formative purposes cloud the true objective for practitioners, which in turn taints the self-assessment process itself.27 Making self-assessment compulsory, reportable, and measurable changes the practitioner's relationship to the act itself, making it a hoop through which he or she must jump rather than a valuable self-improvement activity.
Despite these critiques, the use of self-assessment as a cornerstone for maintenance of competency activities and reporting was widespread across the health and non-health professions. Perhaps most telling were practitioner-led blogs and chat-rooms, which noted how susceptible this activity is to faking and the disengagement it may produce as a result.26
Peer and Concealed/Unconcealed Practice-based Assessment
The use of peers as agents to evaluate ongoing maintenance of competency is widespread and widely described. This collegial model of assessment typically involves direct observation of one practitioner by another practitioner, under standardized (e.g., testing) or naturalistic (e.g., in-practice) conditions, followed by some form of structured or unstructured debriefing and assessment.
The trigger for peer or practice assessment varies by profession and jurisdiction: Fifty percent (45/91) of regulatory bodies select participants through a systematic, random selection of a proportion of all members, with the actual number typically defined by logistics and capacity constraints rather than evidence. Eight percent (7/91) of regulatory bodies select members to participate in maintenance of competence assessment only in the event of practice-related concerns (e.g., complaints, disciplinary procedures, etc.), as part of an investigation process or follow-up. Forty-two percent (38/91) of regulatory bodies use a combination of both approaches. Unique to the profession of medicine across many jurisdictions examined was the inclusion of a specific age-related criterion for triggering peer assessment: In many jurisdictions, practicing physicians must undergo mandatory peer assessment/practice review at the age of 70 and at least every five years thereafter. Kinesiologists in Ontario are also unique in mandating assessments for all members who have practiced fewer than 1,500 hours in the preceding three years.
Two dominant frameworks for peer or practice-based assessment were identified: a single-level model and a laddered approach. The single-level model (used by 70% [64/91] of regulatory bodies in this study) stipulates that all members selected for review (whether randomly or in a targeted fashion) undergo the same or a substantially similar review process. This is explicitly undertaken to demonstrate procedural fairness, regardless of practice context or personal circumstances. In a laddered approach (used by 30% [27/91] of regulatory bodies), an initial standardized assessment of all selected members is followed by further, differentiated or customized assessment steps as required for a subset of participants. These further steps are individualized to each member's unique performance, and are generally designed to probe in further detail the areas of greatest concern or interest.
A wide variety of peer review assessment methods have been reported by regulatory bodies, and many utilize multiple methods as part of their approach, in either a single-level or laddered manner. Because many of these methods have a psychometric foundation and utilize measurement approaches, there is a growing body of literature and evidence to support their use (to a greater or lesser degree) in providing assurance of maintenance of competence.43,44,45,46 Significant emphasis on the training of peer reviewers/inspectors and the standardization of expectations contributes positively to the reliability and validity of assessments.46,47 Use of structured assessment forms and rubrics, both in this training process and in the actual peer-to-peer visit, is also important for the defensibility of the process.46,47 While each profession and jurisdiction must customize assessment forms to unique local-cultural needs, the existence of an agreed-upon structure for peer-led assessment enhances the feasibility of implementing such programs.
A significant critique of this standardization approach relates to the issue of checklist-driven practice: Reducing the complex and nuanced work of a professional to a convenient 1–2 page checklist-driven form negates the true meaning of professional practice.48,49,50 Further, as the process becomes more widespread within a profession, the checklist itself becomes professional practice, and practitioners begin to adapt their highly individualized and contextualized practices to a generic standard.51 There is some emerging evidence to suggest such approaches may also punish or inadequately measure the performance of the true experts and leaders in practice;51,52 in some cases, practitioners may truly be without (local) peers who can fairly assess their practice.53
Within the profession of medicine across all jurisdictions studied, the use of multisource feedback (360-degree reviews) has been growing;56,57,58,59 in other professions and fields there are also reported attempts at implementing this form of quality assurance.60 Multisource feedback — when appropriately implemented with trained facilitators and practitioners — demonstrates sufficient psychometric reliability, validity, and generalizability to be a prominent component of a quality assurance process.56,57,58 The selection of a context-specific, validated multisource feedback instrument is crucial.56,57 Within medicine (where this has been studied most robustly), acceptable generalizability of findings appears when responses are collected from 25–35 clients and 8–15 colleagues and coworkers. There is considerable variation in the impact and outcomes associated with multisource feedback: 40–70% of practitioners report an inclination to change their behavior or practice upon receipt of feedback, while 25–55% self-report actual implementation of change based on the feedback received. Facilitated feedback and ongoing coaching/mentoring appear crucial to the success of multisource feedback in facilitating practice change.61,62,63,64,65,66 Of course, such systems are extraordinarily costly, time-consuming, and logistically challenging.

Areas reported to be most amenable to change following multisource feedback include communication with clients and colleagues.67,68 Multisource feedback has been promoted as serving a dual role in professional quality assurance — as an assessment of professional competency based on norm-referenced standards, and as a method to contextualize individual continuing professional development needs to ultimately stimulate behavior or practice change.62,63,66 Concerns have been expressed regarding the capacity of multisource feedback to evaluate all critical competencies of a professional: A client is not likely to fully grasp and appropriately rate a practitioner's technical skills or clinical knowledge, while a colleague may not accurately evaluate a peer's record-keeping practices.67 While to some degree this issue can be mitigated by expanding the circle of stakeholders involved in multisource feedback, time, resource and logistics constraints often mean this simply is not possible.
Direct observation models of quality assurance can take different forms, ranging from concealed observation, in which practitioners are unaware they are being observed (e.g., mystery shopper methods used in pharmacy in some jurisdictions), to unconcealed observation of real-world practice or standardized competence assessment using objective structured clinical examinations (OSCEs). Across the professions and jurisdictions in this study, many different variations of direct observation are used by regulatory bodies to assess maintenance of competence of practitioners.
Psychometricians consider direct observation to be a valuable tool with good content validity;69,70,71 concealed — or “mystery shopper” — observation (though perhaps ethically questionable) is thought to demonstrate even greater fidelity to actual performance, as it enables assessors to evaluate what a professional does in day-to-day practice without the confounding impact of the Hawthorne effect.72,73,74 Concealed assessment of real-world practice has been described as the “gold standard” of quality assurance in the professions.72 From a regulatory perspective, however, concealed assessment poses extraordinary challenges and risks, particularly for the relationship between regulator and practitioner. For many practitioners, such a practice raises concerns of authoritarian surveillance and sets up an inherently antagonistic relationship between professional and regulator. The notion of being “spied on” in practice can be very anxiety-provoking, despite psychometric evidence as to its value and impact.73 The stress and harm this can produce have severely limited the use of “mystery shoppers” within regulated professions, despite the widespread use of such techniques in investigative journalism/reporting on professionals. The Pharmacy Guild of Australia has arguably the most highly developed concealed observation program, in which pharmacists' clinical and customer service performances are evaluated.72,73,74 Scenarios for this program are developed through a multi-stakeholder process (including pharmacists, clients and other health professionals); mystery shoppers are well-trained professional actors capable of reliably portraying their scenario on multiple occasions and responding in semi-standardized fashion to the flow of interaction with the unsuspecting pharmacist; all interactions are secretly video-recorded; and assessment is undertaken externally using validated assessment instruments.75,76,77
Unconcealed direct observation has been most widely studied in the context of students and trainees in most professions, rather than with practicing professionals.78,79 A strong emphasis in this approach is on the design of psychometrically defensible global/holistic scales and analytical checklists that can be used to standardize the assessment process.80,81 Commonly assessed skills include history-taking, communication, profession-specific technical skills, counseling practices, negotiation and conflict management.78,79,81 Unconcealed direct observation can be standardized or unstandardized: Standardized processes utilize traditional testing methods (e.g., multiple-choice case-based tests of knowledge, or simulations such as OSCEs).78,79,81 Health professions are — in general — much further advanced in these areas than the non-health professions examined in this study, though several non-health professions (notably law and teaching) have highlighted interest in these approaches. Unstandardized or naturalistic processes involve the use of standardized assessment tools within a real-world, context-sensitive practice setting. This approach has been criticized for not being capable of actually facilitating comparisons within a cohort of professionals: If the observer happens (through random luck) to arrive on a terrible or busy day, the practitioner may be disadvantaged.78,80
Summary
As described in this study, there is a plethora of different methods and models used by regulatory bodies across professions and jurisdictions in the name of “quality assurance.” A key finding from this study is that no one size fits all: not only does each profession use or require its own model, the same profession in different jurisdictions also uses or requires its own unique model. Thus, there is no single “gold standard” quality assurance mechanism to ensure maintenance of competence of practitioners.
In large part, the choice of a specific method or model seems to be driven by regulators' need to balance sometimes-conflicting duties to the general public and to the professionals whom they govern. For example, the strongest consequential validity evidence that exists for any quality assurance mechanism is for concealed direct assessment of practitioners,77 yet it is used by only one profession in one jurisdiction, and even there it is extraordinarily challenging. One strongly worded blog post from a pharmacist in Australia stated he would feel “professionally violated” if he were ever involved in such an interaction; this very strong emotional response to the process continues to make implementation or spread of the method almost impossible, and continuously threatens the viability/existence of the model in Australia.75
The balance regulators face is further complicated by real-world exigencies of cost, time, resources, and logistics. As a result, the vast majority of regulatory bodies in this study appear to have opted for quality assurance mechanisms that are easily implemented, conceptually simple to grasp, and possess some measure of face validity — for example, mandatory continuing education hours, or maintenance and audit of a learning portfolio. Unfortunately, the evidence to support the value of such methods in terms of practice improvement or maintenance of competency is sparse. Where more sophisticated and defensible competency assessment methods have been used — for example, unconcealed direct observation or chart-stimulated recall — there is a strong need to ensure the psychometric rigor of the process. This requires investment in the training of peers and practitioners, the development and validation of high-quality and defensible assessment tools, and mechanisms for one-on-one observation that are time-consuming and costly. Other methods — such as multisource feedback — have the potential to demonstrate consequential validity but require investments in ongoing coaching and mentoring that are generally beyond the resources — or remit — of regulatory bodies.
Perhaps uncomfortably, this research has highlighted the widespread use of quality assurance mechanisms across professions and jurisdictions that may not be of any particular value or impact in achieving the stated objective of ensuring maintenance of competence and safe and effective professional practice. While the mechanisms themselves — such as continuing education — do require time and effort, and may on the face of it appear to be meaningful, the actual evidence supporting their value is scant at best. This is a significant challenge to regulators: While it is of course important to be seen to be ensuring maintenance of competence of practitioners, this activity should actually have that effect in reality. Currently, as highlighted in this study, the bulk of activities used across a broad swath of professions worldwide do not appear to be as meaningful as regulators may wish them to be — or as the general public may expect them to be.
Ultimately, the tension between psychometrics and practicality, or between practitioner acceptability and public accountability, will not be easily addressed. As highlighted in this study, there can be no one-size-fits-all solution to the quality-assurance conundrum. Increasingly, regulatory bodies (particularly in the health professions) have noted the value of multiple methods at multiple times as a more feasible and effective approach to quality assurance.82 Ongoing work to evaluate these more complex and evolving models is required. A particular strength of this work is its use of the structure/process/outcome approach to evidence initially proposed by Donabedian,83 which highlights the importance of data and evidence to support conclusions regarding program quality.
Conclusions
This study was initially undertaken in an effort to better understand the diverse methods by which regulatory bodies around the world, in different health and non-health professions, fulfilled their mandate of ensuring safe and effective professional practice and ongoing maintenance of competence of practitioners. As a scoping review, this research is indicative, rather than statistically representative, of the regulatory communities studied. Findings should be interpreted with caution, as the convenience sample selected for study was limited to English-speaking jurisdictions and, outside Ontario, to large, well-established professions.
The absence of gold-standard models or practices, and the recognition that, for all professions and jurisdictions, this is a work in progress, are sobering. Honest self-appraisal by regulators — and the public they serve — is essential in order to manage expectations as to what regulators and quality-assurance mechanisms can actually accomplish. Trade-offs in terms of costs/benefits and risks/rewards must constantly be balanced. While theoretically and conceptually superior methods may exist, real-world exigencies may preclude their implementation. Conversely, the existence of such exigencies should not be used as an excuse by regulators or others to condone ongoing use of weaker methods with scant evidence of impact simply because they are quick, cheap and easy. At the very least, when such methods are used, they should be honestly presented to the public and other stakeholders, and not be “sold” as being more meaningful or powerful than the evidence behind them supports.
There appears to be interest and effort amongst regulators to improve the quality of quality-assurance mechanisms, and to address the challenges associated with doing so. As highlighted in this study, every profession in every jurisdiction is struggling with similar issues and challenges; moving towards more collaborative interprofessional and cross-jurisdictional approaches to quality assurance of professional practice may provide opportunities to pool resources, enjoy economies of scale, and ultimately lead the evolution of regulatory practices in a more meaningful way.
The authors gratefully acknowledge the contributions of the staff and Council members of the College of Physiotherapists of Ontario, who provided funding to support this research and who contributed their comments and suggestions. The work of Lesley Young, BScPhm, Pharm D (who was, at the time of this work, a research student), is most gratefully acknowledged.
References
About the Authors
Zubin Austin, BScPhm, PhD, is Professor and Murray Koffler Chair in Management, Leslie Dan Faculty of Pharmacy, University of Toronto.
Paul A.M. Gregory, MLS, is Research Associate, Leslie Dan Faculty of Pharmacy, University of Toronto.