Regulatory bodies of health and non-health professions around the world have developed a diverse array of mechanisms to ensure maintenance of competence of practitioners. Quality assurance of professionals' practices is crucial to the work of regulators, yet there are few examples of interprofessional or cross-jurisdictional comparisons of the approaches and mechanisms used to achieve this important objective. This review was undertaken using an indicative sampling method: to control for local cultural factors, all regulated health and non-health professions in a single jurisdiction (Ontario, Canada) were studied, while cross-jurisdictional comparison was facilitated through targeted study of large professions (such as medicine, pharmacy and teaching) in other English-language jurisdictions (such as California, USA; the United Kingdom; and Australia). A total of 91 regulatory bodies were examined to identify trends, commonalities and differences in the approaches used for professional quality assurance and maintenance of competence assessment. A diverse array of approaches was identified, highlighting divergent ways of defining and measuring competency in the professions. Further comparative work examining this issue is required to help identify best and promising practices that can be shared among regulators from different jurisdictions and professions.

Regulated professionals around the world are entrusted with the responsibility of providing their specialized services to members of the public with the highest possible degree of care and quality, often across careers that span several decades. Successful completion of formal training, entry-to-practice examinations, and in-service requirements is intended to ensure that these professionals are, at the start of this decades-long career, able to demonstrate baseline competencies. In the context of today's rapidly evolving knowledge and advancing technology, this emphasis on competence assessment at entry to practice is increasingly being called into question.1 This may be particularly important in the health professions: It has been stated that medical knowledge in the first part of the 21st century has a half-life of as little as five years.2 This suggests that not only is continuing professional development a necessity to ensure ongoing competency in professional practice, it is actually the longest, and most important, part of the educational process itself.3 This reality, within a context of accountability that mandates standardized mechanisms for measurement and public reporting to reassure the public about professionals' ongoing competence, has highlighted the importance of regulators' work in a significant way.

In an effort to reconcile these various forces, regulatory bodies around the world have developed different models and methods for continuing professional development and implemented diverse strategies for assessing the ongoing competence of professionals engaged in practice. Despite the highly variable scopes of practice of different professions, and their unique geopolitical and local-cultural contexts, the goal of these mandates is generally quite aligned: to ensure professionals are engaged in a process of life-long learning so that, at all stages of their careers, they continue to possess the knowledge, skills, and judgment necessary to competently practice their profession, thus ultimately ensuring protection of the public.4,5,6 


Despite this common objective behind professional quality assurance, great variation exists in the systems in place across the professions globally, and across jurisdictions within each profession. This may be attributed to the complex interplay of many factors, including differences in professional regulatory legislation, interpretation of emerging evidence, regulatory body resources and capacity, specific professional culture and values, and inconsistent interpretations of what actually constitutes “professional competence.”7 Further, the absence of evidence regarding the actual value and impact of traditional continuing medical education (CME) models for maintenance of competency may raise questions about the legitimacy and effectiveness of regulators' work.7 

In this complex network, public confidence in the systems that are in place to achieve this objective is essential.8,9 This complexity, however, has led some regulators to avoid direct interprofessional or cross-jurisdictional comparisons to identify promising or best practices around quality assurance and competency assessment,10 arguing that apples-to-apples comparisons are not possible and apples-to-oranges comparisons are not helpful. While such comparisons may not produce actionable outcomes immediately, we believe there is value in such research to provide all stakeholders with a broader context for considering the quality improvements that may be possible or desirable in the area of professional quality assurance and competency assessment.

The objective of this review was to characterize quality assurance and competency assessment systems (and their constituent components) that are being used across professions and jurisdictions with respect to:

  • Evidence to support efficacy.

  • Perceived and demonstrated benefits.

  • Perceived and demonstrated limitations.

Given the potentially overwhelming scope and breadth of such research, a framing mechanism for indicative sampling was required to allow this work to proceed in a time- and cost-efficient manner. As a means of generating a broad picture of the diverse array of professional quality assurance systems that exist, this review included both health and non-health professions across an array of geographic regions. All professions, whether health or non-health, are concerned with ensuring the ongoing competency of those in practice. While the specific practice context may differ because of the nature of the profession (e.g., third-party oversight, remuneration models), all professions share the need to ensure their practitioners are competent and up-to-date in terms of their knowledge and skills. Non-health professions may have important lessons to share with the health professions; while the specific maintenance of competency activities they undertake may or may not be directly applicable, there may be value in understanding the philosophies undergirding their approaches, and this may be useful to consider within the health professions. Historically, health professions have learned much from other fields and industries (e.g., aviation safety literature has informed interprofessional collaboration research); non-health professions were therefore included in this research to determine whether learning from these fields may also be valuable.


In order to control for local-cultural contexts, an in-depth examination of all regulated professions in one specific jurisdiction (Ontario, Canada) was undertaken to facilitate intra-jurisdictional, interprofessional comparisons (see Table 1). Jurisdiction-specific factors (including the regulatory culture of a jurisdiction and the legal frameworks within which its regulators of all professions work) may influence the maintenance of competence assessment systems that evolve; examining all regulated professions within a single, well-developed jurisdiction such as Ontario therefore facilitates interprofessional comparisons while controlling somewhat for confounding factors such as local culture and local legislative imperatives. Due to the language limitations of the study team, only jurisdictions where English-language documents and English-speaking key informants were available were selected for inclusion.

Table 1

Health and Non-health Regulated Professions in Ontario, Canada as listed in the Regulated Health Professions Act (Section 1), 199111 


Based on this framing, the scope of professions and geographic regions sampled for inclusion in this review included:

  • Twenty-six regulated health professions in Ontario, Canada.

  • Sixteen regulated non-health professions in Ontario, Canada.

  • A selected scan of these professions (medicine, nursing, pharmacy, dentistry, law, teaching, and engineering) in these geographic regions: British Columbia, Canada; Massachusetts, USA; California, USA; England, UK; Qatar; Australia; and New Zealand.

A total of 42 regulated health and non-health professions in Ontario were included in this study; a total of 49 regulated health and non-health professions from seven other jurisdictions were also reviewed. Overall, this study reviewed policies and practices of 91 different regulatory bodies. Of these, 54 (26 from Ontario and 28 from other jurisdictions) were health professions, and 37 (16 from Ontario and 21 from other jurisdictions) were non-health professions.

Ontario (as a single jurisdiction) was selected due to the investigators' familiarity with its professions and processes, and because it has a robust system of documentation of professional regulation. The non-Ontario professions were selected because they are well-established historical professions with large member bases and, consequently, a readily accessible document trail across diverse jurisdictions. These geographic areas were selected primarily for ease of information gathering, as all are English-speaking regions. British Columbia was selected to provide an alternative Canadian perspective to Ontario's systems.

An indicative review methodology was selected for this work given the large quantity and diverse quality of academic and grey literature sources available across the indicative sample of health and non-health professions outlined above. In an indicative review, the objective is not to undertake a statistically representative sampling process, but instead to identify major themes that are highlighted throughout the sample frame of interest. Grey literature — in particular websites of professional regulatory bodies and associations — was a major source of information for this review. These websites were combed for information regarding quality assurance program structure, position/philosophical statements of intent, program reports, presentations, data summaries and other pertinent documents. Manual searching of documents referenced in this grey literature was conducted. To supplement these sources, multiple focused MEDLINE and Scopus database searches were also conducted. All English-language publication types, including review articles and commentaries, were deemed relevant for inclusion in this review.

This section is structured around the different approaches taken by health and non-health professions across different jurisdictions, with an emphasis on one jurisdiction (Ontario) in particular, as a means of controlling for local-cultural issues. Of the 91 regulatory bodies reviewed for this study, 42 (~46%) were from Ontario. For the purposes of numerical reporting, each regulatory body was equally weighted and no adjustment was made based on the scale or number of registered practitioners governed by that regulatory body.

Continuing Education/Professional Development Requirements

Across all jurisdictions and professions examined, continuing education (CE) or continuing professional development (CPD) was explicitly identified as philosophically crucial to the ongoing maintenance of professional competency. All 91 regulatory bodies reviewed for this study required practitioners to engage in some form of life-long learning as a necessary pre-condition to annual renewal of registration. There was, however, significant variation amongst regulatory bodies as to what constituted acceptable or appropriate activities, with a clear trend towards offering practitioners choices in identifying the activities and learning methods most relevant to individual needs. Table 2 outlines the seven categories of CE/CPD that were identified through this review.

Table 2

Categories of CE/CPD


Within the pool of professions and jurisdictions reviewed for this study, 56% (51/91) of regulators required that at least some portion of activities be accredited or certified, with the intention of ensuring the quality of the activity itself. Seventy-five percent of regulatory bodies (68/91) mandated a minimum amount of activity participation, typically in the form of compulsory continuing education hours. Two specific methods for calculating continuing education credits were identified: 88% (60/68) of these bodies focused on time spent on the activity (i.e., one contact hour = one CE credit), while 12% (8/68) used a more complex formula for calculating CE credit based on factors such as time commitment, activity type (interactive vs. didactic), degree of assessment/outcome measurement, etc. Of the 68 regulatory bodies requiring minimum mandatory continuing education, 15% (10/68) required between 1–10 measured units/year, 30% (21/68) required 11–20 units/year, 40% (27/68) required 21–30 units/year, 10% (7/68) required 41–50 units/year and 5% (3/68) required greater than 60 units/year. The median required CE units/year across professions and regulatory bodies mandating CE as a precondition for annual renewal of registration was ~25.2 units/year. In some cases, individuals were required to accumulate a minimum number of CE units over a three-year or five-year cycle.


The strong emphasis on compulsory continuing education (especially in the health professions, regardless of jurisdiction) as an indicator of maintenance of competency across professions and jurisdictions is striking given the relatively weak evidence linking mandatory CE to maintenance of competency.12,13 The bulk of the literature related to the value and impact of continuing education or continuous professional development for maintenance of competency has been ambiguous at best.14,15 Despite the frequent regulatory practice of mandating a required minimum number of hours or credits of continuing education, used widely across professions and jurisdictions, there is virtually no evidence available to support this practice or to establish any correlation with positive, practice-related outcomes.16 Instead, within certain health professions at least, there is evidence that compulsory continuing education has little to no effect on professional behavioral change.17,18,19 At least one study has noted that CE does not improve performance in incompetent individuals.20 It is important, however, to note the limitations of these data: Although many meta-analyses and systematic reviews have been conducted, none have included funnel plots to determine if publication bias may be present.21 Interestingly, one systematic review noted that studies were more likely to note a positive impact for CE or CPD when outcomes were measured at six months, while studies with negative findings generally measured outcomes at 12 or 18 months, suggesting the impact of CE may have poor retention over time.18 In addition, interpretation of evidence regarding the impact of CE/CPD is extremely complex due to the high variability of activity types in use and the extraordinary individual differences at work.


Overall, the main benefit of a traditional mandatory continuing education approach as an indicator of maintenance of competency appears to be its ease of use: Compared to other, more involved methods with greater complexity, resource demands or costs, this method often requires only the collection of members' declarations of compliance, supplemented by periodic random audits to confirm compliance. For these reasons, it appears to remain a favored method by regulatory bodies to demonstrate accountability to government and stakeholders, despite the lack of meaningful evidence supporting this approach.

Learning Portfolios

Many professions and regulatory bodies are evolving towards a CPD model in which professionals take greater personal responsibility for their own ongoing development. Documentation of this development has been identified as a specific concern and legitimate interest of regulatory bodies, and a diverse array of tools and mechanisms has been used.22 Variously labeled “professional portfolios,” “learning portfolios,” “professional development plans,” or other proprietary names, these tools vary in structure, content and focus. Virtually all tools reviewed in this study were structured around adult-learning theories23,24 involving a learning cycle beginning with self-assessment, followed by development of a personal learning plan to address identified goals or deficiencies, implementation of the plan, and reflection to evaluate the outcomes of plan implementation. Diverse formats (online, paper-based, structured, unstructured) appear to be available in most jurisdictions and professions. Despite the ubiquity of these tools, no evidence or outcome analysis was found in any jurisdiction or profession supporting the effectiveness or efficacy of this approach.25,26,27,28,29,30 

While the idea of learning portfolios is built upon a sound theoretical foundation of adult and experiential learning, the self-reporting and self-disclosure inherent in the process make evaluation of impact challenging, if not impossible.26 While at least one study suggests that portfolio-based approaches are well accepted by professionals and demonstrate high content and face validity, their use as a reliable indicator of competency is not consistently evident due to the heterogeneity of portfolio designs and methods of assessment.30 A key challenge in this area relates to self-assessment capacity and honest self-appraisal by professionals. A number of studies have identified poor accuracy and validity of professionals' self-assessment skills when compared to external objective and standardized assessment methods, and have noted that this skill may be particularly underdeveloped among those who are in fact least competent.31,32,33,34 This calls into question the appropriateness, and ultimately the effectiveness, of using unguided self-assessment and learning portfolio designs for ensuring professional competency.35,36,37,38 

Self-Assessment and Reflective Practice

In all 91 regulatory bodies examined, self-assessment, reflective practice, and life-long learning were explicitly identified by the relevant regulatory authorities as crucial competencies for safe and effective professional practice. There were significant differences in approaches and requirements for demonstration of self-assessment competencies: 20% (18/91) of regulatory bodies provided no discernible structure, system or tools to facilitate self-assessment, while 80% (73/91) made a diverse array of self-audit approaches available to practitioners. The most common supports provided by regulatory bodies were online multiple-choice and case-study examination questions based on peer-derived standards, with answer keys to facilitate self-reflection. Within the health professions across the jurisdictions examined, guided reflection questions designed to prompt recall and deconstruction of recent clinical experiences were widely utilized. Increasingly, trigger-video recall mechanisms are being utilized, in which practitioners access a video recording of a clinical simulation and then engage in structured reflection around the practitioner-patient interaction as a mechanism for self-assessment and quality improvement.

No evidence of the value or efficacy of these tools, or of reflective practice/self-assessment more generally, was found within the specific regulatory bodies or professions examined.39,40 Within the health professions in particular, there is increasing evidence to challenge the notion that most practitioners actually engage in, or are capable of engaging in, authentic self-assessment.31,39 Adult learning theory related to CPD hinges on the first part of the cycle, self-assessment, and is based on the premise that adult practitioners are capable of self-identifying practice-related deficiencies or areas requiring improvement.24 Within the health professions in particular, the use of unguided/unfacilitated self-assessment as a springboard for ensuring professional competence has been called into question.


Several researchers have noted a structural misalignment within the CPD philosophy between practitioners' goals and broader professional or regulatory objectives, which may inhibit public declaration of learning gaps for fear this may lead to punishment.41,42 Further, the one-year CPD cycle favored by most regulators to align with annual renewal of registration may not, psycho-educationally, provide sufficient time for true engagement in the learning and professional development process: A two- to five-year cycle for professional development and practice change may be more realistic for busy practitioners.27 Attempts by regulators to use self-assessment for both summative and formative purposes cloud the true objective for practitioners, which in turn taints the self-assessment process itself.27 Making self-assessment compulsory, reportable, and measurable changes the practitioner's relationship to the act itself, making it a hoop through which he or she must jump rather than a valuable self-improvement activity.


Despite these critiques, the use of self-assessment as a cornerstone of maintenance of competency activities and reporting was widespread across the health and non-health professions. Perhaps most telling were practitioner-led blogs and chat-rooms, in which practitioners noted how susceptible this activity is to faking, and the disengagement it may subsequently produce.26 

Peer and Concealed/Unconcealed Practice-based Assessment

The use of peers as agents to evaluate ongoing maintenance of competency is widespread and widely described. This collegial model of assessment typically involves direct observation of one practitioner by another practitioner, under standardized (e.g., testing) or naturalistic (e.g., in-practice) conditions, followed by some form of structured or unstructured debriefing and assessment.

The trigger for peer or practice assessment varies based on profession and jurisdiction: Forty-nine percent (45/91) of regulatory bodies select participants through systematic random selection of a proportion of all members, with the actual number typically defined by logistics and capacity constraints rather than evidence. Eight percent (7/91) of regulatory bodies select members to participate in maintenance of competence assessment only in the event of practice-related concerns (e.g., complaints, disciplinary procedures, etc.), as part of an investigation process or follow-up. Forty-two percent (38/91) of regulatory bodies use a combination of both approaches. Unique to the profession of medicine across many jurisdictions examined was the inclusion of a specific age-related criterion for triggering peer assessment: In many jurisdictions, practicing physicians must undergo mandatory peer assessment/practice review at the age of 70 and at least every five years thereafter. Kinesiologists in Ontario are also unique in mandating assessments for all members who have practiced less than 1,500 hours in the preceding three years.


Table 3

Peer and Practice Review Assessment Methods Used


Two dominant frameworks for peer or practice-based assessment were identified: a single-level model and a laddered approach. The single-level model (used by 70% (64/91) of regulatory bodies in this study) stipulates that all members selected for review (whether randomly or in a targeted fashion) undergo the same or a substantially similar review process. This is explicitly undertaken to demonstrate procedural fairness, regardless of practice context or personal circumstances. In a laddered approach (used by 30% (27/91) of regulatory bodies), an initial standardized assessment of all selected members is followed by further, differentiated or customized assessment steps as required for a subset of participants. These further steps are individualized for each member's unique performance, and are generally designed to probe in further detail the areas of greatest concern or interest.

A wide variety of peer review assessment methods have been reported by regulatory bodies. Many regulatory bodies utilize multiple methods as part of their approach, in either a single-level or laddered manner. As many of these methods have a psychometric foundation and utilize measurement approaches, there is a growing body of literature and evidence to support their use (to a greater or lesser degree) in providing assurance of maintenance of competence.43,44,45,46 Significant emphasis on the training of peer reviewers/inspectors and the standardization of expectations contributes positively to the reliability and validity of assessments.46,47 The use of structured assessment forms and rubrics, both in this training process and in the actual peer-to-peer visit, is also important for the defensibility of the process.46,47 While each profession and jurisdiction must customize assessment forms to unique local-cultural needs, the existence of an agreed-upon structure for peer-led assessment enhances the feasibility of implementing such programs.

A significant critique of this standardization approach relates to the issue of checklist-driven practice: Reducing the complex and nuanced work of a professional into a convenient 1–2 page checklist-driven form negates the true meaning of professional practice.48,49,50 Further, as the process becomes more widespread within a profession, the checklist itself becomes professional practice, and practitioners begin to adapt their highly individualized and contextualized practices to a generic standard.51 There is some emerging evidence to suggest such approaches may also punish, or inadequately measure the performance of, the true experts and leaders in practice;51,52 in some cases, practitioners may truly be without (local) peers who can fairly assess their practice.53 

Within the profession of medicine across all jurisdictions studied, the use of multi-source feedback (360-degree reviews) has been growing;56,57,58,59 attempts to implement this form of quality assurance have also been reported in other professions and fields.60 Multisource feedback, when appropriately implemented with trained facilitators and practitioners, demonstrates sufficient psychometric reliability, validity, and generalizability to be a prominent component of a quality assurance process.56,57,58 The selection of a context-specific, validated multisource feedback instrument is crucial.56,57 Within medicine (where this has been studied most robustly), acceptable generalizability of findings appears when responses are collected from 25–35 clients and 8–15 colleagues and coworkers. There is considerable variation in the impact and outcomes associated with multisource feedback: 40–70% of practitioners report an inclination to change their behavior or practice upon receipt of feedback, while 25–55% self-report actual implementation of change based on receipt of feedback. Facilitated feedback and ongoing coaching/mentoring appear crucial to the success of multisource feedback in facilitating practice change.61,62,63,64,65,66 Of course, such systems are extraordinarily costly, time-consuming, and logistically challenging.
Areas reported to be most amenable to change following multisource feedback include communication with clients and colleagues.67,68 Multisource feedback has been promoted as serving a dual role in professional quality assurance: as an assessment of professional competency based on norm-referenced standards, and as a method to contextualize individual continuing professional development needs to ultimately stimulate behavior or practice change.62,63,66 Concerns have been expressed regarding the capacity of multisource feedback to actually evaluate all critical competencies of a professional: A client is not likely to fully grasp and appropriately rate a practitioner's technical skills or clinical knowledge, while a colleague may not accurately evaluate a peer's record-keeping practices.67 While to some degree this issue can be mitigated by expanding the circle of stakeholders involved in multisource feedback, in reality, time, resource and logistics constraints often mean this simply is not possible.


Direct observation models of quality assurance can take different forms, ranging from concealed observation, in which practitioners are unaware they are being observed (e.g., mystery shopper methods used in pharmacy in some jurisdictions), to unconcealed observation of real-world practice or standardized competence assessment using objective structured clinical examinations (OSCEs). Across the professions and jurisdictions in this study, many different variations of direct observation are used by regulatory bodies to assess maintenance of competence of practitioners.

Psychometricians consider direct observation to be a valuable tool with good content validity;69,70,71 concealed, or “mystery shopper,” observation (though perhaps ethically questionable) is thought to demonstrate even greater fidelity to actual performance, as it enables assessors to evaluate what a professional does in day-to-day practice without the confounding impact of the Hawthorne effect.72,73,74 Concealed assessment of real-world practice has been described as the “gold standard” of quality assurance in the professions.72 From a regulatory perspective, however, concealed assessment poses extraordinary challenges and risks, particularly for the relationship between regulator and practitioner. For many practitioners, such a practice raises concerns of authoritarian surveillance and sets up an inherently antagonistic relationship between professional and regulator. The notion of being “spied on” in practice can be very anxiety provoking, despite psychometric evidence as to its value and impact.73 The stress and harm this can produce have severely limited the use of “mystery shoppers” within regulated professions, despite the widespread use of such techniques in investigative journalism/reporting on professionals. The Pharmacy Guild of Australia has arguably the most highly developed concealed observation program, in which pharmacists' clinical and customer service performances are evaluated.72,73,74 Scenarios for this process are developed through a multi-stakeholder process (including pharmacists, clients, and other health professionals); mystery shoppers are well-trained professional actors capable of reliably portraying their scenario on multiple occasions and responding in semi-standardized fashion to the flow of interaction with the unsuspecting pharmacist; all interactions are secretly video-recorded; and assessment is undertaken externally using validated assessment instruments.75,76,77 

Unconcealed direct observation has been most widely studied with students and trainees, rather than with practicing professionals.78,79 A strong emphasis in this approach is on the design of psychometrically defensible global/holistic scales and analytical checklists that can be used to standardize the assessment process.80,81 Commonly assessed skills include history-taking, communication, the technical skills of the profession, counseling, negotiation and conflict management.78,79,81 Unconcealed direct observation can be standardized or unstandardized: standardized processes use traditional testing methods (e.g., multiple-choice case-based tests of knowledge, or simulations such as OSCEs).78,79,81 Health professions are, in general, much further advanced in these areas than the non-health professions examined in this study, though several non-health professions (notably law and teaching) have expressed interest in these approaches. Unstandardized, or naturalistic, processes use standardized assessment tools within real-world, context-sensitive practice settings. This approach has been criticized as incapable of truly supporting comparisons within a cohort of professionals: if the observer happens, by chance, to arrive on a terrible or busy day, the practitioner may be disadvantaged.78,80

As described in this study, a plethora of different methods and models is used by regulatory bodies across professions and jurisdictions in the name of "quality assurance." A key finding from this study is that no one size fits all: not only does each profession use or require a unique model, but professions within the same jurisdiction also use or require their own unique models. Thus, there is no single "gold standard" quality assurance mechanism to ensure maintenance of competence of practitioners.

In large part, the choice of a specific method or model seems to be driven by regulators' need to balance sometimes-conflicting duties to the general public and to the professionals whom they govern. For example, the strongest consequential-validity evidence for any quality assurance mechanism is that for concealed direct assessment of practitioners,77 yet this approach is used by only one profession in one jurisdiction, and even there it is extraordinarily challenging. One strongly worded blog post from a pharmacist in Australia stated that he would feel "professionally violated" if he were ever involved in such an interaction; this very strong emotional response to the process continues to make implementation or spread of the method almost impossible, and continuously threatens the viability of the model in Australia.75

This balance regulators face is further complicated by real-world exigencies of cost, time, resources, and logistics. As a result, the vast majority of regulatory bodies in this study appear to have opted for quality assurance mechanisms that are easily implemented, conceptually simple to grasp, and possess some measure of face validity: for example, mandatory continuing education hours, or maintenance and audit of a learning portfolio. Unfortunately, the evidence supporting the value of such methods in terms of practice improvement or maintenance of competency is sparse. Where more sophisticated and defensible competency assessment methods have been used (for example, unconcealed direct observation or chart-stimulated recall), there is a strong need to ensure the psychometric rigor of the process. This requires investment in training of peers and practitioners, development and validation of high-quality and defensible assessment tools, and time-consuming, costly mechanisms for one-on-one observation. Other methods, such as multi-source feedback, have the potential to demonstrate consequential validity but require investments in ongoing coaching and mentoring that are generally beyond the resources, or remit, of regulatory bodies.

Perhaps uncomfortably, this research has highlighted the widespread use, across professions and jurisdictions, of quality assurance mechanisms that may not have any particular value or impact in achieving the stated objective of ensuring maintenance of competence and safe and effective professional practice. While the mechanisms themselves, such as continuing education, do require time and effort and may on the face of it appear meaningful, the actual evidence supporting their value is scant at best. This is a significant challenge for regulators: while it is of course important to be seen to be ensuring maintenance of competence of practitioners, such activity should actually have that effect in reality. Currently, as highlighted in this study, the bulk of activities used across a broad swath of professions worldwide do not appear to be as meaningful as regulators may wish them to be, or as the general public may expect them to be.

Ultimately, the tension between psychometrics and practicality, or between practitioner acceptability and public accountability, will not be easily resolved. As highlighted in this study, there can be no one-size-fits-all solution to the quality-assurance conundrum. Increasingly, regulatory bodies (particularly in the health professions) have noted the value of multiple methods at multiple times as a more feasible and effective approach to quality assurance.82 Ongoing work to evaluate these more complex and evolving models is required. A particular strength of this work is its use of the structure/process/outcome framework for evidence initially proposed by Donabedian,83 which highlights the importance of data and evidence to support conclusions regarding program quality.

This study was initially undertaken in an effort to better understand the diverse methods by which regulatory bodies around the world in different health- and non-health professions fulfilled their mandate of ensuring safe and effective professional practice and ongoing maintenance of competence of practitioners. As a scoping review, this research is indicative, rather than statistically representative, of the regulatory communities studied. Findings should be interpreted with caution, as the convenience sample selected for study is limited to large professions in English-speaking jurisdictions only.

The absence of gold-standard models or practices, and the recognition that for all professions and jurisdictions this is a work in progress, is sobering. Honest self-appraisal by regulators, and the public they serve, is essential in order to manage expectations as to what regulators and quality-assurance mechanisms can actually accomplish. Trade-offs in terms of costs/benefits and risks/rewards must constantly be balanced. While theoretically and conceptually superior methods may exist, real-world exigencies may preclude their implementation. Conversely, the existence of such exigencies should not be used as an excuse by regulators or others to condone ongoing use of weaker methods with scant evidence of impact simply because they are quick, cheap and easy. Moreover, when such methods are used, they should be honestly presented to the public and other stakeholders, and not "sold" as being more meaningful or powerful than the evidence behind them warrants.

There appears to be interest and effort amongst regulators to improve quality-assurance mechanisms and to address the challenges associated with doing so. As highlighted in this study, every profession in every jurisdiction is struggling with similar issues and challenges; moving towards more collaborative interprofessional and inter-jurisdictional approaches to quality assurance of professional practice may provide opportunities to pool resources, enjoy economies of scale, and ultimately lead the evolution of regulatory practices in a more meaningful way.

The authors gratefully acknowledge the contributions of the staff and Council members of the College of Physiotherapists of Ontario, who provided funding to support this research and who contributed their comments and suggestions. The work of Lesley Young, BScPhm, Pharm D (who was, at the time of this work, a research student), is most gratefully acknowledged.

References

1. Horsley T, Grimshaw J, Campbell C. Maintaining the competence of Europe's workforce. BMJ 2010;341:706-708.
2. Emanuel E. A half-life of 5 years. Can Med Assoc J 1975;112(5):572.
3. Review of International CPD Models: Final Report. Pharmaceutical Society of Ireland; 2010.
4. TriPartite Alliance (Royal College of Physicians and Surgeons of Canada, Royal Australasian College of Physicians and Royal Australasian College of Surgeons). Work-based assessment: a practical guide. Building an assessment system around work. 2014.
5. Marinopoulos S, Dorman T, Ratanawongsa N, et al. Effectiveness of continuing medical education. Evidence Report Number 149. Prepared for Agency for Healthcare Research and Quality, US Dept of Health. 2007.
6. Tran D, Tofade T, Thakker N, Rouse M. US and international health professions requirements for continuing professional development. American Journal of Pharmaceutical Education 2014;78(6):129.
7. FitzGerald M, Walsh K, McCutcheon H. An integrative systematic review of indicators of competence for practice & protocol for validation of indicators of competence. Queensland Nursing Council, 2001. Accessed September 14, 2016 at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.8337&rep=rep1&type=pdf
8. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
9. Swankin D, LeBuhn R, Morrison R. Implementing continuing competency requirements for healthcare practitioners. American Association of Retired Persons Public Policy Institute; 2006. Accessed September 14, 2016 at: https://www.nbcrna.com/about-us/Documents/ImplementingCC%20Requirements%20for%20HCP%202006.pdf
10. Morrison J. Research issues in CPD. The Lancet 2003;362(9381):410.
11. Regulated Health Professions Act, 1991, S.O. 1991, c. 18.
12. Davis D, Galbraith R. Continuing medical education effect on practice performance: effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest 2009;135(3 Suppl):42S-48S.
13. Davis D, Thomson M, Oxman A, Haynes R. Evidence for the effectiveness of CME. A review of 50 randomized controlled trials. JAMA 1992;268(9):1111-7.
14. Davis D, Thomson M, Oxman A, Haynes R. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA 1995;274(9):700-5.
15. Firmstone V, Elley K, Skrybant M, Fry-Smith A, Bayliss S, Torgerson C. Systematic review of the effectiveness of continuing dental professional development on learning, behavior, or patient outcomes. J Dental Educ 2013;77(3):300-315.
16. Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2009 Apr 15;(2):CD003030. doi:10.1002/14651858.CD003030.pub2
17. Griscti O, Jacono J. Effectiveness of continuing education programmes in nursing: literature review. J Advanced Nursing 2006;55(4):449-456.
18. Mazmanian P, Davis D, Galbraith R. Continuing medical education effect on clinical outcomes. Chest 2009;135:49S-55S.
19. McConnell K, Newlon C, Delate T. The impact of continuing professional development versus traditional continuing pharmacy education on pharmacy practice. Ann Pharmacother 2010;44(10):1585-95.
20. Davis D, O'Brien M, Freemantle N, Wolf F, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education. Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282(9):867-874.
21. Dixon D, Takhar J, Macnab J, et al. Controlling quality in CME/CPD by measuring and illuminating bias. J Contin Educ Health Prof 2011;31(2):109-16.
22. Board of Health Care Services, Institute of Medicine. Redesigning continuing education in the health professions. Institute of Medicine; 2010.
23. Kolb D. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall; 1984.
24. Merriam S, Caffarella R, Baumgartner L. Learning in Adulthood: A Comprehensive Guide, 3rd Edition. San Francisco, CA: John Wiley & Sons; 2007.
25. College of Respiratory Therapists of Ontario. Quality Assurance Program 2013 Evaluation Final Report.
26. Austin Z, Marini A, Desroches B. Use of a learning portfolio for continuous professional development: a study of pharmacists in Ontario (Canada). Pharmacy Education 2005;5(3-4):175-181.
27. Tochel C, Haig A, Hesketh A, Cadzow A, Beggs K, Colthart I, Peacock H. The effectiveness of portfolios for post-graduate assessment and education: BEME Guide No 12. Med Teach 2009;31(4):299-318.
28. Taylor L, Keir J. Quality Assurance Program Evaluation Report 2012. College of Dental Hygienists of Ontario. Accessed September 14, 2016 at: http://www.cdho.org/docs/default-source/pdfs/evaluation/qaevaluationreport.pdf
29. American Psychological Association Task Force on the Assessment of Competence in Professional Psychology. Final Report, October 2006.
30. Donyai P, Alexander A, Denicolo P. CPD records for revalidation: assessing fitness-to-practice using revalidation standards and an outcomes framework. Project Report. The General Pharmaceutical Council, London; 2010. Accessed September 14, 2016 at: http://centaur.reading.ac.uk/26381/
31. Austin Z, Gregory P, Galli M. "I just don't know what I'm supposed to know": evaluating self-assessment skills of international pharmacy graduates in Canada. Research in Social and Administrative Pharmacy 2008;4(2):115-124.
32. Davis D, Mazmanian P, Fordis M, Van Harrison R, Thorpe K, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence. JAMA 2006;296(9):1094-1102.
33. Colthart I, Bagnall G, Evans A, Allbutt H, Haig A, Illing J, McKinstry B. The effectiveness of self-assessment on the identification of learner needs, learner activity and impact on clinical practice: BEME Guide no. 10. Med Teach 2008;30(2):124-145.
34. Eva K, Regehr G. "I'll never play professional football" and other fallacies of self-assessment. Journal of Continuing Education in the Health Professions 2008;28(1):14-19.
35. Nagler A, Andolsek K, Padmore J. The unintended consequences of portfolios in graduate medical education. Acad Med 2009;84(11):1522-1536.
36. Ibrahim J. Continuing professional development: a burden lacking educational outcomes or a marker of professionalism? Med Educ 2015;49(3):240-2.
37. Lee N. An evaluation of CPD learning and impact upon positive practice change. Nurse Educ Today 2011;31(4):390-5.
38. College of Physiotherapists of Ontario. Professional Portfolio Guide - Quality Management Program. 2013.
39. Gagliardi A, Brouwers M, Finelli A, Campbell C, Marlow B, Silver I. Physician self-audit: a scoping review. J Contin Educ Health Prof 2011;31(4):258-64.
40. Schostak J, Davis M, Brown T, Driscoll P, Starke I, Jenkins N. 'Effectiveness of Continuing Professional Development' project: a summary of findings. Med Teach 2010;32(7):586-592.
41. MAINPORT – Streamlined, learner-centered, flexible. 2014.
42. Miller G. The assessment of clinical skills/competence/performance. Acad Med 1990;65(9 Suppl):S63-67.
43. Fromme H, Karani R, Downing S. Direct observation in medical education: a review of the literature and evidence for validity. Mount Sinai Journal of Medicine 2009;76(4):365-371.
44. Jouriles N, Emerman C, Cydulka R. Direct observation for assessing emergency medicine core competencies: interpersonal skills. Academic Emergency Medicine 2002;9(11):1338-1341.
45. Kogan J, Holmboe E, Hauer K. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA 2009;302(12):1316-1326.
46. Kane M. The assessment of professional competence. Evaluation and the Health Professions 1992;15(2):163-182.
47. Bardage C, Westerlund T, Barzi S, Bernsten C. Non-prescription medicines for pain and fever — a comparison of recommendations and counselling from staff in pharmacy and general sales stores. Health Policy 2013;100(1):76-83.
48. Harden R, Gleeson F. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ 1979;13(1):41-54.
49. Norcini J, Anderson B, Bollela V, Burch V, Costa M, Duvivier R, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 conference. Med Teach 2011;33(3):206-214.
50. Austin Z, Marini A, Croteau D, Violato C. Assessment of pharmacists' patient care competencies: validity evidence from Ontario (Canada)'s quality assurance and peer review process. Pharmacy Education 2004;4(1):23-32.
51. Harden R. Twelve tips for organizing an Objective Structured Clinical Examination (OSCE). Med Teach 1990;12(3-4):259-64.
52. Epstein R, Hundert E. Defining and assessing professional competence (review). JAMA 2002;287(2):226-236.
53. Kirton S, Kravitz L. Objective structured clinical examinations (OSCEs) compared with traditional assessment methods. American Journal of Pharmaceutical Education 2011;75(6).
54. Goulet F, Jacques A, Gagnon R, Racette P, Sieber W. Assessment of family physicians' performance using patient charts: interrater reliability and concordance with chart-stimulated recall interview. Eval Health Prof 2007;30(4):376-392.
55. Sargeant J, Macleod T, Sinclair D, Power M. How do physicians assess their family physician colleagues' performance?: creating a rubric to inform assessment and feedback. J Contin Educ Health Prof 2011;31(2):87-94.
56. Lockyer J. Multisource feedback: can it meet criteria for good assessment? J Contin Educ Health Prof 2013;33(2):89-98.
57. Violato C, Lockyer J, Fidler H. Changes in performance: a 5-year longitudinal study of participants in a multi-source feedback programme. Med Educ 2008;42(10):1007-1013.
58. Sargeant J, Mann K, Sinclair D, van der Vleuten C, Metsemakers J. Challenges in multisource feedback: intended and unintended outcomes. Med Educ 2007;41(6):583-591.
59. Violato C, Lockyer J, Fidler H. Multisource feedback: a method of assessing surgical practice. BMJ 2003;326(7388):546-548.
60. Violato C, Worsfold L, Polgar J. Multisource feedback systems for quality improvement in the health professions: assessing occupational therapists in practice. J Contin Educ Health Prof 2009;29(2):111-119.
61. Ferguson J, Wakeling J, Bowie P. Factors influencing the effectiveness of multisource feedback in improving the professional practice of medical doctors: a systematic review. BMC Med Educ 2014;14:76.
62. Bracken D, Rose D. When does 360-degree feedback create behavior change? And how would we know it when it does? Journal of Business and Psychology 2011;26(2):183-192.
63. Smither J, London M, Reilly R. Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology 2005;58:33-66.
64. Overeem K, Wollersheim H, Arah O, Cruijsberg J, Grol R, Lombarts K. Factors predicting doctors' reporting of performance change in response to multisource feedback. BMC Med Educ 2012;12:52.
65. Elwyn G, Lewis M, Evans R, Hutchings H. Using a peer assessment questionnaire in primary medical care. Br J Gen Pract 2005;55(518):690-695.
66. Weissman S. Multisource feedback: problems and potential. Acad Med 2013;88(8):1055.
67. Ng K, Koh C, Ang S, Kennedy J, Chan K. Rating leniency and halo in multisource feedback ratings: testing cultural assumptions of power distance and individualism-collectivism. J App Psych 2011;96(5):1033-1044.
68. Archer J, McGraw M, Davies H. Assuring validity of multi-source feedback in a national programme. Postgrad Med J 2010;86(1019):526-531.
69. Fromme H, Karani R, Downing S. Direct observation in medical education: a review of the literature and evidence for validity. Mount Sinai Journal of Medicine 2009;76(4):365-371.
70. Jouriles N, Emerman C, Cydulka R. Direct observation for assessing emergency medicine core competencies: interpersonal skills. Acad Emerg Med 2002;9(11):1338-1341.
71. Kogan J, Holmboe E, Hauer K. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA 2009;302(12):1316-1326.
72. Moriarty H, McLeod D, Dowell A. Mystery shopping in health service evaluation. Br J Gen Pract 2003;53(497):942-946.
73. Glasier A, Manners R, Loudon J, Muir A. Community pharmacists providing emergency contraception give little advice about future contraceptive use: a mystery shopper study. Contraception 2010;82(6):538-542.
74. Benrimoj S, Werner J, Raffaele C, Roberts A, Costa F. Monitoring quality standards in the provision of non-prescription medicines from Australian community pharmacies: results of a national program. Qual Saf Health Care 2007;16(5):354-358.
75. Rhodes K, Miller F. Simulated patient studies: an ethical analysis. The Milbank Quarterly 2012;90(4):706-724.
76. Rhodes K. Taking the mystery out of "mystery shopper" studies. N Engl J Med 2011;365(6):484-486.
77. Epstein R, Hundert E. Defining and assessing professional competence (review). JAMA 2002;287(2):226-236.
78. Kirton S, Kravitz L. Objective structured clinical examinations (OSCEs) compared with traditional assessment methods. Am J Pharm Educ 2011;75(6):111.
79. Tabish S. Assessment methods in medical education. Int J Health Sci 2008;2(2):3-7.
80. Austin Z, Croteau D, Marini A, Violato C. Continuous professional development: the Ontario experience in professional self-regulation through quality assurance and peer review. Am J Pharm Educ 2003;67(2):56.
81. Krishnamurthy R, VandeCreek L, Kaslow N, Tazeau Y, Miville M, Kerns R, et al. Achieving competency in psychological assessment: directions for education and training. J Clin Psych 2004;60(7):725-739.
82. American Board of Medical Specialties (ABMS). Toolbox of assessment methods: ACGME Outcomes Project, Accreditation Council for Graduate Medical Education, v1.1. 2000.
83. Donabedian A. The quality of care: how can it be assessed? JAMA 1988;260(12):1743-1748.

About the Authors

Zubin Austin, BScPhm, PhD, is Professor and Murray Koffler Chair in Management, Leslie Dan Faculty of Pharmacy, University of Toronto.

Paul A.M. Gregory, MLS, is Research Associate, Leslie Dan Faculty of Pharmacy, University of Toronto.