Overdoses of prescription medications continue to be a significant concern for health systems around the world. Medical regulators in several jurisdictions have started generating personalized prescribing profiles for individual physicians as an audit and feedback tool to reduce the sub-optimal prescribing of high-risk drugs such as opioids, benzodiazepines and stimulants. However, little is known about how to most effectively communicate the data in these prescriber profiles to the intended recipients. The aim of this study was to collect the opinions of physicians in Saskatchewan, Canada, regarding their personalized prescriber profiles. One-on-one semi-structured interviews were completed in January 2019 with 17 physicians who were given access to personalized profiles containing their prescribing information on opioids, benzodiazepines, stimulants and gabapentin. Interviews were recorded and data were analyzed using thematic analysis. Respondents thought the profiles were a useful tool that had significant potential to improve their prescribing practices. However, many physicians also thought the profiles were confusing and difficult to interpret. Several recommendations were made to improve the prescriber profiles, which may be applicable to other jurisdictions currently using, or planning to develop, similar quality improvement tools. These recommendations include: limiting the use of abbreviations and acronyms; being explicit regarding the intent of the profiles; ensuring comparator data are relevant to the individual recipient; using a combination of numbers and visuals to display data; and providing detailed context regarding what the data means.
The drug overdose epidemic in the United States and Canada continues to have catastrophic effects on individuals, communities, and health systems. In 2018, there were 67,367 drug overdose deaths in the United States and opioids were responsible for 46,802 (69.5%) of those fatalities.1 These drug overdose deaths are tracked by the U.S. National Vital Statistics System (NVSS), which does not differentiate whether or not the opioid that caused the fatality was illegally obtained, but it does differentiate between prescription opioids, heroin and synthetic opioids other than methadone.1 The NVSS also does not attempt to determine if deaths caused by prescription opioids are related to patient misuse or physician malpractice. However, the proportion of the deaths related to opioids that were prescribed by physicians and legally dispensed from pharmacies is significant and represented 32.0% (14,975/46,802) of all opioid related overdose deaths in 2018.1 In Canada, there were 3,823 deaths related to opioid overdose in 2019 and 94% were found to be accidental or unintentional.2 Several commonly prescribed non-opioid drugs, such as benzodiazepines, gabapentin and zopiclone, also have a high potential for misuse and have been shown to increase the risk for overdose mortality, particularly when combined with opioids.3,4
Considering that a significant proportion of drug overdose deaths are related to prescription medications, many jurisdictions are committing resources to support physicians to reduce the prescribing of high-risk drugs. Audit and feedback is a quality improvement intervention that has been successfully utilized within health systems when change is required to address crises, such as the current drug overdose epidemic.5 A review published in 2019 identified 211 papers that documented the effectiveness of this intervention in medical practice.5 When applied to target sub-optimal prescribing, audit and feedback typically involves providing physicians with a summary of personal prescribing history, along with a comparator that assists recipients to gauge their performance and identify the need to change their practice. Commonly utilized comparators include prescribing rates of peers, recipients’ own historical prescribing rates and best practice guidelines.5
Audit and feedback has been used most commonly to improve antibiotic prescribing, but it has also been studied to target prescribing of opioids and other commonly misused drugs.6–11 A 2019 systematic review of acute care interventions that included audit and feedback found a significant reduction in overall opioid prescribing, with a 71% drop in utilization of high-risk agents such as meperidine.12 A randomized controlled trial in the United States also found reductions in opioid prescribing when an audit and feedback intervention was implemented with physicians in emergency departments.13
Guidelines have been published that suggest audit and feedback is most effective at improving prescribing when personalized data are provided to individual prescribers by a supervisor or colleague, on more than one occasion, both verbally and in writing, and including specific benchmarks or targets.5 Unfortunately, little is known regarding how to most effectively present and communicate the data in prescriber profiles so that the information can be easily understood and interpreted, in a way that ensures that recipients are willing and able to use the profiles to change their prescribing practices.
In 2016, the College of Physicians and Surgeons of Alberta (CPSA), the medical regulatory authority in one of Canada’s most populous provinces, used audit and feedback to target the inappropriate prescribing of opioids and benzodiazepines.14 The CPSA created customized reports that detailed individual physicians’ prescribing rates over a period of time, which included the average doses prescribed and flagged specific patients who were receiving doses higher than recommended in practice guidelines. The reports included the mean prescribing data of all other physicians in Alberta as the comparator. When the CPSA surveyed recipients of the prescribing profile to collect their opinions, most physicians reported that they reviewed their personalized report and more than half planned to make a change to their prescribing practices as a result of the information.14 Although many of the comments made by respondents at the end of the survey were positive, numerous negative responses were received, suggesting that the manner and format in which the data was communicated could be improved. Some physicians found the profile was personally offensive and insulting, several stated that the report provided no useful information and others reported that the profile was confusing and difficult to interpret.14 Unfortunately, there was no data in the published literature to guide the CPSA in how to best present and communicate the data in their prescriber profiles.
In 2018, the College of Physicians and Surgeons of Saskatchewan (CPSS), the medical regulatory authority in another Canadian province, developed an audit and feedback tool similar to the one used by the CPSA. In an effort to avoid the negative feedback received by the CPSA, the CPSS made efforts to ensure their prescriber profiles were written in a way that was not offensive or insulting, and that provided useful information that was presented in a manner that was not confusing or difficult to interpret. Unfortunately, the survey published by the CPSA did not specify which aspects of the Alberta prescriber profiles were found to be offensive, confusing, or difficult to interpret, suggesting that additional research would be useful in this area.14
The CPSS prescriber profiles used data from the Saskatchewan Prescription Review Program (PRP), which has access to the dispensing records for a sub-set of high-risk medications (Table 1) from all community pharmacies in the province. The dispensing data for these medications are collected by the Saskatchewan Ministry of Health Drug Plan and are shared with the PRP for the purpose of monitoring for potentially sub-optimal prescribing and/or potential abuse or diversion.
The prescriber profiles included a summary of individual physician prescribing rates for each of the following four categories of prescribed medications: opioids, benzodiazepines, stimulants and gabapentin (Figure 1). Dispensing data related to medications prescribed within these four categories, but not monitored by the PRP, would not have been included in the reports (e.g., tramadol, pseudoephedrine, zopiclone, pregabalin). The following details were provided for each of the four categories of drugs: the number of patients who were prescribed a drug in the category; the average daily dose prescribed per patient (presented as oral morphine equivalent per day per patient or “OME,” oral diazepam equivalent per day per patient or “ODE,” and the defined daily dose of stimulants and gabapentin or “DDD”); the number of patients prescribed three or more drugs within each category (except for gabapentin); the number of patients who had three or more prescribers of a drug within each category; and the number of seniors (i.e., those aged 65 and over) who were prescribed a drug within each category.
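As a rough illustration of the dose standardization described above, the following sketch computes an average oral morphine equivalent (OME) per day for one patient from dispensing records. The conversion factors and record format shown here are hypothetical assumptions for illustration only; the PRP's actual conversion tables and data structures are not specified in this article.

```python
# Illustrative only: conversion factors and record layout are assumptions,
# not the PRP's actual tables.
OME_FACTORS = {"morphine": 1.0, "oxycodone": 1.5, "hydromorphone": 5.0}

def avg_daily_ome(dispensings):
    """Average oral morphine equivalent (mg) per day for one patient.

    dispensings: list of (drug_name, total_mg_dispensed, days_supply).
    """
    # Convert each dispensing to morphine-equivalent milligrams.
    total_ome = sum(mg * OME_FACTORS[drug] for drug, mg, _ in dispensings)
    # Divide by the total days of supply to get a per-day average.
    total_days = sum(days for _, _, days in dispensings)
    return total_ome / total_days if total_days else 0.0
```

For example, 300 mg of oxycodone dispensed over 30 days would yield an average of 15 OME per day under these assumed factors. The analogous ODE and DDD calculations would substitute diazepam-equivalent or defined-daily-dose factors.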
The comparators provided in the profiles included the median scores of other physicians in the same medical specialty (presented as “specialty median”), the median scores of all physicians in the province (presented as “Saskatchewan or SK median”), the individual’s percentile among others in their medical specialty (presented as “percentile in specialty”), and the individual’s percentile among all physicians in the province (presented as “percentile in Saskatchewan or SK”). Abbreviated definitions of all of these terms were included as footnotes at the bottom of the document for recipients who were not familiar with them. The profile also included an appended list of patient names, along with each patient’s medications and doses within each of the four categories, as prescribed by the individual physician (Figure 1).
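One plausible reading of the comparators described above is a peer-group median plus a percentile rank within that group. The sketch below assumes "percentile" means the share of peers whose value is at or below the physician's own; the CPSS's exact definition is not stated in this article, so this interpretation is an assumption.

```python
import statistics
from bisect import bisect_right

def comparators(my_value, peer_values):
    """Return (peer median, percentile rank of my_value among peers).

    Percentile is interpreted here as the percentage of peers whose
    value is at or below my_value; this interpretation is an assumption,
    since the profile's exact method is not documented.
    """
    ordered = sorted(peer_values)
    median = statistics.median(ordered)
    # bisect_right counts how many peer values are <= my_value.
    percentile = 100.0 * bisect_right(ordered, my_value) / len(ordered)
    return median, percentile
```

Under this reading, the "specialty median" and "percentile in specialty" columns would be produced by calling this function with the recipient's value and the values of all physicians in the same specialty, while the provincial columns would use all Saskatchewan physicians as the peer group.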
Considering that little is known regarding how to most effectively present and communicate the data in prescriber profiles for physicians regarding opioids and other high-risk medications, along with the negative, but non-specific, feedback that Alberta regulators received from physicians about their prescriber profiles, the CPSS was interested in collecting feedback from prospective recipients regarding their newly developed audit and feedback tool. The aim of this study was to collect the attitudes and opinions of a sample of Saskatchewan physicians regarding the CPSS prescriber profile to determine if the information was useful and if it was presented in a manner that was not offensive, confusing or difficult to interpret.
Any physician licensed in the Province of Saskatchewan was eligible to participate in this study. The CPSS invited physicians to participate in a one-on-one, semi-structured telephone interview by including a notice in its November/December 2018 communiques, which are emailed on a monthly basis to all practicing physicians in Saskatchewan. Individuals interested in participating were prompted to contact researchers directly. An interview guide was developed, which was pilot-tested with three physicians who did not plan on participating in the study. The pilot participants provided feedback to improve the clarity of the interview questions and the guide was revised accordingly. The telephone interviews were performed in January 2019 by an individual researcher who was trained in this methodology.
Several days prior to the scheduled interview, participants were emailed a copy of their personalized prescriber profile to review before the interview. Each telephone interview was scheduled for 30 minutes and all of the sessions were recorded and transcribed verbatim. Interviews were intended to continue until saturation of data was reached or until all physicians who agreed to participate had completed an interview, whichever occurred first. Saturation was defined as the point in the data collection when each successive interview failed to produce any new information.15 To determine if saturation was reached, the interviewer and the primary investigator met soon after each interview to listen to the recording and make a decision.
Thematic analysis of the interview transcripts was performed external to the research team by two analysts from the Social Sciences Research Laboratory (SSRL) at the University of Saskatchewan. The SSRL employs expert researchers with training and experience in qualitative data analysis methodologies. The qualitative data analysis software package NVivo 12 Pro for Windows was utilized to assist in the analysis. First, the two analysts familiarized themselves with the data by reading the transcripts verbatim without assigning any coding. The analysis then consisted of a descriptive phase, based on a set of a priori categories established by the research team. In this case, those categories equated to the interview guide prompts themselves. The analytical and more interpretive phase from which additional themes or sub-themes emerged was the sole responsibility of the SSRL. Once all transcripts had been coded to initial categories, each major category was revisited to determine if any sub-categories needed to be made, or whether any re-coding or “uncoding” needed to occur.
The aim of the resultant qualitative analysis was evaluative in nature to provide a deep and focused view of the data. A pre-defined coding framework based on the structure of the interview guide (i.e., coding by question) was adopted to guide the descriptive coding and focus of analysis based on the research study objectives and categorical impressions derived from the interviews. Overall, the thematic analysis approach occurred in an iterative process until relative saturation of themes was achieved in two stages: descriptive coding, which focused on semantic meanings presented by participants, followed by analytic coding, which focused on organizing higher themes based on more latent meanings from the resultant descriptive themes. The research protocol was approved by the Research Ethics Board at the University of Saskatchewan.
A total of 17 interviews were completed in January 2019, at which point it was determined that saturation of data had been achieved and participant recruitment was discontinued. Of the 17 participants, 16 were family physicians and one was an internal medicine specialist. Beyond medical specialty, demographic information regarding the respondents was not reported, in order to respect their wish that their participation remain anonymous and confidential. The themes that emerged from the data analysis are organized by interview question.
Question #1: Was the prescriber profile clear and easy to interpret?
There were divergent views and a relatively even split in opinions on whether or not the profile was clear and easy to interpret. Eight physicians’ initial response was that it was clear and well-organized, or that it was “relatively easy to go through” (Interview 10). Some participants fell in the middle, stating that “parts of it were easy to understand” (Interview 15) or that “once you got into it, it became clearer” (Interview 16). It took some participants time to figure out what some of the abbreviations meant, but eventually the profile made sense to them. Five of the physicians stated without hesitation that it was not clear or easy to understand, explaining that considerable guesswork or additional effort was required to understand the profile, and many were left wondering what the data meant.
Question #1 Sub-theme: What was unclear?
Physicians readily identified areas of confusion with the profile. This theme was separated into three categories: (1) abbreviations, (2) lack of context, and (3) formatting.
1. Abbreviations
A large proportion of respondents reported confusion regarding the frequent use of abbreviations (i.e., OME, ODE, DDD). Some did not notice that the abbreviations were defined in the footnotes at the bottom of the profile. Others were aware of the footnotes, but were still confused by the wording of the definitions and/or annoyed at having to frequently refer to footnotes to understand the information in the profile.
“ I had to figure out what the abbreviations meant and even after looking at it a few times I wasn’t sure. I had to look at the bottom over and over.” (Interview 3)
“ Some of the abbreviations were not familiar to me so I had to go up and down to the bottom of the page and even then I wasn’t sure. It took a little while to wrap my head around the concept of the ODE being equivalent to 10 mg of diazepam.” (Interview 14)
2. Lack of context
There was significant confusion regarding how to interpret and understand the numbers presented in the profile, due to a reported lack of context. There was particular confusion about the four comparators (i.e., specialty median, percentile in specialty, Saskatchewan median, percentile in Saskatchewan). Respondents were unclear what these four terms meant and how it was determined which physicians belonged to a certain specialty. This confusion not only prevented physicians from understanding their own prescribing patterns, but also made it difficult to draw meaningful comparisons between their own data and the comparators, which is key to using the profiles to determine the need to change one’s practice.
“ It was hard to interpret. I had to look at it for a long time before I really understood what the numbers meant. Medians are not like intuitive you know what I mean? I think it’s probably the best number for you to use but you have to concentrate to figure out the information.” (Interview 3)
“ I guess specialty median means where I stand in family medicine, but is that all family docs in the province or just in my city…and then percentile in specialty, I don’t get that at all.” (Interview 9)
“ I look at my specialty median number and I don’t know what it means. I am a family doctor with a sub-speciality, so does it include only others with a similar sub-speciality? I doubt it, but I am just not sure.” (Interview 10)
There was also confusion regarding the list of individual patient names and corresponding prescribing history that was appended to the profile and many respondents were adamant that the list required more context in order to be useful. Respondents were unsure why certain patients were selected, since most respondents thought it was not an exhaustive list of all patients to whom they had prescribed these drugs.
“ …the other thing I didn’t understand was the actual patient list…was it intended to be comprehensive or not, because it includes patients who are not even my patients…so they’d be patients of a colleague who I prescribed for once.” (Interview 15)
3. Formatting
The design of the profile, including its aesthetics, formatting and placement of information, was the last problem identified by physicians as causing difficulty in interpreting the data. Respondents reported that the information would have been easier to interpret with a simple reconfiguration. The two most common suggestions were using fewer abbreviations and incorporating bar graphs or pie charts to make the data more visually appealing and easier to interpret.
Question #2: What did you like about the profile?
Everyone had something positive to say about the profile. Physicians found it to be “information I have not previously had” (Interview 5) and all respondents stated that it had the potential to be a very useful tool. Respondents all appreciated that the profiles were being created and were thankful for the work that was being done to eventually disseminate them broadly. Positive perceptions revolved around the fact that the profile was brief, contained useful information and attempted to compare an individual’s prescribing data with his or her peers. Physicians liked “knowing where I sit in my specialty.” (Interview 3). Many also liked that it provided details regarding the doses they had prescribed and also that it included drugs other than opioids (i.e., gabapentin, stimulants, benzodiazepines).
Question #3: What did you dislike about the profile?
Plainly put, what participants disliked about the profile were the things that made it unclear and difficult to interpret (Question #1). Many felt that the profile prompted more confusion or questions than clear answers. Many also disliked the fact that it was not clearly stated anywhere on the document what the intent of the profile was and why it was being created. Several noted that the data is very sensitive and could be interpreted as punitive, threatening or “scolding” rather than as an informative tool meant to assist physicians to improve their practices. While the CPSS’s good intentions may be implied, respondents stated that it is dangerous not to be explicit regarding the profile’s purpose.
Question #4: What would you change?
The prominent view held by the physicians interviewed was that the profile has the potential to be informative and impactful, but in order for that to happen, it needs to be made much clearer. It should be no surprise that what physicians said they would change revolved mainly around the same features that caused confusion in Question #1. Responses to this question led to four key suggestions for change: (1) more clarity and context, (2) visualizations, (3) background information, and (4) more accurate comparator data.
1. More clarity and context
A lack of information, and little to no explanation of the information that was included, were the main factors contributing to misunderstanding. Several people noted that if the metrics and abbreviations were better explained, the profiles would have more meaning. Respondents suggested that terms such as “specialty median,” “percentile in specialty,” “Saskatchewan median,” and “percentile in Saskatchewan” should be better defined. They also suggested that information should be included regarding how the various specialties were defined, along with which specialty the recipient fell into. It was also suggested that the profile should specify which opioids, benzodiazepines, and stimulants were included in each category. Finally, respondents requested that the use of abbreviations be reduced, and that when abbreviations are used, each should be defined in a simpler manner (e.g., in brackets immediately next to the abbreviation). Many respondents thought the existing definitions of the abbreviations in the footnotes were confusing and needed to be written more plainly.
“ The average OME, ODE, DDD needs a lot of clarification. Because I still don’t even know what each means even after I read the definitions.” (Interview 3)
“ It states that ODE is oral diazepam equivalent in doses of 10 milligrams…so does an ODE of 2.5 mean the equivalent of 25 milligrams of diazepam or does it mean 2.5 milligrams?” (Interview 8)
“ I looked at the ‘my practice’ and the ‘specialty median’ numbers and I assume that means my prescribing data and me compared to other family doctors. I understand that my number, 77.2, means I’m in the upper percentiles and then when I look at the ‘Saskatchewan median’ number I don’t know if that means all physicians, including physicians who had never prescribed opioids.” (Interview 10)
2. Visualizations
Several physicians noted that the information presented is important, but might be better conveyed by adding visualizations, such as pie charts or bar graphs, to complement the numbers in the chart. Some respondents did not anticipate that visuals would make much difference, but thought it may come down to individual learning styles. None thought that adding visuals to the profiles would be detrimental.
“ Graphs and charts are nice, like I remember when they audited labs they would give more of a visual, sometimes that’s easier than trying to decipher numbers.” (Interview 9)
3. Background information
Physicians suggested that the profile should be distributed with clear information about the intention and purpose of creating them. Information regarding why the profile was generated has to be transparent. Physicians stated that there is a risk of feeling offended or fearful, and that the profile could take on a threatening tone without the true purpose of it being freely offered in an upfront manner.
“ There needs to be an explanation about why this is being shared, how you might use it, what’s the intention of the prescribing profile. So that either should be on the profile itself or with an accompanying cover letter.” (Interview 15)
4. Accurate comparator data
Physicians wanted the data to be properly contextualized, using more accurate comparators so that the percentile and median numbers would allow for useful interpretation of individual physician prescribing data. Without accurate comparators it is impossible to use the data to determine if individual physicians need to improve or not. One participant was particularly adamant about this:
“ To make a comparison to know whether you’re in line with your colleagues you must make sure you are comparing apples to apples.” (Interview 9)
This was primarily an issue for family physicians who believed that it was unhelpful to compare one family physician’s prescribing to all other family physicians (i.e., the “specialty median” and “percentile in specialty” comparators). It was stated that family physicians can have extremely variable prescribing practices, depending on location (e.g., rural, urban, inner-city), patient population (e.g., older adults, people living with substance use disorder), sub-speciality (e.g., sports medicine, chronic pain, palliative care), or type of practice (group vs. solo practice). Consequently, for these profiles to be useful, a method must be devised to better categorize family physicians (based on location, patient population, sub-speciality, practice type) and then provide the corresponding, more-accurate, comparator data.
“ Because for me this isn’t as useful in a sense of being able to compare to my peers based on the patients I see. I do all the palliative care in my clinic so how could you possibly compare my data to other family docs even in my own clinic?” (Interview 9)
“ I think for some family physicians who have sub-specialities that lead to unusual prescribing numbers for these drugs, this is just going to be a less meaningful tool, if you can’t find a way to provide more accurate comparators.” (Interview 15)
Question #5: What information did you find most valuable?
Answers to this question were mixed and dependent on the individual and the level to which they could make sense of the data. For example, Interviewee 2 found the list of individual patients meaningless, while Interviewee 3 found the patient list to be the most valuable aspect of the profile. Altogether, 16 physicians referred to one or more items that were “most valuable” and the common responses are reported in Table 2.
Question #6: What information could have been left out?
The consensus among participants was that it was not a question of whether any data should be left out, but whether the data requires more context to be valuable and informative — which has been previously described. In this sense, physicians only referenced information as being unnecessary because it was of little use without more context or explanation being provided.
Question #7: Will having access to this data impact your practice?
Opinions were split on this question. Interviewees 2, 4, 7 and 12 stated explicitly that having access to this data would not impact their practice, explaining that the data were not meaningful, given the lack of clarity or the inaccurate comparators. More physicians (n = 11) spoke about how the information in the profile would likely have a positive impact on their practice, despite several aspects of the profile being confusing. For example, respondents stated that the information would help draw attention to areas where their prescribing could be cut back, or would stimulate reflection on certain patients that could lead to dialogue regarding tapering.
“ I am already sensitized to narcotics and how to prescribe them, but when I saw some of the patient names and the drugs I had prescribed, it made me wish I hadn’t done what I did. It made me realize I should not have okayed those prescriptions.” (Interview 5)
“ I think it would probably impact it positively. I was surprised that I was in the higher percentile and I think it will encourage me, when I see those patients, to discuss why they’re taking it and try and reduce the doses.” (Interview 16)
Question #8: How frequently would you like to receive this profile?
Consensus was that the profile should be sent every six months to once per year (15 out of 17 physicians mentioned either every six months or every year). Two of the respondents suggested sending the profiles quarterly.
Question #9: Do you have any additional advice for the College?
Physicians took this last question as an opportunity to reinforce or reiterate points made earlier in their interviews; no new data or themes emerged from the responses to this question.
The overall support and appreciation that respondents reported for the prescriber profile in this study, despite the many drawbacks that were noted, reaffirms the value of using this audit and feedback tool to target sub-optimal prescribing of opioids and other high-risk drugs. It is notable that all the physicians interviewed had something positive to say and all respondents stated that the profile had the potential to be a very useful tool that might change their practice. The majority of physicians interviewed believed that the profile would likely improve their prescribing. Those who did not feel it would improve their prescribing believed so primarily because the data was presented in a manner that was confusing and difficult to interpret.
The significant amount of confusion and lack of clarity reported by respondents may be surprising to the creators of this tool (the CPSS), particularly since the profile was developed with the assistance of the medical regulator in Alberta (the CPSA), which had experience using and evaluating a similar tool.14 If there was one key theme to come out of this study, it was that most of the respondents found one or more aspects of the profile to be confusing, difficult to interpret and lacking in the necessary context to be useful. This highlights the importance of research, such as this study, to determine the best way to communicate this type of data in an audit and feedback tool. Multiple participants in this study stated that the profile would not change their practice, simply because of the way the data were presented. Significant work and effort should go into ensuring the content and format of these types of profiles are understandable and easy to interpret — otherwise the purpose of creating them will never be achieved.
Several recommendations emerged from this study that will be useful, if implemented, to improve the likelihood that the CPSS prescriber profile will be of value to physicians in Saskatchewan and assist them in modifying their practices when it comes to prescribing opioids and other high-risk drugs. Some of these findings may also be useful to consider when developing, or revising, similar audit and feedback tools in other jurisdictions. Specific recommendations from this study that may translate to audit and feedback tools targeting opioid/high-risk drug prescribing in other settings/jurisdictions include:
Limit the use of abbreviations and acronyms.
Place simply worded explanations directly beside/underneath abbreviations and acronyms, if possible (or use asterisks to facilitate identification/location of corresponding footnotes).
Use a combination of numbers and visuals (e.g., pie charts, bar graphs) to report data.
Provide context regarding what the comparator data mean and how the speciality groups were defined.
Provide context regarding what all of the numbers represent and which specific drugs are included in each category that is reported.
Include comparator data that is as relevant as possible to the individual physicians receiving the profile. Attempt to sub-categorize family physician comparators based on location of practice, patient population, sub-speciality and practice type.
Distribute the profiles on a regular basis, once every 6–12 months.
Explain the purpose and intent of the profile, either at the top of the document or in an appended cover letter. The profiles were intended to be used as a screening tool that individual physicians could use to reflect on their practice and determine if there is an opportunity for change; they were not meant to be used for the purposes of identifying physician misconduct.
This study has several limitations that should be considered. The prescriber profiles in this study included data on opioids, benzodiazepines, stimulants, and gabapentin. However, not all drugs within these four categories were monitored by the CPSS at the time of the study (Table 1). For instance, tramadol, pseudoephedrine, zopiclone and pregabalin were not monitored by the PRP in late 2018 and were therefore excluded from the profiles. It should also not be assumed that the findings of this study will directly translate to audit and feedback tools developed to target the prescribing of categories of medications other than opioids, benzodiazepines, stimulants or gabapentin. The participants of this study were primarily family physicians and the results may not be valid for specialist physicians or non-physician prescribers. Although completion of 17 one-on-one interviews is a large number in qualitative research, it may be useful to confirm the findings of this study in a larger sample of physicians, perhaps from several different countries, using survey methodology. Finally, the potential impact of participant bias should be considered, since respondents self-selected for participation, and individuals who may have been cynical and negative about this type of quality improvement tool or who were not supportive of the CPSS, may not have volunteered for the study.
This study reaffirms the value of using audit and feedback tools to target the sub-optimal prescribing of opioids and other high-risk drugs. Several specific recommendations have emerged from this study that may be helpful to consider when creating prescriber profiles related to opioids, benzodiazepines, stimulants and gabapentin.
Derek Jorgenson assisted in the development of the study concept and design, assisted with data collection, assisted with analysis and interpretation of data, and took the lead in writing the manuscript.
Diar Alazawi assisted in the development of the study concept and design, took the lead in data collection, assisted with analysis and interpretation of data, and assisted in writing the manuscript.
Julia Bareham assisted in the development of the study concept and design, assisted with analysis and interpretation of data, and assisted in writing the manuscript.
Nicole Bootsman assisted in the development of the study concept and design, assisted with analysis and interpretation of data, and assisted in writing the manuscript.
About the Authors
Derek Jorgenson, BSP, PharmD, FCSHP, is Professor of Pharmacy, College of Pharmacy and Nutrition, University of Saskatchewan, Canada.
Diar Alazawi, BSP, is Pharmacy Resident, Saskatchewan Health Authority, Canada.
Julia Bareham, BSP, MSc, is Pharmacist, RxFiles Academic Detailing, College of Pharmacy and Nutrition, University of Saskatchewan, Canada.
Nicole Bootsman, BSc (Hons), BSP, is Pharmacist Manager, Prescription Review Program and Opioid Agonist Therapy Program, College of Physicians and Surgeons of Saskatchewan, Canada.