AAMI's Benchmarking Solution (ABS) is a new online tool designed specifically to help clinical engineering (CE) departments measure their practices, policies, and procedures against similar departments at other facilities; monitor their progress throughout the year; and share best practices. The tool includes an online survey with more than 100 benchmarking and best practice questions, both qualitative and quantitative, and a set of graphical analysis tools to help CE departments compare their benchmarking results against those of other facilities with particular demographics.

The survey tool—developed by a software company, NeuraMetrics, with the guidance of a team of clinical engineering experts hired by AAMI—is structured to encourage both numeric and non-numeric responses, allowing departments that do not have all the numeric information to participate and enter the data that they do have. This overcomes a problem of some prior clinical engineering benchmarking studies, which required many numeric responses that were difficult for some potential participants to provide. Although a very new product, ABS already has a high participation rate.

This study looks at the first set of ABS data, focusing on and further analyzing some of the quantitative measurements available. One key statistical technique used in this study is linear correlation, which quantifies the strength of a linear relationship between two variables. When there is no correlation between the two variables, there is no tendency for the values of one quantity to increase or decrease with the values of the second quantity. When there is 100% correlation, every increase or decrease in one variable is matched by a proportional increase or decrease in the other. Linear correlation is typically represented by a number between 0.00 (no correlation) and ±1.00 (100% correlation). Sample size in linear correlation studies is relevant; one of the weaknesses of some prior studies [1] is their small sample size. ABS has overcome the small sample size issue, with more than 100 institutions subscribing in less than one year of operation.
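As a concrete illustration of the technique, the short Python sketch below computes the Pearson linear correlation coefficient for two invented cost series; the values are placeholders chosen only to show a strong positive correlation, not ABS data.

```python
# Minimal sketch of Pearson linear correlation (sample values are hypothetical).
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

acquisition_cost = [25e6, 60e6, 120e6, 250e6, 400e6]    # dollars, invented
service_cost     = [1.4e6, 2.9e6, 5.5e6, 11.8e6, 19e6]  # dollars, invented

r = correlation(acquisition_cost, service_cost)
print(f"Pearson r = {r:.2f}")  # near +1.00: a strong linear relationship
```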

At the time of this analysis (April 2010), 86 organizations had entered data into the ABS database. The process for data analysis included the following:

  • Check for complete data: The survey has required fields for each calculation. If any one of the required fields is not entered, the calculation is not done. For example, the Cost of Service Ratio (COSR, further described below) requires acquisition cost and both internal and external service costs. Internal costs require that employee benefits costs be entered. Therefore, if employee benefits costs are not entered, the internal costs, total service costs, and COSR cannot be calculated.

  • Eliminate outliers: After the above checks are done and entries with incomplete data eliminated, the data are checked against a table of “outlier thresholds” established by the subject matter experts (SMEs) who developed the ABS questionnaire. Any outliers are then eliminated from the dataset and their data are not included in this paper. A sketch of this two-step screen appears after this list.
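The following Python sketch illustrates that two-step screen in a minimal, hypothetical form. The field names and threshold values are invented for illustration; they are not the actual ABS fields or the SME-established thresholds (see Table 1 for those).

```python
# A minimal, hypothetical sketch of the ABS-style data screen described above.
# Field names and threshold values are invented for illustration only.
REQUIRED_FIELDS = [
    "acquisition_cost",        # total equipment acquisition cost
    "internal_labor_cost",     # in-house labor cost
    "employee_benefits_cost",  # benefits; required to compute internal costs
    "external_service_cost",   # vendor (outside) service cost
]

COSR_LIMITS = (0.015, 0.15)  # hypothetical low/high outlier thresholds

def screen(entries):
    """Keep only entries that are complete and within the outlier thresholds."""
    valid = []
    for entry in entries:
        # Step 1: completeness -- skip the record if any required field is missing.
        if any(entry.get(field) is None for field in REQUIRED_FIELDS):
            continue
        # Internal costs include benefits, so the total needs all three cost parts.
        total_service = (entry["internal_labor_cost"]
                         + entry["employee_benefits_cost"]
                         + entry["external_service_cost"])
        cosr = total_service / entry["acquisition_cost"]
        # Step 2: outlier elimination against the threshold table.
        low, high = COSR_LIMITS
        if low <= cosr <= high:
            valid.append({**entry, "cosr": cosr})
    return valid
```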

ABS itself, without any further data analysis, provides several metrics, such as device count per tech and supervisory span of control, and also allows participants to select demographic “data cuts” so they can view subsets of hospitals of interest based on hospital size, geographic location, university hospitals only, and other demographics.

The following is a further analysis of a few of the metrics available via the ABS Calculations report that may be useful to clinical engineering managers.

Cost of Service Ratio (COSR)

The ratio of the total cost of service to total equipment acquisition cost is sometimes called the Cost of Service Ratio (COSR). Complete data were available from 47 survey participants to make a COSR calculation. Although 62 hospitals were able to provide acquisition cost data, 15 hospitals did not enter complete external (i.e., vendor) and/or internal staffing (e.g., benefits) costs and therefore were not included in this portion of the analysis. In addition, four organizations were considered outliers, two deemed too high and two too low, and were not included in subsequent analysis. Interestingly, upon further investigation of the 1.5% COSR outlier, it appeared that this hospital had reported zero repair parts costs.

Figure 1 shows a scatter plot of the results for the 43 validated COSR entries. Acquisition costs ranged from a low of $25 million to a high of $830 million (mean $160 million), and service costs from a low of $1.4 million to a high of $28 million per year (mean $7 million). A COSR mean of 4.7% (range 1.9% to 12.5%) and a linear correlation of 0.91 show that service costs do track acquisition costs in a linear manner, and that the 5% anecdotal benchmark referenced by many clinical engineers continues to be not only a ballpark norm for this ratio, but statistically relevant. Note that even with a high number of subscribers, a majority of survey participants still did not report sufficient cost data to accurately calculate this metric. These numbers also indicate that the validated data are from large hospitals (only four hospitals with fewer than 300 beds reported sufficient cost data for COSR calculations).
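For readers who want to reproduce this kind of summary from their own data, the sketch below computes the COSR mean, range, and acquisition/service cost correlation; the five data points are invented placeholders, not ABS entries.

```python
# Reproducing Figure 1-style summary statistics from hypothetical screened data.
# Requires Python 3.10+ for statistics.correlation.
from statistics import mean, correlation

# Each pair is (acquisition cost, total annual service cost); values invented.
screened = [(25e6, 1.4e6), (90e6, 4.1e6), (160e6, 7.2e6),
            (310e6, 14.5e6), (830e6, 28e6)]

acq = [a for a, _ in screened]
svc = [s for _, s in screened]
cosr = [s / a for a, s in screened]

print(f"mean COSR = {mean(cosr):.1%}")                      # ~4.5% for this sample
print(f"COSR range = {min(cosr):.1%} to {max(cosr):.1%}")
print(f"acquisition/service correlation = {correlation(acq, svc):.2f}")
```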

Figure 1. Cost of Service Ratio (COSR) for 43 survey entries. (Note: For graphics clarity, four high acquisition cost data points from $500 million to $830 million are not shown but are included in the statistics.)

Table 1. Outlier Threshold Values.

One function of benchmarking is to look for best performers. From the COSR analysis, the best performer has a Cost of Service Ratio of 1.9%, which by any previously reported data would be outstanding. Given the anonymous nature of the ABS data, and therefore the minimal information available to audit the provided financial data, it is challenging to determine whether the 1.9% is an anomaly or a true best performer. The ABS survey does show that the hospital with a COSR of 1.9% (hereafter referred to as “Hospital 1.9%”) does not support imaging systems. Prior studies have shown that imaging equipment repair and maintenance expenses can account for up to 50% of a hospital's medical systems support costs. In reviewing other portions of the ABS survey, it can be seen that Hospital 1.9% scored only a “yellow” subjective score, or “partial accuracy,” for acquisition cost and 0/5 (“no confidence”) in outside vendor service costs. Further analysis, improvements in the way the analysis data can be “cut,” and additional questions would be required to determine whether Hospital 1.9% is indeed a best performer. AAMI and its SME consultants are considering additional questions for a version 2 of ABS that will help further ferret out this type of information, while maintaining anonymity for survey participants.

Other Metrics

One of the simplest and most common staffing metrics is the number of devices that are supported per technician (Table 2). One thousand devices per tech has been anecdotally reported, and ABS supports that metric, with a mean reported device count per tech of 995 from the data extract used in this paper. Although this metric is simple to define and measure, it is often seen as flawed as a staffing metric by itself, since device count is a poor analog for workload (i.e., one infusion pump does not present the same workload as one CT scanner). Others have attempted to overcome this flaw by adding in multiplier factors [2], but there have not been any definitive reported analyses showing that these multiplier factors are statistically representative of workload.
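As an illustration of how such multiplier schemes are typically structured, the sketch below weights each device type before dividing by technician count. The weights and device types are invented placeholders, not values from the cited work.

```python
# Sketch of a weighted device count (weights and device types are hypothetical).
# A plain count treats an infusion pump and a CT scanner as equal workload;
# multiplier schemes weight each device type before dividing by tech count.
WORKLOAD_WEIGHTS = {"infusion_pump": 1.0, "defibrillator": 2.0, "ct_scanner": 25.0}

def weighted_devices_per_tech(inventory, num_techs):
    """inventory: mapping of device type -> count of that device type."""
    weighted = sum(WORKLOAD_WEIGHTS.get(dtype, 1.0) * count
                   for dtype, count in inventory.items())
    return weighted / num_techs

# 800 pumps + 4 CT scanners supported by 2 techs -> 450 weighted devices/tech
print(weighted_devices_per_tech({"infusion_pump": 800, "ct_scanner": 4}, 2))
```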

Table 2. Summary of other ABS metrics.

Another metric proposed as a staffing measurement is technicians per acquisition cost (e.g., one tech can support x million dollars of medical equipment). ABS data can easily be used to calculate this metric and, in the data extract used in this paper, it calculates to a value of one tech per $7,300,000 (n=55). Although this metric offers a potential improvement over device count per tech, and the data may be easier to gather, it has several problems, including a lack of vendor service information. A CE department that outsources a large quantity of work could have a very high acquisition cost per tech, but would not necessarily be a “best practice.” (Note: Two outliers showing one tech supporting greater than $30 million in equipment value were removed from this dataset.)
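A minimal sketch of this calculation, including a high-side outlier screen like the one described in the note above, follows. The entries are hypothetical; only the $30 million cutoff mirrors the one used here.

```python
# Acquisition cost supported per technician, with a high-side outlier screen.
from statistics import mean

OUTLIER_CUTOFF = 30e6  # drop entries above $30M per tech, as in the note above

# Each pair is (total acquisition cost, number of technicians); values invented.
entries = [(70e6, 10), (45e6, 7), (120e6, 15), (200e6, 5)]  # last one is an outlier

per_tech = [cost / techs for cost, techs in entries]
screened = [value for value in per_tech if value <= OUTLIER_CUTOFF]
print(f"mean acquisition cost per tech = ${mean(screened):,.0f}")
```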

For the device count per tech metric, improved definitions are required to provide guidance on what to count (e.g., whether or not to count modules). For all the staffing metrics, further research is necessary to see if there are ways to improve metric validity by constraining the data ranges (e.g., if the device value is less than $20,000 or greater than $100,000, then don't include it as part of the tech per acquisition value metric).
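Such a range constraint could be expressed as a simple filter. The sketch below applies the example bounds from the text ($20,000 and $100,000) to invented device records.

```python
# Restricting the tech-per-acquisition-value metric to a device value band,
# using the example bounds from the text; device values are invented.
LOW, HIGH = 20_000, 100_000

device_values = [5_000, 25_000, 60_000, 95_000, 450_000]  # hypothetical
in_band = [value for value in device_values if LOW <= value <= HIGH]
print(f"included acquisition value = ${sum(in_band):,}")
```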

Supervisory span of control and certification measures are other easy-to-use benchmarks where the ABS data are not particularly surprising, but are significant because of the relatively large amount of data collected and the potential for more data in the future. The ABS result of a span of control of eight technicians per supervisor seems very reasonable, and one clinical engineer for every 12.5 staff also seems like a reasonable number. Of course, there were outliers that were eliminated, such as the hospital that reported that clinical engineers made up 72% of its staff. Most likely (again, not confirmed due to the anonymity of the data), this hospital was calling its technicians “engineers” on the survey response form (it is outside the scope of this paper to further comment on that issue).

For CE departmental space, other papers [3] have reported 200 square feet per tech as a reasonable amount of shop/workbench space for one technician. The ABS result of 193 square feet per technician, with 62 responses, corroborates that.

For hourly costs, financial data are always among the most difficult to compare, due to the myriad ways that repair and maintenance costs are reported, differences in how benefit costs are reported, cost-of-living differences across the country, and other issues. However, $101 per hour for the internal cost of CE staffing, including benefits, seems like a reasonable number, with 50 survey entries reporting this metric. The high end of the range ($269) that is still within the validation threshold is perhaps due to a high number of original equipment manufacturer (OEM) contracts. A low outlier threshold of $50 per hour was used; the low numbers (e.g., $20 per hour) that were screened out have several possible causes, including underreporting of cost data, no reporting of benefits cost, and/or a location in an area of the United States with a very low cost of living and hence low technician wages.

Summary

ABS has tremendous possibilities as a clinical engineering benchmarking service and metric research tool. With the large amount of data collected in less than one year, future plans to collect more data, and improving data validation, future years will see opportunities for further confirmation of existing metrics as well as the development of new ones. There are many other possible metrics that can be calculated in ABS. In future developments of ABS, one or more common denominators used in other healthcare benchmarking (e.g., adjusted discharges) will also be considered for inclusion. These common denominators are important to hospital administrators and other C-suite executives, who use these numbers to justify staffing levels and other resource allocations (positive or negative). Of course, any new metrics developed will need to maintain relevance to clinical engineering workloads in order to be of any practical value.

In order for ABS, or any benchmarking service, to make further inroads into the establishment of commonly used and referenced metrics, there is a need for high-quality data. ABS shows that although large improvements have been made in data quality, there are still many strange outliers, including: $460 per hour of internal cost, $30 million of support per technician, zero shop space, a 0.3% cost of service ratio, and more. CE departments need to do a much better job of collecting and providing accurate data, and computerized maintenance management system (CMMS) vendors need to provide quality tools to improve the ease and accuracy of data collection. The inability of many departments to accurately measure cost is still a major issue within the CE community.

Another challenge for ABS is the anonymity of the data. Although being anonymous helps recruit more participants, one of the attributes of benchmarking is its ability to identify best practices. With ABS, the data can be manipulated so that potential best practice partners can be identified based on demographics and other data selections. However, to confirm that a best practice is really a best practice, and not a data anomaly, and for individuals to really learn something from a potential partner, that partner needs to be identified. In the future, AAMI will have to determine a way to promote communication, with permission of course, between potential benchmarking partners. Already, ABS subscribers have access to an e-forum where they share questions, comments, and advice.

Acknowledgments

I am one of four subject matter experts (SMEs) hired by AAMI to develop AAMI's Benchmarking Solution. I would like to thank the others on the team (Matt Baretich, Frank Painter, and Manny Furst) for their assistance with the preparation of this paper, as well as the AAMI staff on this project, Patrick Bernat and Steve Campbell, and the staff at NeuraMetrics, who continue to work with AAMI and the SME team on improving the software tool used for this project.

References

1. Cohen T, et al. Benchmarking series in Biomedical Instrumentation & Technology (BI&T). 1995, 1996, 1997.
2. Evans G. ABS Discussion Group.
3. Campbell C. “A Model Clinical Engineering Department.” In: Dyro J, ed. Clinical Engineering Handbook. Elsevier Academic Press; 2004.

Author notes

Ted Cohen, CCE, is clinical engineering manager at the UC Davis Medical Center in Sacramento, CA. He is a benchmarking expert who was instrumental in the development of AAMI's Benchmarking Solution. E-mail: theodore.cohen@ucdmc.ucdavis.edu