The emerging field of artificial intelligence (AI) is making an impact on a variety of industries, including healthcare. Although the AI world is expanding quickly, standards-development efforts are underway to help establish a baseline set of expectations for AI applications.
AI, sometimes also referred to as “augmented intelligence” or “assistive intelligence,” is increasingly being applied in a variety of areas to improve the quality and efficiency of healthcare. Applications range, for example, from early identification of high-risk patients to triage software that helps prioritize a radiologist's workload, customer service chatbots, and manufacturing tools that detect negative quality trends that traditional methods would miss.
This is a quickly evolving field that wasn't on the radar of many people a few years ago. What changed was the availability of high-power computing platforms that were originally designed for video games. In addition to being great for graphics, massively parallel microprocessors can be repurposed to perform calculations that support artificial neural networks. These networks can find patterns in big data that were unknown previously. The explosion of online tools, cloud computing, and computing horsepower enables a wide variety of applications that can be written by a wide variety of organizations, including manufacturers, hospitals, researchers, and even hobbyists.
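To make that concrete, here is a minimal sketch of a tiny artificial neural network learning a hidden pattern from synthetic data. It uses Python with PyTorch, a framework chosen here purely for illustration (none is named in this article); the same few lines run on an ordinary CPU or, when one is present, on one of the repurposed graphics processors described above.

    import torch
    import torch.nn as nn

    # The same code runs on a CPU or, when available, a massively parallel
    # GPU of the kind originally built for video games.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Synthetic "big data": 10,000 points labeled by a hidden rule the
    # network must discover (here, whether the two features sum to > 0).
    X = torch.randn(10_000, 2)
    y = (X.sum(dim=1) > 0).float().unsqueeze(1)

    # A small neural network: layers of simple arithmetic repeated en
    # masse, which is exactly the workload GPUs parallelize well.
    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.BCEWithLogitsLoss()

    X, y = X.to(device), y.to(device)
    for _ in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

    # The network has now recovered the hidden pattern from the data alone.
    accuracy = ((model(X) > 0).float() == y).float().mean().item()
    print(f"training accuracy: {accuracy:.1%}")

The division of labor is the point: the framework expresses the network, and the parallel hardware makes training it on large datasets practical.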
One of the advantages of standards is that they help show the reader “what good looks like” and provide a minimum set of expectations for product and process quality. Because of this, consensus standards play an important role in medical device regulation in that they capture the current “state of the art.”
Both AAMI and BSI have recognized a need for future standards work regarding AI in healthcare, as evidenced by two recent reports on the topic. The first report discussed standardization to support AI as a medical technology,1 while the second sought to answer key questions: How does AI differ from traditional medical software? What are the implications of those differences? What are some initial thoughts regarding controls and adaptations that are needed to ensure AI in healthcare is safe and effective?2
The AAMI/BSI team is developing a new guidance document that delves more deeply into AI and risk management. Using the well-established ANSI/AAMI/ISO 14971 standard3 as a starting point, this guidance suggests that AI can follow the same risk management process as traditional health software, while also exploring how machine learning (ML) systems can fail in unexpected ways.
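To illustrate what “failing in unexpected ways” can mean, the toy sketch below shows shortcut learning. It is our own illustration in Python with scikit-learn, not an example from the AAMI/BSI guidance: a model that looks nearly perfect during development because it latched onto a spurious feature, then degrades sharply when that feature stops tracking the outcome at a new site.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000

    # The clinically meaningful signal: only weakly predictive of the label.
    signal = rng.normal(size=n)
    label = (signal + rng.normal(scale=1.5, size=n) > 0).astype(int)

    # A spurious shortcut: imagine a site or scanner tag that happens to
    # track the label almost perfectly in the training data.
    shortcut = label + rng.normal(scale=0.1, size=n)

    X_train = np.column_stack([signal, shortcut])
    model = LogisticRegression().fit(X_train, label)
    print("training accuracy:", model.score(X_train, label))  # looks excellent

    # At a new hospital, the shortcut no longer tracks the label at all,
    # and performance collapses even though nothing in the code "broke."
    shortcut_new = rng.normal(scale=0.5, size=n)
    X_new = np.column_stack([signal, shortcut_new])
    print("new-site accuracy:", model.score(X_new, label))

Traditional verification would pass this model, because no code is defective; the hazard lives in the relationship between the training data and the real world, which is exactly where an AI-aware, 14971-style analysis has to look.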
In late 2019, ISO/TC 215 (Health informatics), which has developed a large number of software-related standards, formed a new committee to look at how AI might affect its existing set of standards. The Ad hoc Group 2 (AHG2) committee is very energetic about this topic and will publish a report by October 2020 that will include:
A landscape survey of AI with respect to health informatics.
Key considerations for ISO/TC 215.
A set of recommendations for future work.
A joint ISO committee with the International Electrotechnical Commission (IEC) also is developing generic (horizontal) standards for use by multiple industries. The ISO/IEC Joint Technical Committee (JTC) 1/SC 42 committee has 11 subcommittees and has published five standards. It is developing 15 additional standards, on topics including management systems, use cases, trustworthiness, bias, functional safety, and definitions.4 Because these are generic, horizontal standards, many nonhealthcare participants are involved, which can sometimes lead to confusion: representatives from different industries have different priorities. For example, the risk management standard that SC 42 is developing addresses organizational risks (not patient safety), whereas readers of the current article probably are more interested in the relatively new SC 42 project related to “functional safety.”
IEC also has independent projects underway. It formed SEG 10 (Ethics in Autonomous and Artificial Intelligence Applications) in 2019 to identify ethical issues and societal concerns relevant to IEC technical activities. Rather than producing standards, SEGs (Standardization Evaluation Groups) only make recommendations to the IEC SMB (Standardization Management Board). The intent is to determine whether ethical considerations should be included in the relevant IEC standards and to deliver a report to the SMB by spring 2021. The current work of this team includes:
Reviewing the use cases developed by JTC 1/SC 42, categorizing the ethical concerns, and determining whether and how the user can be notified.
Reaching out to the committee managers and secretaries of relevant TCs and SCs with a questionnaire to identify and prioritize ethics requirements.
One popular saying in software is “garbage in, garbage out,” which stresses that the quality of a program's output depends on the quality of the data it is given. Given that the quality of the training data used in ML can have a significant effect on product performance, IEEE is developing P2801 (Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence).5 This set of recommendations covers the dataset life cycle, including data collection, transfer, utilization, storage, maintenance, and update. IEEE hopes to publish the document by the end of the year.
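As a sketch of what automated dataset-quality gates can look like in practice, the short Python function below screens a labeled training set for a few defects that turn into “garbage out”: duplicate records, missing values, unlabeled rows, and class imbalance. The checks, names, and example data are generic illustrations of good practice, not requirements drawn from P2801, which was unpublished at the time of writing.

    import pandas as pd

    def dataset_quality_report(df: pd.DataFrame, label_col: str) -> dict:
        """Summarize common data-quality defects in a labeled training set."""
        return {
            # Duplicates silently overweight some patients during training.
            "duplicate_rows": int(df.duplicated().sum()),
            # Missing values and missing labels are classic "garbage in."
            "rows_missing_values": int(df.isna().any(axis=1).sum()),
            "missing_labels": int(df[label_col].isna().sum()),
            # Severe class imbalance often hides behind a high accuracy score.
            "class_counts": df[label_col].value_counts(dropna=True).to_dict(),
        }

    # Hypothetical example: a tiny patient dataset with deliberate defects.
    df = pd.DataFrame({
        "age":   [64, 71, 71, None, 58],
        "bp":    [120, 135, 135, 140, None],
        "label": [0, 1, 1, None, 0],
    })
    print(dataset_quality_report(df, label_col="label"))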
“Machines will not replace physicians, but physicians using AI will soon replace those not using it.” —Antonio Di Ieva, Macquarie University, Sydney, Australia. Source: Di Ieva A. AI-augmented multidisciplinary teams: hype or hope? Lancet. 2019;394(10211):1801.
A fundamental concept in developing standards is establishing a common terminology so that people are (literally) speaking the same language. In February of this year, the Consumer Technology Association (CTA) published the first-ever ANSI-accredited standard for the use of AI in healthcare: CTA-2089.1 (Definitions/Characteristics of Artificial Intelligence in Health Care).6 It is the first in a planned series of CTA standards. Because questions about how much ML systems can be trusted keep arising, CTA currently is developing a standard on trustworthiness as seen through the lens of the end user (e.g., physician, consumer, professional, family caregiver).7
AI is poised to bring major improvements to the efficiency and performance of healthcare. It also will make us stop and question our assumptions about how healthcare delivery should really work, force us to think hard about things that are now second nature to us, and reveal new risks and new benefits along the way. Standards will have a significant role in moving us forward.
References
Author notes
Pat Baird is regulatory head of global software standards at Philips and chair of the BI&T Editorial Board. Email: pat.baird@philips.com