Context.—

Technology companies and research groups are increasingly exploring applications of generative artificial intelligence (GenAI) in pathology and laboratory medicine. Although GenAI holds considerable promise, it also introduces novel risks for patients, communities, professionals, and the scientific process.

Objective.—

To summarize the current frameworks for the ethical development and management of GenAI within health care settings.

Data Sources.—

The analysis draws from scientific journals, organizational websites, and recent guidelines on artificial intelligence ethics and regulation.

Conclusions.—

The literature on the ethical management of artificial intelligence in medicine is extensive yet still early-stage, reflecting the rapidly evolving nature of the technology. Effective and ethical integration of GenAI requires robust processes and shared accountability among technology vendors, health care organizations, regulatory bodies, medical professionals, and professional societies. As the technology continues to develop, a multifaceted ecosystem of safety mechanisms and ethical oversight is crucial to maximize benefits and mitigate risks.

Generative artificial intelligence (GenAI) has created enormous interest within pathology and laboratory medicine, just as it has in other areas of health care. Some of the envisioned applications may turn out to be hype, whereas others may end up having a significant impact on certain aspects of pathology. As described in other papers in this series, GenAI differs from many other technologies in important ways. Unlike traditional artificial intelligence (AI), which typically uses historical data to make predictions, GenAI algorithms use the patterns within historical data to create entirely new data. Maintaining explicit human control over the technology (human in the loop) is both essential and potentially challenging.1,2 Although some authors have suggested that the early benefits of GenAI in health care may lie in streamlining administrative tasks, prototypes have also been developed for safety-critical clinical functions such as image-based diagnosis.3 Risk assessment and quality management of GenAI applications must therefore be tailored to the type of use case.

GenAI opens a new set of ethical, legal, and social issues that must be addressed before it can be responsibly integrated into patient care.4 The National Institute of Standards and Technology recently issued a draft report for GenAI risk management that enumerates categories of potential dangers associated with the use of GenAI.5 The same characteristics that make GenAI powerful, in particular the ability to simulate certain aspects of human thinking, also create new challenges for quality management. For example, GenAI will reflect biases within the data sets used to train it.6 Yet the opaque nature of the deep learning models underlying GenAI makes it difficult to uncover, let alone quantify, these biases within an AI's output.6 Opacity is also a major barrier to trust, particularly within safety-critical environments such as patient care.7 These issues make it important to build accountability systems at each stage of the AI life cycle, from design through training and validation to ongoing performance monitoring. It is equally important to establish clear accountability for any human clinicians making GenAI-assisted decisions. Figure 1 describes a hypothetical scenario in which GenAI could potentially fuel a wide array of ethically problematic practices within pathology.

Figure 1.

Ethical risks in artificial intelligence (AI)–driven pathology: a hypothetical scenario. This scenario explores the potential ethical problems that could occur in the absence of strong accountability to enforce principles such as transparency, consent, and professionalism.
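Bias of the kind described above can be probed empirically. As a minimal sketch (not a validated auditing protocol), the following example stratifies a model's diagnostic accuracy by patient subgroup and flags large disparities; the record structure, subgroup labels, and the 10-point disparity threshold are hypothetical assumptions introduced for illustration.

```python
# Minimal sketch of a subgroup bias audit for a diagnostic model.
# All field names, subgroup labels, and example records are hypothetical.
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute per-subgroup accuracy from (subgroup, prediction, truth) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, prediction, truth in records:
        total[subgroup] += 1
        if prediction == truth:
            correct[subgroup] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data: (demographic subgroup, model output, ground truth).
records = [
    ("group_a", "malignant", "malignant"),
    ("group_a", "benign", "benign"),
    ("group_b", "benign", "malignant"),
    ("group_b", "malignant", "malignant"),
]
rates = subgroup_accuracy(records)
overall = sum(rates.values()) / len(rates)
for group, acc in rates.items():
    # A 10-point gap is an arbitrary illustrative threshold, not a standard.
    if acc < overall - 0.10:
        print(f"Potential bias: {group} accuracy {acc:.2f} vs overall {overall:.2f}")
```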

There has been a remarkable amount of recent work in identifying and beginning to address the social, ethical, and regulatory issues for AI, including GenAI.8–10 Ethicists and technologists have debated how best to create trust, how to embed ethics within the development of AI solutions, and the ideal roles of health care leaders and regulators. Some in the technology community have argued that regulation should wait until the technology is further advanced. There are 2 problems with this thinking. One is the potential harm that could occur in the transition period, when the technology is underregulated and safety issues may not yet be adequately addressed. The other is that safety and ethics are prerequisites for many types of advances. Negative perceptions, such as concerns about bias, lack of transparency, or job displacement, can create barriers to adoption. To build trust and address these concerns within pathology, strategies for educating patients and health care providers about the benefits and limitations of GenAI must be developed. One such trust-building strategy is to provide transparent information about how systems are developed, tested, and implemented.11 This includes clear explanations of the algorithms used, along with their potential biases and limitations. It is also essential to ensure that GenAI systems are designed with patient safety and privacy in mind, using techniques such as anonymization and encryption to protect sensitive data. Maximizing the rate of progress therefore requires that regulatory and quality management systems coevolve with the core AI technologies.
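As one minimal sketch of the anonymization techniques mentioned above, the following example replaces direct identifiers with a keyed, irreversible token before records are shared for AI training. The record layout and key handling are illustrative assumptions; real de-identification (eg, under HIPAA) covers many more identifiers and typically requires expert review.

```python
# Minimal sketch of pseudonymization before data are shared for AI training.
# The record layout and key handling are illustrative assumptions only.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secure-vault"  # hypothetical placeholder

def pseudonymize_id(patient_id: str) -> str:
    """Replace a patient identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

def strip_direct_identifiers(record: dict) -> dict:
    """Drop direct identifiers and tokenize the medical record number."""
    DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["patient_token"] = pseudonymize_id(cleaned.pop("mrn"))
    return cleaned

record = {"mrn": "12345", "name": "Jane Doe", "diagnosis": "adenocarcinoma"}
print(strip_direct_identifiers(record))  # identifiers removed, MRN tokenized
```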

Ethical GenAI requires that the various stakeholder groups do their part to ensure that patient interests are appropriately protected (Figure 2). This includes roles for governments at multiple levels, for accreditation agencies such as the College of American Pathologists, for individual medical professionals, and for the health care organizations in which GenAI is implemented and used. Consider for a moment the regulatory systems for a different technology, namely automobiles. Traffic safety has evolved continuously ever since the first horseless carriages were developed. This includes requirements for the machines themselves, such as headlights and brakes. It also includes licensing for drivers, together with training programs and elaborate traffic rules. It includes engineering of roads and the design of intersections, traffic lights, and collapsible lampposts. There are also financial incentives for safety, largely mediated by insurance companies. All of this is backed up by various forms of both legal and social enforcement. Essentially, there is an ecosystem of overlapping and mutually reinforcing safety technologies and systems, within which are roles for many different types of organizations and individuals, in both the public and private sectors. This elaborate ecosystem of safety mechanisms has coevolved during the past 140 years alongside advances in automobiles themselves. For GenAI to achieve its potential to improve pathology and laboratory medicine, we need an analogous ecosystem, with multiple layers of control mechanisms and accountability to ensure safety and ethics, backed up by law and implemented operationally within both the software industry and the health care industry. Pathologists should play a key role in overseeing the quality management of pathology AI systems, especially those used for clinical purposes.

Figure 2.

Stakeholders of generative artificial intelligence (GenAI) in pathology. The ethical clinical implementation of GenAI relies on the collective efforts of a diverse set of stakeholder groups fulfilling their respective roles.

The first step toward an ethical and regulatory framework is understanding who the major stakeholders are, and what they expect and require of GenAI in pathology and laboratory medicine. Where the values and expectations of stakeholder groups are not completely in alignment, transparent acknowledgment of the issues and trade-offs may enhance public dialogue and creative problem solving across the various groups.

Patients are the most important stakeholders. In general, patients desire the benefits of new technologies in medicine, while avoiding potential harms such as misdiagnosis and privacy violations. Clinical GenAI requires real-world patient data for training purposes, raising important issues related to patient consent and data privacy. Patients have a right to know how their personal health data will be used and should have clear mechanisms to either provide consent or opt out of use of their data for AI training.12–16  Patients may also have expectations of fairness and justice related to the use of GenAI and may also expect some level of public transparency.
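One concrete form such consent and opt-out mechanisms could take is a per-record consent flag checked before any data enter a training set. The sketch below assumes a hypothetical record schema and an exclude-by-default policy; actual consent management is governed by institutional policy and applicable law.

```python
# Minimal sketch of honoring patient consent before AI training.
# The per-record "ai_training_consent" flag is a hypothetical schema;
# records with no recorded decision are excluded by default.
def filter_by_consent(records):
    """Keep only records whose patients have affirmatively consented."""
    return [r for r in records if r.get("ai_training_consent") is True]

cohort = [
    {"patient_token": "a1", "ai_training_consent": True},
    {"patient_token": "b2", "ai_training_consent": False},
    {"patient_token": "c3"},  # no recorded decision: excluded by default
]
training_set = filter_by_consent(cohort)
print(len(training_set))  # -> 1
```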

Health care providers, in the role of patient advocates, care about the issues listed above. They also may have concerns about the impact of GenAI on their own jobs and professional status. One major concern is the possibility of job displacement for pathologists, as GenAI systems become increasingly sophisticated and capable of performing cognitive tasks traditionally done by human experts. Although it is unlikely that AI will completely replace pathologists, it may lead to a shift in the types of tasks they perform and the skills required for those tasks.17  Pathologists may need to develop new skills, such as data analysis and interpretation, to work effectively with AI systems. Furthermore, the integration of AI into daily practices may require significant changes to workflow and processes. Pathologists may need to learn how to use new software and tools, and they may need to collaborate more closely with other health care professionals, such as radiologists or oncologists, to provide comprehensive care for patients.

To mitigate the potential negative impacts of AI on the pathology workforce, it is essential to ensure that pathologists are adequately trained and supported in developing new skills. This may involve providing opportunities for continuing education, creating mentorship programs, and promoting cross-functional collaboration among health care professionals. Additionally, it is crucial to consider the ethical implications of job displacement and to develop strategies for supporting workers who may be affected by these changes.17 

The developers of GenAI systems are important stakeholders as well. Most developers are presumably motivated by some mix of benefiting society and being remunerated for their success. Although industry is stereotypically thought of as wanting as few constraints as possible, developers also benefit from clarity and predictability of the regulatory landscape in which they operate. This allows them to better prioritize their efforts in the areas where they see the most potential, ensuring that their innovations can thrive within a stable and understood framework.

Finally, regulators and policymakers themselves are stakeholders. However, they often find themselves caught between competing perspectives among the other stakeholder groups. They are not always experts in the technologies under consideration, yet they must make decisions that hopefully allow technologic progress to proceed at a rapid pace in the direction of societal benefit, balancing this against potential harms while creating a level playing field with acceptable levels of fairness, trust, and transparency. None of this is easy. Regulators and policymakers therefore benefit from open and vigorous debate and engagement among the various stakeholders as regulations are proposed, enacted, and modified over time. This ensures that all perspectives are considered and the resulting policies are well-informed and effective.

Medical ethics is not new; it goes back thousands of years, at least as far as Hippocrates. That is not to say the subject has been static, however. As society and technologies have evolved, understandings and formulations of ethics have likewise needed to evolve. This includes the ethics of personal data and of GenAI.

Perhaps the most familiar modern formulation of medical ethics is the one that formed the basis of the Belmont Report, which was subsequently written into US law for oversight of research on human subjects.18  It divides medical ethics into 3 categories that are in mutual tension. The first is respect for persons. Also referred to as autonomy, this is the principle that all competent individuals should have the right of determination over medical actions on their physical bodies, as well as their personal information. This principle is often addressed through various forms of informed consent. The second principle is beneficence, along with its counterpart nonmaleficence. This is the principle of acting in the best interests of each individual patient, both to benefit the individual and to avoid harm. The third principle is justice, which addresses various forms of fairness from the perspectives of individuals and groups.

Predating the Belmont Report is the World Medical Association’s Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. This document, which has been iteratively updated since its initial 1964 publication, speaks to many aspects of medical ethics as they relate to research with human subjects. It addresses informed consent; privacy and confidentiality; the needs of vulnerable groups and individuals; the weighing of the risks, burdens, and benefits of research; and the ethical dissemination of research findings.19 The World Medical Association published the Declaration of Taipei in 2016, which focuses on the ethical use of health databases and biobanks, including personal health data. It emphasizes that ethical principles apply to personal data as much as they do to physical bodies and underscores the importance of informed consent, transparency, governance, and accountability.20

The World Health Organization published a guidance document for AI in health in 2021.21 This guidance was based on 6 core principles: (1) protect autonomy; (2) promote human well-being, human safety, and the public interest; (3) ensure transparency, “explainability,” and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.16 The World Health Organization subsequently issued a 2024 update of this guidance, focusing specifically on GenAI, which it referred to as “large multi-modal models.”22 This guidance raised concerns about GenAI’s lack of transparency, its potential to “hallucinate,” and the fact that GenAI is dominated by a small group of large technology companies whose interests may not be fully aligned with those of the public as a whole. Most recently, the European Union has issued the EU Artificial Intelligence Act, which establishes a common regulatory and legal framework for AI and thereby sets a new standard for AI regulation.23

Other bioethicists have proposed a 6-pronged framework for ethical collection and use of personal data—minimizing harm, fairly distributing benefits and burdens, respecting autonomy, transparency, accountability, and inclusion—to inform policy and to guide action for companies that process personal data.24  The Findable, Accessible, Interoperable, and Reusable (FAIR) data management principles, although not an ethics code per se, may support the ethical development and assessment of AI systems by improving the transparency of applications and replicability of AI-related studies.25 

Outside of the medical world, various governments, nonprofit organizations, and private companies have drafted principles of ethical development and use of AI. Examples include the US intelligence community,26  the European Parliament,27  the Institute of Electrical and Electronics Engineers,28  and the Association for Computing Machinery.29  Although these codes vary in scope, they generally emphasize the importance of transparency (including explainability), human control over all systems, and benefit to society as a whole.

To successfully navigate a rapidly evolving technology landscape, including GenAI, it is necessary to understand the applicable regulatory frameworks.30  The following summary focuses on the United States and Europe, while also shedding light on the role of standards and potential future directions.

US Food and Drug Administration

In the United States, medical AI and other software is subject to the medical device regulations of the US Food and Drug Administration (FDA). Software intended to be used for medical purposes without being part of a hardware medical device is classified by FDA as software as a medical device (SaMD).31  The FDA uses multiple pathways for regulatory approval based on both the novelty and the expected risk of a device. For instance, SaMD deemed to be of moderate to high risk might require a more rigorous premarket approval, whereas lower-risk SaMD may be eligible for the less rigorous 510(k) pathway. The FDA has also developed the Medical Device Development Tools program to facilitate the development and evaluation of medical devices, including SaMD.32  This program acknowledges the critical role of tools such as clinical outcome assessments, biomarkers, and computational models in advancing medical device innovation. A particularly challenging issue is how to regulate evolving algorithms, that is, those that are designed to “learn” over time. The FDA has proposed the Predetermined Change Control Plan framework.33  In this proposed guidance, the FDA specifies which kinds of changes are permissible (eg, an algorithm cannot entirely change its intended use). Furthermore, the submitter would need to define the acceptance criteria for any change.
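To make the idea of predefined acceptance criteria concrete, the following minimal sketch gates deployment of a retrained model on criteria fixed before any change; the metric names and thresholds are hypothetical illustrations and are not drawn from the FDA guidance.

```python
# Minimal sketch of a predetermined change control check: a retrained model
# is accepted only if it meets acceptance criteria defined before the change.
# Metric names and thresholds are hypothetical illustrations.
ACCEPTANCE_CRITERIA = {
    "sensitivity": 0.95,  # predefined floor; must not be undercut
    "specificity": 0.90,
    "auroc": 0.97,
}

def change_is_acceptable(new_metrics: dict) -> bool:
    """Return True only if every predefined criterion is satisfied."""
    return all(new_metrics.get(metric, 0.0) >= floor
               for metric, floor in ACCEPTANCE_CRITERIA.items())

candidate = {"sensitivity": 0.96, "specificity": 0.91, "auroc": 0.975}
print("deploy" if change_is_acceptable(candidate) else "reject")  # -> deploy
```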

Centers for Medicare and Medicaid Services

The Centers for Medicare and Medicaid Services (CMS) regulates laboratory testing performed on humans in the United States and ensures the quality and accuracy of laboratory testing procedures, based on the Clinical Laboratory Improvement Amendments.34  This is distinct from, and complementary to, the FDA’s regulation of the manufacture and sale of devices and software. It is possible, then, that CMS may issue guidance regarding validation, use, and quality management of GenAI within clinical laboratories.

Other US Regulatory Agencies

Other agencies, such as the Office for Civil Rights and the Office of the National Coordinator for Health Information Technology, may also play roles in governing certain aspects of medical AI, such as data privacy and security.35 For example, at the direction of the US president, the Department of Health and Human Services (the parent organization of the FDA and CMS) recently updated the federal record to account for AI tools entering medicine. One interesting development is the proposal to establish a national network of health AI assurance laboratories.36 Whether these will function as testing laboratories or as centers of excellence for software performance assessment remains to be determined.

Regulation in Europe: Current Systems and Processes

In Europe, 3 main regulations apply to AI within health care. The General Data Protection Regulation sets the standard for data protection and privacy, including health data, thus shaping the development and deployment of AI in health care.37 The recently adopted AI Act regulates the development and use of AI systems, including those used in health care settings.38 Finally, the In Vitro Diagnostic Regulation governs the manufacture and distribution of in vitro diagnostic medical devices, including AI-based diagnostic tools.39

The Role of Standards

Standards play a central role in ensuring interoperability, reliability, and safety in the development and deployment of AI in health care. Organizations such as the International Organization for Standardization and the Clinical and Laboratory Standards Institute provide guidelines and standards for the validation, verification, and performance evaluation of AI algorithms and systems.40  Although standards development organizations by themselves do not carry regulatory authority, regulatory agencies can and often do require compliance with particular standards. For example, the FDA recently recognized 2 pathology scanning systems as compliant with the Digital Imaging and Communications in Medicine standard.41,42 
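As a small illustration of what verifying standards compliance can look like in software, the sketch below checks that a scanner output file declares the DICOM whole-slide imaging object type. The file path is a placeholder, and pydicom is a third-party package assumed to be installed; real conformance assessment involves far more than a single attribute check.

```python
# Minimal sketch of an interoperability check on a scanner output file.
# The file path is a hypothetical placeholder (pip install pydicom).
import pydicom

# VL Whole Slide Microscopy Image Storage SOP class, per the DICOM standard.
WSI_SOP_CLASS_UID = "1.2.840.10008.5.1.4.1.1.77.1.6"

ds = pydicom.dcmread("slide_0001.dcm")  # hypothetical file from a scanner
if ds.SOPClassUID == WSI_SOP_CLASS_UID:
    print("File declares the DICOM whole-slide imaging object type.")
else:
    print(f"Unexpected SOP class: {ds.SOPClassUID}")
```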

Future Regulatory Directions

As AI continues to advance and integrate into clinical practice, regulatory frameworks will need to evolve if they are to adequately address transparency and accountability in AI development.15  This may include harmonizing international regulations and fostering collaboration among regulatory bodies, industry stakeholders, and health care professionals.43  It is essential for individual pathologists and the pathology professional community to stay informed about the current regulatory landscape, understand the roles of different regulatory bodies, and engage in ongoing discussions surrounding ethics, standards, and potential future directions. By doing so, pathologists can effectively leverage AI technologies while upholding the highest standards of patient care and safety.

Health care organizations that develop, implement, and/or use GenAI have the responsibility to develop operational systems to ensure quality and safety, consistent with medical ethics.22  Health care delivery organizations also have an obligation to treat their workforces fairly, including when software has potential to replace certain human tasks. Although regulatory authorities play a uniquely important role with regard to codifying and enforcing ethical principles, they are too far removed from the day-to-day activities to be the only or even the main enforcers of ethical accountability. Quality management systems provide a good model for this, with their overlapping and complementary layers of controls. Just as with traffic safety, no one quality technique can catch or prevent all possible errors, and so a systems approach is necessary.

As with most medical technologies, it is likely that the majority of GenAI applications in health care settings will be supplied by commercial vendors, rather than being developed entirely within clinical or academic organizations.22  As discussed above, in the United States, the FDA is the primary regulator of commercially vended medical devices, including software. The FDA plays an important role in premarket approval as well as enforcing good manufacturing practices. However, in the same way that the Clinical Laboratory Improvement Amendments require clinical laboratories to separately oversee validation, quality control, and proficiency testing of laboratory tests, there is an important need for institutional systems for quality management of commercial GenAI products, including validation, calibration, ongoing monitoring, and external quality assessment of AI applications. (The technical details of how to best perform these functions are the subject of ongoing research.) Hospitals and other health care delivery organizations should design processes and organizational structures to carry out these activities. One term that has been proposed for this is “AI Ops.” Just as clinical laboratory quality management programs are ultimately under the authority and direction of a doctoral-level laboratory director, AI Ops within clinical settings should be organized under the authority and direction of a medical director.
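As one illustration of what ongoing monitoring within an AI Ops function might involve, the following sketch tracks agreement between model output and pathologist sign-out over a rolling window and raises an alert when agreement drifts below a validated baseline. The window size, baseline, and tolerance are assumptions for illustration only; a production system would also log discordant cases for review.

```python
# Minimal sketch of ongoing performance monitoring for a deployed model:
# track model-pathologist agreement over a rolling window and alert on drift.
# Window size, baseline, and tolerance are hypothetical illustrations.
from collections import deque

class AgreementMonitor:
    def __init__(self, window: int = 200, baseline: float = 0.95, tolerance: float = 0.05):
        self.results = deque(maxlen=window)  # rolling window of True/False outcomes
        self.baseline = baseline
        self.tolerance = tolerance

    def record_case(self, model_dx: str, pathologist_dx: str) -> None:
        self.results.append(model_dx == pathologist_dx)

    def drifting(self) -> bool:
        """True when windowed agreement falls materially below baseline."""
        if len(self.results) < self.results.maxlen:
            return False  # not enough cases yet to judge
        agreement = sum(self.results) / len(self.results)
        return agreement < self.baseline - self.tolerance

monitor = AgreementMonitor(window=3, baseline=0.95, tolerance=0.05)
for model_dx, signout_dx in [("benign", "benign"), ("benign", "malignant"), ("malignant", "malignant")]:
    monitor.record_case(model_dx, signout_dx)
if monitor.drifting():
    print("Alert: agreement below validated baseline; escalate to medical director.")
```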

Supply chains play an often underrecognized role in quality management and ethical accountability. Health care organizations must carefully construct contracts with GenAI vendors and contractors in ways that protect the values of the organization. This can be quite tricky given the dynamic nature of the technology industry. Imagine, for example, an ethical GenAI vendor with responsible data management practices that is acquired by a company that does not share those values. It is important, therefore, for contracts to include stipulations that prevent clinical data from being treated as fungible assets, for example, requiring destruction of clinical data in the event that a company is acquired or goes bankrupt. Other principles that can and should be enforced by contract include data security, access to validation data sets, transparency of quality data, and availability of GenAI systems for study by independent academic researchers. Hospital and health system supply chain organizations that are used to negotiating primarily on pricing and delivery terms must develop new capabilities for enforcing ethical accountability of contracted GenAI products and services.

Professional societies have a well-established role in promoting ethical behavior within their areas of practice. This includes setting and enforcing expectations for their own members, convening multi-stakeholder discussions, and engaging with standards development organizations and regulatory agencies. With regard to clinical AI specifically, the American Medical Association has published a formal policy on AI in medicine.44 Subspecialty societies such as the American Heart Association and the American College of Radiology have likewise published ethics statements related to AI within their domains.45,46 The College of American Pathologists has significant opportunities to influence responsible GenAI practices within the field of pathology and laboratory medicine, both as a subspecialty society and as a provider of proficiency testing and accreditation.

As more GenAI applications make their way into clinical care, establishment of ethical governance is critical to protect patient safety and patients’ rights. These efforts are currently in the very early stages. Health care organizations, medical professionals, professional societies, and regulatory agencies must prioritize the development, implementation, evaluation, and evolution of ethical and quality systems for GenAI in order to capitalize on the opportunities while minimizing the risks to patients and society.

References

1. Pesapane F, Cuocolo R, Sardanelli F. The Picasso’s skepticism on computer science and the dawn of generative AI: questions after the answers to keep “machines-in-the-loop.” Eur Radiol Exp. 2024;8(1):81.
2. Jin Q, Chen F, Zhou Y, et al. Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine. NPJ Digit Med. 2024;7(1):190.
3. Wachter RM, Brynjolfsson E. Will generative artificial intelligence deliver on its promise in health care? JAMA. 2024;331(1):65-69.
4. Yaraghi N. Generative AI in health care: opportunities, challenges, and policy. Health Aff Forefront. January 8, 2024. https://www.healthaffairs.org/content/forefront/generative-ai-health-care-opportunities-challenges-and-policy. Accessed April 23, 2024.
5. National Institute of Standards and Technology. Artificial intelligence risk management framework: generative artificial intelligence profile. https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf. Published 2024. Accessed September 25, 2024.
6. Albahri AS, Duhaim AM, Fadhel M, et al. A systematic review of trustworthy and explainable artificial intelligence in healthcare: assessment of quality, bias risk, and data fusion. Inform Fusion. 2023;96:156-191.
7. Entwistle VA, Quick O. Trust in the context of patient safety problems. J Health Organ Manag. 2006;20(5):397-416.
8. Ryan M. In AI we trust: ethics, artificial intelligence, and reliability. Sci Eng Ethics. 2020;26(5):2749-2767.
9. McLennan S, Fiske A, Tigard D, Müller R, Haddadin S, Buyx A. Embedded ethics: a proposal for integrating ethics into the development of medical AI. BMC Med Ethics. 2022;23(1):6.
10. Stix C. Actionable principles for artificial intelligence policy: three pathways. Sci Eng Ethics. 2021;27(1):15.
11. Kaplan B. Seeing through health information technology: the need for transparency in software, algorithms, data privacy, and regulation. J Law Biosci. 2020;7(1):lsaa062.
12. Caine K, Hanania R. Patients want granular privacy control over health information in electronic medical records. J Am Med Inform Assoc. 2013;20(1):7-15.
13. Cumyn A, Menard J-F, Barton A, et al. Patients’ and members of the public’s wishes regarding transparency in the context of secondary use of health data: scoping review. J Med Internet Res. 2023;25:e45002.
14. Kaplan B. How should health data be used? Privacy, secondary use, and big data sales. Camb Q Healthc Ethics. 2016;25(2):312-329.
15. Lennerz J, Schneider N, Lauterbach K. How health data integrity can earn trust and advance health. Issues Sci Technol. 2024;40(2):52-55. doi:10.58875/GTRD9795
16. Lennerz JK, Schneider N, Lauterbach K. Dimensions of health data integrity. Eur J Epidemiol. 2024;39(2):179-181.
17. Nakagawa K, Moukheiber L, Celi LA, et al. AI in pathology: what could possibly go wrong? Semin Diagn Pathol. 2023;40(2):100-108.
18. US Department of Health, Education and Welfare. Protection of human subjects; notice of report for public comment. Fed Regist. 1979;44(76):23191-23197.
19. World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013;310(20):2191-2194.
20. World Medical Association. WMA Declaration of Taipei on ethical considerations regarding health databases and biobanks. Adopted 2016.
21. World Health Organization. Ethics and governance of artificial intelligence for health. Geneva, Switzerland: World Health Organization; 2021. https://www.who.int/publications/i/item/9789240029200. Accessed April 23, 2024.
22. World Health Organization. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models. Geneva, Switzerland: World Health Organization; 2024.
23. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024. http://data.europa.eu/eli/reg/2024/1689/oj. Accessed August 9, 2024.
24. McCoy MS, Allen AL, Kopp K, et al. Ethical responsibilities for companies that process personal data. Am J Bioeth. 2023;23(11):11-23.
25. Wilkinson MD, Dumontier M, Aalbersberg IJ, et al. The FAIR guiding principles for scientific data management and stewardship. Sci Data. 2016;3:160018.
26. Office of the Director of National Intelligence. Artificial intelligence ethics framework for the intelligence community. https://www.intelligence.gov/artificial-intelligence-ethics-framework-for-the-intelligence-community. Accessed April 23, 2024.
27. European Parliamentary Research Service. The ethics of artificial intelligence: issues and initiatives. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf. Published 2020. Accessed April 23, 2023.
28. Huang C, Zhang Z, Mao B, Yao X. An overview of artificial intelligence ethics. IEEE Trans Artif Intell. 2023;4(4):799-819.
29. Association for Computing Machinery. ACM code of ethics and professional conduct. https://www.acm.org/code-of-ethics. Published 2018. Accessed April 23, 2024.
30. Lennerz JK, Green U, Williamson DFK, Mahmood F. A unifying force for the realization of medical AI. NPJ Digit Med. 2022;5(1):172.
31. US Food and Drug Administration. Software as a medical device (SaMD). https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd. Accessed April 23, 2024.
32. US Food and Drug Administration. Medical device development tools (MDDT). https://www.fda.gov/medical-devices/medical-device-development-tools-mddt. Accessed April 23, 2024.
33. US Food and Drug Administration. Predetermined change control plans for machine learning-enabled medical devices: guiding principles. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles. Accessed July 25, 2024.
34. Centers for Medicare & Medicaid Services. Clinical Laboratory Improvement Amendments (CLIA). https://www.cms.gov/medicare/quality/clinical-laboratory-improvement-amendments. Accessed April 23, 2024.
35. Tripathi M. Leveraging agency expertise to foster American AI leadership and innovation. HealthITbuzz. https://www.healthit.gov/buzz-blog/ai-ml/leveraging-agency-expertise-to-foster-american-ai-leadership-and-innovation. Published December 19, 2023. Accessed April 23, 2024.
36. Shah NH, Halamka JD, Saria S, et al. A nationwide network of health AI assurance laboratories. JAMA. 2024;331(3):245-249.
37. Meszaros J, Minari J, Huys I. The future regulation of artificial intelligence systems in healthcare services and medical research in the European Union. Front Genet. 2022;13:927721.
38. European Parliament. EU AI Act: first regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Published June 8, 2023. Accessed April 23, 2024.
39. Hoffmüller P, Brüggemann M, Eggermann T, et al. Advisory opinion of the AWMF Ad Hoc Commission In-Vitro Diagnostic Medical Devices regarding in-vitro diagnostic medical devices manufactured and used only within health institutions established in the Union according to Regulation (EU) 2017/746 (IVDR). Ger Med Sci. 2021;19:Doc08.
41. US Food and Drug Administration. 510(k) substantial equivalence determination decision summary. 510(k) number K232202. https://www.accessdata.fda.gov/cdrh_docs/reviews/K232202.pdf. Accessed April 23, 2024.
42. US Food and Drug Administration. 510(k) substantial equivalence determination decision summary. 510(k) number K232208. https://www.accessdata.fda.gov/cdrh_docs/reviews/K232208.pdf. Accessed April 23, 2024.
43. Marble HD, Huang R, Dudgeon SN, et al. A regulatory science initiative to harmonize and standardize digital pathology and machine learning processes to speed up clinical innovation to patients. J Pathol Inform. 2020;11:22.
44. American Medical Association. Principles for augmented intelligence development, deployment, and use. https://www.ama-assn.org/system/files/ama-ai-principles.pdf. Accessed April 23, 2024.
45. Spector-Bagdady K, Armoundas AA, Arnaout R, et al. Principles for health information collection, sharing, and use: a policy statement from the American Heart Association. Circulation. 2023;148:1061-1069.
46. Geis JR, Brady AP, Wu CC, et al. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Radiology. 2019;293(2):436-440.

Competing Interests

Lennerz is an employee of BostonGene. de Baca is an employee of Sysmex America, Inc. The other authors have no relevant financial interest in the products or companies described in this article.