Artificial intelligence (AI), a set of techniques aimed at approximating some aspect of human cognition using machines,1 promises to give pathologists a tool to address inefficiencies and inaccuracies in diagnosis and laboratory testing, and to help an anticipated shrinking number of pathologists serve an increasing number of patients with timely diagnoses, even as those diagnoses grow in molecular complexity.2–4 Pathology has long examined the use of AI in pathologic diagnosis5 and is already using it. Papanicolaou test imaging, approved by the US Food and Drug Administration (FDA) for screening purposes, is now fully, if not widely, used.6,7 Other areas of cytopathology are investigating the use of AI for diagnosis,8 and AI will undoubtedly play a strong and central role in cytopathology's continuing progress toward full automation.9–11 Artificial intelligence in pathology promises to outperform human beings in assessing the specific characteristics necessary for pathologic and radiologic diagnosis.9,12 Despite the initial hysteria,13–16 it is now clear that AI will be, at least for the foreseeable future, a tool for pathologists rather than a replacement for them. Although AI's use will fundamentally alter pathology practice, it is perhaps pathologists' greatest opportunity to invest themselves more fully in their patients' care.3,7 Pathologists' success with AI, however, will depend to a large degree on the successful implementation of efficient AI governance and regulation.

REGULATING ARTIFICIAL INTELLIGENCE

Regulation of AI technology is still relatively nascent and so far includes some basic regulatory frameworks from the FDA.17,18 The Clinical Laboratory Improvement Amendments of 1988 (CLIA) also impose requirements on the use of artificial intelligence in pathology: laboratory use of AI is limited to developing algorithms before validation and implementation, and changing the rules of test performance after validation and implementation is generally not allowed without revalidation. Medical societies and other groups have also weighed in on AI regulation. The American Medical Association (AMA) has examined artificial intelligence in health care, promoting physician interaction with the federal regulatory regime, emphasizing patient privacy and confidentiality, and supporting an AMA policy on AI that can be continuously refined.19 Further, the Royal Australian and New Zealand College of Radiologists has drafted ethical principles for AI in health care,20 and the Center for Data Innovation, a data, technology, and public policy think tank, has suggested that the United States develop a national strategy for artificial intelligence to maximize its value.21
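CLIA's lock-after-validation principle can be made concrete in software. The sketch below is a hypothetical illustration, not an FDA- or CLIA-prescribed mechanism: it pins the validated model artifact by checksum so that any change to the deployed algorithm is detected and held for revalidation before clinical use. The file names and registry format are invented for the example.

```python
"""Hypothetical sketch of a "locked algorithm" check: the model file validated
for clinical use is pinned by checksum, and any change triggers revalidation
before results are reported. Paths and the registry format are illustrative.
"""
import hashlib
import json
from pathlib import Path

REGISTRY = Path("validated_models.json")  # hypothetical validation registry


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_validation(model_path: Path, validated_by: str) -> None:
    """Record the checksum of the model artifact at the time of validation."""
    registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
    registry[model_path.name] = {
        "sha256": sha256_of(model_path),
        "validated_by": validated_by,
    }
    REGISTRY.write_text(json.dumps(registry, indent=2))


def is_locked_version(model_path: Path) -> bool:
    """True only if the deployed artifact matches its validated checksum."""
    if not REGISTRY.exists():
        return False
    entry = json.loads(REGISTRY.read_text()).get(model_path.name)
    return bool(entry) and entry["sha256"] == sha256_of(model_path)


if __name__ == "__main__":
    model = Path("cervical_screen_model.bin")  # hypothetical artifact
    if not is_locked_version(model):
        raise SystemExit("Model differs from the validated version; revalidate before clinical use.")
```

Under such a scheme, any change to the algorithm, however minor, fails the checksum comparison and sends the laboratory back through its validation procedure, mirroring the revalidation requirement described above.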

Mature discussion and strategizing about AI regulation are increasingly needed. It is evident that AI will increasingly be necessary to interpret the world and will be integral to social, political, and business environments, making decisions for human beings.22 Yet the development of AI, broadly speaking, has occurred substantially outside any regulatory environment.23 There are few state laws, essentially no court cases, and very little legal scholarship addressing AI regulation.23 The lack of much-needed scholarly examination and regulatory guidance should not be surprising: AI development, marketing, and use do not fit the traditional regulatory environment, which has long provided robust solutions for long-established needs such as patent, trademark, and copyright law, tort liability, and research and development oversight.23

The need for regulation of AI will continue to grow as it becomes increasingly integral to numerous technologies, including health care technology, with which people interact more and more. Yet effective and sufficient AI governance is likely to be stubbornly difficult to establish. Beyond considerations of responsibility and accountability for actions taken by or with the input of AI, governance will require deep and novel consideration of normative explanations of AI applications, implementation scenarios, and ethical standards.22 Further, AI's risks do not fit the typical regulatory paradigms. AI development often requires little infrastructure; its components are frequently not designed in a coordinated manner but are assembled by numerous, widely dispersed people with little personal interaction; and both the development process and its results are usually opaque to outsiders. As a result, potentially harmful features, which could be regulated away or diminished, are not easily identified.23 Additionally, complicating research and development issues, collective issues that do not directly involve individual AI systems, and policymakers' general lack of specialized knowledge of AI development make governance before AI implementation extremely complicated, with little ability to control unforeseeable consequences.22,24 Given the conceptual difficulties in assigning moral and legal responsibility when AI causes harm, as well as the practical difficulties of controlling the actions of autonomous machines, the potential unforeseeability of AI actions, and the potential for clandestine or diffuse development of AI, all of which could render regulation before AI implementation ineffective or impracticable,23 it is reasonable to ask whether AI is even governable.22

INITIAL REGULATORY CONSIDERATIONS

Regulatory policy must address the various risks AI poses, balancing those risks against the benefits AI promises. Specifically, AI regulation must balance the risk of stifling innovation through overregulation against the risk of harm caused by underregulation or misregulation. Artificial intelligence policy will need to address difficult questions of justice and equity, specifically bias in AI applications, the appropriateness and adequacy of AI's consequential decision-making, and AI's transparency; the limits of AI decision-making where decisions may be life-threatening or involve the use of force; safety generally, including the potential use of certification and of standards related to validation and cybersecurity; privacy; and unemployment resulting from the use of AI systems.25

Regulation and governance inform responsibility, accountability, and liability. In considering the various regulatory regimes proposed for current and emerging AI, it is important to remember, and to remind policymakers, that some health care decisions may be so important that, despite the growing abilities and quality of AI, they should never be exclusively the product of machines.1 Further, because AI is not yet autonomous (although driverless automobiles are beginning to appear), it is necessary to establish who bears responsibility for AI choices.26 Currently, "[i]f someone gets hurt, you can't blame it on the machine."26 As that question is worked out, it will be necessary to ensure that individuals are not "placed into the loop for the sole purpose of absorbing liability for wrongdoing."1

A threshold question for policymakers attempting to devise regulations that best ensure AI safety is what level of safety is appropriate and how that level can be reliably determined.1 "Safer than human beings" is too vague a standard and is ultimately inadequate by itself; looking to technology to answer this difficult question is futile, because it is a question of policy, not of technology.1 Pathologists and other physicians will need to engage actively in the policy discussions regarding AI regulation so that these difficult questions are answered as carefully and appropriately as possible. The time to begin that discussion is now.

Uncertainty about AI regulation provides fertile ground for the development and sustenance of systemic risk. Although policymakers are familiar with systemic risk, the complexity, connectivity, geographic diversity, and sheer size of the numerous social, financial, and economic factors affected by AI make regulatory management of AI's systemic risk extremely difficult.25 Managing that risk is likely to require an unprecedented level of cooperation and trust-building among government, professional societies, and industry to develop a robust AI regulatory environment, even though such diversity of stakeholders, with their necessarily heterogeneous interests, has traditionally made agreement on basic regulatory principles more difficult.25 This may well be the biggest obstacle to AI's widespread, efficient, and successful application.

Successful regulation of AI systems will be further hampered by the "pacing problem": rapidly evolving innovation paired with a lag in the necessary regulatory and legal rulemaking, which allows technology to decouple or disengage from regulation. Yet attempts to craft front-end regulation and legislation intended to "future-proof" the law frequently produce rules too vague or too general to adequately guide the development and use of an evolving technology.

Without appropriate ethical guidance, regulatory precedent, or normative agreement, AI is at risk of significant regulatory delay and uncertainty; and with numerous but uncoordinated regulatory bodies potentially involved, the threat of agency capture arises.25 Agency capture is a regulatory failure in which regulators become sympathetic toward the industry they oversee, marked by high levels of interaction between industry representatives and regulators, a "revolving door" between industry employment and regulatory employment, and regulators potentially being "bought off" by industry.25 Were agency capture to occur, it would significantly delay or derail appropriate and valuable AI system development.

Ultimately, an extraordinary amount of regulatory innovation will be needed to ensure a robust AI future. Such innovation will likely require acknowledging that government's traditional strong control as the regulatory state must give way to a decentralized regulatory structure in which the government's "command and control" regime is eliminated or significantly blunted, its ability to command obedience is reduced, and its resources are limited.

Industry self-regulation may theoretically provide regulatory support, and it has in fact already begun, given the general lack of government input into AI regulation.25 Further, AI's fast-evolving technology lends itself well to self-regulation as industry leaders attempt to stave off clumsy future government regulation, which would itself be difficult or impossible to change because of tradition or convention. However, self-regulation ultimately will not provide efficient regulation, because self-regulatory standards typically are not obligatory, lack enforcement mechanisms, and can lead to perverse outcomes.27,28 Self-regulation could instead give way to peer regulation, including a more robust professional society presence serving, in some cases, in a quasi-governmental capacity.25 For pathologists, peer regulation modeled on the areas in which the College of American Pathologists is already deeply involved might provide the necessary regulatory guidance.

It has been proposed that evolving AI regulatory strategies will abandon traditional normative boundaries and instead provide regulatory guidance through nontraditional but highly valuable mechanisms, including (1) enhanced flexibility allowing for temporary, experimental regulation; (2) "regulatory sandboxes" in which industry can test new ideas without being forced to comply with rigid rules; (3) "anticipatory rulemaking" that leverages stakeholder feedback to keep regulations timely and relevant; (4) increased use of data analysis to guide regulation; (5) adaptation of common law rules to AI whenever possible; (6) "legal foresight" to proactively explore potential future legal developments; and (7) multistakeholder councils to overcome uncertainty and information deficits.25 Pathologists should embrace these new methods of regulatory guidance; doing so may be difficult, however, as it likely will be for others, because the leaders currently most engaged in regulation and governance typically have a strong affinity for traditional, well-understood regulatory strategies and tend to reject novel, less comfortable ideas.

The use of nudge theory, popularized by Richard Thaler and Cass Sunstein, has also been proposed by some experts as a tool to influence AI regulation and governance.29 Nudge theory rests on the concept of libertarian paternalism and explores human biases, aiming to exploit them to influence individual behavior. The behavioral economics of nudge theory are still being explored, however, and it remains to be seen whether nudge theory will play a strong role in the development of innovative AI regulation.25

MEDICAL MALPRACTICE LIABILITY

A physician has an ethical responsibility to use sound judgment in establishing a diagnosis for and treating each patient, holding the patient's best interests paramount.30 For physicians, AI is expected, among other things, to reduce medical malpractice liability by improving diagnosis and treatment, reducing medical error, and preventing ineffective and unnecessary care.31 Yet there is concern that, rather than reduce liability, AI may increase it by raising the standard of care through the introduction of additional measures that a physician may choose to use, or in the exercise of medical judgment may decline to use, when establishing a diagnosis for or treating a patient.31 Because courts have treated clinical practice guidelines inconsistently, it is reasonable to expect that they will be similarly inconsistent in weighing AI's influence on the standard of care.31 This will be a thorny issue to resolve: just as clinical practice guidelines may not account for the unique features of every patient, AI, at least early on, will likely be unable to account for the unique features of each patient.25

One possible solution to the medical malpractice liability issue is to jettison traditional medical malpractice negligence theory and instead apply enterprise liability theory, of which workers' compensation plans are an example. Enterprise liability spreads loss and victim compensation, and it removes the barrier of requiring plaintiffs to prove negligence.31 Because the use of AI is unlikely ever to guarantee a good outcome for every patient, there will necessarily be "unpreventable calculable harm" directly connected with AI, a scenario that offers enterprise liability a strong opportunity to succeed in managing the risk of patient harm.31 Enterprise liability is also a setting in which physician input, most likely through physician medical societies, could inform standards and strategies for AI implementation. The College of American Pathologists, with its robust governmental and advocacy efforts, could lead this endeavor as AI continues to permeate pathology and laboratory medicine. The time for this discussion is now: these regulatory and legislative decisions have not yet been made, and for the most part they have not yet even begun to be discussed.

THE FUTURE

For pathologists, one could imagine the future of AI unfolding in 3 phases. The first is the "present" phase, in which AI supplements pathologists' decision-making and efficiently handles the "dull, dirty, or dangerous" jobs.32 The pathologist remains entirely in control of the AI system and is "in the loop." This is, in fact, pathologists' current situation, in which they are entirely comfortable using various laboratory tools for diagnosis. Because AI here is merely another diagnostic tool, no specific legal rearrangement or novel legal theory is required, and medical malpractice liability remains with the pathologist. As with other laboratory tools, tools using AI will require validation, upgrades, and quality control, all of which fall under the purview of the pathologist.32 This phase, in which the goal is merely to make pathologists more efficient, would allow for increased efficiency during a pathologist shortage, thereby reducing the risk of the scope-of-practice creep that might otherwise attend such a shortage. One thought-provoking possibility, given the dynamics of AI growth in the present phase, is the merger of radiology and pathology as medical subspecialties.33
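To make the "in the loop" arrangement concrete, the sketch below shows one hypothetical way a laboratory might fold an AI prescreening tool into routine quality control, tracking concordance between the tool's suggested category and the pathologist's final sign-out. The case data, category labels, and 0.95 threshold are illustrative assumptions, not an established quality control standard.

```python
"""Hypothetical sketch of pathologist-in-the-loop quality control: track how
often an AI prescreening suggestion matches the pathologist's final sign-out
and flag the tool for directed review when concordance drifts too low.
"""
from dataclasses import dataclass


@dataclass
class Case:
    accession: str
    ai_category: str       # category suggested by the AI tool
    final_category: str    # category signed out by the pathologist


def concordance(cases: list[Case]) -> float:
    """Fraction of cases in which the AI suggestion matched the final sign-out."""
    if not cases:
        return 1.0
    agree = sum(c.ai_category == c.final_category for c in cases)
    return agree / len(cases)


def qc_review_needed(cases: list[Case], threshold: float = 0.95) -> bool:
    """Flag the tool for review when concordance falls below the threshold."""
    return concordance(cases) < threshold


if __name__ == "__main__":
    month = [
        Case("S24-0001", "NILM", "NILM"),
        Case("S24-0002", "HSIL", "HSIL"),
        Case("S24-0003", "LSIL", "NILM"),  # discordant case
    ]
    print(f"Concordance: {concordance(month):.2f}")
    print("Flag for directed review" if qc_review_needed(month) else "Within limits")
```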

The second phase is the "near future" phase, in which AI systems may involve pathologists' actions but are themselves capable of independent decision-making without a pathologist's input or involvement. This phase, in which the pathologist is "on the loop,"32 will require a reexamination of liability and the development of a robust regulatory environment, because an AI system could diagnose a disease and issue a pathology report entirely without a pathologist's review. One could speculate that the pathologist's role would become one of quality oversight or similar responsibilities. In this phase, the AI system would dramatically reduce, but perhaps not eliminate, the need for involvement by persons with pathology training.
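A minimal sketch of what "on the loop" oversight might look like follows. It assumes a confidence-thresholded workflow in which the AI issues reports directly above an operating point, routes uncertain cases to a pathologist, and submits a random fraction of auto-issued reports for retrospective audit; the threshold, audit rate, and report fields are hypothetical and are not drawn from any deployed or approved system.

```python
"""Hypothetical "on the loop" routing: the AI issues a report when its
confidence clears a threshold, holds uncertain cases for a pathologist, and
samples auto-issued reports for retrospective pathologist oversight.
"""
import random

CONFIDENCE_THRESHOLD = 0.98   # hypothetical operating point
AUDIT_FRACTION = 0.10         # hypothetical retrospective-review rate


def route_case(diagnosis: str, confidence: float) -> dict:
    """Decide whether the AI report is issued directly or held for review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        audited = random.random() < AUDIT_FRACTION
        return {
            "diagnosis": diagnosis,
            "issued_by": "AI system",
            "pathologist_review": "retrospective audit" if audited else "none",
        }
    return {
        "diagnosis": diagnosis,
        "issued_by": "pending",
        "pathologist_review": "required before release",
    }


if __name__ == "__main__":
    print(route_case("Benign nevus", 0.995))
    print(route_case("Atypical melanocytic proliferation", 0.71))
```

Where the operating point and audit rate are set would itself be a regulatory question of the kind discussed above, not a purely technical choice.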

The third phase is the "distant future" phase, in which advances in AI and robotics shift decision-making entirely away from the pathologist's control and toward an autonomous AI, taking pathologists entirely "out of the loop."32 Such an AI system may even evolve into a "loop of its own," capable of self-awareness.32 In this phase, human beings, including pathologists, would be entirely unnecessary for rendering an actionable diagnosis or instituting treatment. Here, the risks surrounding foreseeability and control will be considerable.23 This situation, until recently the stuff of science fiction, is where the imagination takes hold and visions of dystopia arise. It would require an extremely mature regulatory and governance structure, one essentially different from the current structure that depends heavily on FDA involvement. Regulating autonomous artificial intelligence to provide maximum patient safety without unduly limiting innovation will be difficult but critical. Liability in this phase would be a radical departure from pathologists' current concept of medical malpractice liability, even if liability had evolved much earlier into enterprise liability. This phase will also require significant consideration of accountability, because "without accountability, rule of law will fail."32 Although these 3 phases will, of course, occur along a continuum, and although AI system risk may currently, and for some time, be stratified into low-risk and high-risk AI,25 it is likely that as AI develops levels of autonomy, all AI risk will be considered high.

For example, once AI is autonomous and "physician robots" begin practicing medicine without even indirect human physician involvement, the question arises of how to hold those "physician robots" accountable.34 One cannot envision them holding property or earning wages, so the question of restitution for injured patients must be considered.34,35 Another question concerns the standard of care: whether human physicians (if they still exist) and physician robots would be held to the same standard.35

Inevitably, diagnostic systems will evolve to incorporate AI thoroughly, changing from rule-based expert systems built on existing knowledge to data-mining systems that rely on AI pattern recognition and advanced algorithms.34 At some yet-unforeseen time, were AI to become more autonomous, regulatory discussion would quickly need to address the tension between state medical licensing bodies and federal regulatory agencies such as the FDA over which will regulate the AI "doctors," ie, whether these "doctors" are best considered diagnostic tools to be regulated by the FDA or true physicians who require state licensure for their "practice of medicine."36

Once autonomous AI systems are being considered, questions arise that used to be the stuff of science fiction but no longer are.37  “[W]e will face a dilemma where we must decide whether to hold an artificially intelligent tool accountable for its own actions.”35  One must consider, almost astonishingly, whether autonomous AI systems—physician robots—are even capable of making a mistake or of negligence if they are programmed to not breach the duty to provide patients with medicine that meets the standard of care.34,35  Indeed, consideration of autonomous AI systems requires a profound reconsideration of the concept of personhood—from which flow legal rights, duties, and protections—and “the concept of embodiment of the essence of the person.”38  “Human [beings] may soon create entities who exhibit mental capacities equivalent to, or beyond, those of Homo sapiens.”39  It is currently unknown how society could appropriately bestow personhood “such that there exists an independent expression of ‘free will' that is subject to accountability….”38 

Of course, there has not yet been a lawsuit pitting a human physician against a robot physician, for example one raising questions of joint and several liability; but were such a case to arise, its resolution would require consideration of each of these issues. Early resolution of these questions, with the resulting adoption of an appropriate, robust medical malpractice theory, would make such a scenario manageable.

GOVERNANCE AND EMPATHY

Finally, regulation and governance of AI, in whatever form it ultimately takes, must always support the value of AI while managing the tension between automation and empathy. "One does not have to accept a fully dystopian image of future healthcare ruled by data bots in order to appreciate the tensions between automation and empathy."36 Automation cannot be allowed to drive an interpersonal wedge between physicians and their patients.36 And yet that wedge already exists. Abraham Verghese coined the term "iPatient" to describe how the electronic medical record has "threatened to become the real focus of our attention, while the real patient in the bed often feels neglected, a mere placeholder for the virtual record."40 Physicians, including pathologists, must now rethink what it means to be caring and how to preserve and foster social interaction with patients while using AI efficiently. It will be especially important that physicians use AI in a manner that respects physician and patient heterogeneity and supports equity.36 It is physicians, with pathologists at the head of the table, who must lead and sustain this conversation with policymakers, providing regulatory guidance that ensures empathy is always considered and respected throughout the complicated discussion of AI regulation and governance.

References

1. Calo R. Artificial intelligence policy: a primer and roadmap. Univ Calif Davis Law Rev. 2017;51:399–427.
2. Proscia inks agreement to bring deep-learning technology to dermatopathology. BusinessWire. March 15, 2018. Accessed 2018.
3. Sharma G, Carter A. Artificial intelligence and the pathologist: future frenemies? Arch Pathol Lab Med. 2017;141(5):622–623.
4. Garrity M. How AI can impact 7 areas of healthcare. Becker's Health IT and CIO Report. March 29, 2019. Accessed 2019.
5. Koss LG, Sherman ME, Cohen MB, et al. Significant reduction in the rate of false-negative cervical smears with neural network-based technology (PAPNET Testing System). Hum Pathol. 1997;28(10):1196–1203.
6. Bengtsson E, Malm P. Screening for cervical cancer using automated analysis of PAP-smears. Comput Math Methods Med. 2014;2014:842037.
7. Green R, Hogarth MA, Prystowsky MB, Rashidi HH. The job market outlook for residency graduates: clear weather ahead for the butterflies? Arch Pathol Lab Med. 2018;142(4):435–438.
8. Sanyal P, Mukherjee T, Barui S, Das A, Gangopadhyay P. Artificial intelligence in cytopathology: a neural network to identify papillary carcinoma on thyroid fine-needle aspiration cytology smears. J Pathol Inform. 2018;9:43.
9. Stewart J. Lehigh University develops artificial intelligence system for better cervical cancer screening. Cervical Cancer News. April 28, 2017. Accessed 2019.
10. Zhang L, Le L, Nogues I, Summers RM, Liu S, Yao J. DeepPap: deep convolutional networks for cervical cell classification. IEEE J Biomed Health Inform. 2017;21(6):1633–1643.
11. Bora K, Chowdhury M, Mahanta LB, Kundu MK, Das AK. Automated classification of Pap smear images to detect cervical dysplasia. Comput Methods Programs Biomed. 2017;138:31–47.
12. Heady D. Artificial intelligence performs as well as experienced radiologists in detecting prostate cancer. UCLA Newsroom. April 16, 2019. Accessed 2019.
13. Kaplan DA. Fear, hope, and hype for artificial intelligence. Diagnostic Imaging. December 27, 2017. Accessed 2018.
14. van Laak J, Rajpoot N, Vossen D. The promise of computational pathology: part 1. The Pathologist. January 19, 2018. Accessed 2018.
15. Salto-Tellez M, Hamilton P. The computational discussion continues… The Pathologist. February 26, 2018. Accessed 2018.
16. Lou N. Clinicians brace for AI to transform medicine: artificial intelligence is coming ... perhaps for your job? MedPage Today. December 26, 2017. Accessed 2018.
17. US Food and Drug Administration. Digital Health Software Precertification (Pre-Cert) Program. April 11, 2019.
18. US Food and Drug Administration. Developing a software precertification program: a working model. April 2018. Accessed 2019.
19. American Medical Association. Augmented intelligence in healthcare. Accessed 2019.
20. The Royal Australian and New Zealand College of Radiologists. Ethical principles for AI in medicine. Accessed 2019.
21. New J. Why the United States needs a national artificial intelligence strategy and what it should look like. Center for Data Innovation Web site. December 4, 2018. Accessed 2019.
22. Rickli JM. Part 3: emerging technologies: 3.2 assessing the risk of artificial intelligence. World Economic Forum. January 11, 2017. Accessed 2018.
23. Scherer MU. Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harvard J Law Technol. 2016;29(2):354–398.
24. Artificial intelligence: bringing machines into the boardroom. Sherpany Web site. April 21, 2016. Accessed 2018.
25. Guihot M, Matthew AF, Suzor NP. Nudging robots: innovative solutions to regulate artificial intelligence. Vanderbilt J Entertain Technol Law. 2017;20:385–445.
26. Ross C. Advice on artificial intelligence from the front lines of combat: humans can't escape accountability. STAT. April 12, 2019. Accessed 2019.
27. Piper K. Exclusive: Google cancels AI ethics board in response to outcry. Vox. April 4, 2019. Accessed 2019.
28. Murphy H. Don't count on 23andMe to detect most breast cancer risks, study warns. New York Times. April 16, 2019. Accessed 2019.
29. Thaler RH, Sunstein CR. Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven, CT: Yale University Press; 2008.
30. Schleiter KE. Difficult patient-physician relationships and the risk of medical malpractice litigation. Virtual Mentor. 2009;11(3):242–246.
31. Swanson A, Khan F. The legal challenge of incorporating artificial intelligence into medical practice. J Health Life Sci. 2012;6(1):90–147.
32. Reitinger N. Algorithmic choice and superior responsibility: closing the gap between liability and lethal autonomy by defining the line between actors and tools. Gonzaga Law Rev. 2015/2016;51:79–92.
33. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316(22):2353–2354.
34. Allain JS. From Jeopardy! to jaundice: the medical liability implications of Dr. Watson and other artificial intelligence systems. LA Law Rev. 2013;73:1049–1079.
35. Semmler S, Rose Z. Artificial intelligence: application today and implications tomorrow. Duke Law Technol Rev. 2018;16:85–98.
36. Terry NP. Appification, AI, and healthcare's new iron triangle. J Health Care Law Policy. 2018;20:118–179.
37. Knight W. A robot has figured out how to use tools. MIT Technology Review. April 7, 2019. Accessed 2019.
38. Lauria RM, Robinson GS. From cyberspace to outerspace: existing legal regimes under pressure from emerging meta-technologies. Univ La Verne Law Rev. 2012;33:219–238.
39. Dowell R. Fundamental protections for non-biological intelligences or: how we learn to stop worrying and love our robot brethren. Minn J Law Sci Technol. 2018;19:305–336.
40. Verghese A. Treat the patient, not the CT scan. New York Times. February 26, 2011. Accessed 2018.

Author notes

The author has no relevant financial interest in the products or companies described in this article.