Abstract
To ensure patient safety, medical device manufacturers are required by the Food and Drug Administration and other regulatory bodies to perform biocompatibility evaluations on their devices per standards, such as the AAMI-approved ISO 10993-1:2018 (ANSI/AAMI/ISO 10993-1:2018). However, some of these biological tests (e.g., systemic toxicity studies) have long lead times and are costly, which may hinder the release of new medical devices. In recent years, an alternative method using a risk-based approach for evaluating the toxicity (or biocompatibility) profile of chemicals and materials used in medical devices has become more mainstream. This approach is used as a complement to or substitute for traditional testing methods (e.g., systemic toxicity endpoints). Regardless of the approach, the one test still used routinely in initial screening is the cytotoxicity test, which is based on an in vitro cell culture system to evaluate potential biocompatibility effects of the final finished form of a medical device. However, it is known that this sensitive test is not always compatible with specific materials and can lead to failing cytotoxicity scores and an incorrect assumption of potential biological or toxicological adverse effects. This article discusses the common culprits of in vitro cytotoxicity failures, describes the regulatory-approved methodology for cytotoxicity testing, and outlines the use of toxicological risk assessment to address the clinical relevance of cytotoxicity failures for medical devices. Further, discrepancies among results from in vitro tests, the use of published half-maximal inhibitory concentration data, and the relationship of these data to tolerable exposure limits, reference doses, or no observed adverse effect levels are highlighted to demonstrate that although cytotoxicity tests in general are regarded as useful, sensitive screening assays, specific medical device materials are not compatible with these cellular/in vitro systems. For these cases, the results should be analyzed using more clinically relevant approaches (e.g., through chemical analysis or written risk assessment).
Medical devices are engineered to be of durable construction and to accommodate the functionality needed for proper device application. The biocompatibility of the materials, as well as their processing, is also important to ensure that patients are not negatively affected by the devices when they enter the clinical setting. Certain materials of construction used for medical devices (and manufacturing processes or processing aids) may contain chemicals that can lead to failing cytotoxicity scores when traditional, regulatory-mandated methodologies are used. Examples of common materials include plastics (e.g., polyethylene or polypropylene [co]polymers, polyvinyl chloride [PVC]) and metals (e.g., nitinol, copper [Cu]-containing alloys). Although these metals/alloys and plastics provide stable and reliable performance, they may evoke undesired cytotoxic effects. These effects might be observed as reduced cellular activity or cell decay in the in vitro assay, especially when standard methods and test parameters (e.g., extraction ratios) are used.1,2
To prevent adverse effects (e.g., toxicity or other biocompatibility-related issues) from occurring among patients and clinical end users, manufacturers are required to perform biocompatibility evaluations per guidance provided in, for example, ANSI/AAMI/ISO 10993-1:2018.3 This standard provides an overall framework for the biological evaluation, emphasizing a risk-based approach, as well as general guidance on relevant tests for specific types of contact with patients or users. Of note, traditional biocompatibility tests, within the battery of both in vivo and in vitro methods, can take up to 6 months (or years, in the case of long-term systemic toxicity testing). Lengthy turnaround times stem from in vivo test methods, which are performed on animal models and include irritation, sensitization, systemic toxicity, genotoxicity, and carcinogenicity studies. Traditional in vitro tests involve exposure of cells or cellular material to device extracts in order to characterize toxicity in terms of cytotoxicity, genotoxicity, cellular metabolic activity, and aspects of hemocompatibility.3
In recent years, as a complement to or a substitute for traditional testing methods, a risk-based approach using chemical and material characterization for the evaluation of patient safety has become mainstream. The framework for this approach is provided in ISO 10993-18:2020.4 Moreover, the Association for the Advancement of Medical Instrumentation (AAMI) and, by extension, regulatory and standards bodies (including the Food and Drug Administration [FDA] and the International Organization for Standardization [ISO]) have driven the use of chemical and material characterization. Particularly for medical devices in long-term contact with patients (e.g., implantable devices), use of chemical and material characterization can reduce unnecessary animal testing and provide results that are scientifically sound and detailed, while being more cost and time efficient. For example, ISO 10993-1 highlights that a correctly conducted risk assessment can provide justification to exclude long-term biological testing when the nature and extent of exposure confirm that the patient is being exposed to very low levels of chemicals that are below relevant toxicological thresholds.3
Throughout the ISO 10993 series, it also is emphasized that conducting animal testing for biological risk evaluation should only be considered after all alternative courses of action (review of prior knowledge, chemical or physical characterization, in vitro evaluations, or alternative means of mitigation) have been exhausted. In addition, analytical chemistry used for chemical characterization can be used as a means for investigating possible culprits when traditional biocompatibility tests, such as cytotoxicity tests, fail, especially in cases where a known substance(s) in the material has cytotoxic potential (e.g., silver-infused wound dressing that provides antibacterial properties).
However, it should be kept in mind that although chemistry can be a powerful tool in many cases, not all medical device extracts are compatible with the analytical methods and instruments used, and these studies may not provide a full understanding of the toxicity profile of the device. In those cases, animal testing or further justification may still be needed to demonstrate a safe biocompatibility profile for the device.
Cytotoxicity testing per AAMI/ISO 10993-5:2009/(R)20145 historically has been one of the most used (and is considered the most reactive) of the biocompatibility tests6,7 and can be used efficiently to detect abnormal effects on cells that may arise if harmful chemicals are present in device extracts. However, it also is recognized that cell-based test methods do not necessarily correlate with in vivo toxicological effects and actual clinical patient safety, often showing a reaction when no clinical adverse effects are known or expected to occur. For instance, some soluble metal ions (e.g., Cu, nickel [Ni]) are known to exert toxic effects on cells in an in vitro setting; however, their presence in surgical instruments and implants has demonstrated high patient tolerance and negligible effects upon clinical use.
This article provides a brief evaluation of the clinical impact of metals and plasticizers commonly used in medical device materials that may lead to patient exposure during the use of devices, with emphasis given to those that may result in cytotoxicity failures in an in vitro setting. In addition, an approach to evaluating valid clinical risks using a toxicological risk assessment is discussed.
Cytotoxic Adverse Effects of Metals
Metals used in medical device manufacturing, such as Cu, chromium (Cr), and zinc (Zn), are endogenous and required for some enzymes to function in the human body. Typically, such metals are introduced by diet and through the use of, for example, cosmetics and medications. However, at higher doses and via other routes of exposure, these metals may become toxic, causing adverse local and systemic effects in the body.8–11 Ionic Cr, cobalt (Co), Ni, aluminum, and titanium have been shown to have mutagenic actions and are classified as possible or proven carcinogens, depending on the dose and route of exposure.12–14
Thus, the specific amount of the metal/compound to which the end user will be exposed, rather than its presence, defines whether the metal/compound is toxic on a whole-organism level (systemic toxicity). It also should be noted that specific surface treatments, such as passivation and anodization, can affect potential metal ion release from a medical device and thus can alter the potential toxicological effects. Therefore, final finished form medical devices (meaning, as they would be exposed to a patient) should be used for evaluation.15,16
The mechanism behind the cytotoxicity of these compounds lies in the ability of metal ions to oxidatively attack vital components inside cells, creating reactive free radicals (e.g., reactive oxygen or nitrogen species) that can cause adverse effects in the nucleus, proteins, and/or lipids.17,18 For instance, Cr has been shown to be a potent hydroxyl radical generator that, in turn, can cause DNA strand breaks.19 Of note, hexavalent Cr [Cr(VI)] compounds in particular are considered responsible for these effects, demonstrating genotoxic effects four times more frequently than trivalent Cr [Cr(III)] compounds.20 Regardless, in vitro analysis shows that Cr(III) also can contribute to this process.21 Moreover, research has demonstrated that some metal ions, such as Co and Ni, can directly bind to DNA polymerase, either affecting its fidelity22 or increasing the incorporation of incorrect nucleotides into the newly synthesized DNA strand.23 Reactive free radicals generated by metals also can interact directly with cellular proteins and interfere with protein folding, inducing misfolding and/or aggregation.24 Cr(VI) has been shown to enhance mRNA mistranslation through the insertion of incorrect amino acids into peptide sequences during protein synthesis, potentially resulting in misfolding and aggregation of the corresponding proteins. mRNA mistranslation appears to be a primary cause of cellular toxicity evoked by Cr.25 That being said, the cellular effects caused by ionic metals are still being elucidated.
Potential Source of Toxic Substances in Plastics: Plasticizers
Many medical devices on the market are manufactured from plastic materials (e.g., blood bags, tubing sets, syringes).26,27 Oftentimes, plasticizers (e.g., phthalates, bisphenol A) are added to plastic materials to increase flexibility and elasticity.28 Many plasticizers used in the medical device industry are phthalate esters, including bis(2-ethylhexyl) phthalate (DEHP). DEHP and its metabolite mono(2-ethylhexyl) phthalate (MEHP) have been linked to endocrine disruption.29 DEHP also is classified as a CMR 1b (presumed carcinogenic, mutagenic, or reproductive toxicant) substance,30 and its use has been restricted by European authorities through bans in certain consumer products or regulation of its presence and labeling.31 Alternatives to DEHP have been developed, including other surrogate phthalate plasticizers such as di(2-ethylhexyl) terephthalate (DEHT), trioctyl trimellitate (TOTM), and diisononyl phthalate (DINP). In addition, nonphthalate substitutes may be used. A few of the commonly used nonphthalate plasticizing compounds include 1,2-cyclohexane dicarboxylic acid diisononyl ester (DINCH), bis(2-ethylhexyl) adipate, and acetyl tributyl citrate.32 Due to the potential toxic effects of phthalates, as demonstrated in animal studies, and possibly the toxicity of alternative chemical substitutes, patient exposure during the use of medical devices containing phthalates is being rigorously assessed.33–35
Moreover, plasticizers are not incorporated into the plastic material via covalent bonding; instead, they are more loosely embedded in the plastic resin matrix. Therefore, they are more readily released and result in human exposure through direct and/or indirect contact. Common direct routes of exposure include oral and dermal contact (e.g., through devices placed inside the mouth or on skin),36 whereas indirect exposure comes through inhalation37 or contact with fluids (e.g., blood) that have contacted DEHP-containing materials. This may lead to negative health effects during the use of a medical device that incorporates these chemicals, especially among developing patient populations (e.g., neonates).
To provide an everyday example, the so-called “new car smell” often is attributed to plasticizers or their degradation products.38 When inhaled in abundance, plasticizers may have toxic adverse effects, such as affecting the respiratory, immune, and reproductive systems.39 The rate at which each plasticizer migrates from materials depends on the characteristics of both the plasticizer and the embedding resin. Migration rates of different plasticizers from PVC tubing extracted in a solution of 50% ethanol showed major differences, with TOTM released at a rate 20 times lower than that of DEHP. DEHT migrated from PVC at a rate three times lower than that of DEHP, and DINCH showed a migration percentage similar to that of DEHP.40 Depending on the nature of a device, other factors also may influence the amount of plasticizer released, such as the flow rate of an infusion through PVC in an infusion set.41
Cytotoxic Adverse Effects of Plasticizers
Once in the body, plasticizers are hydrolyzed into metabolites. The primary metabolites vary depending on the specific plasticizer; however, the most common monoester metabolites (MEHP, mono(2-ethylhexyl) terephthalate, mono(2-ethylhexyl) adipate, and mono-isononyl phthalate) have been shown to be more cytotoxic to L929 murine cells than the parent plasticizers (DEHP, DINP, and DINCH). These metabolites have been demonstrated to affect cell viability at concentrations as low as 0.01 mg/mL (10 μg/mL or 10 ppm).42 In vivo and in vitro research links DEHP or its metabolites to a range of adverse effects in the liver, reproductive tract, kidneys, lungs, and heart; in addition, developing mammals are particularly susceptible to effects on the reproductive system.43 Potential toxic effects experimentally observed in various species are believed to translate to the human endocrine system.44 Human and rodent data suggest that DEHP affects cells through multiple molecular signals, including DNA damage.45 It also is hypothesized that DEHP can cause oxidative stress and production of free radicals that can induce detrimental effects in cells through activation of peroxisome proliferator–activated receptor α. Although this response generally is considered to contribute to the carcinogenic effects of DEHP in the liver, it can cause more generalized toxicity in organs such as the ovary.44
In Vitro Methods for Analyzing Cytotoxicity
Cells have a number of quality control systems that monitor the structural integrity of the genetic material and the correct functionality of the proteome. These systems allow the cell to overcome attacks and protect cells against the harmful accumulation of defects. However, if concentrations of harmful components become too high to tolerate, the subsequent DNA damage and/or abnormal biochemical reactions caused by cytotoxic compounds, such as metal ions and plasticizers, may lead to cellular dysfunctions, decreased metabolic activity, and ultimately unscheduled cell death.
To demonstrate that a medical device does not exert harmful effects on the end user, such as those described above, rigorous testing is required prior to marketing. Based on the intended use of the medical device, the FDA and other regulatory bodies worldwide require a specific set of biological risks to be evaluated with testing (or via other means, such as written risk assessments) to provide evidence of patient safety. A cytotoxicity test is considered a primary performance criterion under international standards (e.g., ISO 10993-1), regardless of the clinical use of the device, and it is regarded as the most reactive of the battery of biological tests available.6 For instance, this test might provide valuable initial data on the medical device or its extract, a chemical, or a chemical mixture in light of its potential reactivity, which may need to be taken into consideration prior to conducting in vivo testing. A severely cytotoxic device should be cautiously evaluated, or avoided, in an animal model to reduce the risk of animal welfare concerns (e.g., risk for corrosive reaction due to high or low pH, potential for major reactions). Alternatively, other methods (e.g., in vitro methods for testing) or a written toxicological risk assessment could be considered.
Despite the myriad of methods available for analyzing cytotoxicity of medical devices, including (1) the materials they are made of and (2) the manufacturing processes they undergo, many medical device manufacturers, as well as the regulatory agencies, historically have relied on the results of a minimum essential medium (MEM) elution cytotoxicity assay. This assay and other options for testing cytotoxicity are briefly described below. Of note, ISO 10993-5 also recognizes a variety of other tests (e.g., MTT or XTT assay, neutral red uptake [NRU]),5 some of which are discussed below.
MEM Cytotoxicity Assay
The MEM elution assay involves exposing the mouse fibroblast cell line L929 to device extracts in a controlled environment and examining the cells microscopically for overall cell presence (number of cells in viewing area) and viability, focusing on cell shape and cytoplasmic structures.5 Based on the criteria set forth in ISO 10993-5, each sample is given a score (from 0 to 4) depending on the severity of the deformations brought about by the device or its extract (Figure 1).
The MEM assay is a relatively easy test to perform; however, because cell viability is analyzed using a manual microscopic visualization, the accuracy of the results is not without bias, relying heavily on the level of experience of the assessor. Regardless of the qualitative nature of the MEM elution test, it remains a staple of biocompatibility testing for most U.S.-based medical device companies.
MTT Colorimetric Assay
European regulatory bodies often opt for another common cytotoxicity method known as the MTT assay [named for the dye 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide used in the assay], which measures cellular metabolic activity and provides a colorimetric quantitative result (Figure 2).46 By removing human bias from the evaluation portion of the assessment, the MTT assay offers a nonsubjective quantitative measurement of absorbance from the colored solution produced by viable cells, where an increasing number of viable cells (or their overall metabolic activity) results in an increase in color intensity. However, the MTT assay does not provide additional information on cell shape, intracellular structures, and other properties that would help determine the level and type of toxicity exerted by the test article extracts. Despite the lack of supplementary visual data, the quantitative outcome of the test provides a valuable basis for further analysis of dose-response curves and can, more specifically, be used to define toxicity thresholds and dose curve midpoints for each substance under focus. Use of the MTT assay also allows for the calculation of the half-maximal inhibitory concentration (IC50; often referred to as toxic concentration [TC50]), which is the concentration of a substance that yields a 50% decrease in cellular metabolic activity. IC50 values often provide valuable toxicological information when evaluating the cytotoxic effects of a chemical solution in vitro.47
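To illustrate how an IC50 is typically extracted from quantitative assay output, the sketch below fits a four-parameter logistic (Hill) curve to a hypothetical set of MTT viability readings. The concentrations, viability values, and initial parameter guesses are illustrative assumptions, not data from any referenced study.

```python
# Minimal sketch: estimating an IC50 from MTT-type viability data by fitting a
# four-parameter logistic (Hill) curve. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: percent viability as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical data: test concentrations (umol/L) and percent viability
# (treated absorbance normalized to the untreated control).
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
viability = np.array([98, 95, 88, 71, 48, 22, 9], dtype=float)

# Initial guesses: 0-100% viability range, IC50 near the curve midpoint, Hill slope ~1
params, _ = curve_fit(four_pl, conc, viability, p0=[0, 100, 50, 1])
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.1f} umol/L (Hill slope {hill:.2f})")
```

Fitting the full curve, rather than reading off a single concentration, is what allows the midpoint (IC50) and slope to be reported with some confidence.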
The MTT assay also is a routine test in academic research, resulting in a number of publications that address the effects of substances on cells, including cytotoxicity.48–51 This provides a pool of reliable data for characterization without the necessity of testing each substance multiple times. The downside of the available information, however, lies in the variability in the way the test is performed, including the cells used for the analysis. The purported cell-specific response arises from the unique nature of each cell line, with its own individual biological characteristics and a specific epigenetic profile, which can result in variable dose-response parameters between cells and cell lines, creating inconsistencies in reported IC50 values.52,53 The different biological characteristics contributing to the disparity of toxicity assay results include the origin or lineage of the cells used,54 the expression and activity of proteins involved in drug resistance,55 and the activation status of different signaling pathways.56 In addition, different exposure times may yield different results; thus, validation of the methodologies used should be performed to provide more consistent results with known, acceptable variabilities. Because of this, comparisons should be made with caution and, if possible, performed with data from cell lines with similar characteristics. Further, when single end points are used to monitor cell viability, as in the MTT assay, a higher incidence of false-positive and false-negative data can occur. The use of one or two exposure concentrations does not provide the kind of quantitative information required to extrapolate the in vitro effects to a relevant in vivo reference value, where toxicity would be expected to occur.57 This especially is true for substances that do not have a linear dose-response relationship with cellular activity/viability. Therefore, if a reference value is desired, a full concentration-response curve would be required.58
It should be highlighted that other methods, such as XTT assay, NRU, or agar diffusion (often referred to as agar overlay), are discussed in ISO 10993-5.5 In general, the XTT test is similar to the MTT test, with the main difference being the dye added to the cells to measure viability. Also, NRU uses a similar setup to the MTT test, where cell viability is quantitatively measured by the amount of a weak cationic dye (neutral red) that is taken up and bound inside lysosomes of living cells.59,60
The agar diffusion test sometimes is chosen, especially for devices that come into contact with intact skin only. In this case, the agar layer that sits on top of the cells acts as a barrier and thus mimics the function of the top layer of the epidermis (the stratum corneum). Within this layer of skin, the keratinocytes that are attached together via cell-cell junctions and cytoskeletal proteins give the epidermis its mechanical strength.61 Further, the presence of highly organized lipid membranes, hydrolytic enzymes, and antimicrobial peptides provide an additional chemical barrier.61 In this test, the test article or its extract is placed on top of the agar layer, which sits on top of the cells that will be assessed for cytotoxic potential. Although this method allows specific devices to be tested directly, it is highlighted in ISO 10993-5 that some leachables may not be able to diffuse through the agar layer or may react with the agar; therefore, the use of this method should be justified.5 As a result, regulatory agencies have expressed reluctance to accept the agar diffusion assay in biocompatibility assessments for medical devices.
Table 1 summarizes the benefits and limitations of quantitative and qualitative cytotoxicity tests.
Evaluating Metals and Plasticizers in Cytotoxicity Failures
IC50 values for metals and plasticizers commonly used in the manufacturing of medical devices are shown in Table 2. To address inconsistencies in reported IC50 values for each metal and plasticizer highlighted in Table 2, most of the references selected include data obtained from human or mouse fibroblast lineage cell lines, such as human gingival fibroblasts (HGF) and mouse subcutaneous connective tissue fibroblasts (L929). L929 cells can be cultured easily and reproducibly and are widely used for preliminary cytotoxicity evaluation of a wide range of biomaterials because they proliferate readily and adhere to most biomaterial surfaces.62 Further, L929 cells are recommended for use in the cytotoxicity tests accepted by ISO 10993-55 and the FDA.
Of note, the metal ions listed in Table 2 (and their relevant IC50 values) were tested as part of a metal salt, which leads to better dissolution and thus a higher exposure to the tested cells. As various salts may have been used in the studies, a specific CAS (Chemical Abstracts Service) number was not designated for the specific metal ions in Table 2. For reference, the lower the IC50 concentration, the more toxic the tested substance would be considered.
Complications with Translating In Vitro Data to In Vivo Responses
The cytotoxicity test is well known as a useful biocompatibility test for possible cytotoxic chemicals that may migrate from medical devices (either from materials or as residuals from processing). Due to its high reactivity,63 this test often has been used as a screening assay for materials, process residuals, and the final device configuration, as well as a tool to predict the potential clinical response. However, as indicated above, because of the potential cytotoxicity that some chemical compounds and elements (e.g., metals) may exert in vitro, cytotoxicity testing can lead to results that are not applicable to the clinical use and in vivo conditions. Also of note, section 10 of ISO 10993-5 highlights the following: “Any cytotoxic effect can be of concern. However, it is primarily an indication of potential for in vivo toxicity and the device cannot necessarily be determined to be unsuitable for a given clinical application based solely on cytotoxicity data.”5 Thus, the following question becomes pivotal: If a cytotoxic response is observed in an in vitro cytotoxicity test, how should one proceed with identifying the actual clinical risks for the medical device?
As discussed above, an in vitro cell culture is an exceedingly less complex system than that found with in vivo models. A number of in vivo tests therefore have been developed to gain more understanding of the clinical relevance for assessment of possible toxicological effects for the end user. In the ISO 10993 family of standards, these include tests related to sensitization, irritation, systemic toxicity, and others depending on the duration of contact and exposure route of the device.
In general, these tests are performed by exposing the animal model to extracts of a device, its components, or materials through injection or dermal exposure. Some animal tests are conducted by exposing test animals directly to the device through a procedure such as implantation. During the study, the test animals are assessed for possible toxic adverse effects and for the concentration thresholds causing adverse effects. Thus, these tests provide a better overview of the systemic effects of a device, such as impact on vital signs, accumulation in vital organs, effect on surrounding tissues, and immunological response, among others. Several research labs and government institutions also perform animal studies using worst-case exposure conditions (e.g., inhalation, oral ingestion, penile/vaginal/rectal exposure) to evaluate and define specific tolerable exposure (TE) levels for different known or suspected toxic substances. A wealth of knowledge on different in vivo experiments can be found in the published literature for specific chemical compounds, including metals. These types of studies aim to define levels of safety on an organism level and are an important resource for defining the true potential in vivo toxicological effects arising from individual chemicals, which then can be applied to evaluate the specific chemicals extracted from a medical device.
Altogether, to better understand the clinical risks associated with a potentially toxic substance, further assessment of how it might affect patient safety in the clinical setting often is needed. Below, a framework of toxicological risk assessment (often used for the assessment of long-term contacting medical devices) is provided as background information. In addition, a discussion on how cytotoxicity results may be tied to clinically relevant exposure levels and associated risks is included.
Fundamentals of Toxicological Risk Assessment
A more standardized framework for the use of clinically relevant toxicity data, including guidance for the derivation of toxicological threshold values such as tolerable intake (TI; in mg/kg/day) and TE (in mg/day) levels, is found in, for example, ANSI/AAMI/ISO 10993-17:2002/(R)2012.64 TI and TE levels represent the maximum dose at which exposure to a substance does not produce adverse events or pose an unacceptable risk to human health. Both TI and TE are derived from experimental values shown to be without adverse effects, including the no observed adverse effect level (NOAEL) or lowest observed adverse effect level (LOAEL). TI and TE also take into account various uncertainty factors (UFs), including pharmacokinetic/toxicokinetic or metabolic differences among exposed people (UF1), extrapolation of effects between animals and people (UF2), and the quality and relevance of the experimental data (UF3).
Although ISO 10993-17 references default values for the UFs, the UFs used should be determined by a qualified toxicologist who is familiar with the product and an expert in performing toxicological risk assessments. For medical devices, the TE incorporates a utilization factor (UTF) that accounts for the variables affecting clinical exposure, such as frequency of device use or adjustments based on contact time (proportional exposure factor [PEF]) and potential exposure to similar chemicals or compounds from other sources (concomitant exposure factor [CEF]), with default recommended values of 0.2 for CEF and 1.0 for PEF. The default UTF value of 0.2 accounts for possible concomitant exposure to five medical devices in a 24-hour period.
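As a rough illustration of how these factors combine, the sketch below applies the relationships described above: a TI derived from a NOAEL divided by the uncertainty factors, and a TE obtained by scaling the TI by body mass and the utilization factor (UTF = CEF × PEF). The NOAEL, body mass, and factor values are placeholders chosen for illustration only; actual values must be selected by a qualified toxicologist from the relevant data.

```python
# Minimal sketch of the threshold derivation described above, assuming
# TI = NOAEL / (UF1 * UF2 * UF3) and TE = TI * body_mass * UTF, with
# UTF = CEF * PEF. All numeric values below are illustrative placeholders,
# not limits for any real substance.

def tolerable_intake(noael_mg_per_kg_day, uf1, uf2, uf3):
    """Tolerable intake (mg/kg/day) from a NOAEL and uncertainty factors."""
    return noael_mg_per_kg_day / (uf1 * uf2 * uf3)

def tolerable_exposure(ti_mg_per_kg_day, body_mass_kg, cef=0.2, pef=1.0):
    """Tolerable exposure (mg/day), applying the utilization factor UTF = CEF * PEF."""
    utf = cef * pef
    return ti_mg_per_kg_day * body_mass_kg * utf

# Hypothetical inputs: NOAEL of 5 mg/kg/day from a chronic animal study,
# default-style uncertainty factors of 10 each, and a 70-kg adult.
ti = tolerable_intake(5.0, uf1=10, uf2=10, uf3=10)   # -> 0.005 mg/kg/day
te = tolerable_exposure(ti, body_mass_kg=70)         # -> 0.07 mg/day
print(f"TI = {ti:.4f} mg/kg/day, TE = {te:.3f} mg/day")
```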
Valuable resources for toxicological data for deriving TI and TE levels include reference concentration values (inhalation exposures) and reference dose values from the Environmental Protection Agency (EPA) and minimal risk levels (MRLs) from the Agency for Toxic Substances and Disease Registry (ATSDR). Other sources include, but are not limited to, those provided by the European Chemicals Agency (ECHA) and World Health Organization (WHO). Reference dose values typically are calculated from NOAEL values divided by UFs and/or modifying factors.65,66 Reference doses can be derived from laboratory animal dosing studies in which a NOAEL, LOAEL, or benchmark dose (with UFs generally applied to reflect limitations of the data used) can be obtained. Therefore, a reference dose is an estimate (with uncertainty spanning approximately an order of magnitude) of a daily exposure for a chronic duration (up to a lifetime) to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a human's lifetime. ATSDR also uses the NOAEL/UF approach to derive MRLs for substances. MRLs are set below levels that, based on current information, might cause adverse health effects in the people most sensitive to such substance-induced effects. MRLs are derived for acute (1–14 days), intermediate (>14 to 364 days), and chronic (≥365 days) exposure durations and for oral and inhalation routes of exposure.
The reference daily intake (RDI; also known as recommended daily intake) is the updated term for recommended daily allowance. As noted above, a number of metals that are vital to the function of the body are introduced via oral intake, including food supplements. Thus, the defined RDIs may be used as the TI for dietary metals (e.g., calcium, phosphorus, potassium, sodium, magnesium, iron, cobalt, copper, zinc, manganese, molybdenum, selenium). RDIs are nutrient reference values developed by the Institute of Medicine (IOM). They are intended to serve as a guide for good nutrition and provide the scientific basis for the development of food guidelines in both the United States and Canada. Dietary reference intake values include the estimated average requirement, RDI, adequate intake, and tolerable upper intake level.
In general, exposure values below a published allowable limit for an essential metal would be considered safe. The FDA guidance on permitted daily exposures (PDEs) for elemental impurities in finished drug products [Q3D(R1) Elemental Impurities: Guidance for Industry]67 also can be used for metals that do not have an RDI. The FDA guidance on application of Q3D(R1) is based directly on the ICH Q3D document, which is broadly accepted globally. Limits for metals not published in either of these sources can be drawn from the current EPA Table of Regulated Drinking Water Contaminants68 or the WHO's Guidelines for Drinking-water Quality,69 as these are applicable due to their worst-case daily oral intake values.
Animal tests that are guided by good laboratory practice are expected by governmental agencies (e.g., the FDA) in medical device submissions. These animal tests, which address the biocompatibility of a device and aid in addressing patient safety risks, typically are expensive and extensive and require the use of a large number of animals. Hence the global initiative regarding animal welfare (the "3 Rs": replacement, reduction, and refinement), which also is highlighted in AAMI/ISO 10993-2:2006/(R)2014.70 Further, national and international regulatory bodies are starting to move away from animal testing toward the use of in vitro or in silico (computational toxicology) alternative methods, such as analytical chemistry (i.e., the identification of extractable and leachable chemicals released from medical devices) and QSAR (quantitative structure-activity relationship) modeling.71
IC50 versus Systemically Toxic Concentrations
Currently, sparse data exist relating cytotoxicity results from medical devices to clinically obtained blood or tissue levels for metals/compounds.72,73 Several examples of discrepancies exist among cell culture IC50 data (the concentration of a substance that causes 50% of in vitro–cultured cells to perish), which could be a result of the cell line used, the quality of the chemicals used, or the experimental conditions. This highlights the fact that the lowest reported IC50 should be considered a conservative value that may overestimate the actual cytotoxic potency of a substance. It especially should be taken into consideration, as mentioned above, that many metals released during the extraction of a medical device for cytotoxicity testing play a vital role in a number of biochemical reactions in the body. As a result, their intake is necessary, and therefore daily RDI (rather than restricted) levels apply.
For example, an oral intake of 11 mg/day of Cr(III) is the PDE per Q3D67; at the same time, 743 μmol/L of Cr(NO3)3 was found to reduce the viability of 50% of cultured cells (IC50).74 To put this into a medical device perspective, mechanically polished Co-Cr alloys (e.g., orthopedic permanent implants) have been shown to release 300 to 600 ng/cm2 of Co and less than 15 ng/cm2 of Cr during the first week of exposure when placed in physiologically relevant media.75,76 Cr(VI), on the other hand, has been associated with damage to, for example, the respiratory system, liver, and kidneys and has been recognized as a potential carcinogenic substance.77 An extensive review of the toxicological profile for Cr from the Agency for Toxic Substances and Disease Registry (ATSDR) discusses various exposure levels in relation to exposure routes and the potential adverse effects on human or animal health.78 An occupational exposure limit of 0.1 μg/m3 for Cr(VI) has been shown to be acceptable in terms of absolute excess risk (<4 per 10,000 according to the German Committee on Hazardous Substances).79 In a cell culture setup, however, Cr(VI) salts have been demonstrated to cause disturbances in cellular energy metabolism and cell cycle arrest at concentrations as low as 20 to 80 μmol/L, making Cr(VI) up to 500 times more cytotoxic than Cr(III) salts.80 Similarly, Co has been recognized (e.g., by ECHA and the International Agency for Research on Cancer) as a substance that may cause cancer, may damage fertility, and is suspected of causing genetic defects, among other effects. That being said, the dose is critical (as in any case): based on the guidance provided in Q3D, permissible daily exposures for Co range from 2.9 μg/day for inhalation to 50 μg/day for oral exposure,67 whereas the IC50 of CoCl2 in L929 cells has been shown to be close to 80 μmol/L,74 demonstrating that the amount of Co producing cytotoxic results in vitro is far below the levels shown to be safe for human systemic exposure.
As another example, Ni, though known to elicit sensitization and listed as a carcinogenic agent (CMR) in the cosmetic ingredient database CosIng Annex II (under regulation EC no. 1223/2009), is considered an essential micronutrient and often is included in the diet through Ni supplements or as a trace element in vitamins. In general, Ni occurs in the form of different salts, of which the soluble salts are considered more hazardous because they are better absorbed in the body. Based on dietary intake studies,81 the EPA has set a reference dose for Ni (soluble salts) at 0.02 mg/kg/day,83 while Q3D identifies a PDE of 220 μg/day for Ni through oral exposure and 22 μg/day for parenteral exposure.67 Yet, NiCl2 has been demonstrated to have an IC50 value as low as 106 μmol/L in a cell culture system.74 Similarly, Cu is defined by the FDA as a Class 3 element that per this categorization has relatively low toxicity by the oral route of administration (high PDEs, generally >500 μg/day) but could warrant consideration in the risk assessment for inhalation and parenteral routes.67 That being said, the IC50 value demonstrated for a Cu salt (CuCl2) can be as low as 107 μmol/L.83
Zn is another metal that has been associated with increased cytotoxic effects in an MTT assay at concentrations as low as 25.5 μmol/L,84 but Q3D indicates that it has low inherent toxicity and that, based on the FDA-recommended RDI, up to 11 mg/day (11,000 μg/day for adults and children aged ≥4 years) or 3 mg/day (3,000 μg/day for infants) can be ingested without any clinical systemic effects. The above discussion highlights that the translation between IC50 values and clinically toxic concentrations can be difficult and requires thorough data analysis to define whether a medical device might have actual toxic adverse effects on the recipient.
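Because the IC50 values above are reported as molar concentrations of metal salts while exposure limits are expressed as mass per day, any comparison first requires a unit conversion. The sketch below converts the cited salt IC50 values to approximate metal mass concentrations, assuming one mole of metal per mole of salt and standard atomic weights; translating a concentration into a daily dose would additionally require device-specific exposure assumptions that are not made here.

```python
# Minimal sketch: converting the salt IC50 values discussed above (umol/L) to the
# corresponding metal mass concentration (ug/mL). Molar masses are standard atomic
# weights; a 1:1 metal-to-salt stoichiometry is assumed for the salts cited.
METAL_MOLAR_MASS_G_PER_MOL = {"Cr": 52.0, "Co": 58.9, "Ni": 58.7, "Cu": 63.5, "Zn": 65.4}

def metal_conc_ug_per_ml(ic50_umol_per_l, metal):
    """Metal concentration (ug/mL) at a salt IC50 given in umol/L."""
    # umol/L * g/mol = ug/L; divide by 1,000 to obtain ug/mL
    return ic50_umol_per_l * METAL_MOLAR_MASS_G_PER_MOL[metal] / 1000.0

# IC50 values as cited in the text for the referenced salt studies
for metal, ic50 in [("Cr", 743), ("Co", 80), ("Ni", 106), ("Cu", 107), ("Zn", 25.5)]:
    print(f"{metal}: {ic50} umol/L ~= {metal_conc_ug_per_ml(ic50, metal):.2f} ug/mL of {metal}")
```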
The discrepancy between a cytotoxicity failure and its clinical relevance can be highlighted with an example that focuses on the importance of clinical dose, rather than the concentration of a metal ion as indicated by an IC50 value and manifested in a cytotoxicity test. For example, a nitinol staple typically has a surface area of 19 mm2, and ANSI/AAMI/ISO 10993-12:201285 stipulates that a device of this size should be extracted at a ratio of 6 cm2/mL. Practical experimental conditions require a certain extraction volume (e.g., 30 mL of MEM fluid) to allow sufficient coverage of L929 mouse fibroblast cells in the required number of replicates for the test. Therefore, 948 nitinol staples would need to be pooled in a 30-mL extraction to satisfy all testing requirements. If this experiment fails cytotoxicity, it would likely be due to leaching of Ni at a concentration on the order of the IC50 of 0.106 mmol/L, a situation that can be confirmed by subsequent inductively coupled plasma mass spectrometry (ICP/MS) analysis. If Ni were present at a concentration equal to the IC50, this would correspond to a total amount of 0.187 mg of Ni in 30 mL, or 0.2 μg/staple. Typical procedures involving stapling might have up to 20 staples applied; therefore, the clinical exposure from 20 staples would be 4 μg, which conservatively can be interpreted as 4 μg/day over time (recognizing that the true clinical exposure typically is less over time). The Q3D guidance provides an acceptable limit for chronic daily exposure to Ni through a parenteral route of 22 μg/day.67 Therefore, if the actual daily dose of Ni to which the patient is exposed from the staples is, at worst case, 4 μg/day, this is still more than five times lower than the conservative limit specified by Q3D.
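The arithmetic of this staple example can be reproduced in a few lines. The sketch below chains the stated inputs (staple surface area, extraction ratio and volume, Ni IC50, staples per procedure, and the Q3D parenteral PDE), with the molar mass of Ni (~58.7 g/mol) as the only added assumption.

```python
# Minimal sketch reproducing the staple arithmetic above, using the values stated
# in the text; only the Ni molar mass is introduced here.
import math

STAPLE_AREA_CM2 = 0.19              # 19 mm2 per staple
EXTRACTION_RATIO_CM2_PER_ML = 6     # per ISO 10993-12 for a device of this size
EXTRACT_VOLUME_ML = 30
NI_IC50_MMOL_PER_L = 0.106
NI_MOLAR_MASS_MG_PER_MMOL = 58.7    # assumed standard atomic weight of Ni
STAPLES_PER_PROCEDURE = 20
NI_PARENTERAL_PDE_UG_PER_DAY = 22   # Q3D chronic parenteral limit cited in the text

staples_pooled = math.ceil(EXTRACTION_RATIO_CM2_PER_ML * EXTRACT_VOLUME_ML / STAPLE_AREA_CM2)
total_ni_mg = NI_IC50_MMOL_PER_L * NI_MOLAR_MASS_MG_PER_MMOL * (EXTRACT_VOLUME_ML / 1000.0)
ni_per_staple_ug = total_ni_mg * 1000.0 / staples_pooled
clinical_dose_ug_per_day = ni_per_staple_ug * STAPLES_PER_PROCEDURE
margin = NI_PARENTERAL_PDE_UG_PER_DAY / clinical_dose_ug_per_day

print(f"Staples pooled for extraction: {staples_pooled}")                   # 948
print(f"Total Ni at the IC50 in 30 mL: {total_ni_mg:.3f} mg")               # ~0.187 mg
print(f"Worst-case clinical dose: {clinical_dose_ug_per_day:.1f} ug/day")   # ~4 ug/day
print(f"Margin to the Q3D parenteral PDE: ~{margin:.1f}x")                  # >5x
```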
This example demonstrates that each medical device exhibiting a cytotoxicity failure should be considered on a case-by-case basis to determine the relationship between the cytotoxicity testing (surface area normalized) and actual clinical exposure, which is not surface area normalized but rather based on a per device per day amount of extractable.
Alternative Options In Case of MEM Cytotoxicity Testing Failure
As indicated above, medical devices submitted for market clearance have to undergo rigorous testing to ensure biocompatibility and patient safety. In a number of cases, the device can fail the most commonly used MEM cytotoxicity testing due to cellular exposure to medical device extraction fluid that contains, for instance, metals or plasticizers found in or on the device. Oftentimes, for the cytotoxicity test, a large sample also may be cut into smaller pieces to accommodate the extraction process, and that cutting could expose internal parts or nonpassivated edges that have no intended contact to the end user. This exposure to the cultured cells may yield an exaggerated response in the test system. Further, cutting of devices for extraction may result in the formation of particulate matter that would not occur in the clinical setting but could have an impact on the outcome of the in vitro test. Because of these and a multitude of other reasons (e.g., use of isolated cells rather than a complex interconnected tissue sample, as present in an organism), a failing MEM test does not necessarily translate to clinically relevant toxicity to the end user.
An alternative to the MEM test, in case of metals or plasticizers, may involve the evaluation of extractable/leachable chemicals and/or materials to further assess whether patient safety is at risk. If the possible culprits of the observed cytotoxic response can be identified through existing information or failure mode analysis, then a thorough literature review highlighting the above references and acceptable levels for the specific compounds may be sufficient to mitigate the risks associated with the medical device.
For instance, in the case of a wound-healing product that incorporates silver as an antibacterial component, the device most likely will fail an MEM cytotoxicity test; however, evaluating the amount of silver present in a given device and knowing its release and absorption kinetics will help in evaluating the actual potential toxicity to the patient during the intended use of the device.
Another example is irrigation tubing used to flush surgical sites. These tubes often are manufactured from PVC and may give a cytotoxic response in the standard test system where extractions are performed with mixed polarity cell culture medium. In this case, if it can be demonstrated that irrigation fluid used with the device is only polar and no other contact to the patient is expected, the cytotoxicity study may be adjusted to account for only polar extraction conditions to demonstrate whether clinically relevant conditions have an effect on the outcome of the test.
Additional in vivo testing data on the medical device or medical device extract with passing results also can help in strengthening the risk assessment and provide supplementary supportive information that no adverse effects are expected during clinical use. However, if multiple chemicals are playing a role and the material extractables profile is not fully understood, it is recommended to obtain specific data on the subject medical device through appropriate chemical characterization and its extractable and leachable profile, especially in cases where prolonged (>24-h) contact is expected in a clinical setting. In those cases, the concentrations of the chemical compounds migrating out of the medical device upon exposure to various environmental conditions (temperature and solvents) under exaggerated or exhaustive conditions are verified through an extractable/leachable (E/L) chemical analysis (also referred to as chemical characterization) using analytical chemistry methods, such as liquid chromatography/mass spectrometry (LC/MS), gas chromatography/mass spectrometry (GC/MS), and inductively coupled plasma mass spectrometry (ICP/MS).4 Mass spectrometry is a highly sensitive technique86 that has been adopted by the medical device industry to separate and identify the trace concentrations of metals/compounds present in device extraction fluid. Based on the consequent analyses and available clinical data on each metal, plasticizer, or other potentially toxicologically significant substance, one can use scientific justification to determine whether the medical device, with its specific chemical profile (extractables/leachables), has the potential to be harmful to the end user. This process can be completed through a toxicological risk assessment performed using the framework described above.
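The final step of such an assessment is typically a comparison of the estimated worst-case daily patient dose of each reported extractable against its tolerable exposure. The sketch below shows that comparison in schematic form; the analytes, doses, and TE values are hypothetical placeholders rather than measured data or published limits.

```python
# Minimal sketch of the comparison step in a toxicological risk assessment:
# the estimated worst-case daily dose of each extractable is compared against
# its tolerable exposure (TE). All values are hypothetical placeholders.

def margin_of_safety(te_ug_per_day, dose_ug_per_day):
    """Ratio of the tolerable exposure to the estimated clinical dose (>1 supports safety)."""
    return te_ug_per_day / dose_ug_per_day

# Hypothetical E/L results: analyte -> (estimated worst-case dose ug/day, TE ug/day)
leachables = {
    "nickel": (4.0, 22.0),
    "plasticizer X": (50.0, 600.0),
    "unknown peak A": (2.0, 1.5),
}

for analyte, (dose, te) in leachables.items():
    mos = margin_of_safety(te, dose)
    flag = "acceptable" if mos >= 1 else "needs further evaluation"
    print(f"{analyte}: dose {dose} ug/day vs TE {te} ug/day -> MoS {mos:.1f} ({flag})")
```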
Conclusion
Cytotoxicity failures are highly scrutinized by the regulatory community. Medical devices can contain metals, plasticizers, or other materials that can exert a cytotoxic effect on cells because these cytotoxicity assays are highly reactive to certain chemicals. These effects, however, do not always correlate with in vivo systemic toxicity or clinical effects. In spite of this well-known limitation of the cytotoxicity assay (highlighted in ISO 10993-5),5 it is a mandatory test that has to be performed for every medical device prior to submission for market clearance.
Although a common cytotoxicity assay, such as an MEM elution assay or MTT assay, can provide valuable data about in vitro cytotoxic effects, it may overestimate the potential adverse in vivo reaction and thus does not always correspond to an accurate/valid clinical response. In those instances, analytical chemistry methods, such as chemical characterization (extractable/leachable testing), in combination with existing systemic toxicity data from scientific/government bodies (e.g., ATSDR, IRIS, ECHA) and published, peer-reviewed scientific articles, offer a framework for evaluating toxicological risk and clinical safety for the patient. Alternatively, supplementary animal testing on the final finished medical device might be helpful in investigating the clinical impact of the cytotoxicity failure, especially if dealing with devices that are not intended for long-term contact with end users and patients.
References
Author notes
Helin Räägel, PhD, is a senior biocompatibility expert at Nelson Laboratories in Salt Lake City, UT. Email: [email protected]
Audrey Turley, BS, is a senior biocompatibility expert at Nelson Laboratories in Salt Lake City, UT. Email: [email protected]
Trevor Fish, MS, is a toxicologist at Nelson Laboratories in Salt Lake City, UT. Email: [email protected]
Jeralyn Franson, MS, is an associate technical consultant at Nelson Laboratories in Salt Lake City, UT. Email: [email protected]
Thor Rollins, BS, is a director of toxicology and E&L consulting at Nelson Laboratories in Salt Lake City, UT. Email: [email protected]
Sarah Campbell, PhD, DABT, is a principal toxicologist at Nelson Laboratories in Salt Lake City, UT, and a title in the College of Pharmacy at the University of Utah, in Salt Lake City, UT. Email: [email protected]
Matthew R Jorgensen, PhD, DABT, is a chemist, materials scientist, and toxicologist at Nelson Laboratories in Salt Lake City, UT. Email: [email protected]