Comparing Photon and Charged Particle Therapy Using DNA Damage Biomarkers

Treatment modalities for cancer radiation therapy have become increasingly diversified given the growing number of facilities providing proton and carbon-ion therapy in addition to the more historically accepted photon therapy. An understanding of high-LET radiobiology is critical for optimizing charged particle radiation therapy and for understanding the resulting DNA damage response. In this review, we present a comprehensive summary and comparison of these types of therapy, monitored primarily by using DNA damage biomarkers. We focus on their relative dose distribution profiles and mechanisms of action, from the level of nucleic acids to tumor cell death.


Introduction
In addition to the more historically accepted photon therapy, a growing number of facilities are providing proton and carbon-ion (C-ion) therapy. Consequently, treatment modalities for cancer radiation therapy have become increasingly diversified. An understanding of high-linear energy transfer (LET) radiobiology is critical for optimization of charged particle radiation therapy. Although the physical aspects of charged particle radiobiology are reasonably well understood, the differences in the DNA damage response from these various radiation qualities should be further studied to fully benefit from these new therapies. In this review, we present a comprehensive summary and comparison of these 3 types of therapy monitored primarily by using DNA damage biomarkers.

Important Facts about Photon Therapy
The absorption of energy from radiation in biologic material may lead to excitation or ionization. The raising of an electron in an atom or molecule to a higher energy level without actual ejection of the electron is called excitation. If the radiation has sufficient energy to eject 1 or more orbital electrons from the atom or molecule, the process is called ionization, and that radiation is said to be ionizing radiation. The important characteristic of ionizing radiation is the localized release of large amounts of energy. The energy dissipated per ionizing event is about 33 eV, which is more than enough to break a strong chemical bond, such as a C=C bond (4.9 eV) [1].
Although modern photon radiation therapy is delivered with much safer techniques, guided by computed tomography scanners and delivered more accurately with intensity-modulated radiation therapy, the depth dose distributions of photons still deposit much more dose outside the target than those of protons [2]. Photons deposit their peak dose close to their entrance into the tissue, and thereafter there is an exponential decrease of deposited dose with increasing depth [3] (see Figure 1 for illustration).

The Superior Energy Deposition and Relative Biological Effectiveness of Charged Particles in Tissue
In tissues and cells, heavy charged particles have been shown to deposit energy along their traversed path, forming a characteristic track structure, unlike low-LET radiation, which deposits its energy homogeneously over targets [4]. The tracks consist of a cylindrical central core, where most of the energy is deposited, and an outer penumbra that receives energy from secondary electrons, or δ-rays, originating from ionization events in the track core.
The specific distribution of energy deposition in tissue by a proton beam allows precise targeting of tumors, while requiring equally precise coordinates to avoid damage to healthy tissue. As previously discussed, low-energy (ie, keV) photons deposit their energy primarily at the entry of the tissue, whereas for high-energy (ie, MeV) photons, energy deposition peaks a few centimeters inside tissue (see Figure 1). In contrast, as long as charged particles move relatively fast (>100 MeV/n), the energy deposited is relatively low; as the velocity of the particle gradually decreases with depth, more energy is deposited along the linear track. The quantity used to measure the energy deposited per unit path length by charged particles is the linear energy transfer (LET), expressed in keV/μm. Charged particles therefore start at a lower LET at the entry of the tissue but show a rapid LET increase at the end of the track, culminating in a region called the Bragg peak, where most of the particle energy is deposited in a thin slice. The initial energy of the particle beam determines the position of the Bragg peak, which is chosen to coincide with the tumor location, leading to much lower damage to the tissue in front of, as well as behind, the tumor as compared with photon therapy [5,6]. Figure 1 illustrates the difference in energy deposition as a function of depth into tissue for MeV photons, protons, and carbon ions. In these profiles, the strong advantage of charged particles, with highly localized energy deposition deep in the tissue and little build-up before the tumor, can be seen. On the other hand, the superiority of carbon therapy is not obvious from simple energy deposition alone, and one must invoke the concept of relative biological effectiveness (RBE) and how heavier ions have higher RBE for DNA damage and cell death than proton radiation, as discussed below.
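The dependence of Bragg peak depth on initial beam energy can be approximated with the empirical Bragg-Kleeman rule, R = αE^p. The sketch below is purely illustrative; the fit constants used (α ≈ 0.0022 cm/MeV^p and p ≈ 1.77 for protons in water) are commonly quoted approximations, not clinical planning values.

```python
# Toy illustration of the Bragg-Kleeman range-energy rule for protons in
# water: R = alpha * E**p. The constants below are commonly quoted
# approximate fit values, not clinical data.

ALPHA = 0.0022  # cm / MeV**p, empirical fit constant
P = 1.77        # dimensionless exponent

def proton_range_cm(energy_mev: float) -> float:
    """Approximate depth of the Bragg peak in water, in cm."""
    return ALPHA * energy_mev ** P

def energy_for_depth_mev(depth_cm: float) -> float:
    """Invert the rule: beam energy needed to place the peak at depth_cm."""
    return (depth_cm / ALPHA) ** (1.0 / P)

if __name__ == "__main__":
    for e in (70, 150, 230):  # typical clinical proton energies, MeV
        print(f"{e} MeV -> Bragg peak near {proton_range_cm(e):.1f} cm")
    # Beam energy required to reach a tumor 15 cm deep:
    print(f"15 cm depth needs ~{energy_for_depth_mev(15.0):.0f} MeV")
```

This reproduces the qualitative behavior described above: a 230-MeV beam reaches roughly 33 cm in water, while 70 MeV stops near 4 cm, so the peak position is set entirely by the initial energy.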
Owing to their complex 3D structure, tumors typically require targeting by multiple particle beams with partially overlapping Bragg peaks, which combine to form a spread-out Bragg peak (SOBP) that encompasses the entire tumor [7], as illustrated in Figure 1. Naturally, the strict requirement to match the SOBP with the tumor location poses the danger that an imprecisely targeted proton beam will damage neighboring healthy tissue: for example, injuring the gastrointestinal tract while treating prostate cancer. The most common method to improve tumor targeting is intensity-modulated proton therapy, which combines a narrow beam of varied intensity with precise adjustment of its location by using a magnetic field [8].
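The composition of an SOBP from weighted pristine peaks can be sketched numerically. The peak shapes and weights below are invented for illustration only (real treatment planning uses measured depth-dose curves and optimized beam weights); the point is that deeper peaks carry larger weights so the summed dose is roughly flat across the target.

```python
import math

def pristine_peak(depth, peak_depth, width=0.4):
    """Toy pristine Bragg curve: a low, flat entrance dose plus a narrow
    Gaussian peak, with a sharp distal cutoff. Not a physical model --
    the shape is invented for illustration."""
    entrance = 0.3  # flat 'plateau' dose proximal to the peak
    peak = math.exp(-((depth - peak_depth) ** 2) / (2 * width ** 2))
    return (entrance + peak) if depth <= peak_depth + 2 * width else 0.0

def sobp(depth, peak_depths, weights):
    """Spread-out Bragg peak: weighted sum of shifted pristine peaks."""
    return sum(w * pristine_peak(depth, d) for d, w in zip(peak_depths, weights))

# Pull the pristine peak back in 1-cm steps; deeper peaks get larger
# weights so the summed dose across the target (10-14 cm) is roughly flat.
peaks = [10.0, 11.0, 12.0, 13.0, 14.0]
weights = [0.25, 0.3, 0.4, 0.55, 0.77]  # hand-tuned, illustrative only

for d in range(0, 17, 2):
    bar = "#" * int(20 * sobp(d, peaks, weights))
    print(f"{d:5.1f} cm {bar}")
```

The printed profile shows the key features discussed above: a reduced entrance dose, a broad flat region spanning the target, and essentially zero dose beyond the distal edge of the deepest peak.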
Finally, the fact that the damage to healthy tissue is significantly lower than in photon therapy permits dose escalation and hypofractionation (fewer, but higher-dose irradiations for the same total dose received by the tumor), which may lead to a greater therapeutic advantage [7] and potentially tumor reoxygenation for better cell killing by increasing blood vessel leakage [9]. However, using higher individual doses may also change the long-term risk of increased damage to healthy tissue [5].

The Rise of Proton and Charged Particle Therapy
The limitations of the energy deposition profiles of photons, combined with the growing technologic advances in delivering particle beams precisely, have led the way for the growing adoption of particle-based therapy for cancer treatment over the last 3 decades [10,11]. This adoption started in 1975 with the pioneering work of scientists at the Lawrence Berkeley National Laboratory using the Bevalac, a weak-focusing proton synchrotron combined with the SuperHILAC linear accelerator as an injector for heavy ions, to deliver charged-particle radiation therapy, including protons and C-ions, to patients. Even though the Bevalac is now decommissioned, this work has led to the opening of the 11 centers in the world currently treating patients with C-ions (https://www.ptcog.ch/index.php/facilities-in-operation) [12,13]. In Japan, the National Institute of Radiological Sciences began conducting clinical trials on carbon-ion radiation therapy (CIRT) as early as 1994. Globally, to date more than 96 000 patients have been treated with particle beams, and about 15 000 patients have undergone CIRT [14][15][16].

Understanding the Link Between DNA Damage and Cell Death in the Context of Radiation
To compare and adjust the treatment profiles of different types of radiation therapy, the impact of radiation therapy on the tumor is represented by its RBE relative to photon therapy.
Given the complexity of the response of tissues to charged particles, biological effectiveness must be included when preparing the treatment plan of heavy ion radiation therapy, to minimize damage to normal tissue while maximizing death of cancerous cells. One promising computer model is the Local Effect Model developed at GSI, Darmstadt, Germany [17,18]. The Local Effect Model assumes that DNA damage and cell death can be predicted solely from the amount of energy deposited in a small subnuclear volume, independently of the radiation quality [17]. With such an approach, one can predict survival curves for any particle based on biological parameters derived from x-ray data alone. It is therefore a much more attractive model than using empirical survival curves to predict the response of both tumor and healthy tissue. For proton therapy, an RBE value of 1.1 is typically used in clinical situations to calculate the equivalent biologic dose for proton therapy relative to photon therapy [19]. However, the LET increases drastically in the last few microns of the particle track [20], which has been shown to translate into a higher RBE both in vitro and in vivo [21][22][23][24][25][26], and there is increased concern that the RBE of 1.1 is an oversimplification, which may have clinical implications [26][27][28]. RBE greater than unity is thought to reflect the fact that DNA lesions induced by charged particles are more complex [29]. However, a unified formalism able to model DNA complexity in a way that can predict cell survival for various radiation qualities remains to be accepted [30]. Therefore, RBE is often measured for each cell, particle, and energy of interest. Survival curves are typically fitted with a decreasing exponential with a linear-quadratic dose dependence, introduced by Douglas and Fowler [31], for both the reference radiation (x-rays) and the particle of interest.
The alpha-beta ratio is typically thought to represent the intrinsic radiosensitivity of the cell type, with alpha being the linear term and beta the quadratic term. RBE is then calculated as the ratio of the reference dose to the dose of the test radiation necessary to produce the same level of cell survival, often 10% survival. A recent review suggests that tissues with a high alpha-beta ratio, such as the brain, may be disproportionately sensitive to proton therapy and require an RBE adjustment along the particle track as a function of LET and alpha-beta tissue properties to limit the damage done to healthy tissue [32][33][34]. Finally, most RBE studies have been done in vitro, since they require a relatively high-throughput process, and the RBE in vivo may be further confounded by off-target systemic effects [35].
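Under the linear-quadratic model, survival is S = exp(-(αD + βD²)), so the dose producing a given survival level follows from the quadratic formula, and RBE is the ratio of the two isoeffective doses. A minimal sketch, using invented α and β values (a larger α for the ion beam, as typically reported for high-LET radiation):

```python
import math

def dose_for_survival(alpha: float, beta: float, survival: float) -> float:
    """Solve alpha*D + beta*D**2 = -ln(S) for D (positive root)."""
    target = -math.log(survival)
    if beta == 0:
        return target / alpha
    # Quadratic formula for beta*D**2 + alpha*D - target = 0
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * target)) / (2 * beta)

def rbe(alpha_ref, beta_ref, alpha_test, beta_test, survival=0.1):
    """RBE = reference dose / test-radiation dose at equal survival."""
    d_ref = dose_for_survival(alpha_ref, beta_ref, survival)
    d_test = dose_for_survival(alpha_test, beta_test, survival)
    return d_ref / d_test

# Illustrative (invented) LQ parameters: x-ray reference vs a hypothetical
# high-LET ion beam with a larger linear (alpha) component.
alpha_x, beta_x = 0.2, 0.05      # Gy^-1, Gy^-2 -- x-ray reference
alpha_ion, beta_ion = 0.6, 0.05  # Gy^-1, Gy^-2 -- hypothetical ion beam

d10_x = dose_for_survival(alpha_x, beta_x, 0.1)
print(f"x-ray dose for 10% survival: {d10_x:.2f} Gy")
print(f"RBE at 10% survival: {rbe(alpha_x, beta_x, alpha_ion, beta_ion):.2f}")
```

With these made-up parameters the ion beam needs a smaller dose to reach 10% survival, giving an RBE above unity, which mirrors the in vitro procedure described in the text.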
The accuracy of the Local Effect Model mentioned above [17] suggests that the spatial distribution of radiation-induced DNA double-strand breaks (DSBs) in the nucleus is the critical factor for predicting cell death. Therefore, characterizing such spatial distributions in tumor cells becomes a critical endeavor to feed computer models for better treatment planning. The level of DSBs can be experimentally quantified by detecting small nuclear domains referred to as radiation-induced foci (RIF), which are formed at the site of DSBs by proteins involved in DNA repair. For example, RIF formed by either 53BP1 (tumor suppressor p53 binding protein 1) or by the phosphorylated form of H2AX (γH2AX) have been studied after proton radiation [36][37][38]. Figure 2A through 2C illustrates the detection of the Bragg peak using γH2AX as a marker of DSBs, showing its usefulness for characterizing radiation therapy. The ensuing damage from δ-rays generated by charged particles is simpler and thought to be repaired efficiently, while the more complex damage in the core needs activation of several repair pathways working in concert [13]. Oike et al [39] computed the LET distribution in the treatment of tumor cells and compared it to the 53BP1 focus size distribution in fresh biopsy samples of uterine cervical cancer, with RIF showing bimodal peaks at the center of the SOBP, as predicted for LET. Presumably, the low-LET peak led to small foci similar in size to the ones induced by x-rays (ie, simple DSBs). In contrast, the higher-LET peaks presumably induced larger RIF, visible only in the irradiated tumor as the larger peak. These data are evidence that C-ion-induced complex lesions can be detected in clinical settings by using the 53BP1 RIF size distribution as a surrogate marker of DSB complexity and probably cell death [39].
To better understand the role of DNA damage in cell death, our group characterized the spatial properties of RIF induced by particles of increasing LET, and we modified the Local Effect Model to introduce the concept of RIF coalescence as a new mechanism to predict cell death for any LET [40]. In the RIF coalescence model, DSBs move into a single repair unit, characterized by a large RIF, before the repair machinery kicks in. This model is contrary to the classic "contact-first" model, where DSBs are assumed to be immobile and repaired at the lesion site [41,42]. The RIF coalescence model, first suggested by Aten et al [43] and recently confirmed by our laboratory by using time-lapse fluorescent microscopy of 53BP1 fused to GFP in response to high doses of x-rays [44], has the advantage of being much more efficient molecularly. In this model, the spatial distribution of DSBs becomes a critical factor influencing DNA repair efficiency. For example, high-LET radiation generates several close-by DSBs along its linear track, significantly increasing the probability of their coalescing into the same RIF, which can lead to increased chromosomal rearrangement [45] or DNA misrepair in general. Evidence of RIF coalescence is shown in Figure 2D and 2E, where one can see that the number of RIF/μm reaches a plateau at about 200 keV/μm. Above such LET, the linear density of DSBs keeps increasing with LET (as indicated by the linear increase in the intensity of 53BP1, Figure 2F), while the number of RIF only increases modestly, and thus the average number of DSBs per RIF increases [46,47]. Meanwhile, low doses of low-LET radiation, such as x-rays, generate DSBs randomly in the nucleus and do not cause RIF coalescence. We thus propose that RIF coalescence is an alternative explanation for DSB complexity, which can lead to hypersensitivity to high-LET radiation for patients or cell types with strong clustering properties.
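The saturation behavior just described can be captured by a toy model: DSB linear density grows in proportion to LET while RIF density saturates, so the number of DSBs packed into each RIF grows at high LET. Every constant below is invented purely to reproduce the qualitative shape, not fitted to the data in Figure 2.

```python
def dsb_per_um(let_kev_um: float, k: float = 0.02) -> float:
    """Toy model: DSB linear density proportional to LET (invented slope)."""
    return k * let_kev_um

def rif_per_um(let_kev_um: float, rif_max: float = 2.0,
               let_half: float = 100.0) -> float:
    """Toy saturating RIF density that plateaus at high LET (invented shape)."""
    return rif_max * let_kev_um / (let_kev_um + let_half)

# At low LET, roughly one DSB per RIF; at high LET, RIF density saturates
# while DSB density keeps climbing, so DSBs per RIF increases.
for let in (10, 50, 200, 500):
    dsb, rif = dsb_per_um(let), rif_per_um(let)
    print(f"LET {let:4d} keV/um: {dsb:5.1f} DSB/um, {rif:4.2f} RIF/um, "
          f"{dsb / rif:5.1f} DSB per RIF")
```

The monotonic rise of the DSB-per-RIF ratio with LET is the mechanism the coalescence model proposes for high-LET hypersensitivity: more breaks funneled into each repair unit.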
With this model, one can explain why mutation, cell death, and cancer are much more prevalent following exposure to charged particles, as the damaged cells are overwhelmed by the number of DSBs to repair within single repair units. This model is a paradigm shift in which increased spatial proximity of DSBs is an alternative explanation for hypersensitivity to high LET, in contrast to the classical view of DNA damage complexity as the primary mechanism [48][49][50], which we will describe next.

DNA Repair Pathways and Radiation Quality
In contrast to RIF coalescence, which occurs over several micrometers in the nucleus, at the DNA scale the damage induced by high-LET irradiation is also much different from that of x-rays [51,52] and is characterized by multiple bases being affected within a few turns of the DNA helix. These are referred to as clustered DNA damage or multiple damage sites [53][54][55] and are considered much more complex to repair than simple DNA lesions. These complex lesions can be observed as DSBs, single-strand breaks (SSBs), and oxidative clustered DNA lesions, such as abasic sites and/or base damages (oxidized purines or pyrimidines) [56,57], and occur within 20- to 30-bp regions measuring 10 to 20 nm [58,59]. Immunofluorescent imaging using DNA repair markers such as γH2AX and 53BP1, XRCC1, and hOGG1 foci, surrogate markers for DSBs, SSBs, and base damages, respectively, has been used to detect tracks and the sites of clustered DNA damage [49].
The impact of these complex lesions has been proposed as another explanation for the higher biological effects of charged particles. For example, the biological impacts of C-ions and x-rays were compared by Zhao et al [60], whereby three human tumor cell lines showed increased survival for x-rays compared with C-ions, while neutral comet assays showed substantially higher amounts of residual DSBs at 0.5 and 4 hours after C-ion irradiation than after x-rays. Strong induction of γH2AX after 30 minutes was observed in all cell lines and persisted for 24 hours after C-ion exposure [60]. For human pancreatic cancer stemlike cells, colony and spheroid formation and tumorigenicity were higher than for noncancer stemlike cells, and a steeper decrease in surviving fraction with increasing dose was observed with C-ions than with x-rays [61]. At the genetic level, decondensation of heterochromatin, along with twisting of the tracks proximal to the heterochromatic regions, has been observed after C-ion irradiation [62]. These clustered double-stranded lesions pose substantial challenges to the DNA repair processes [63]. To prevent genomic instability from unrepaired DSBs, or cell death (by mitotic catastrophe, apoptosis, or senescence), multiple DNA repair pathways are activated in response to clustered DNA damage. Three critical pathways, namely homologous recombination (HR), nonhomologous end joining (NHEJ), and alternative end joining, are usually used to repair DSBs [64]. Studies suggest that DSBs induced by high-LET irradiation might be preferentially repaired by HR [58,65], by recruiting DNA strand invasion proteins such as Rad51 [67]. One interpretation of these results is the inability of the Ku heterodimer to bind to the short DNA fragments induced by clustered damage, prohibiting an essential step in the NHEJ pathway [59].
There are also reports suggesting that HR might be the preferential repair pathway for proton irradiation of human cancer cells, even with low-LET proton irradiation [37,67]. On the other hand, data from Gerelchuluun et al [65] point to the NHEJ pathway playing an important role in repairing DSBs induced by both clinical proton and C-ion beams. Recent reports have also indicated that NHEJ might influence DNA repair following C-ion damage [66,68]. Shibata et al [69] provide an interesting report examining the factors that determine the speed of DNA repair by NHEJ and HR in cells in G2 phase. With chromatin complexity as one factor, the group demonstrated that DNA lesion complexity was a second factor influencing pathway choice for the complex DSBs induced by high-LET radiation. Their results show that, while the first attempt to repair DSBs in G2 is made by NHEJ, there is a competing backup NHEJ pathway that occurs in the absence of Ku but requires the DNA-dependent protein kinase catalytic subunit (DNA-PKcs). Their model depicts that, following a lesion, quick binding of DNA-PK recruits NHEJ as the first repair pathway. If the next steps can progress without hindrance, NHEJ rapidly repairs the DSBs. If, on the other hand, rejoining cannot occur owing to lesion or chromatin complexity, DSB end resection occurs and the HR pathway is chosen. Again, if resection cannot occur, NHEJ takes over and efficiently repairs the DSBs, provided this happens at an earlier stage, before committing to HR.
The response to clustered DNA damage is also affected by a transition from ataxia-telangiectasia mutated (ATM) to ATM and Rad3-related (ATR) signaling at lesion sites and a switch from NHEJ to HR [70,71]. Yet another mechanism of cell death induced by CIRT is mitotic catastrophe, ensuing from atypical mitosis enabled by the lack of a G2 checkpoint [49,72]. Cellular senescence, as indicated by high expression of senescence-associated β-galactosidase (SA-β-Gal) and low levels of Ki-67, was also found to be induced in human melanoma cells by high-LET radiation, such as 80 keV/μm C-ions, as compared with x-rays. Five days post irradiation, persistent levels of ATM kinase activation and 53BP1 expression were noted with C-ions as compared with x-rays. 53BP1 was found to colocalize with XRCC1 for both x-rays and C-ions, signifying formation of SSB and DSB complexes [73].
The initial RIF number per Gy has been found to be comparable between photon and proton irradiation, and independent of LET, in cells irradiated with protons in therapeutic beams with relatively low LET [36,37], whereas studies focusing on higher LET values, both with protons and with C-ions, have demonstrated increasing RIF numbers with higher LET [74,75]. The residual amount of DNA damage after 24 to 26 hours, presumably reflecting unrepaired DSBs, has been shown to be LET dependent in more studies, including in the clinically relevant LET range for proton irradiation [36,38,75,76] (Figure 3). For protons in therapeutic beams, this points toward a difference in the repair, rather than the induction, of DNA damage compared with photon irradiation. The increased number of residual foci could be due to a preference for HR as the repair mechanism, as HR is slower than NHEJ [37].

Other Caveats about Proton and Particle Therapy
In addition to direct DNA damage, exposure to protons generates reactive oxygen species (ROS), which further damage the DNA and alter normal functions of the cell directly in the proton path as well as in neighboring cells [6]. However, ROS production is stimulated by tissue oxygen content, which is frequently reduced in the tumor environment, suggesting tumor oxygenation as a method to improve the effectiveness of proton therapy. On the other hand, excessive ROS production is also likely to cause bystander damage to cells that are not directly targeted by the protons. Thus, while proton therapy is particularly efficient at causing tumor cell apoptosis [77], it may also be the mechanism behind increased apoptosis as well as necrosis of surrounding tissue [78].
Other tissue-level effects of proton radiation therapy include altered angiogenesis and inflammation. Interestingly, while photon therapy has been shown to increase angiogenesis, proton therapy has the opposite effect of inhibiting it [79]. Lower angiogenesis is therapeutically beneficial: it limits tumor survival by reducing nutrient availability. In contrast, the research on inflammatory changes due to proton therapy is comparatively scarce and conflicting: protons have been indicated to increase both traditionally inflammatory and anti-inflammatory cytokines. Ultimately, the relative importance of inflammation in therapeutic outcomes may be determined by the tissue receiving the irradiation and the possibility of it causing systemic effects, which could lead to impaired responses to infection or cognitive damage [79][80][81].

Future Considerations
In spite of the aforementioned benefits and the excellent outcomes presented in clinical data, CIRT is still limited to about 11 centers globally. The primary hindrance in establishing CIRT centers is the estimated high cost of initial investment, maintenance, and treatment to patients. To better compare the experimental, preclinical, and clinical data from CIRT with conventional photon and proton therapy, thorough investigations should be conducted in beam line and SOBP characterization; dose dynamics, including understanding of absorbed dose, cellular damage, and repair kinetics; and well-designed clinical studies. A special emphasis on studies better characterizing DNA repair following high LET, including the coalescence of DSBs into single units of repair, will be essential for a future model to accurately predict the biological response of the tumor while preserving healthy tissue from potential long-term effects currently unseen with photon therapy. The increased availability of CIRT to the patient population and clinicians in North America will highly depend on such studies.

Figure 3. Residual numbers of γH2AX foci formed per cell (24 hours relative to the initial number of foci at 0.5 hours after irradiation) in OE21 and KYSE450 cells irradiated with an 8-Gy proton beam (235 MeV). Data are extracted from the study of Hojo et al. Error bars represent the combined relative standard error of the mean of the original data. Abbreviations: γH2AX, phosphorylated H2AX; LET, linear energy transfer.

ADDITIONAL INFORMATION AND DECLARATIONS
Conflicts of Interest: The authors have no conflicts of interest to disclose.