Abstract
Monte Carlo simulations are increasingly considered the most accurate tool for calculating particle interactions with tissue. This contribution reviews the basics of Monte Carlo methods and their emerging role in several areas of macroscopic simulation in the rapidly growing worldwide field of carbon ion therapy, spanning from dosimetric calculations to imaging of secondary radiation.
Introduction
History and Scope of the Monte Carlo Method in Particle Therapy
The Monte Carlo (MC) method is a technique for solving mathematical problems using random sampling, either those that are fully deterministic (eg, estimation of an integral) or those of a stochastic nature (eg, particle scattering). Although variants of MC methods can be traced back as far back as the 18th century, the fundamentals of the modern MC technique for solving stochastic problems were developed in the 1930s and 1940s. Enrico Fermi in 1932 and Stanislaw Ulam together with John von Neumann in 1947 used the method to calculate neutron diffusion and multiplication [1].
With the development and wide use of modern computers, MC statistical methods have become promising and powerful computational tools for solving complex problems, including particle transport and interaction in matter. The capability of three-dimensional transport and faithful representation of the physical interactions undergone by a particle beam and the resulting secondary products make MC well suited for computational applications in radiation therapy. Despite extensive computational demands, nowadays MC tools are considered the “gold standard” not only for dose calculations and incorporation of biological effects but also for reliable prediction of emitted secondary radiation, which is of utmost importance in emerging areas of research, such as in vivo treatment verification [2, 3] and risk estimation of secondary cancer induction [4].
Brief Introduction to Monte Carlo Simulation
The MC method becomes a simulation of a stochastic process when the probability density function (PDF) of the pseudorandom number sequence used in the MC method is the same as, or approaches, the PDF describing the stochastic process itself. Most of the stochastic processes that are interesting to simulate using MC do not follow uniform PDFs. This fact generates the need to transform the initial uniform PDF (obtained from a random number generator) into the statistical distributions relevant to the problem, for example, energy spectra, angular distributions, and reaction cross sections, for simulation of particle transport and interactions. Different techniques have been developed for enabling random sampling from distributions other than the uniform PDF. One of the oldest and simplest is inverse transform sampling, also called the golden rule of sampling. The following are the basic steps to sample from a PDF f:
1. Normalize the PDF f and calculate the cumulative distribution function (CDF) F, where F(x) = ∫_{-∞}^{x} f(t) dt
2. Invert the CDF F to obtain x = F^{-1}(y)
3. Sample y from a uniform distribution on [0, 1] and calculate x from step 2; x will then be distributed according to the initial PDF f
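As a concrete illustration, the following minimal Python sketch applies these three steps to the exponential path-length distribution that reappears later in particle transport (the function name and parameterization are illustrative, not taken from any particular MC code):

```python
import math
import random

def sample_exponential(mean_free_path: float) -> float:
    """Inverse transform sampling from an exponential PDF.

    For f(x) = (1/lam) * exp(-x/lam) with lam = mean_free_path, the CDF is
    F(x) = 1 - exp(-x/lam), so inverting F gives x = -lam * ln(1 - y).
    """
    y = random.random()                          # step 3: y ~ U[0, 1)
    return -mean_free_path * math.log(1.0 - y)   # steps 1-2 folded into F^-1
```

Because the exponential CDF inverts in closed form, a single uniform sample per interaction distance suffices.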
Despite the simplicity and elegance of the inverse transform sampling method, a large number of distributions (including the important Gaussian distribution) do not lend themselves to an easy calculation of the inverted CDF. Therefore, other methods have to be used, such as the acceptance-rejection method, the Box-Muller transform, and the Metropolis-Hastings algorithm [5].
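For the Gaussian case, for instance, the Box-Muller transform sidesteps the missing closed-form inverse CDF by mapping two independent uniform samples to two independent standard normal samples; a minimal sketch:

```python
import math
import random

def box_muller() -> tuple[float, float]:
    """Map two independent uniform samples to two independent
    standard normal samples via the Box-Muller transform."""
    u1 = 1.0 - random.random()  # (0, 1], avoids log(0)
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)
```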
The efficiency ɛ of an MC calculation, which produces a result of variance s² when running for a time T, is proportional to (s²T)⁻¹ and is independent of the number of primary particles simulated [6, 7]. Variance reduction techniques are methods to improve efficiency by reducing the variance of the result of interest without altering its expectation value and without increasing the number of primary particles or biasing the result (despite being commonly called “biasing” techniques). A few basic variance reduction techniques are particle splitting, range rejection, Russian roulette, and forced interactions, as described in detail by Rogers [8] and briefly reviewed in the following paragraphs.
Particle splitting involves replicating a number of reaction products (secondary particles) with randomized properties to avoid exact repetition of the original particle. To reduce the variance associated with the replicated particle without biasing the result, a weight is applied to each copied particle and its descendants. An example of particle splitting is described by Sommerer et al [3] for primary 16O and 12C ion beams, to replicate secondary β+ emitting nuclei used for positron emission tomography (PET) range monitoring. In the same study, an additional directional biasing toward a detector system was also applied, yielding a total gain in execution speed by a factor of 250.
In the range rejection technique, the residual range that a charged particle can travel within the material/region is calculated based on material and particle properties. If the residual range is shorter than the region dimensions the particle is locally absorbed, thus avoiding extra computational time for detailed transport.
The Russian roulette technique aims at eliminating particles of low interest created in the simulation. To retain the result of interest unbiased, the surviving particles' weight is increased accordingly.
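A minimal sketch of the corresponding weight bookkeeping (the survival probability is a freely chosen parameter here, not the interface of any specific code):

```python
import random

def russian_roulette(weight: float, survival_prob: float) -> float | None:
    """Terminate a low-interest particle with probability 1 - survival_prob.

    Survivors carry an increased weight, so the expectation value of any
    quantity scored with these weights is unchanged (unbiased estimate).
    """
    if random.random() < survival_prob:
        return weight / survival_prob  # survivor: weight boosted
    return None                        # particle eliminated
```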
In forced interaction techniques, a particular interaction of interest is enforced along the particle path in a certain region. To avoid fluence distortion, the original particle's weight, as well as that of its descendants, is reduced by an appropriate factor.
Besides variance reduction techniques, appropriate settings of particle creation and transport (eg, energy cuts) can be chosen to optimize the history time T without affecting the accuracy of the results, for example, avoiding time-consuming secondary electron transport at the micrometer scale for determining dose at the millimeter scale.
Simulation of Particle Transport and Interactions
An MC simulation of particle transport and interaction is performed according to the following 4 major steps [7]:
1. Random selection of the distance to the next interaction, based on random sampling following the exponential path length distribution; the mean free path of this distribution is defined by the total cross section for the given particle and material properties;
2. Transport of the particle along a step to the interaction location, taking into account geometry constraints and the potential presence of electromagnetic fields;
3. Random selection of the interaction type according to the partial cross sections;
4. Simulation of the selected interaction and production of secondaries, with particle properties sampled from the corresponding distributions.
These four main simulation steps are iterated until the primary ions and all their descendants are either locally absorbed or escape from the simulation geometry, in a so-called “analog” MC. However, following this approach can lead to extremely high computational times, for example, due to the more than 10⁶ Coulomb interactions per centimeter [9] undergone by ion beams in matter. Hence, macroscopic MC codes employed in particle therapy exploit a “condensed history” technique, where the cumulative effect of several “small-effect” discrete interactions is condensed into a continuous approach along a particle step. The net effect of the individual discrete interactions is simulated by sampling the necessary physics information (eg, changes of particle energy, direction, and position) from appropriate distributions of grouped single interactions, such as multiple Coulomb scattering instead of single scattering [7]. In the typically adopted “class II scheme” [10], a distinction is made between “soft” and “hard” collisions, where the former are subject to the condensed history grouping while the latter are explicitly simulated in an analog way. In certain cases, the user can control the boundary between the two categories by appropriate thresholds.
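To make the four iterated steps concrete, the following sketch runs one analog history in an infinite homogeneous medium; `total_macro_xs` and `sample_interaction` are hypothetical helper callables, and geometry handling and secondary-particle stacks are deliberately omitted:

```python
import math
import random

def transport_history(energy, total_macro_xs, sample_interaction,
                      transport_threshold=0.01):
    """One analog MC history in an infinite homogeneous medium (sketch).

    total_macro_xs(E): hypothetical callable returning the total
        macroscopic cross section Sigma(E) in 1/cm.
    sample_interaction(E): hypothetical callable that picks an interaction
        type from the partial cross sections and returns the particle
        energy after it (0 for absorption); secondaries are ignored here.
    Returns the energy deposited along the history.
    """
    deposited = 0.0
    while energy > transport_threshold:
        sigma = total_macro_xs(energy)
        # Step 1: distance to next interaction ~ exponential, mean 1/Sigma
        distance = -math.log(1.0 - random.random()) / sigma
        # Step 2: advance the particle by `distance` (boundary checks omitted)
        # Steps 3-4: select and simulate the interaction
        new_energy = sample_interaction(energy)
        deposited += energy - new_energy
        energy = new_energy
    return deposited + energy  # below threshold: locally absorbed
```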
For particle transport, additional constraints to be accounted for are geometrical boundaries, user-set maximum step length, and transport thresholds. As described previously, the transport of particles is performed in discrete steps. Unless there are other limitations, the step length of a particle is defined by the free path length sampled from the cross sections. When the particle's free path crosses a geometrical boundary, the step is terminated at the geometrical boundary. Furthermore, the user can define a maximum step length, for example, related to the scale of the geometry of the problem and the simulation time. A transport threshold is defined as the lowest energy down to which a particle will be transported. Below that threshold the particle is not tracked anymore and is locally absorbed, depositing all its energy at this location.
In terms of interactions, a package of models handles different processes in their code-specific validity interval. Such processes are typically classified according to the underlying physics as electromagnetic and hadronic/nuclear interactions. The first category models mainly interactions of electrons, positrons, photons (including optical photons), and charged hadrons. Examples of electromagnetic interactions include ionization, Bremsstrahlung, Coulomb scattering, positron annihilation, photoelectric effect, Compton scattering, and pair production [11–15]. According to the condensed history approach, some of the electromagnetic interactions are not modeled as discrete processes but rather handled via cumulative effects, such as multiple Coulomb scattering and continuous energy loss.
Hadronic/nuclear interactions concern hadron-hadron, hadron-nucleus, and nucleus-nucleus elastic and inelastic interactions, with the addition of photonuclear and lepton-nucleus reactions [13, 15–17]. For the purposes of carbon ion therapy, which means energies below a few hundred mega-electron volts per nucleon, intranuclear cascade, quantum molecular dynamics (QMD), and pre-equilibrium models coupled with appropriate nuclear de-excitation models are employed to handle hadronic interactions, typically resulting in the production of secondaries that originate a complex mixed radiation field.
In addition to the production of charged secondary particles, which are the main contributors to the physical and biological dose distribution, radioactive decay modeling is of special interest for dedicated heavy ion therapy MC applications, such as activation studies and beam range monitoring via PET. Although the exact implementation differs between MC codes, a common workflow in the most widely used MC codes in medical physics applications features generation and transport of decay radiation limited to x-rays, γ, β+, β–, and alphas [13–15, 18]. The radioactive nuclei themselves are produced either as secondaries of hadronic reactions (see previous paragraph), with a creation time possibly associated with detailed timing information of the therapeutic beam delivery, or defined by the user as a radioactive isotope source. After production of a radioactive nucleus, the simulation of the decay relies on information from external data libraries such as the Evaluated Nuclear Structure Data File (ENSDF) [19], describing nuclear half-lives, decay branching ratios, energy of the decay process, and nuclear level structures. The products of radioactive decay are then transported according to the general scheme described earlier.
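As an illustration of the decay step, the decay time of a produced radioactive nucleus can be sampled from its half-life (taken, eg, from ENSDF) by inverse transform sampling of the exponential decay law; a minimal sketch:

```python
import math
import random

def sample_decay_time(half_life: float) -> float:
    """Sample a decay time from the exponential decay law
    N(t) = N0 * exp(-t * ln2 / half_life) by inverse transform sampling:
    t = -half_life * ln(u) / ln2, with u uniform on (0, 1]."""
    u = 1.0 - random.random()  # (0, 1], avoids log(0)
    return -half_life * math.log(u) / math.log(2.0)
```

For example, with the 11C half-life of about 20.4 minutes, such sampling determines when a given β+ emitter contributes to the PET signal relative to the irradiation.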
Neutron interactions (elastic/inelastic scattering, radiative capture, and fission) and transport, although contributing only marginally to the in-field dose, are modeled in all general-purpose MC codes commonly used in particle therapy. Due to their lack of electric charge, neutrons can interact with nuclei down to very low energies and are therefore transported down to thermal energies. Owing to the several production mechanisms, computation of neutrons for ambient dose and secondary cancer risk assessment becomes even more cumbersome in heavy ion therapy than in proton therapy. Depending on their energy, modeling of neutron interactions and transport can be subdivided into two energy regimes: low energy neutrons [13–15, 18], typically below 20 MeV (a definition that includes fast neutrons according to the typical energy classification of neutrons), and neutrons of energy above this threshold. For energies above 20 MeV, the internal models of each code are used. For low energy neutrons, codes rely on nuclear data libraries [19] that provide experimental neutron cross sections for all channels.
Application of Monte Carlo Codes to Carbon Ion Therapy
The applicability of an MC code to particle therapy in general and carbon ion therapy in particular demands a reliable description not only of electromagnetic but also of nuclear processes. Furthermore, such a code should support simulation of production, interaction, and transport of a large variety of particles ranging from electrons and photons to heavier charged particles and neutrons, which can be either primary beam particles or secondary products. Although dedicated solutions might be developed for specific applications, the focus here is on full MC codes.
Currently, the full MC codes used for carbon ion therapy applications are established general-purpose codes like Geant4 [20], FLUKA [18, 21], MCNPX [14, 15], PHITS [22], and Shield-HIT [23], which were all initially developed for high-energy physics. These packages can transport a large variety of particles in a wide energy interval, typically from a few kilo-electron volts up to several tera-electron volts, except for the Shield-HIT code, which does not support electromagnetic radiation transport.
It is useful to summarize here some of the main physical and biological differences between protons and carbon ions, which pose additional challenges to MC modeling of carbon ion therapy applications. During the beam passage through matter, the primary beam is attenuated due to nuclear reactions and secondary particles are produced, resulting in the already mentioned mixed radiation field. A simple calculation can highlight the different effect of charged secondaries for protons and carbon ion beams of comparable range in water (160 MeV protons, 290 MeV/n 12C): about 20% [24] of the protons undergo nuclear reactions as they slow down. The corresponding percentage for carbon ions increases to about 40% [25]. For protons and up to the Bragg peak depth, the contribution of secondary particles to the dose is around 10% (up to 10% from secondary protons and at least an order of magnitude lower from all higher Z secondaries). For carbon ions and again up to the Bragg peak, the corresponding contribution can reach up to 20% [26]. Beyond the Bragg peak, the dose in the case of protons is negligible, while for the carbon ion beam it can reach up to 10% of the maximum dose in the peak [27] and is dominated by a fragmentation tail of lower Z secondary fragments. It is obvious that a reliable modeling of nuclear reactions and their produced secondaries is not only more challenging in carbon beams (nucleus-nucleus versus hadron-nucleus nuclear reactions, with reduced availability of experimental data for benchmarking) but also has a larger effect on the final dose calculation, which has to take into account not only physical energy deposition (as typically done for protons using a constant biological weighting factor of 1.1) but also the varying biological effectiveness of all the Z components of the complex mixed radiation field.
In Lühr et al [28], the effect of nuclear modeling uncertainties, and hence of the energy-resolved fragment spectra, on physical dose and relative biological effectiveness (RBE) was studied. For the tests, carbon spread-out Bragg peaks (SOBPs) in water were simulated with SHIELD-HIT [23]. The results were reported as relative differences between simulations with the standard settings of the code and simulations with a 20% uncertainty in the total inelastic nuclear reactions, as well as with varied Fermi breakup-related parameters. Differences in the physical dose were found to vary between 6% and 15%. The authors of that study used the local-effect model (LEM) [29] to calculate RBE. The variation in RBE was found to be less than 1.5% before the Bragg peak and around 3% in the region dominated by the fragmentation tail after the Bragg peak.
The complexity of simulating the aforementioned effects is also reflected in the simulation time required to perform calculations for carbon ions with respect to protons under similar conditions. For example, using FLUKA with the same recommended settings to compare the central processing unit (CPU) time between proton and carbon ion beams (monoenergetic beam impinging on a water target, 150 MeV protons and 290 MeV/n 12C, corresponding to about a 15-cm range), the simulation time per primary (on the order of milliseconds) is about 40 times higher in the case of carbon ions.
Monte Carlo Dose Calculation in Carbon Ion Therapy
Modeling the Beam Delivery
The first step in applying MC methods to dose calculations in particle therapy is modeling the beam delivery, typically starting from the last few meters prior to the treatment isocenter. For broad-beam delivery this requires detailed modeling of a large number of components of the beam nozzle, which are used to broaden (laterally: scatterers and/or wobbling magnets; longitudinally: range modulators) and shape (laterally: collimators; longitudinally: compensators) the patient-specific irradiation field, in addition to the standard beam line monitors. Conversely, scanning pencil-like beams intrinsically involves only a reduced number of non–patient-specific materials in the beam line, namely transmission beam monitors and a few optional passive devices used to broaden Bragg peaks that are too narrow (ridge/ripple filters) or to degrade the beam energy (bolus/range shifter).
As in conventional therapy [7], for broad ion beam delivery the MC nozzle description requires disclosure of technical drawings and detailed information on the geometry and elemental composition of the several components by the vendors. Moreover, the MC geometry builder must be capable of handling complex shapes, often requiring dedicated solutions beyond regular predefined shapes. A part of the nozzle setup is fixed and patient independent for the different beam line options (eg, combination of beam shaping devices such as scatterers and specific modulation wheel tracks). Hence, its implementation and corresponding MC calculation can be done only once by storing a phase space file of sufficient statistics (to avoid so-called latency problems [7]) with information on particle type, position, energy, and direction at the exit of the patient-independent part of the treatment head. The other patient- and field-specific part requires coupling to the treatment planning and/or control system for complete generation of the geometrical setup. Moreover, elements like the beam modulation wheel or wobbling magnets introduce time-dependent modifications of the irradiation field, which can be accounted for by summation of separate simulations or, in a more elegant way, by dynamic update of the geometry or magnetic field settings in the same MC run, using the correlation between the known time dependence and the corresponding number of transported ions with related secondaries. Over the past years customized applications of general-purpose MC codes to the modeling of beam nozzles have been reported (eg, Bae et al [30]), and new tools are being developed to support MC users in this endeavor [31, 32], although so far priority has been given to the more widely used protons.
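The content of such a phase space file can be pictured as a stream of records like the following (an illustrative layout only; production codes use their own, typically binary, formats):

```python
from dataclasses import dataclass

@dataclass
class PhaseSpaceRecord:
    """One particle crossing the scoring plane at the exit of the
    patient-independent part of the treatment head (illustrative layout)."""
    particle_id: int   # eg, PDG code: 2212 for a proton, 1000060120 for 12C
    x_cm: float        # position in the scoring plane
    y_cm: float
    energy_mev: float  # kinetic energy
    dir_x: float       # direction cosines (unit vector)
    dir_y: float
    dir_z: float
    weight: float = 1.0  # statistical weight from variance reduction
```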
The reduced budget of transmission beam monitors permanently installed in scanning beam lines relaxes the demands on detailed geometrical modeling, especially for the less scattering carbon ions. Approximations as thin layers of experimentally determined water-equivalent thickness may be adequate in the absence of detailed vendor information, except for precise characterization of beam perturbations, especially those introduced by high Z materials like the metallic wires of position-sensitive detectors [33]. Conversely, an accurate description of beam shaping devices like ripple/ridge filters is mandatory to correctly reproduce the longitudinal modulation and position of the pristine Bragg peaks, as well as their influence on fluence variations due to Coulomb scattering and nuclear processes [33].
The scanning process, that is, the dynamic delivery of a position-dependent fluence of ions of selected energy and lateral dimension, can be simulated equivalently either with explicit modeling of the magnetic field of the scanning magnets or by using the geometrical information of the pencil-beam position at the isocenter as specified by the treatment planning system (TPS) [34]. In both cases, the MC simulation must directly handle input information from the TPS and/or beam control system for sampling field-specific properties of the several pencil beams building up the entire dose delivery. When transporting only a fraction of the planned number of ions for faster execution, random sampling must preserve the relative weight of the different pencil beams in the treatment field. Since transport in the relatively simple beam line geometry is not as time consuming as it is for passive treatment heads, saving the intermediate phase space beam information prior to entrance in the phantom or patient geometry is typically not needed.
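The weight-preserving sampling can be as simple as the following sketch, where the plan representation (spots with a particle count per pencil beam) is a hypothetical stand-in for the actual TPS/control system output:

```python
import random

def sample_spot(spots):
    """Pick one pencil-beam spot for the next primary, preserving the
    relative particle weights of the plan ('spots' is a hypothetical plan
    format: dicts with 'x', 'y', 'energy', and 'n_particles')."""
    weights = [s["n_particles"] for s in spots]
    return random.choices(spots, weights=weights, k=1)[0]
```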
Physical and Biological Dose Calculations
Prior to patient-specific dose calculations, validation and fine-tuning of the MC framework have to be performed against dosimetric reference measurements, such as depth- and lateral-dose profiles in water for pristine and extended Bragg peaks without patient-specific beam modifiers. In this way, important input parameters of the MC simulation, such as initial beam size and energy spread, the choice of physics models and/or their settings, as well as physical values like the ionization potential of water Iw, can be adjusted to bring the MC calculations into optimal agreement with the measurements for the same nominal beam energy at each specific facility [33]. Additional comparisons of cross-field fluences in air are encouraged to validate the initial beam characterization and its modeled interaction in the beam line. Further validations may include dose calculations in heterogeneous phantoms and coupling of the MC with patient-specific beam modifiers or beam scanning patterns. As with other computational engines, all experimental validations require an understanding of the detector's influence on the measured data for meaningful comparisons.
A thoroughly benchmarked MC engine can then support several applications, such as generation of TPS basic input data (eg, depth and lateral pristine dose distributions in water, fragment spectra) [33], TPS validation in geometrical phantoms, and risk assessment studies [35], reducing expensive beam time. However, the increased computational accuracy of MC is especially beneficial for detailed dosimetric calculations in the patient geometry given by x-ray computed tomography (CT) scans. This requires, first of all, the capability to handle the x-ray CT image used for treatment planning, as well as time-efficient tracking algorithms for transport in voxelized geometries. Depending on the code, translation of CT images into MC geometry is performed either via direct import of the CT data or mediated by a preprocessor. The latter step is unavoidable if the CT image consists of slices of different thicknesses and the MC code supports only voxel geometries of fixed slice thickness.
The MC voxel geometry has to be positioned with respect to the beam delivery as specified by the TPS, and different beam incident directions can be implemented either by rotating the beam line around the fixed CT geometry or vice versa. Next, the CT information must be translated into material assignment for particle interaction and transport. Analytical TPS engines traditionally rely on CT Hounsfield unit (HU) conversion into stopping power ratios relative to water for stretching the patient CT into a water-equivalent system along the beam penetration depth. The MC calculations can better account for interaction with the specific tissue properties according to the elemental composition and density estimated from empirical CT-stoichiometric calibrations [36].

To reduce the number of material definitions in the MC for faster initialization times and reduced memory load, the many thousands of gray values in the CT image can be segmented into a smaller number of HU intervals sharing the same material properties. In MC applications to conventional radiotherapy, only a coarse differentiation of about 5 tissue materials from air to bone is introduced for the HU conversion into mass or electron density. Conversely, a higher granularity of typically more than 20 materials is used in ion therapy due to the higher sensitivity of ion beam interactions to the specific tissue elemental composition. However, this finer CT to material segmentation still neglects variations of electromagnetic and nuclear interactions within the HU intervals sharing the same elemental composition and a “nominal” mass density (eg, the density at the center of each HU interval). To overcome this, special capabilities have been introduced into different codes for HU-dependent correction of the “nominal” mass density into the “real” one during particle transport (see Bauer et al [37] and references therein).

In this step, an HU-dependent adjustment of the MC stopping power calculation can also be introduced to force the MC engine to reproduce the same CT-range calibration curve (ie, stopping power ratio relative to water) as the TPS, which is specific to each facility and CT scanner/imaging protocol. This approach enables overcoming limitations of stoichiometric calibration curves (especially if based on single-energy CT scanners) and of the limited knowledge of real tissue physical parameters (eg, ionization potential) influencing the energy loss calculation. To this end, the MC engine must be carefully calibrated to reproduce the same range in water as the TPS, and the approximately energy-independent MC stopping power ratio to water must be determined for different HU values of given composition and retrieved “real” density. This way, the quotient between TPS and MC stopping power ratios yields the correction factor bringing the MC range calculation into agreement with the clinical CT-range calibration curve.
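The segmentation step can be pictured as a simple lookup from HU to a shared material with a nominal density (illustrative intervals and values only, not a clinical calibration):

```python
def lookup_material(hu, segments):
    """Map a CT Hounsfield unit to a shared material and its nominal mass
    density. Real implementations additionally apply an HU-dependent
    correction from the nominal to the 'real' density during transport."""
    for hu_min, hu_max, material, density in segments:
        if hu_min <= hu <= hu_max:
            return material, density
    raise ValueError(f"HU value {hu} outside the calibration range")

# Coarse example segmentation (illustrative values only):
SEGMENTS = [
    (-1024, -950, "air", 0.0012),
    (-949, -120, "lung", 0.26),
    (-119, 120, "soft_tissue", 1.03),
    (121, 1600, "bone", 1.60),
]
```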
Whereas particle transport should be performed on the CT grid for a more detailed characterization of physical beam interactions, dose deposition can be stored on the typically coarser TPS grid for direct comparison and improved computational efficiency [37]. Although MC enables a more realistic description of the dose delivered to the patient-specific tissue (so-called dose to medium, Dm), a conversion scheme into dose to water, Dw, is strongly recommended for TPS comparison (Figure 1 shows a comparison between dose to medium in MC, dose to water in MC, and dose to water in the TPS), as typically done for MC applications to conventional therapy [7]. In conditions of charged particle equilibrium, the conversion can simply rely on Bragg-Gray cavity theory, Dw = Dm · (S/ρ)w/m, where (S/ρ)w/m represents the spectrum-averaged unrestricted water-to-medium mass collision stopping power ratio. Due to the mixed radiation field generated by carbon ions, the calculation is best performed during run time to account for the energy- and particle-dependent relative stopping power of the primary carbon ions and all produced nuclear charged secondaries. Implementations reported so far neglected corrections for more complex perturbations of particle fluences in the different media to intentionally preserve the mixed radiation field produced in the patient anatomy [37].
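Per condensed-history step, a run-time implementation of this conversion reduces to a sketch like the following (hypothetical scoring interface; the stopping power ratio must be evaluated for the particle type and energy at the step):

```python
MEV_TO_JOULE = 1.602176634e-13

def score_dose_to_water(dose_w, voxel, edep_mev, voxel_mass_kg, sp_ratio):
    """Run-time dose-to-water scoring for one step: the energy deposited
    in the medium is converted with the particle- and energy-dependent
    water/medium mass stopping power ratio, following the Bragg-Gray
    relation Dw = Dm * (S/rho)w/m."""
    dose_medium = edep_mev * MEV_TO_JOULE / voxel_mass_kg  # Gy = J/kg
    dose_w[voxel] += dose_medium * sp_ratio
```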
Since clinical carbon ion treatments rely on biological dose calculation, the absorbed dose (to water) also has to be corrected for differences in the RBE of ions to allow for comparison with clinical trials and protocols of conventional radiotherapy. Since RBE exhibits a complex dependence on the mixed radiation field, irradiated tissue type, and biological end point, MC computational engines have to be interfaced to biological models for biological dose calculations in ion beam therapy. To date, different MC implementations of RBE-weighted biological dose have been reported for carbon ion beams using the general-purpose codes Geant4, FLUKA, and PHITS. In Kase et al [38], Geant4 was coupled to the same approach used in the dedicated TPS for scattered carbon ion therapy at the National Institute of Radiological Sciences, where the biological dose is defined as the product of the physical dose and the RBE of the radiation at 10% surviving fraction for human salivary gland tumor cells. Although accounting for the LET-dependent α and β parameters of the linear quadratic model for carbon ions only, satisfactory results could be demonstrated in water [38, 39]. The PHITS code capabilities were later extended to accommodate RBE-weighted dose calculations based on the more recent microdosimetric kinetic model [40]. Similarly, Inaniwa et al [41] used Geant4 to calculate biological dose using a modified microdosimetric kinetic model, and this method has been fully integrated into the TPS for the scanned carbon beam at the National Institute of Radiological Sciences in Japan. Finally, Mairani et al [42] coupled FLUKA to RBE-weighted biological calculations based on the LEM framework [29], which was first clinically applied in the German pilot carbon ion therapy project and is now included in the commercial TPS Syngo PT (Siemens, Munich, Germany) used at the Heidelberg Ion Beam Therapy Center and the Centro Nazionale di Adroterapia Oncologica.
To this aim, LEM-specific tables are used to derive the corresponding macroscopic linear and quadratic components α_D^Ion and β_D^Ion for each ion species. During run time, whenever energy is deposited by a certain radiation type, not only the absorbed dose-to-water contribution D_w^Ion but also the quantities α_D^Ion·D_w^Ion and (β_D^Ion)^{1/2}·D_w^Ion are scored, to enable calculation of the RBE-weighted dose in the final postprocessing after the simulation end [37, 42].
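A sketch of this scoring and postprocessing scheme is given below; the container layout and the reference photon parameters alpha_x and beta_x are illustrative, while the dose-weighted mixing of α and β follows the standard linear-quadratic treatment of mixed radiation fields:

```python
def score_biological(scores, voxel, dose_w, alpha_ion, beta_ion):
    """Run-time scoring per energy deposition: besides the absorbed dose
    to water, accumulate alpha*D and sqrt(beta)*D so the mixed-field LQ
    parameters can be formed in postprocessing [37, 42]."""
    scores["dose"][voxel] += dose_w
    scores["alpha_d"][voxel] += alpha_ion * dose_w
    scores["sqrt_beta_d"][voxel] += beta_ion ** 0.5 * dose_w

def rbe_weighted_dose(scores, voxel, alpha_x, beta_x):
    """Postprocessing: dose-averaged mixed-field alpha and beta, then the
    photon-equivalent (RBE-weighted) dose obtained by inverting the
    linear-quadratic relation alpha_x*Dx + beta_x*Dx**2 = effect."""
    d = scores["dose"][voxel]
    alpha_mix = scores["alpha_d"][voxel] / d
    beta_mix = (scores["sqrt_beta_d"][voxel] / d) ** 2
    effect = alpha_mix * d + beta_mix * d * d
    return (-alpha_x + (alpha_x ** 2 + 4.0 * beta_x * effect) ** 0.5) \
        / (2.0 * beta_x)
```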
So far, CT-based MC calculations in carbon ion therapy have been focused on recalculation of TPS-optimized RBE-weighted dose, to address potential limitations of analytical algorithms in heterogeneous patient anatomy. Interestingly, first studies suggest that the advantage of MC for the less-scattering carbon ions does not come from reduced range mixing effects in heterogeneities, as in proton therapy, but rather from the improved description of the mixed radiation field and its related biological effectiveness [37]. Nevertheless, initial studies have also shown the prospects of accurate MC calculations for direct treatment plan optimization, offering the flexibility of transporting a large variety of ions to explore new treatment concepts, for example, mixed ion irradiation [43].
Modeling Nuclear Reaction Secondaries for In Vivo Carbon Ion Range Monitoring
To fully exploit the favorable steep dose gradient of carbon ion beams, in vivo range monitoring is required. As the primary carbon ions are fully absorbed within the patient body, range monitoring has to rely mainly on secondary radiation. The products of nuclear reactions, induced mostly by the primary carbon ions interacting with target nuclei of the patient tissue, are prompt gammas, positrons, charged fragments, and neutrons.
The modeling of nuclear reactions is typically performed in the following steps: a dynamic/kinetic interaction phase, handled by intranuclear cascade or quantum molecular dynamics (QMD) models, during which excited or compound nuclei are formed; a pre-equilibrium emission phase leading the excited nucleus to an equilibrium state, during which protons, neutrons, or other light fragments (d, t, 3He, α) can be emitted; and finally a statistical emission/evaporation phase bringing the excited products of the reaction to their final ground state. Depending on the atomic number, mass number, and excitation energy of the excited products, different de-excitation processes occur. For high excitation energies, multifragmentation, fission, and Fermi breakup (for light nuclei) are the main mechanisms of de-excitation, also producing heavier fragments. Evaporation models are responsible for the de-excitation of nuclei by emission of p, n, d, t, 3He, α, and gammas. Besides those, peripheral collisions of primary carbon ions with 12C and 16O tissue nuclei produce β+ emitting target fragments like 11,10C, 15O, and 13N, in addition to an autoactivation of the beam itself (eg, 11,10C projectile fragments).
To image the beam range, the most extensively explored process relies on the eventual annihilation of the positrons from the β+ emitting nuclei, yielding back-to-back emitted 511 keV photons, which can be detected with PET. A good correlation between carbon ion range and β+ emitter distribution has already been demonstrated in patient studies using general-purpose MC codes and offline PET acquisitions [44] (Figure 2). In-depth experimental validation of PET-relevant computational models has also been performed in phantom studies, for example, in Sommerer et al [3] in the case of FLUKA.
Prompt gammas have been shown to be closely correlated with the carbon ion range in a number of experimental studies [45–47], which also served to validate the MC tools. Benchmarking of Geant4 with prompt gamma experimental data for carbon ion beams was first done in a study by Le Foulher et al [48], where large discrepancies (up to a factor of 12) were observed for both 95 MeV/n and 310 MeV/n 12C beams due to the absence of the Fermi breakup model. In the study of Agodi et al [49] for an 80 MeV/n 12C beam, including Fermi breakup, the discrepancy was reduced to a factor of 2. Similar studies have been done with FLUKA [50] and show good agreement with the aforementioned experimental data. A comparison of important observables related to carbon ion range monitoring is presented in Robert et al [51], where a discrepancy of a factor of 2 between FLUKA and Geant4 (with Fermi breakup) prompt gamma yields was identified for the simulation of a 260 MeV/n 12C beam impinging on a polymethylmethacrylate target. Subsequent optimizations of the QMD model in Geant4 [52] have led to an improved description of experimental prompt gamma yields.
The interest in accurate MC modeling of the processes that produce charged fragments is twofold. On the one hand, charged secondaries contribute substantially to the total dose distribution for carbon ion beams, as summarized earlier for both carbon ion and proton beams (see “Application of Monte Carlo Codes to Carbon Ion Therapy”). Comparisons between general-purpose MC code predictions of charged secondary distributions and experimental data are presented in Böhlen et al [53], De Napoli et al [54], and Pshenichnov et al [55]. In Böhlen et al [53], FLUKA and Geant4 are compared with experimental data for a 400 MeV/n 12C beam impinging on water. For nondifferential quantities, discrepancies of some tens of percent are reported (Figure 3). Summarizing Figure 3, FLUKA predicts the H and He fragment buildup within 10%, underestimates Li by up to 20%, and is accurate within the experimental errors for heavier fragments. Geant4 (QMD) predicts the buildup of all fragments accurately within the experimental errors, except for an underestimation of up to 30% for He. For differential quantities the discrepancies are even larger. Böhlen et al [53] give a rough estimate of the impact of the fragmentation modeling inaccuracies on the calculated dose, which is on the order of a few percent (4% to 5%). As also stated in their study, such an estimate should be interpreted with great care, as it is based on a single Bragg peak benchmark toward the higher end of clinical energies, thereby maximizing the effect of charged secondary fragments. Additional benchmarks of Geant4 on the same experimental data are presented in Pshenichnov et al [55], which showed performance comparable to the previously mentioned studies, and in De Napoli et al [54], which focused on lower energies and He fragments. It should be emphasized that only a very limited set of experimental fragmentation data, with substantial uncertainties, is currently available.
On the other hand, some of those fragments, and especially secondary protons, have sufficient energy to escape the patient, thus providing information about the range of the primary carbon ions. This was investigated by Henriquet et al [57] using Geant4, presenting a proof of principle of carbon ion range monitoring using secondary protons, which was also investigated experimentally [58].
Conclusion
Despite remaining experimental uncertainties in the knowledge of fundamental quantities, such as cross-section data in the energy range of therapeutic interest, a number of general-purpose MC codes already exhibit very satisfactory performance and have been successfully applied to support start-up and clinical routine at different carbon ion therapy facilities. Hence, MC methods promise to play an increasing role in supporting many fundamental aspects of carbon ion therapy, from accurate dosimetric calculations to detailed modeling of secondary radiation for in vivo range monitoring and, even if not covered in this contribution, secondary cancer induction. Recognizing speed as the main obstacle to clinical use of full MC dose calculations, several ongoing efforts aim to overcome it. A recent study [59] showed that for protons the dose calculation time decreases from 4 CPU hours per million particles (2.8 to 2.9 GHz Intel X5600) to 2.4 seconds per million particles (NVIDIA TESLA C2075) for a graphics processing unit (GPU) based MC code. Recent efforts on a GPU implementation of MC dose calculations for carbon ions have also been reported [60].
Owing to the continued increase in computational power and the broader exploration of acceleration methods (eg, variance reduction techniques) or GPU implementations, it is expected that the major drawback of computational time will soon be overcome, paving the way toward increasing usage of MC for macroscopic simulation of particle interactions in tissue, including the more challenging goal of inverse treatment planning.
ADDITIONAL INFORMATION AND DECLARATIONS
Conflicts of interest: The authors have no conflicts of interest to disclose.