Most treatment planning systems apply deterministic numerical algorithms for dose calculation and optimization, which are fast and reliable but show certain deficiencies when dealing with inhomogeneities. Monte Carlo simulations may improve the accuracy in these special cases. We report on a novel approach implementing an efficient Monte Carlo engine within the TRiP98 (treatment planning for particles) treatment planning system, now in use for research purposes.
To reduce the calculation time, history repetition was used and the physical interactions were restricted to the major ones: energy loss, fragmentation, attenuation, energy loss straggling, and multiple scattering. The dose calculations are performed on the computed tomography grid, whose regularity greatly reduces the complexity of the irradiation volume we have to deal with.
We implemented a Monte Carlo algorithm within the clinically tested TRiP98 treatment planning system. We performed simulations with proton and carbon ions in water to recalculate dose distributions for patients in the head and pelvic region. The relative dose difference was used to compare the results with those of the pencil beam algorithm. Significant deviations are observed for protons, and to a lesser extent for carbon ions, in the pelvic region. For dose calculation of protons we obtained calculation times of approximately 40 minutes for 10 million protons on a Pentium 4 central processing unit running at 3.4 GHz with 3 GB RAM, for a target tissue located at a depth of 30 cm.
Our efficient Monte Carlo method for dose calculation appears to be a useful tool for verification of the delivered dose distribution in the treatment region. Calculation times could make the algorithm also suitable for dose optimization purposes.
The availability of radiation therapy with ions has increased dramatically in recent years with the start of many new clinical facilities. Considerable effort has been put into the development of commercial treatment planning systems (TPSs) able to fulfill the clinical requirements of fast and accurate dose calculation. Most of them took advantage of the experience of already-existing research prototypes. The TPSs used in combination with the state-of-the-art raster scanning delivery technique are still largely based on pencil beam (PB) algorithms owing to their computational speed and sufficient accuracy in clinical routine. Nevertheless, several studies [1, 2] have shown that Monte Carlo (MC) is the method of choice if better accuracy is the goal. In vivo range verification with positron emission tomography imaging, the development of the prompt-γ method, improvement of quality assurance procedures [5–7], and the search for better parameterization, and hence improvements of the PB models [8–10], have established MC simulations as an important auxiliary tool in particle therapy. Although fast-evolving computer technology has tremendously increased the available computing power, general-purpose MC simulation programs such as Geant4 or FLUKA are still not well suited for clinical routine. The effort has therefore shifted to the development of MC-based engines customized for ion beam therapy [12, 13].
We developed an efficient MC algorithm within TRiP98 (treatment planning for particles) [14, 15]. This TPS was successfully used for treatment planning in the carbon ion pilot project at GSI for more than 400 patients and served as a prototype for the Siemens Syngo PT product (Siemens Healthcare GmbH, Erlangen, Germany) in use at some ion radiation therapy sites. TRiP98 uses a 1-dimensional transport algorithm, where the broadening of the ion beam as a function of depth is accounted for by means of a double Gaussian function. For most cases, this approximation works reasonably well. However, shortcomings are to be expected in the vicinity of inhomogeneities owing to scattering processes, whose proper description might favor MC over PB techniques.
Standard TRiP Pencil Beam Calculation
TRiP98 is designed to support active raster scanner systems. Such systems use discrete energy steps to cover the target volume in depth. In the lateral direction, with respect to the beam, magnetic deflection generates a discrete pattern of beam spots, organized in slices of equal beam energy, called isoenergy slices. It is the task of the TPS to find the optimal number of particles within each of the several tens of thousands of spots forming a treatment plan, such that the medical dose prescription is satisfied. At present, PB algorithms are the most efficient way to solve this task.
TRiP98 is organized in several modules, which are independent of one another and fulfill different requirements. Common to all modules is the use of water as target material, using the concept of water-equivalent length to account for the difference in energy loss of real physical materials. TRiP98 requirements and the implementation of the beam model, of the biological effect, and of the dose-calculating machinery are described elsewhere. Here we present only schematically the parts that served as the basis for the MC engine.
For the generation of the PB database, an analytic 1-dimensional model calculates the energy deposition of an ion beam passing through matter. The evolution of the depth-dose distribution of the beam as transported through the target material is given by the transition of the spectral energy distribution from slice to slice:
where the physical interactions such as energy loss, energy loss straggling, particle attenuation, and fragment production are described in greater detail in the work of Krämer et al. dNi/dE is the spectral energy distribution of one of the particle species i (representing an isotope Ai, Zi) moving through a slice of water of thickness Δz = zk+1 − zk placed at a depth z. Ni decreases through attenuation and increases through nuclear fragmentation of heavier ion species into species i. Of note, in this analytic model for the depth-dose distribution, the dNi/dE frequencies are not integer numbers, as they express mean values. The beam model calculation produces a set of depth-dose distribution files describing the dose deposition as a function of depth and a corresponding set of energy spectra files that comprise the energy histograms dNi/dE of all contributing species i. This database, used for clinical operation, is validated through quality assurance measurements, that is, depth-dose curves and positions of Bragg peaks.
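As an illustration of this slice-to-slice transition, the following schematic sketch propagates a spectral distribution through one water slice using only mean energy loss and exponential nuclear attenuation. The function name, the nearest-bin re-binning, and the constant attenuation length are simplifying assumptions for demonstration; the actual TRiP98 model additionally feeds removed particles into fragment spectra and includes energy loss straggling.

```python
import numpy as np

def transport_spectrum(dNdE, energies, delta_z, stopping_power, lam):
    """Schematic slice-to-slice update of a spectral energy distribution.

    dNdE           : particle counts per energy bin at depth z_k (mean
                     values, not integers)
    energies       : bin centre energies in MeV/u
    delta_z        : slice thickness in cm
    stopping_power : callable E -> dE/dz (MeV/u per cm), assumed known
    lam            : nuclear attenuation length in cm (assumed constant)
    """
    # Shift every bin down in energy by the mean energy loss over the slice.
    shifted = energies - stopping_power(energies) * delta_z
    # Attenuate the particle number by nuclear interactions; in the real
    # model the removed particles reappear as lighter fragment species.
    survival = np.exp(-delta_z / lam)
    new_dNdE = np.zeros_like(dNdE)
    for n, E in zip(dNdE, shifted):
        if E <= 0:          # particle has stopped inside the slice
            continue
        # Re-bin onto the original energy grid (nearest bin, schematic).
        j = int(np.argmin(np.abs(energies - E)))
        new_dNdE[j] += n * survival
    return new_dNdE
```

By construction the surviving particle number decreases by exactly the attenuation factor per slice, mirroring the role of attenuation in the analytic model.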
The dose calculation of TRiP in water-equivalent length is based on the assumption that the deviations remain small when water of varying density is used instead of the real tissue composition. Mainly 2 effects are approximated by this method: the change in beam composition, that is, the yield of the particle species, and the scattering of the beam. The presence of heavier target elements, such as calcium, influences the outcome of the simulation. However, as can be seen in Figure 1, the differences are small even for extreme examples. Figure 1 shows a Geant4 simulation of a carbon beam field at 270 AMeV with the left side of the beam running through spongy bone tissue of 2-cm thickness (composition and density as indicated in International Commission on Radiological Protection 1975). The deviations compared to the use of dense water are within 1% of the maximum dose.
On the other hand, the broadening of the 3-dimensional PB in the database of TRiP is calculated in water only once. For treatment plan computations, the water-equivalent depth is scaled such that it fits the real geometric depth. This results in a lateral spread that is underestimated in air and overestimated in dense tissues such as bone. Furthermore, along borders between different densities the dose distribution may differ greatly, as shown later in "Results" (Figure 5–1a, 5-1b, and 5-1c), where the TRiP98 PB is compared with our MC calculations for a 210-MeV proton beam. The dose differences at sharp edges can be as high as 40%, although typical values are approximately 10%.
Following these observations, we decided to sample the angular distribution of the beam through MC, ensuring a better description of the geometric spreading. However, in this study we kept the concept of the water-equivalent translation of the computed tomography (CT) Hounsfield unit (HU) values, because the differences compared to the real chemical composition are thought to be small even for extreme cases.
Monte Carlo Ensemble Calculation Method
Setup of the ensemble data
Rather than implementing another ab initio MC model such as FLUKA or GEANT, our approach starts with the reuse of the precalculated database (see "Standard TRiP Pencil Beam Calculation"), especially the spectral data dNi/dE described above. A so-called particle ensemble is derived from these spectral data. The ensemble comprises the 1-dimensional information of each single particle (of a species type i) along its path through the slices zk corresponding to the slices zk of the energy spectral data. A single particle of the ensemble (with index ℓ) is then characterized by the following:
its species type (iℓ),
the energy Eℓ(zk) at the depth zk,
the starting slice kℓ,start (for primary particles, kℓ,start = 1) and the stopping slice kℓ,end, where the particle normally stops or where it changes to another species by a nuclear fragmentation process, and
ℓ'(ℓ), ℓ''(ℓ), …: the links to the daughter fragment(s) produced, if any.
Figure 2B illustrates the connection between the spectral data and the corresponding ensemble. An ensemble (for a beam of primary particles with energy E0) is created by sorting the data of all spectra dN/dE(E, zk) for all species in all depths of the beam. To this end, the spectral data are rounded to integer values. The data are sorted such that each point zk of the trajectory of a particle ℓ is related to an incremental step of the bin with energy E of the corresponding spectrum (for the corresponding species i at depth zk). The goals of this sorting algorithm are as follows:
Summing up these ensemble data Eℓ(zk) for all particles in each slice zk yields an integer distribution ni(En, zk) that is equal (up to rounding errors) to the analytic probability distributions dNi/dE(En, zk); this assures that the energy spectra of the ensemble reproduce the database (e.g., the Bragg curves) used in standard TRiP (PB). En is the discrete binning value of the energy.
The 1-dimensional particle history Eℓ(zk) is physically reasonable, meaning it best reproduces the spectral data according to the rules of energy loss and energy loss straggling from slice zk to slice zk+1.
It should be mentioned that for this transition of the data structure from a set of particle energy spectra dNi/dE(En, zk) to a set of 1-dimensional particle tracks Eℓ(zk), the second criterion cannot be fulfilled exactly in all cases. However, since the balanced (summed) distributions ni(zk, En) exactly match the database, some minor inconsistency (e.g., an unlikely energy loss for some remaining particles) is not relevant for the randomized calculation of the lateral scattering processes described below.
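A minimal sketch of the sorting idea, under the simplifying assumption that particles of one species are paired between slices in descending energy order (the actual implementation enforces the energy-loss and straggling rules more carefully), could look as follows:

```python
def build_ensemble(spectra, energies):
    """Build 1D particle tracks E_l(z_k) from integer energy spectra.

    spectra  : list over depth slices; each entry is an integer histogram
               n(E) for one species (counts per energy bin)
    energies : bin centre energies
    Returns a list of tracks; track[k] is the particle's energy in slice k
    (None once the particle has stopped or fragmented).

    Sketch only: assumes the particle number per slice is non-increasing
    (primaries of a single species). Pairing in descending energy order
    keeps each history close to a plausible continuous-slowing-down path.
    """
    def expand(hist):
        # Turn the histogram into a descending list of single-particle
        # energies; this realizes the rounding to integer counts.
        out = []
        for E, n in sorted(zip(energies, hist), reverse=True):
            out.extend([E] * int(n))
        return out

    n_particles = sum(int(n) for n in spectra[0])
    tracks = [[] for _ in range(n_particles)]
    for hist in spectra:
        slice_energies = expand(hist)
        for l, track in enumerate(tracks):
            if l < len(slice_energies):
                track.append(slice_energies[l])
            else:
                track.append(None)   # particle stopped or fragmented
    return tracks
```

By construction, the per-slice histogram of the track energies reproduces the integer input spectra exactly, which is the first criterion above.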
Reusability of the ensemble data
The key feature of the ensemble is its reusability. Basically, a raster scanning treatment plan consists of a collection of isoenergy slices, each corresponding to a different energy. For each energy slice it is sufficient to sample the ensemble only once and then to let it, repeatedly, walk randomly through the CT in order to perform an MC calculation (see "Monte Carlo Simulation for Lateral Scattering") of the lateral scattering and to integrate the 3D dose deposition. Moreover, the number of repetitions can be varied between the full number of ions needed to sum up to the required dose down to a few repetitions, taking care that the statistical fluctuations of the deposited dose remain below an acceptable limit. The sufficient number of impinging particles, as well as the statistical uncertainty, is discussed in "Statistical Effects and Minimum Required Number of Particles."
Figure 2A shows the logical scheme of the TRiP PB program as well as the integration of the MC dose recalculation algorithm. As already stated, both use the same input physical data and precalculated tables.
Monte Carlo Transport through the Computed Tomography Voxel
Once the ensemble for a given energy step is created, the MC simulation of the raster scanning process for the corresponding isoenergy slice can be started. The simplest strategy for multiple usage of a given ensemble is as follows. If the ensemble was calculated with, for example, n = 10 000 particles, then each raster point of the slice is processed with nEns ensembles, where nEns is the minimum number of ensembles needed to account for the planned number of particles nR in that raster point. To correct for cases in which nR is not a multiple of n, the result is scaled by nR/(nEns × 10 000). For example, for 46 789 planned particles on a beam spot, 1 ensemble is used 5 times and the dose contribution is scaled by 0.936. This is what we call "full MC"; another strategy using fewer ensembles is discussed in "Statistical Effects and Minimum Required Number of Particles."
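The repetition count and scale factor for one raster point follow directly from this description; the sketch below reproduces the numerical example from the text (the function name is ours):

```python
import math

def full_mc_repetitions(n_planned, n_ensemble=10_000):
    """Number of ensemble repetitions and dose scale factor for one
    raster point in the "full MC" strategy."""
    n_reps = math.ceil(n_planned / n_ensemble)   # minimum ensembles needed
    scale = n_planned / (n_reps * n_ensemble)    # correct the overshoot
    return n_reps, scale

# The example from the text: 46 789 planned particles on one beam spot.
reps, scale = full_mc_repetitions(46_789)
# The ensemble is processed 5 times and the accumulated dose is scaled
# down by about 0.936 to match the planned particle number.
```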
The initial position and direction of the incident particles are randomized by using the beam parameters (ie, width and divergence), which are specific to the treatment site and the energy of the current scanning slice. For the standard simulations we use a circular Gaussian beam spot and a normally distributed angular spread (eg, σα = 1 mrad). Each beam particle is then propagated through the CT voxels until it stops or leaves the CT cube. To this end, the intersections of the particle track with the current CT voxel are calculated and used to define the traveled path length. By comparing and interpolating the ensemble data for that particle, the program calculates the energy change and the energy loss inside the voxel, as well as the stopping position or the position where the particle undergoes nuclear fragmentation.
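The propagation through the regular CT grid amounts to a standard ray traversal of a voxel grid. A possible sketch (an Amanatides–Woo style stepping scheme; the axis conventions, units, and function signature are our assumptions, not the TRiP98 code) is:

```python
import numpy as np

def traverse_voxels(origin, direction, voxel_size, grid_shape):
    """Yield (voxel index, chord length inside that voxel) for a straight
    ray crossing a regular grid of cubic voxels of edge voxel_size."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    pos = np.asarray(origin, float)
    idx = np.floor(pos / voxel_size).astype(int)
    step = np.where(d > 0, 1, -1)
    with np.errstate(divide="ignore"):
        # parametric distance between successive boundaries on each axis
        t_delta = np.abs(voxel_size / d)
        # parametric distance to the first boundary on each axis
        next_boundary = (idx + (step > 0)) * voxel_size
        t_max = np.where(d != 0, (next_boundary - pos) / d, np.inf)
    t = 0.0
    while np.all((idx >= 0) & (idx < grid_shape)):
        axis = int(np.argmin(t_max))
        yield tuple(idx), t_max[axis] - t    # chord length in this voxel
        t = t_max[axis]
        t_max[axis] += t_delta[axis]
        idx[axis] += step[axis]
```

In a dose engine, the chord length per voxel would be combined with the water-equivalent density of that voxel and the interpolated ensemble data to accumulate the energy deposition.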
Monte Carlo simulation for lateral scattering
To calculate a realistic scattered zigzag path through the CT voxels, the angular scattering is computed and used to determine the new momentum direction. We consider 2 contributions to lateral scattering: multiple Coulomb scattering and angular distributions after nuclear collisions (see "Monte Carlo Simulation for Inelastic Scattering"). For the multiple scattering, the concept of Gottschalk et al for the approximation of the Moliere theory was applied.
Although our MC code can process arbitrary material compositions, for reasons of simplicity and robustness only water is assumed (see "Water-Equivalent Calculation"), and the standard HU table was used to calculate the water-equivalent density from the HU value of each voxel. Then, from the path length in the voxel and the mean energy of the particle in the voxel, the Moliere angular distribution is calculated as described by Gottschalk et al, and a randomized scattering angle is sampled. This angle is used to execute a deflection at the end of the path in the voxel. Since our MC calculations are performed for voxel sizes ≤ 2 mm, subdividing the zigzag path inside single voxels is not necessary. To realize a fast MC simulation, several lookup tables are precalculated, which is especially important for the Bessel terms in the Moliere approximation of the angular distribution by Gottschalk et al.
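For illustration, the per-voxel sampling step can be sketched with Highland's well-known Gaussian approximation of the Moliere scattering width. Note that this is a stand-in: the paper uses the more accurate Gottschalk parameterization with precalculated lookup tables, so the formula below is an assumption for demonstration only.

```python
import numpy as np

RAD_LENGTH_WATER_CM = 36.08  # radiation length of water, in cm

def highland_sigma(p_momentum_mev_c, beta, z_charge, thickness_cm):
    """Highland's Gaussian approximation to the width of the projected
    multiple-scattering angle (in rad) after a water path of the given
    thickness. p in MeV/c, beta = v/c, z_charge = projectile charge."""
    x_over_X0 = thickness_cm / RAD_LENGTH_WATER_CM
    return (14.1 / (p_momentum_mev_c * beta)) * z_charge \
        * np.sqrt(x_over_X0) * (1.0 + np.log10(x_over_X0) / 9.0)

def sample_scattering_angle(rng, sigma):
    # Randomized projected deflection, applied at the end of the voxel path.
    return rng.normal(0.0, sigma)
```

For a 200-MeV proton (p ≈ 644 MeV/c, β ≈ 0.57) traversing 1 cm of water, this yields a width of a few mrad, consistent with the small-angle argument in the next paragraph.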
Path length corrections (in comparison to the 1-dimensional PB database) due to microscopic zigzag scattering can be neglected, since the mean deflection angles for the high and medium energies of the carbon ions, and even of the protons, are much smaller than 50 mrad (1 − cos(50 mrad) ≈ 0.1%). For low-energy particles (<50 MeV), on the other hand, the residual range is too short for a relevant path-length enlargement. Accordingly, for all beam energies we observed no difference in the Bragg peak position between the 1-dimensional PB database and the MC dose calculation.
Monte Carlo simulation for inelastic scattering
For the calculation of the inelastic scattering we used experimental data from GSI in combination with the model of Morrissey. The experimental data provide the shape of the angular distributions and the absolute scaling of the beam spread for the measured energies and species, whereas the Morrissey model describes the relative energy dependence and a scaling for species that have not been measured. Especially for carbon beams, this concept yields good agreement with the experimental data from GSI. For the MC calculation of the scattering angles, a double-Gaussian angular distribution (fitted beforehand to the data of Golovkov et al and scaled according to Morrissey) is randomly sampled.
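Sampling from such a double-Gaussian distribution (a narrow core plus a broad tail) is straightforward; the widths and tail weight below are placeholder values, since the fitted parameters from the data are not reproduced here:

```python
import numpy as np

def sample_double_gaussian(rng, sigma_core, sigma_tail, tail_weight):
    """Sample a projected scattering angle from a double-Gaussian angular
    distribution: with probability tail_weight draw from the broad tail,
    otherwise from the narrow core. Parameters are placeholders; the real
    values come from fits to the experimental angular distributions."""
    if rng.random() < tail_weight:
        return rng.normal(0.0, sigma_tail)
    return rng.normal(0.0, sigma_core)

rng = np.random.default_rng(42)
# e.g., 2 mrad core, 10 mrad tail, 10% tail weight (illustrative only)
angles = [sample_double_gaussian(rng, 2e-3, 10e-3, 0.1) for _ in range(10_000)]
```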
Statistical Effects and Minimum Required Number of Particles
Typical treatment plans contain approximately 10^10 particles for a proton plan and 10^9 particles for a carbon plan. The number of particles per raster point ranges from several thousand to several hundred thousand. Simulating all beam ions is possible but still very time-consuming; for example, a prostate carcinoma plan with carbon ions takes 350 hours. In the following section we investigate the number of particles that must be simulated to achieve an accuracy of 1% for the dose calculation, which is considered sufficient for most practical purposes and matches the precision of dose measurements.
There are 2 aspects that need to be considered. Firstly, the uncertainty with which the precalculated ensemble reproduces the PB, that is, mainly the energy distribution of all species created and transported; secondly, the statistical uncertainty of the dose deposition inside each voxel.
To quantify the difference between the ensemble concept and "conventional" MC, calculations with different established MC engines were performed. Figure 3 shows the root-mean-square deviation between a reference Bragg curve and the corresponding MC-simulated Bragg curve for different numbers (10^2 to 10^5) of initial beam particles. For Geant4, SHIELD_HIT12A, and FLUKA [23, 24], the reference Bragg curves were calculated by using 10^8 beam particles, resulting in very low statistical noise. The default set of configuration parameters was used for each of these simulations. For MC-TRiP the reference curve is calculated from the standard database of TRiP. To facilitate the comparison of the 4 programs, all root-mean-square deviation values are calculated from the deviations with respect to the corresponding reference curve, where only points up to a depth of 1.2 times the Bragg peak position are considered. All curves are normalized to the same dose at z = 0. For the simulation of the TRiP ensemble, the uncertainty related to production and transport rapidly drops below 1%. The faster decrease of the statistical fluctuations is due to the (nonrandomized) adaptation to the analytic spectra (see "Setup of the Ensemble Data"); therefore, statistical fluctuations of the 1-dimensional Bragg curve are suppressed and only fluctuations from rounding effects remain.
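The figure of merit used for this comparison can be sketched as follows; the normalization at z = 0 and the cutoff at 1.2 times the Bragg peak depth follow the description in the text, while the function signature is ours:

```python
import numpy as np

def rms_deviation(mc_curve, ref_curve, z, bragg_depth):
    """Root-mean-square deviation between an MC-simulated and a reference
    Bragg curve, evaluated only up to 1.2 times the Bragg peak depth,
    after normalizing both curves to the same entrance dose at z = 0."""
    mc = np.asarray(mc_curve, float)
    ref = np.asarray(ref_curve, float)
    mc = mc / mc[0]
    ref = ref / ref[0]
    mask = np.asarray(z) <= 1.2 * bragg_depth
    return float(np.sqrt(np.mean((mc[mask] - ref[mask]) ** 2)))
```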
Lean Monte Carlo concept
The dose deposited inside a voxel fluctuates stochastically according to Poisson statistics. To limit the statistical error to a value below 1%, we estimated the necessary number of particles nN per raster point, which differs from (ie, is smaller than) the planned number nR (see "Monte Carlo Transport through the Computed Tomography Voxel" and "full MC"). The following simplifications were used: only initial beam particles are considered; each beam particle deposits the same amount of dose, namely the dose-weighted mean value; and the attenuation of beam particles is conservatively estimated from the database. Taking this into account, the following empirical formula was used:
where nN is the number of required particles for a raster point; nV is the number of particles counted inside 1 voxel, set to 10 000 to achieve a relative standard deviation of not more than 1%; ΔlVox,min is the smallest side length of a voxel; Δr is the raster scan step size; Δz is the depth; and λ is the attenuation length of the beam, conservatively chosen as 70 cm for protons and 22 cm for carbon ions. For a typical irradiation, the CT voxels have a size of 1 mm³ and the scanning raster is 2 × 2 mm². This leads to a minimum number between 45 000 and 100 000 ions per raster point, depending on the irradiation depth. Varying the raster step would change the number of ions per raster point as well as the total number of points necessary to cover the target volume. For MC dose recalculations we used 50 000 protons and 65 000 carbon ions per raster point.
Based on these numbers, several possibilities are available. Besides the simulation of all beam particles (see "Monte Carlo Ensemble Calculation Method"), a so-called "lean MC" strategy was used. Here, a beam ensemble of 10 000 particles was chosen to assure that the statistical deviation of the energy spectra is below 1% (Figure 3). For each raster point the ensemble was then processed repeatedly, that is, once for raster points with nR below 10 000, twice for those between 10 000 and 20 000, and so on, and finally nN/10 000 (rounded up) times for raster points with nR > nN. The dose contribution of the raster point was then scaled accordingly.
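The lean MC bookkeeping per raster point can be sketched as follows; the cap at nN and the scaling back to the planned fluence follow the description above, while the function name is ours:

```python
import math

def lean_mc_repetitions(n_planned, n_required, n_ensemble=10_000):
    """Repetition count and dose scale for the "lean MC" strategy: the
    10 000-particle ensemble is processed only as often as the statistical
    target requires (n_required, i.e. n_N), capped below the full planned
    particle number n_R, and the dose is scaled to the planned fluence."""
    n_simulated = min(n_planned, n_required)
    n_reps = max(1, math.ceil(n_simulated / n_ensemble))
    scale = n_planned / (n_reps * n_ensemble)
    return n_reps, scale
```

For a raster point planned with 200 000 ions but a statistical requirement of 50 000, only 5 repetitions are processed and the dose is scaled up by a factor of 4 instead of simulating all 200 000 particles.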
Dose Difference Scoring
The relative dose difference with respect to the planned dose is used to quantify the difference between the 2 calculated dose values, that is, PB and MC:

ΔDrel = (DMC − DPB) / Dprescribed,    (3)

where DMC and DPB are the locally calculated doses and Dprescribed is the dose used to irradiate the target volume.
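As a one-line implementation of this scoring (the percent scaling is our assumption, since the results below are reported in percent of the prescribed dose):

```python
import numpy as np

def relative_dose_difference(dose_mc, dose_pb, dose_prescribed):
    """Voxelwise relative dose difference between the MC and PB dose
    distributions, in percent of the prescribed target dose."""
    return 100.0 * (np.asarray(dose_mc) - np.asarray(dose_pb)) / dose_prescribed
```

Normalizing to the prescribed dose rather than to the local dose avoids inflated percentages in low-dose regions outside the target.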
We performed calculations of carbon and proton beams for a skull chondrosarcoma and a prostate carcinoma. The goal of the procedure was to optimize the scanner pattern with the PB engine and use MC to recalculate the dose distribution. The degree of agreement between PB and MC calculations is scored according to the relative dose difference method (equation 3). A dose-volume histogram (DVH) is calculated for each plan.
Figure 4a shows the DVH of the proton plans for a skull chondrosarcoma, comparing the TRiP MC (solid lines) and PB (dashed lines) calculations. The beam particles in the MC calculation deposit their dose at slightly larger distances from the beam axis, carrying dose out of the target volume into the surrounding tissue. This can also be seen in the reduced dose conformity of the planning target volume (PTV): 95% of the dose covers only 86% of the target volume in the case of MC versus 96% for PB. Figure 5–2a, 5-2b, and 5-2c shows representative slices for this plan. Relative dose differences up to 5% are observed inside the PTV, but at the edges significant deviations as high as 20% of the prescribed dose may occur. The magnitude of these deviations appears to increase just behind regions with HU values considerably different from that of water, that is, bone or air gaps, and is caused by the long non-Gaussian tails of the lateral scattering.
The DVH variations of PTV and organ at risk (OAR) dose are smaller for the carbon plan (Figure 4b), which is to be expected as the lateral scattering in this case is reduced owing to the higher mass of the projectile. Ninety-five percent of the dose covers 98% of the PTV in the PB calculation, compared to 95% of the PTV in the MC recalculation. This is consistent with Figure 5–3a, 5-3b, and 5-3c where the inner part of the PTV shows basically no dose difference.
Figure 4c shows the DVH of a prostate carcinoma irradiated with a proton beam. Overall weaker dose conformity is observed owing to the strong diffusion of the protons over larger distances. Ninety-five percent of the dose covers only 90% of the PTV in the PB calculation and 82% of the PTV in the MC recalculation. Figure 5–2c, 5-3c, and 5-4c shows that the relative dose difference reaches a maximum of approximately 12%, compared to 30% in the case of the chondrosarcoma.
Estimation of Calculation Time
All plans were calculated on a Pentium 4 machine (Intel Corporation, Santa Clara, USA) running at 3.4 GHz with 3 GB of RAM. The calculation time strongly depends on the position of the target volume. Table 1A presents the central processing unit (CPU) time to simulate 1 million particles for typical PBs, covering the useful therapeutic depth range. For the treatment plan of a real patient, the calculation time will be a linear combination of the times needed for its constituent beams. Table 1B shows run times for the chondrosarcoma calculations as a lower limit for proton and carbon beams, with the prostate carcinoma as an upper limit. All times were scaled to 10 million particles to facilitate comparison. The run times of 20 to 40 minutes for 10 million protons are faster than those of general-purpose MC-based codes such as TOPAS, which was reported to calculate 10 million protons in 3 to 80 hours depending on the position of the target region.
We should note that all our calculations were performed on a single CPU core. With only modest modifications of the production code, multicore CPUs and their multithreading capabilities could be exploited. For example, a speed-up factor of up to 20 is conceivable on IBM Power7 hardware (IBM Corporation, New York, United States) with 8 CPU cores at 4.2 GHz and 64 GB of RAM.
We have developed a fast MC algorithm in TRiP, one of the treatment planning systems already used in clinical environments. The usage of the same input database, that is, cross-sections and energy loss tables, assures consistency of the calculation methods, making it easier to directly compare the results.
We have shown that, especially for highly inhomogeneous tissue, the recalculation through MC may help in recognizing critical configurations and may possibly account for them in treatment planning. For carbon beams this effect is lower than 0.5% of the prescribed dose in the target region, owing to the low lateral scattering. For proton beams the difference between MC and PB can be as high as 5% of the prescribed dose; therefore, MC recalculation is more relevant for clinical proton treatment. The differences in the dose distribution of the 2 calculation methods are most significant around regions where strong inhomogeneities are present. This might lead to deviations as high as 20% in the PTV, as in the case of the chondrosarcoma proton treatment plan.
Our MC approach is a hybrid concept, which combines the 1-dimensional analytically derived database with an MC process for lateral scattering. The realistic calculation of the lateral scattering and of partial beam effects in inhomogeneous tissue is the most important advantage of MC in comparison to the PB algorithm.
Compared to conventional MC techniques our hybrid approach offers several specific advantages:
With computing times comparable to plan optimization, it appears to be fast enough for clinical routine even on nonspecialized computing hardware.
Owing to the ensemble concept, for the same number of simulated particles, the statistical fluctuations are reduced by a factor of 5 compared to those of general-purpose MC.
It is integrated in a dedicated TPS and thus may share all future enhancements, such as new optimization algorithms.
One of the next steps would be to include the calculation of RBE-weighted dose distributions as well and to replace the water-equivalent voxels with the International Commission on Radiation Units and Measurements chemical composition of the corresponding tissue.
ADDITIONAL INFORMATION AND DECLARATIONS
Conflicts of Interest: The authors have no conflicts to disclose.
Acknowledgments: The authors would like to thank Toke Prinz Ringbæk and Yuri Simeonov for the hours spent running comparison simulations with ShieldHIT and FLUKA.