Index of content:
Volume 43, Issue 9, September 2016
Low dose-rate brachytherapy for the treatment of cervix cancer is outdated and should be discontinued
43 (2016); http://dx.doi.org/10.1118/1.4959547
- THERAPEUTIC INTERVENTIONS
- Research Articles
43 (2016); http://dx.doi.org/10.1118/1.4959999
Purpose: To identify policy- and system-related weaknesses in treatment planning and plan check workflows.
Methods: The authors’ web-deployed plan check automation solution, PlanCheck, which works with all major planning and record and verify systems (demonstrated here for MOSAIQ only), allows them to compute violation rates for a large number of plan checks across many facilities without requiring the manual data entry involved with incident filings. Workflows and failure modes are heavily influenced by the type of record and verify system used. Rather than tackle multiple record and verify systems at once, the authors restricted the present survey to MOSAIQ facilities. Violations were investigated by sending inquiries to physicists running the program.
Results: Frequent violations included inadequate tracking in the record and verify system of total and prescription doses. Infrequent violations included incorrect setting of patient orientation in the record and verify system. Peaks in the distribution, over facilities, of violation frequencies pointed to suboptimal policies at some of these facilities. Correspondence with physicists often revealed incomplete knowledge of settings at their facility necessary to perform thorough plan checks.
Conclusions: The survey leads to the identification of specific and important policy and system deficiencies, including suboptimal timing of initial plan checks, lack of communication or agreement on conventions surrounding prescription definitions, and lack of automation in the transfer of some parameters.
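To make concrete the kind of rule such an automated system evaluates, the sketch below tests two of the violation types mentioned above: prescription dose tracking and patient orientation. The field names, data layout, and tolerance are assumptions for illustration only, not the PlanCheck implementation.

```python
# Illustrative plan-check rules: verify that the prescription stored in the
# record and verify (R&V) system is consistent with the treatment plan.
# Field names and the 0.1% tolerance are assumptions for this sketch.

def check_prescription(plan: dict, rv: dict, tol: float = 1e-3) -> list[str]:
    """Return a list of violation messages (an empty list means the checks passed)."""
    violations = []

    # Total dose should equal dose per fraction times number of fractions.
    expected_total = plan["dose_per_fraction_gy"] * plan["n_fractions"]
    if abs(expected_total - rv["total_dose_gy"]) > tol * expected_total:
        violations.append(
            f"Total dose mismatch: plan implies {expected_total:.2f} Gy, "
            f"R&V records {rv['total_dose_gy']:.2f} Gy"
        )

    # Patient orientation must agree between plan and R&V system.
    if plan["patient_orientation"] != rv["patient_orientation"]:
        violations.append("Patient orientation differs between plan and R&V system")

    return violations


if __name__ == "__main__":
    plan = {"dose_per_fraction_gy": 2.0, "n_fractions": 30, "patient_orientation": "HFS"}
    rv = {"total_dose_gy": 60.0, "patient_orientation": "HFS"}
    print(check_prescription(plan, rv) or "All checks passed")
```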
43 (2016); http://dx.doi.org/10.1118/1.4960000
Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and to show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency, using three clinical cases of different disease sites.
Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results of a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU-based proximal operator graph solver. To avoid being trapped in a local minimum during beamlet-based aperture selection with the gradient descent algorithm, stochastic gradient descent was employed here. Apertures with zero or low weight were discarded. To find out whether there was room to further improve the plan by adding more apertures or SPs, the authors repeated the above procedure with consideration of the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures, including those of the first iteration, were reoptimized. The above procedure was repeated until the plan could not be improved any further. The optimization technique was assessed using three clinical cases (prostate, head and neck, and brain), with the results compared to those obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU.
Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was reduced by 8% and 6%, respectively. For the brain case, the doses to the eyes, chiasm, and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the head and neck case.
Conclusions: The dosimetric quality and delivery efficiency presented here indicate that SPORT is an intriguing alternative treatment modality. With the widespread adoption of digital linacs, SPORT should lead to improved patient care in the future.
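The aperture-weight optimization step described above can be illustrated with a minimal sketch: a nonnegative least-squares fit of aperture weights to a target dose, solved here by plain projected gradient descent rather than the GPU-based proximal operator graph solver used in the paper. The matrix sizes, step size, and pruning threshold are arbitrary illustrative choices.

```python
# Minimal sketch of aperture weight optimization for a SPORT-like problem:
# minimize ||A w - d||^2 subject to w >= 0, where column j of A is the dose
# deposited per unit weight by aperture j and d is the desired dose.
# Plain projected gradient descent is used, not the GPU proximal graph solver.
import numpy as np

def optimize_weights(A, d, n_iter=500, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for the quadratic term
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - d)
        w = np.maximum(w - step * grad, 0.0)     # project onto the constraint w >= 0
    return w

rng = np.random.default_rng(0)
A = rng.random((200, 30))          # 200 voxels, 30 candidate apertures (synthetic)
d = rng.random(200) * 2.0          # target dose per voxel (arbitrary units)
w = optimize_weights(A, d)
print("apertures kept:", np.sum(w > 1e-3 * w.max()))   # low-weight apertures would be discarded
```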
Registration of human skull computed tomography data to an ultrasound treatment space using a sparse high frequency ultrasound hemispherical array
43 (2016); http://dx.doi.org/10.1118/1.4960362
Purpose: Transcranial focused ultrasound (FUS) shows great promise for a range of therapeutic applications in the brain. Current clinical investigations rely on magnetic resonance imaging (MRI) to monitor treatments and on the registration of preoperative computed tomography (CT) data to the MR images at the time of treatment to correct the sound aberrations caused by the skull. For some applications, MRI is not an appropriate choice for therapy monitoring, and its cost may limit the accessibility of these treatments. An alternative approach, using high frequency ultrasound measurements to localize the skull surface and register CT data to the ultrasound treatment space for the purposes of skull-related phase aberration correction and treatment targeting, has been developed.
Methods: A prototype high frequency, hemispherical sparse array was fabricated. Pulse-echo measurements of the surface of five ex vivo human skulls were made, and the CT datasets of each skull were obtained. The acoustic data were used to rigidly register the CT-derived skull surface to the treatment space. The ultrasound-based registrations of the CT datasets were compared to the gold-standard landmark-based registrations.
Results: The results show, on average, sub-millimeter (0.9 ± 0.2 mm) displacement and subdegree (0.8° ± 0.4°) rotation registration errors. Numerical simulations predict that registration errors on this scale will result in a mean targeting error of 1.0 ± 0.2 mm and a reduction in focal pressure of 1.0% ± 0.6% when targeting a midbrain structure (e.g., hippocampus) using a commercially available low-frequency brain prototype device (InSightec, 230 kHz brain system).
Conclusions: If combined with ultrasound-based treatment monitoring techniques, this registration method could allow for the development of a low-cost transcranial FUS treatment platform, making this technology more widely available.
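The gold-standard landmark-based registration referred to above is a rigid point-set alignment; a minimal sketch using the standard Kabsch (orthogonal Procrustes) solution is given below, with synthetic landmark coordinates standing in for corresponding skull points in the CT and ultrasound treatment frames.

```python
# Landmark-based rigid registration (Kabsch / orthogonal Procrustes), the kind
# of gold-standard alignment the ultrasound-based registration is compared to.
# The point coordinates are synthetic; in practice they would be corresponding
# skull landmarks in the CT frame and in the ultrasound treatment frame.
import numpy as np

def rigid_fit(src, dst):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(1)
src = rng.random((6, 3)) * 100.0                      # landmark positions in mm
angle = np.deg2rad(0.8)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.3, 0.9])     # sub-mm shift, sub-degree rotation
R, t = rigid_fit(src, dst)
residual = np.linalg.norm(src @ R.T + t - dst, axis=1).max()
print(f"max residual after registration: {residual:.2e} mm")
```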
Time-integrated activity coefficient estimation for radionuclide therapy using PET and a pharmacokinetic model: A simulation study on the effect of sampling schedule and noise
43 (2016); http://dx.doi.org/10.1118/1.4961012
Purpose: The aim of this study was to investigate the accuracy of PET-based treatment planning for predicting the time-integrated activity coefficients (TIACs).
Methods: The parameters of a physiologically based pharmacokinetic (PBPK) model were fitted to the biokinetic data of 15 patients to derive assumed true parameters, which were used to construct true mathematical patient phantoms (MPPs). The biokinetics of 150 MBq 68Ga-DOTATATE PET were simulated with different noise levels [fractional standard deviation (FSD) of 10%, 1%, 0.1%, and 0.01%] and seven combinations of measurements at 30 min, 1 h, and 4 h p.i. PBPK model parameters were fitted to the simulated noisy PET data using population-based Bayesian parameters to construct predicted MPPs. Therapy simulations were performed as a 30 min infusion of 3.3 GBq of 90Y-DOTATATE in both true and predicted MPPs. Prediction accuracy was then calculated as the relative variability v_organ between the TIACs from both MPPs.
Results: Large variability values of one-time-point protocols [e.g., FSD = 1%, 240 min p.i., v_kidneys = (9 ± 6)%, and v_tumor = (27 ± 26)%] show inaccurate prediction. Accurate TIAC prediction of the kidneys was obtained for the case of two measurements (1 and 4 h p.i.), e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 10)%, or three measurements, e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 9)%.
Conclusions: 68Ga-DOTATATE PET measurements could possibly be used to predict the TIACs of 90Y-DOTATATE when using a PBPK model and population-based Bayesian parameters. The two-time-point measurement at 1 and 4 h p.i. with a noise level up to FSD = 1% allows an accurate prediction of the TIACs in the kidneys.
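A toy illustration of the quantity being predicted: fitting a mono-exponential time-activity curve to a few sampled points and integrating it analytically to obtain a time-integrated activity coefficient. The real study fits a full PBPK model with population-based Bayesian priors; the activity values below are invented.

```python
# Fit A(t) = A0 * exp(-lambda_eff * t) to sampled organ activities and integrate
# analytically: TIAC = A0 / (lambda_eff * A_injected). This is a deliberately
# simplified stand-in for the PBPK-model fit used in the study; numbers are made up.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, lam):
    return a0 * np.exp(-lam * t)

t_h = np.array([0.5, 1.0, 4.0])                 # measurement times (h p.i.)
a_mbq = np.array([21.0, 19.5, 14.0])            # organ activity (MBq), synthetic
a_injected = 150.0                              # injected activity (MBq), as in the simulated PET

(a0, lam), _ = curve_fit(mono_exp, t_h, a_mbq, p0=(20.0, 0.1))
tiac_h = a0 / lam / a_injected                  # integral of A(t)/A_injected from 0 to infinity
print(f"fitted lambda_eff = {lam:.3f} 1/h, TIAC = {tiac_h:.2f} h")
```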
Predicting variation in subject thermal response during transcranial magnetic resonance guided focused ultrasound surgery: Comparison in seventeen subject datasets
43 (2016); http://dx.doi.org/10.1118/1.4955436
Purpose: In transcranial magnetic resonance-guided focused ultrasound (tcMRgFUS) treatments, the acoustic and spatial heterogeneity of the skull causes reflection, absorption, and scattering of the acoustic beams. These effects depend on skull-specific parameters and can lead to patient-specific thermal responses to the same transducer power. In this work, the authors develop a simulation tool to help predict these different experimental responses using 3D heterogeneous tissue models based on the subject CT images. The authors then validate and compare the predicted skull efficiencies against an experimental metric based on the subject thermal responses during tcMRgFUS treatments in a dataset of seventeen human subjects.
Methods: Seventeen human head CT scans were used to create tissue acoustic models, simulating the effects of reflection, absorption, and scattering of the acoustic beam as it propagates through a heterogeneous skull. The hybrid angular spectrum technique was used to model the acoustic beam propagation of the InSightec ExAblate 4000 head transducer for each subject, yielding maps of the specific absorption rate (SAR). The simulation assumed the transducer was geometrically focused to the thalamus of each subject, and the focal SAR at the target was used as a measure of the simulated skull efficiency. Experimental skull efficiency for each subject was calculated using the temperature maps from the tcMRgFUS treatments. Axial temperature images (with no artifacts) were reconstructed with a single baseline and corrected using a referenceless algorithm. The experimental skull efficiency was calculated by dividing the reconstructed temperature rise 8.8 s after sonication by the applied acoustic power.
Results: The simulated skull efficiency using individual-specific heterogeneous models predicts the experimental energy efficiency well (R² = 0.84).
Conclusions: This paper presents a simulation model that predicts the variation in thermal responses measured in clinical tcMRgFUS treatments while remaining computationally feasible.
43 (2016); http://dx.doi.org/10.1118/1.4961010
Purpose: The pretreatment physics plan review is a standard tool for ensuring treatment quality. Studies have shown that the majority of errors in radiation oncology originate in treatment planning, which underscores the importance of the pretreatment physics plan review. This quality assurance measure is fundamentally important and central to the safety of patients and the quality of care that they receive. However, little is known about its effectiveness. The purpose of this study was to analyze reported incidents to quantify the effectiveness of the pretreatment physics plan review, with the goal of improving it.
Methods: This study analyzed 522 potentially severe or critical near-miss events within an institutional incident learning system collected over a three-year period. Of these 522 events, 356 originated at a workflow point prior to the pretreatment physics plan review. The remaining 166 events originated after the pretreatment physics plan review and were not considered in the study. The applicable 356 events were classified into one of three categories: (1) events detected by the pretreatment physics plan review, (2) events not detected but “potentially detectable” by the physics review, and (3) events “not detectable” by the physics review. Potentially detectable events were further classified by which specific checks performed during the pretreatment physics plan review detected or could have detected the event. For these events, the associated specific check was also evaluated as to the possibility of automating that check given current data structures. For comparison, a similar analysis was carried out on 81 events from the international SAFRON radiation oncology incident learning system.
Results: Of the 356 applicable events from the institutional database, 180/356 (51%) were detected or could have been detected by the pretreatment physics plan review. Of these events, 125 actually passed through the physics review; however, only 38% (47/125) were actually detected at the review. Of the 81 events from the SAFRON database, 66/81 (81%) were potentially detectable by the pretreatment physics plan review. From the institutional database, three specific physics checks were particularly effective at detecting events (combined effectiveness of 38%): verifying the isocenter (39/180), verifying DRRs (17/180), and verifying that the plan matched the prescription (12/180). The most effective checks from the SAFRON database were verifying that the plan matched the prescription (13/66) and verifying the field parameters in the record and verify system against those in the plan (23/66). Software-based plan checking systems, if available, would have a potential effectiveness of 29% and 64% at detecting events from the institutional and SAFRON databases, respectively.
Conclusions: The pretreatment physics plan review is a key safety measure and can detect a high percentage of errors. However, the majority of errors that potentially could have been detected were not detected in this study, indicating the need to improve pretreatment physics plan review performance. Suggestions for improvement include the automation of specific physics checks performed during the pretreatment physics plan review and the standardization of the review process.
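The headline rates above follow directly from the reported event counts; a short check of that arithmetic, using only the numbers quoted in the abstract:

```python
# Quick check of the detection rates quoted above (counts taken from the abstract).
institutional_detectable, institutional_total = 180, 356
detected_at_review, passed_through_review = 47, 125
safron_detectable, safron_total = 66, 81

print(f"detectable at the institution: {institutional_detectable / institutional_total:.0%}")  # ~51%
print(f"actually detected at the review: {detected_at_review / passed_through_review:.0%}")    # ~38%
print(f"detectable in SAFRON: {safron_detectable / safron_total:.0%}")                         # ~81%
```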
- Technical Notes
43 (2016); http://dx.doi.org/10.1118/1.4960369
Purpose: To introduce a simplified quality assurance (QA) procedure that integrates tests for the linac’s imaging components and the robotic couch. Current QA procedures for evaluating the alignment of the imaging system and linac require careful positioning of a phantom at isocenter before image acquisition and analysis. A complementary procedure for the robotic couch requires an initial displacement of the phantom and then evaluates the accuracy of repositioning the phantom at isocenter. We propose a two-in-one procedure that introduces a custom software module and incorporates both checks into one motion for increased efficiency.
Methods: The phantom was manually set up with random translational and rotational shifts, imaged with the in-room imaging system, and then registered to the isocenter using a custom software module. The software measured positioning accuracy by comparing the location of the repositioned phantom with a CAD model of the phantom at isocenter, which is physically verified using the MV port graticule. Repeatability of the custom software was tested by assessing internal marker location extraction on a series of scans acquired with differing kV and CBCT acquisition parameters.
Results: The proposed method was able to correctly position the phantom at isocenter within the acceptable 1 mm and 1° SRS tolerances, as verified by both physical inspection and the custom software. Residual errors for mechanical accuracy were 0.26 mm vertically, 0.21 mm longitudinally, 0.55 mm laterally, 0.21° in pitch, 0.1° in roll, and 0.67° in yaw. The software module was shown to be robust across various scan acquisition parameters, detecting markers within 0.15 mm translationally in kV acquisitions and within 0.5 mm translationally and 0.3° rotationally across CBCT acquisitions with significant variations in voxel size. Agreement with vendor registration methods was well within 0.5 mm; differences were not statistically significant.
Conclusions: Compared to the current two-step approach, the proposed QA procedure streamlines the workflow, accounts for rotational errors in imaging alignment, and simulates a broad range of setup error variations seen in clinical practice.
- DIAGNOSTIC IMAGING (IONIZING AND NON-IONIZING)
- Research Articles
43 (2016); http://dx.doi.org/10.1118/1.4959544
Purpose: Color images are being used increasingly in medical imaging across a broad range of modalities and applications. While in the past color was mostly used for annotations, today color is also widely used for diagnostic purposes. Surprisingly, there is not yet an agreed-upon standard that describes how color medical images should be visualized and how calibration and quality assurance of color medical displays should be performed. This paper proposes the color standard display function (CSDF), an extension of the DICOM grayscale standard display function (GSDF) toward color. CSDF defines how color medical displays should be calibrated and how QA can be performed to obtain perceptually linear behavior not only for grayscale but also for color.
Methods: The proposed CSDF algorithm uses DICOM GSDF calibration as a starting point and subsequently uses a color visual difference metric to redistribute colors in order to obtain perceptual linearity not only for grayscale but also for color behavior. A clear calibration and quality assurance algorithm is defined and validated on a wide range of different display systems.
Results: A detailed description of the proposed CSDF calibration and quality assurance algorithms is provided. These algorithms have been tested extensively on three types of display systems: consumer displays, professional displays, and medical grade displays. Test results are reported both for the calibration algorithm and for the quantitative and visual quality assurance methods. The tests confirm that the described algorithm generates consistent results and is able to increase perceptual linearity for color and grayscale visualization. Moreover, the proposed algorithms work well on a wide range of display systems.
Conclusions: CSDF has been proposed as an extension of the DICOM GSDF standard toward color. Calibration and QA algorithms for CSDF have been described in detail. The proposed algorithms have been tested on several types of display systems, and the results confirm that CSDF largely increases the perceptual linearity of visualized colors while remaining compliant with DICOM GSDF.
43 (2016); http://dx.doi.org/10.1118/1.4960365
Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of 18F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background.
Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. These exostructure sphere-like features created voids in a background of small, solid nonporous plastic (acrylic) spheres inside a fillable tank. The features filled at full activity concentration, while the background concentration was reduced because only the space between the solid spheres filled.
Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-sized) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom, with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages, including filling simplicity, wall-less contrast features, control of the detectability range via feature size, and a clinically realistic lumpy background.
Conclusions: This phantom provides a practical method for testing and comparing lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.
43 (2016); http://dx.doi.org/10.1118/1.4960631
Purpose: To propose a new method for estimating scatter in x-ray imaging. Conventional antiscatter grids reject scatter at an efficiency that is constant or slowly varying over the surface of the grid. A striped ratio antiscatter grid, composed of stripes that alternate between high and low grid ratio, could be used instead. Such a striped ratio grid would reduce the scatter-to-primary ratio as a conventional grid would, but more importantly, the signal discontinuities at the boundaries of the stripes can be used to estimate local scatter content.
Methods: Signal discontinuities provide information on scatter but are contaminated by variation in primary radiation. A nonlinear image processing algorithm is used to estimate the scatter content in the presence of primary variation. We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid. These two scans are processed together to mimic a striped ratio grid. This represents a best-case limit of the striped ratio grid, in that the extent of grid ratio modulation is very high and the scatter contrast is maximized.
Results: In a uniform cylinder, the striped ratio grid virtually eliminates cupping. Artifacts from scatter are improved in an anthropomorphic phantom. Some banding artifacts are induced by the striped ratio grid.
Conclusions: Striped ratio grids could be a simple and effective evolution of conventional antiscatter grids. Construction and validation of a physical prototype remains an important future step.
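The core idea, estimating local scatter from the signal step across a stripe boundary, reduces in the idealized case to solving a small linear system: the primary is locally the same on both sides of the boundary while the two grid ratios transmit scatter differently. The transmission factors in the sketch below are illustrative assumptions, and the paper's actual algorithm additionally handles primary variation with nonlinear image processing.

```python
# Idealized striped-ratio-grid scatter estimation: two measurements at a stripe
# boundary give two equations in the two unknowns (primary P, scatter S).
# Transmission factors are illustrative assumptions, not measured grid properties.
import numpy as np

Tp_hi, Ts_hi = 0.70, 0.10   # primary / scatter transmission, high-ratio stripe
Tp_lo, Ts_lo = 0.75, 0.40   # primary / scatter transmission, low-ratio stripe

P_true, S_true = 1000.0, 600.0                     # underlying signal components
I_hi = Tp_hi * P_true + Ts_hi * S_true             # signal measured on the high-ratio side
I_lo = Tp_lo * P_true + Ts_lo * S_true             # signal measured on the low-ratio side

A = np.array([[Tp_hi, Ts_hi], [Tp_lo, Ts_lo]])
P_est, S_est = np.linalg.solve(A, [I_hi, I_lo])
print(f"estimated primary {P_est:.0f}, scatter {S_est:.0f}")   # recovers 1000 and 600
```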
Impact of compressed breast thickness and dose on lesion detectability in digital mammography: FROC study with simulated lesions in real mammograms
43 (2016); http://dx.doi.org/10.1118/1.4960630
Purpose: The aim of this work was twofold: (1) to examine whether, with standard automatic exposure control (AEC) settings that maintain pixel values in the detector constant, lesion detectability in clinical images decreases as a function of breast thickness, and (2) to verify whether a new AEC setup can increase lesion detectability at larger breast thicknesses.
Methods: Screening patient images, acquired on two identical digital mammography systems, were collected over a period of 2 yr. Mammograms were acquired under standard AEC conditions (part 1) and subsequently with a new AEC setup (part 2), programmed to use the standard AEC settings for compressed breast thicknesses ≤49 mm, while a relative dose increase was applied above this thickness. The images were divided into four thickness groups: T1 ≤ 29 mm, T2 = 30–49 mm, T3 = 50–69 mm, and T4 ≥ 70 mm, with each thickness group containing 130 randomly selected craniocaudal lesion-free images. Two measures of density were obtained for every image: a BI-RADS score and a map of volumetric breast density created with a software application (VolparaDensity, Matakina, NZ). This information was used to select subsets of four images, one from each thickness group, matched by (global) BI-RADS score and containing a region with the same (local) Volpara volumetric density value. One selected lesion (a microcalcification cluster or a mass) was simulated into each of the four images. This process was repeated so that, for a given thickness group, half the images contained a single lesion and half were lesion-free. The lesion templates created and inserted in groups T3 and T4 for the first part of the study were then inserted into the images of thickness groups T3 and T4 acquired with the higher dose settings. Finally, all images were visualized using the ViewDEX software and scored by four radiologists performing a free search study. A statistical jackknife alternative free-response receiver operating characteristic (JAFROC) analysis was applied.
Results: For part 1, the alternative free-response receiver operating characteristic figures of merit for the four readers were 0.80, 0.65, 0.55, and 0.56 in going from T1 to T4, indicating a decrease in detectability with increasing breast thickness. P-values and the 95% confidence intervals showed no significant difference for the T3-T4 comparison (p = 0.78), while all the other differences were significant (p < 0.05). Separate analysis of microcalcification clusters gave the same results, while for mass detection the only significant difference came when comparing T1 to the other thickness groups. Comparing the scores of parts 1 and 2, results for the T3 group acquired with the new AEC setup and the T3 group at standard AEC doses were significantly different (p = 0.0004), indicating improved detection. For this group, a subanalysis for microcalcification detection gave the same result, while no significant difference was found for mass detection.
Conclusions: These data from clinical images confirm results found in simple QA tests for many mammography systems, namely that detectability falls as breast thickness increases. Results obtained with the AEC setup designed for constant detectability above 49 mm showed an increase in lesion detection at larger compressed breast thicknesses, bringing lesion detectability to a level comparable with that of the thinner breast groups.
- Technical Notes
Technical Note: Development of a 3D printed subresolution sandwich phantom for validation of brain SPECT analysis
43 (2016); http://dx.doi.org/10.1118/1.4960003
Purpose: To make an adaptable, head-shaped radionuclide phantom to simulate molecular imaging of the brain using clinical acquisition and reconstruction protocols. This will allow the characterization and correction of scanner characteristics and improve the accuracy of clinical image analysis, including the application of databases of normal subjects.
Methods: A fused deposition modeling 3D printer was used to create a head-shaped phantom made up of transaxial slabs derived from a simulated MRI dataset. The attenuation of the printed polylactide (PLA), measured by means of the Hounsfield unit on CT scanning, was set to match that of the brain by adjusting the proportion of plastic filament and air (fill ratio). Transmission measurements were made to verify the attenuation of the printed slabs. The radionuclide distribution within the phantom was created by adding 99mTc pertechnetate to the ink cartridge of a paper printer and printing images of gray and white matter anatomy, segmented from the same MRI data. The complete subresolution sandwich phantom was assembled from alternating 3D printed slabs and radioactive paper sheets, and then imaged on a dual-headed gamma camera to simulate an HMPAO SPECT scan.
Results: Reconstructions of the phantom scans successfully used automated ellipse fitting to apply attenuation correction. This removed the variability inherent in the manual application of attenuation correction and registration required with existing cylindrical phantom designs. The resulting images were assessed visually and by count profiles and found to be similar to those from an existing elliptical PMMA phantom.
Conclusions: The authors have demonstrated the ability to create physically realistic HMPAO SPECT simulations using a novel head-shaped 3D printed subresolution sandwich phantom. The phantom can be used to validate all neurological SPECT imaging applications. A simple modification of the phantom design to use thinner slabs would make it suitable for use in PET.
43 (2016); http://dx.doi.org/10.1118/1.4961400
Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilovoltage cone-beam CT (kV CBCT) models against experimental data.
Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated by other means, the current procedure could be used exclusively to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver were used, combined with a dosimeter that is sensitive over the range of voltages of interest. A sensitivity analysis of the model was also conducted for each parameter of the source and detector models.
Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems.
Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimal requirements in terms of material and equipment would make its implementation suitable in most clinical environments.
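A toy version of the source-calibration step: the relative weights of a few spectral bins are adjusted so that the predicted transmitted dose matches dose measured in air behind filters of different thicknesses. The attenuation coefficients and "measurements" below are rough illustrative values, not data from the paper.

```python
# Toy spectral-weight calibration from transmission measurements: solve a
# nonnegative least-squares problem linking filter thickness to measured dose.
# Attenuation coefficients and the synthetic "measurements" are illustrative only.
import numpy as np
from scipy.optimize import nnls

energies_kev = np.array([40.0, 60.0, 80.0, 100.0])
mu_al_per_mm = np.array([0.152, 0.075, 0.055, 0.046])     # approximate Al attenuation, 1/mm
filter_mm = np.array([0.0, 2.0, 4.0, 6.0, 10.0])          # aluminum filter thicknesses

# Transmission matrix: row = filter thickness, column = spectral bin.
T = np.exp(-np.outer(filter_mm, mu_al_per_mm))

w_true = np.array([0.2, 0.4, 0.3, 0.1])                   # "true" spectral weights
noise = 1 + 0.01 * np.random.default_rng(4).normal(size=len(filter_mm))
measured_dose = T @ w_true * noise                        # synthetic dose measurements

w_fit, _ = nnls(T, measured_dose)                         # nonnegative least squares
print("fitted spectral weights:", np.round(w_fit / w_fit.sum(), 2))
```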
- QUANTITATIVE IMAGING AND IMAGE PROCESSING
- Research Articles
Use of local noise power spectrum and wavelet analysis in quantitative image quality assurance for EPIDs
43 (2016); http://dx.doi.org/10.1118/1.4959541
Purpose: To investigate the use of the local noise power spectrum (NPS) to characterize image noise, and of wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts, for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs).
Methods: A total of 93 image sets, including custom-made bar-pattern images and open exposure images, were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as the modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. The local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems using images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, the Haar wavelet transform was applied to the images.
Results: Global quantitative metrics including the MTF, NPS, and DQE showed little change over the period of data collection. In contrast, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed. The local NPS analysis indicated image quality improvement, with the r-square values increasing from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between the two calibration methods. With wavelet analysis, defective pixels and inter-subpanel flat-fielding artifacts were clearly identified as spikes after thresholding the inversely transformed images.
Conclusions: The proposed local NPS (r-square values) showed superior sensitivity to the noise level variations of individual subpanels compared with global quantitative metrics such as the MTF, NPS, and DQE. Wavelet analysis was effective in detecting isolated defective pixels and inter-subpanel flat-fielding artifacts. The proposed methods are promising for the early detection of imaging artifacts in EPIDs.
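The local NPS metric described above can be sketched as follows: compute a 2D NPS for one subpanel region of interest, radially average it, fit a power law in log-log space, and report the r-square of that fit. Normalization and detrending are simplified relative to a full NPS measurement, and the synthetic ROI is plain white noise.

```python
# Sketch of the local-NPS r-square metric for one subpanel ROI.
# The normalization, binning, and synthetic ROI are simplified assumptions.
import numpy as np

def local_nps_rsquare(roi, pixel_pitch_mm=0.4):
    roi = roi - roi.mean()                                   # remove the DC component
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    nps2d *= pixel_pitch_mm ** 2 / roi.size                  # simple NPS normalization

    # Radial average of the 2D NPS.
    ny, nx = roi.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny, pixel_pitch_mm)),
                         np.fft.fftshift(np.fft.fftfreq(nx, pixel_pitch_mm)),
                         indexing="ij")
    r = np.hypot(fx, fy)
    bins = np.linspace(r.min(), r.max(), 30)
    idx = np.digitize(r, bins)
    radial = np.array([nps2d[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)])
    freqs = np.array([r[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)])

    # Power-law fit (linear in log-log space) and its r-square.
    mask = (freqs > 0) & (radial > 0)
    x, y = np.log(freqs[mask]), np.log(radial[mask])
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(2)
roi = rng.normal(1000.0, 5.0, size=(128, 128))               # synthetic subpanel ROI
print(f"r-square of the power-law fit: {local_nps_rsquare(roi):.2f}")
```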
Evaluation of a deformable registration algorithm for subsequent lung computed tomography imaging during radiochemotherapy
43 (2016); http://dx.doi.org/10.1118/1.4960366
Purpose: To rate both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images using different evaluation techniques, and to investigate the relative performance and correlation of the different evaluation techniques in order to address their potential value in a clinical setting.
Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, which requires the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated.
Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy relative to the CT resolution, with an average error of 2.9 mm. Contour metrics of the deformed contours were largely satisfactory. The median vector length of the inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches.
Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. As shown by all three techniques, the inspected DIR algorithm delivers reliable results for lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance by combining different automated DIR evaluation methods.
43 (2016); http://dx.doi.org/10.1118/1.4960364
Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bone. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints.
Methods: The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. First, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors apply regularization by 3D exemplar registration, together with label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of each iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume of interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, this iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints.
Results: The proposed method was applied to tooth segmentation of twenty clinically captured CBCT images. Three metrics, the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth (incisors and canines), premolars, and molars. The segmentation of the anterior teeth achieved a DSC of up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min.
Conclusions: The proposed technique enables efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and intraoperative use of dental morphologies in maxillofacial and orthodontic treatments.
43 (2016); http://dx.doi.org/10.1118/1.4960629
Purpose: To develop a practical background compensation (BC) technique to improve quantitative 90Y-bremsstrahlung single-photon emission computed tomography (SPECT)/computed tomography (CT) using a commercially available imaging system.
Methods: All images were acquired using medium-energy collimation in six energy windows (EWs) ranging from 70 to 410 keV. The EWs were determined based on the signal-to-background ratio in planar images of an acrylic phantom of different thicknesses (2–16 cm) positioned below a 90Y source and set at different distances (15–35 cm) from a gamma camera. The authors adapted the widely used EW-based scatter-correction technique by modeling the BC as scaled images. The BC EW was determined empirically in SPECT/CT studies of an IEC phantom, based on the sphere activity recovery and the residual activity in the cold lung insert. The scaling factor was calculated from 20 clinical planar 90Y images. Reconstruction parameters were optimized in the same SPECT images for improved image quantification and contrast. A count-to-activity calibration factor was calculated from 30 clinical 90Y images.
Results: The authors found that the most appropriate imaging EW range was 90–125 keV. BC was modeled as 0.53 times the images acquired in the EW of 310–410 keV. The background-compensated clinical images had higher image contrast than uncompensated images. The maximum deviation of the SPECT calibration in clinical studies was lowest (<10%) for SPECT with attenuation correction (AC) and SPECT with AC + BC. Using the proposed SPECT-with-AC + BC reconstruction protocol, the authors found that the recovery coefficient of a 37-mm sphere (in a 10-mm volume of interest) increased from 39% to 90% and that the residual activity in the lung insert decreased from 44% to 14% relative to SPECT images with AC alone.
Conclusions: The proposed EW-based BC model was developed for 90Y bremsstrahlung imaging. SPECT with AC + BC gave improved lesion detectability and activity quantification compared to SPECT with AC only. The proposed methodology can readily be used to tailor 90Y SPECT/CT acquisition and reconstruction protocols for different SPECT/CT systems, for quantification and improved image quality in clinical settings.
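The compensation itself is a simple window subtraction; the sketch below applies the reported 0.53 scaling factor to a synthetic high-energy-window projection before clipping negative values. Array sizes and count levels are invented stand-ins for real projections.

```python
# Energy-window-based background compensation: scale the high-energy window
# (310-410 keV) projection and subtract it from the imaging window (90-125 keV).
# The projection data here are synthetic stand-ins.
import numpy as np

SCALING_FACTOR = 0.53   # value reported in the abstract, derived from 20 clinical planar images

def background_compensate(main_window_counts, background_window_counts, k=SCALING_FACTOR):
    """Subtract the scaled background-window projection, clipping negatives to zero."""
    corrected = main_window_counts - k * background_window_counts
    return np.clip(corrected, 0.0, None)

rng = np.random.default_rng(3)
main = rng.poisson(50.0, size=(128, 128)).astype(float)        # 90-125 keV projection (synthetic)
bkg = rng.poisson(30.0, size=(128, 128)).astype(float)         # 310-410 keV projection (synthetic)
print("mean counts before/after compensation:",
      main.mean().round(1), background_compensate(main, bkg).mean().round(1))
```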
43 (2016); http://dx.doi.org/10.1118/1.4961391
Purpose: Partial volume correction (PVC) methods typically improve quantification at the expense of increased image noise and reduced reproducibility. In this study, the authors developed a novel voxel-based PVC method that incorporates anatomical knowledge to improve quantification while suppressing noise for cardiac SPECT/CT imaging.
Methods: In the proposed method, the SPECT images were first reconstructed using anatomical-based maximum a posteriori (AMAP) reconstruction with Bowsher’s prior, to penalize noise while preserving boundaries. A sequential voxel-by-voxel PVC approach (Yang’s method) was then applied to the AMAP reconstruction using a template response. This template response was obtained by forward projecting a template derived from a contrast-enhanced CT image and then reconstructing it using AMAP, to model the partial volume effects (PVEs) introduced by both the system resolution and the smoothing applied during reconstruction. To evaluate the proposed noise-suppressed PVC (NS-PVC), the authors first simulated two types of cardiac SPECT studies: a 99mTc-tetrofosmin myocardial perfusion scan and a 99mTc-labeled red blood cell (RBC) scan on a dedicated cardiac multiple-pinhole SPECT/CT, at both high and low count levels. The authors then applied the proposed method to a canine equilibrium blood pool study following injection with 99mTc-RBCs at different count levels, obtained by rebinning the list-mode data into shorter acquisitions. The proposed method was compared to MLEM reconstruction without PVC, to two conventional PVC methods, Yang’s method and multitarget correction (MTC), applied to the MLEM reconstruction, and to AMAP reconstruction without PVC.
Results: The results showed that Yang’s method improved quantification but yielded increased noise and reduced reproducibility in regions with higher activity. MTC corrected for PVEs on high-count data at the cost of amplified noise, and yielded the worst performance among all the methods tested on low-count data. AMAP effectively suppressed noise and reduced the spill-in effect in low-activity regions; however, it was unable to reduce the spill-out effect in high-activity regions. NS-PVC yielded superior performance in terms of both quantitative assessment and visual image quality while improving reproducibility.
Conclusions: The results suggest that NS-PVC may be a promising PVC algorithm for application in low-dose protocols and in gated and dynamic cardiac studies with low counts.
- Technical Notes
43 (2016); http://dx.doi.org/10.1118/1.4961121
Purpose: Multiatlas-based segmentation is widely used in many clinical and research applications. Owing to its good performance, it has recently been included in some commercial platforms for radiotherapy planning and surgery guidance. However, to date, software with no restrictions on anatomical district or image modality is still missing. In this paper we introduce plastimatch mabs, an open-source tool that can be used with any image modality for automatic segmentation.
Methods: The plastimatch mabs workflow consists of two main parts: (1) an offline phase, in which optimal registration and voting parameters are tuned, and (2) an online phase, in which a new patient is labeled from scratch using the same parameters identified in the former phase. Several registration strategies, as well as different voting criteria, can be selected. A flexible atlas selection scheme is also available. To prove the effectiveness of the proposed software across anatomical districts and image modalities, it was tested on two very different scenarios: head and neck (H&N) CT segmentation for radiotherapy, and magnetic resonance image brain labeling for neuroscience investigation.
Results: For the neurological study, the minimum Dice coefficient was 0.76 (investigated structures: left and right caudate, putamen, thalamus, and hippocampus). For the head and neck case, the minimum Dice coefficient was 0.42 for the most challenging structures (optic nerves and submandibular glands) and 0.62 for the other ones (mandible, brainstem, and parotid glands). The time required to obtain the labels was compatible with a real clinical workflow (35 and 120 min).
Conclusions: The proposed software fills a gap in the multiatlas-based segmentation field, since all currently available tools (both commercial and research oriented) are restricted to well-specified applications. Furthermore, it can be adopted as a platform for exploring MABS parameters and as a reference implementation for comparing against other segmentation algorithms.
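The label-voting step at the heart of multiatlas segmentation can be illustrated with the simplest criterion, per-voxel majority voting over the labels proposed by the registered atlases; plastimatch mabs offers several voting criteria, of which this sketch shows only the most basic, on a toy one-dimensional example.

```python
# Per-voxel majority voting over labels proposed by registered atlases.
# This is the simplest of the voting criteria such tools offer; the toy
# "image" below is a one-dimensional array of five voxels.
import numpy as np

def majority_vote(atlas_labels):
    """atlas_labels: array of shape (n_atlases, n_voxels) of integer labels."""
    n_labels = atlas_labels.max() + 1
    # Count, per voxel, how many atlases propose each label, then take the argmax.
    counts = np.stack([(atlas_labels == lab).sum(axis=0) for lab in range(n_labels)])
    return counts.argmax(axis=0)

atlas_labels = np.array([
    [0, 1, 1, 2, 2],   # labels proposed by atlas 1 after registration
    [0, 1, 2, 2, 2],   # atlas 2
    [0, 0, 1, 2, 1],   # atlas 3
])
print(majority_vote(atlas_labels))   # -> [0 1 1 2 2]
```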