The linear-quadratic model is one of the key tools in radiation biology and physics. It provides a simple relationship between cell survival and delivered dose, S = exp(−αD − βD²), and has been used extensively to analyse and predict responses to ionising radiation both in vitro and in vivo. Despite its ubiquity, there remain questions about its interpretation and wider applicability: is it a convenient empirical fit or representative of some deeper mechanistic behaviour? Does a model of single-cell survival in vitro really correspond to clinical tissue responses? Is it applicable at very high and very low doses? Here, we review these issues, discussing current usage of the LQ model, its historical context, what we now know about its mechanistic underpinnings, and the potential challenges and confounding factors that arise when trying to apply it across a range of systems.
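The dose-response shape of the LQ model is easy to make concrete numerically. A minimal sketch follows; the α and β values are illustrative placeholders, not fitted to any particular cell line:

```python
import numpy as np

def lq_survival(dose_gy, alpha=0.15, beta=0.05):
    """Surviving fraction S = exp(-(alpha*D + beta*D^2)).

    alpha (Gy^-1) and beta (Gy^-2) are illustrative values only,
    not parameters from any specific experiment.
    """
    d = np.asarray(dose_gy, dtype=float)
    return np.exp(-(alpha * d + beta * d * d))

# Survival at a conventional 2 Gy fraction:
s2 = lq_survival(2.0)   # exp(-(0.3 + 0.2)) = exp(-0.5) ~ 0.607
```

The quadratic term makes survival fall faster than exponentially at high doses, which is one of the regimes where the model's applicability is debated.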
The aim of the Institute of Physics and Engineering in Medicine (IPEM) is to promote the advancement of physics and engineering applied to medicine and biology for the public benefit. Its members are professionals working in healthcare, education, industry and research.
IPEM publishes scientific journals and books and organises conferences to disseminate knowledge and support members in their development. It sets and advises on standards for the practice, education and training of scientists and engineers working in healthcare to secure an effective and appropriate workforce.
ISSN: 1361-6560
The international journal of biomedical physics and engineering, published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine (IPEM).
Stephen Joseph McMahon 2019 Phys. Med. Biol. 64 01TR01
Wayne D Newhauser and Rui Zhang 2015 Phys. Med. Biol. 60 R155
The physics of proton therapy has advanced considerably since it was proposed in 1946. Today analytical equations and numerical simulation methods are available to predict and characterize many aspects of proton therapy. This article reviews the basic aspects of the physics of proton therapy, including proton interaction mechanisms, proton transport calculations, the determination of dose from therapeutic and stray radiations, and shielding design. The article discusses underlying processes as well as selected practical experimental and theoretical methods. We conclude by briefly speculating on possible future areas of research of relevance to the physics of proton therapy.
Yolanda Prezado et al 2024 Phys. Med. Biol. 69 10TR02
Spatially fractionated radiation therapy (SFRT) is a therapeutic approach with the potential to disrupt the classical paradigms of conventional radiation therapy. The high spatial dose modulation in SFRT activates distinct radiobiological mechanisms which lead to a remarkable increase in normal tissue tolerances. Several decades of clinical use and numerous preclinical experiments suggest that SFRT has the potential to increase the therapeutic index, especially in bulky and radioresistant tumors. To unleash the full potential of SFRT a deeper understanding of the underlying biology and its relationship with the complex dosimetry of SFRT is needed. This review provides a critical analysis of the field, discussing not only the main clinical and preclinical findings but also analyzing the main knowledge gaps in a holistic way.
Conor K McGarry et al 2020 Phys. Med. Biol. 65 23TR01
Tissue mimicking materials (TMMs), typically contained within phantoms, have been used for many decades in both imaging and therapeutic applications. This review investigates the specifications that are typically being used in development of the latest TMMs. The imaging modalities that have been investigated focus around CT, mammography, SPECT, PET, MRI and ultrasound. Therapeutic applications discussed within the review include radiotherapy, thermal therapy and surgical applications. A number of modalities were not reviewed including optical spectroscopy, optical imaging and planar x-rays. The emergence of image guided interventions and multimodality imaging have placed an increasing demand on the number of specifications on the latest TMMs. Material specification standards are available in some imaging areas such as ultrasound. It is recommended that this should be replicated for other imaging and therapeutic modalities. Materials used within phantoms have been reviewed for a series of imaging and therapeutic applications with the potential to become a testbed for cross-fertilization of materials across modalities. Deformation, texture, multimodality imaging and perfusion are common themes that are currently under development.
Shaoyan Pan et al 2023 Phys. Med. Biol. 68 105004
Objective. Artificial intelligence (AI) methods have gained popularity in medical imaging research. The size and scope of the training image datasets needed for successful AI model deployment do not always have the desired scale. In this paper, we introduce a medical image synthesis framework aimed at addressing the challenge of limited training datasets for AI models. Approach. The proposed 2D image synthesis framework is based on a diffusion model using a Swin-transformer-based network. This model consists of a forward Gaussian noise process and a reverse process using the transformer-based diffusion model for denoising. Training data includes four image datasets: chest x-rays, heart MRI, pelvic CT, and abdomen CT. We evaluated the authenticity, quality, and diversity of the synthetic images using visual Turing assessments conducted by three medical physicists, and four quantitative evaluations: the Inception score (IS), Fréchet Inception Distance score (FID), feature distribution similarity (FDS), and diversity score (DS) between the synthetic and true images. To leverage the framework's value for training AI models, we conducted COVID-19 classification tasks using real images, synthetic images, and mixtures of both. Main results. Visual Turing assessments showed an average accuracy of 0.64 (accuracy converging to 0.5 indicates a more realistic visual appearance of the synthetic images), sensitivity of 0.79, and specificity of 0.50. Average quantitative scores obtained from all datasets were IS = 2.28, FID = 37.27, FDS = 0.20, and DS = 0.86. For the COVID-19 classification task, the baseline network obtained an accuracy of 0.88 using a pure real dataset, 0.89 using a pure synthetic dataset, and 0.93 using a mixed dataset of real and synthetic data. Significance. An image synthesis framework was demonstrated that can generate high-quality medical images of different imaging modalities with the purpose of supplementing existing training sets for AI model deployment. This method has potential applications in many areas of data-driven medical imaging research.
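The forward Gaussian noise process mentioned above has a standard closed form in DDPM-style diffusion models; a hedged sketch (the linear variance schedule, timestep count, and image shape here are assumptions, not taken from the paper):

```python
import numpy as np

def forward_noise(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style forward
    Gaussian process: x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps,
    where abar_t is the cumulative product of (1 - beta)."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)    # linear schedule (assumed)
x0 = rng.standard_normal((64, 64))       # stand-in for a medical image
xt, eps = forward_noise(x0, t=999, betas=betas, rng=rng)
```

At the final timestep almost no signal remains, so the reverse (denoising) network learns to recover structure from nearly pure Gaussian noise.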
Mingzhe Hu et al 2024 Phys. Med. Biol. 69 10TR01
This review paper aims to serve as a comprehensive guide and instructional resource for researchers seeking to effectively implement language models in medical imaging research. First, we presented the fundamental principles and evolution of language models, dedicating particular attention to large language models. We then reviewed the current literature on how language models are being used to improve medical imaging, emphasizing a range of applications such as image captioning, report generation, report classification, findings extraction, visual question response systems, interpretable diagnosis and so on. Notably, the capabilities of ChatGPT were spotlighted for researchers to explore its further applications. Furthermore, we covered the advantageous impacts of accurate and efficient language models in medical imaging analysis, such as the enhancement of clinical workflow efficiency, reduction of diagnostic errors, and assistance of clinicians in providing timely and accurate diagnoses. Overall, our goal is to have better integration of language models with medical imaging, thereby inspiring new ideas and innovations. It is our aspiration that this review can serve as a useful resource for researchers in this field, stimulating continued investigative and innovative pursuits of the application of language models in medical imaging.
Stefan Gundacker and Arjan Heering 2020 Phys. Med. Biol. 65 17TR01
The silicon photomultiplier (SiPM) is an established device of choice for a variety of applications, e.g. in time of flight positron emission tomography (TOF-PET), lifetime fluorescence spectroscopy, distance measurements in LIDAR applications, astrophysics, quantum-cryptography and related applications as well as in high energy physics (HEP).
To fully utilize the exceptional performances of the SiPM, in particular its sensitivity down to single photon detection, the dynamic range and its intrinsically fast timing properties, a qualitative description and understanding of the main SiPM parameters and properties is necessary. These analyses consider the structure and the electrical model of a single photon avalanche diode (SPAD) and the integration in an array of SPADs, i.e. the SiPM. The discussion will include the front-end readout and the comparison between analog-SiPMs, where the array of SPADs is connected in parallel, and the digital SiPM, where each SPAD is read out and digitized by its own electronic channel.
For several applications a further complete phenomenological view on SiPMs is necessary, defining several SiPM intrinsic parameters, i.e. gain fluctuation, afterpulsing, excess noise, dark count rate, prompt and delayed optical crosstalk, single photon time resolution (SPTR), photon detection efficiency (PDE), etc. These qualities of SiPMs directly and indirectly influence the time and energy resolution, for example in PET and HEP. This complete overview of all parameters allows one to draw solid conclusions on how the best performance can be achieved for the various needs of the different applications.
Mats Danielsson et al 2021 Phys. Med. Biol. 66 03TR01
The introduction of photon-counting detectors is expected to be the next major breakthrough in clinical x-ray computed tomography (CT). During the last decade, there has been considerable research activity in the field of photon-counting CT, in terms of both hardware development and theoretical understanding of the factors affecting image quality. In this article, we review the recent progress in this field with the intent of highlighting the relationship between detector design considerations and the resulting image quality. We discuss detector design choices such as converter material, pixel size, and readout electronics design, and then elucidate their impact on detector performance in terms of dose efficiency, spatial resolution, and energy resolution. Furthermore, we give an overview of data processing, reconstruction methods and metrics of imaging performance; outline clinical applications; and discuss potential future developments.
Didier Lustermans et al 2024 Phys. Med. Biol. 69 105018
Objective. Newer cone-beam computed tomography (CBCT) imaging systems offer reconstruction algorithms including metal artifact reduction (MAR) and extended field-of-view (eFoV) techniques to improve image quality. In this study a new CBCT imager, the new Varian HyperSight CBCT, is compared to fan-beam CT and two CBCT imagers installed in a ring-gantry and C-arm linear accelerator, respectively. Approach. The image quality was assessed for HyperSight CBCT which uses new hardware, including a large-size flat panel detector, and improved image reconstruction algorithms. The decrease of metal artifacts was quantified (structural similarity index measure (SSIM) and root-mean-squared error (RMSE)) when applying MAR reconstruction and iterative reconstruction for a dental and spine region using a head-and-neck phantom. The geometry and CT number accuracy of the eFoV reconstruction was evaluated outside the standard field-of-view (sFoV) on a large 3D-printed chest phantom. Phantom size dependency of CT numbers was evaluated on three cylindrical phantoms of increasing diameter. Signal-to-noise and contrast-to-noise were quantified on an abdominal phantom. Main results. In phantoms with streak artifacts, MAR showed comparable results for HyperSight CBCT and CT, with MAR increasing the SSIM (0.97–0.99) and decreasing the RMSE (62–55 HU) compared to iterative reconstruction without MAR. In addition, HyperSight CBCT showed better geometrical accuracy in the eFoV than CT (Jaccard Conformity Index increase of 0.02–0.03). However, the CT number accuracy outside the sFoV was lower than for CT. The maximum CT number variation between different phantom sizes was lower for the HyperSight CBCT imager (∼100 HU) compared to the two other CBCT imagers (∼200 HU), but not fully comparable to CT (∼50 HU). Significance. 
This study demonstrated the imaging performance of the new HyperSight CBCT imager and the potential of applying this CBCT system in more advanced scenarios by comparing the quality against fan-beam CT.
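The SSIM and RMSE figures of merit used in the metal-artifact comparison above can be computed directly from image pairs. A simplified sketch follows, using a single global SSIM window, whereas standard implementations (and presumably the study's tooling) average SSIM over local windows:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-squared error between two images (e.g. in HU)."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def global_ssim(a, b, data_range):
    """SSIM computed once over the whole image (simplification: the
    usual definition averages SSIM over sliding local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

Identical images give SSIM = 1 and RMSE = 0; artifact reduction moves the reconstructed image toward both limits relative to a reference scan.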
Steven L Jacques 2013 Phys. Med. Biol. 58 R37
A review of reported tissue optical properties summarizes the wavelength-dependent behavior of scattering and absorption. Formulae are presented for generating the optical properties of a generic tissue with variable amounts of absorbing chromophores (blood, water, melanin, fat, yellow pigments) and a variable balance between small-scale scatterers and large-scale scatterers in the ultrastructures of cells and tissues.
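A common functional form in such compilations expresses reduced scattering as a power law in wavelength; a sketch with illustrative soft-tissue coefficients (the specific a and b values are assumptions for demonstration, not numbers quoted from the review):

```python
def reduced_scattering(wavelength_nm, a=18.9, b=1.3):
    """Reduced scattering coefficient mu_s' (cm^-1), modelled as the
    generic power law mu_s'(lambda) = a * (lambda / 500 nm)^(-b).
    a and b here are illustrative soft-tissue-like values."""
    return a * (wavelength_nm / 500.0) ** (-b)

mu_at_500 = reduced_scattering(500.0)   # equals a by construction
mu_at_800 = reduced_scattering(800.0)   # lower: scattering falls with wavelength
```

The exponent b reflects the balance between small-scale (Rayleigh-like) and large-scale (Mie-like) scatterers mentioned in the abstract.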
Eve S Shalom et al 2024 Phys. Med. Biol. 69 115034
Objective. Standard models for perfusion quantification in DCE-MRI produce a bias by treating voxels as isolated systems. Spatiotemporal models can remove this bias, but it is unknown whether they are fundamentally identifiable. The aim of this study is to investigate this question in silico using one-dimensional toy systems with a one-compartment blood flow model and a two-compartment perfusion model. Approach. For each of the two models, identifiability is explored theoretically and in-silico for three systems. Concentrations over space and time are simulated by forward propagation. Different levels of noise and temporal undersampling are added to investigate sensitivity to measurement error. Model parameters are fitted using a standard gradient descent algorithm, applied iteratively with a stepwise increasing time window. Model fitting is repeated with different initial values to probe uniqueness of the solution. Reconstruction accuracy is quantified for each parameter by comparison to the ground truth. Main results. Theoretical analysis shows that flows and volume fractions are only identifiable up to a constant, and that this degeneracy can be removed by proper choice of parameters. Simulations show that in all cases, the tissue concentrations can be reconstructed accurately. The one-compartment model shows accurate reconstruction of blood velocities and arterial input functions, independent of the initial values and robust to measurement error. The two-compartmental perfusion model was not fully identifiable, showing good reconstruction of arterial velocities and input functions, but multiple valid solutions for the perfusion parameters and venous velocities, and a strong sensitivity to measurement error in these parameters. Significance. These results support the use of one-compartment spatiotemporal flow models, but two-compartment perfusion models were not sufficiently identifiable. 
Future studies should investigate whether this degeneracy is resolved in more realistic 2D and 3D systems, by adding physically justified constraints, or by optimizing experimental parameters such as injection duration or temporal resolution.
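The fitting strategy described (gradient descent repeated from several initial values to probe uniqueness of the solution) can be illustrated on a deliberately simple one-parameter toy model. This is a sketch of the identifiability-probing idea only, not the paper's spatiotemporal compartment models:

```python
import numpy as np

def fit_rate(t, c_meas, k0, lr=0.2, n_iter=5000):
    """Gradient-descent fit of k in the toy model C(t) = 1 - exp(-k t),
    a stand-in for the paper's compartment models."""
    k = k0
    for _ in range(n_iter):
        r = (1.0 - np.exp(-k * t)) - c_meas            # residual
        grad = 2.0 * np.mean(r * t * np.exp(-k * t))   # d(MSE)/dk
        k -= lr * grad
    return k

t = np.linspace(0.0, 10.0, 50)
c = 1.0 - np.exp(-0.7 * t)     # noiseless "measured" concentrations
# probe uniqueness: restart the fit from several initial guesses
fits = [fit_rate(t, c, k0) for k0 in (0.1, 0.5, 2.0)]
```

Here all starts converge to the same k, i.e. the toy parameter is identifiable; in the paper's two-compartment case the analogous restarts land on multiple valid solutions.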
Jianli Song et al 2024 Phys. Med. Biol. 69 115033
Although the U-shaped architecture, represented by UNet, has become a major network model for brain tumor segmentation, the repeated convolution and sampling operations can easily lead to the loss of crucial information. Additionally, directly fusing features from different levels without distinction can easily result in feature misalignment, affecting segmentation accuracy. On the other hand, traditional convolutional blocks used for feature extraction cannot capture the abundant multi-scale information present in brain tumor images. This paper proposes a multi-scale feature-aligned segmentation model called GMAlignNet that fully utilizes Ghost convolution to solve these problems. A Ghost hierarchical decoupled fusion unit and a Ghost hierarchical decoupled unit are used instead of standard convolutions in the encoding and decoding paths. This transformation replaces the holistic learning of volume structures by traditional convolutional blocks with multi-level learning on a specific view, facilitating the acquisition of abundant multi-scale contextual information through low-cost operations. Furthermore, a feature alignment unit is proposed that can utilize semantic information flow to guide the recovery of upsampled features. It performs pixel-level semantic information correction on features misaligned by feature fusion. The proposed method is also employed to optimize three classic networks, namely DMFNet, HDCNet, and 3D UNet, demonstrating its effectiveness in automatic brain tumor segmentation. The proposed network model was applied to the BraTS 2018 dataset, and the results indicate that the proposed GMAlignNet achieved Dice coefficients of 81.65%, 90.07%, and 85.16% for enhancing tumor, whole tumor, and tumor core segmentation, respectively. Moreover, with only 0.29 M parameters and 26.88 G FLOPs, it demonstrates better potential in terms of computational efficiency and the advantages of a lightweight design.
Extensive experiments on the BraTS 2018, BraTS 2019, and BraTS 2020 datasets suggest that the proposed model exhibits better potential in handling edge details and contour recognition.
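The Dice similarity coefficient used as the headline metric above is straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient of two binary masks:
    DSC = 2|P intersect T| / (|P| + |T|). The eps term makes two
    empty masks score 1 rather than raising a division error."""
    p, t = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(p, t).sum()
    return (2.0 * inter + eps) / (p.sum() + t.sum() + eps)
```

For example, masks [1,1,0,0] and [1,0,1,0] overlap in one voxel out of two apiece, giving DSC = 0.5.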
Shihao Shan et al 2024 Phys. Med. Biol. 69 115028
Objective. The primary objective of this study is to address the reconstruction time challenge in magnetic particle imaging (MPI) by introducing a novel approach named SNR-peak-based frequency selection (SPFS). The focus is on improving spatial resolution without compromising reconstruction speed, thereby enhancing the clinical potential of MPI for real-time imaging. Approach. To overcome the trade-off between reconstruction time and spatial resolution in MPI, the researchers propose SPFS as an innovative frequency selection method. Unlike conventional SNR-based selection, SPFS prioritizes frequencies with signal-to-noise ratio (SNR) peaks that capture crucial system matrix information. This adaptability to varying quantities of selected frequencies enhances versatility in the reconstruction process. The study compares the spatial resolution of MPI reconstruction using both SNR-based and SPFS frequency selection methods, utilizing simulated and real device data. Main results. The research findings demonstrate that the SPFS approach substantially improves image resolution in MPI, especially when dealing with a limited number of frequency components. By focusing on SNR peaks associated with critical system matrix information, SPFS mitigates the spatial resolution degradation observed in conventional SNR-based selection methods. The study validates the effectiveness of SPFS through the assessment of MPI reconstruction spatial resolution using both simulated and real device data, highlighting its potential to address a critical limitation in the field. Significance. The introduction of SPFS represents a significant breakthrough in MPI technology. The method not only accelerates reconstruction time but also enhances spatial resolution, thus expanding the clinical potential of MPI for various applications. 
The improved real-time imaging capabilities of MPI, facilitated by SPFS, hold promise for advancements in drug delivery, plaque assessment, tumor treatment, cerebral perfusion evaluation, immunotherapy guidance, and in vivo cell tracking.
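One simplified reading of the SPFS idea (keep frequency components sitting at local SNR maxima, rather than simply the globally highest-SNR bins) can be sketched as follows; this is an interpretation for illustration, not the authors' exact algorithm:

```python
import numpy as np

def select_by_snr_peaks(snr, n_select):
    """Return sorted indices of the n_select strongest local SNR maxima.
    Only interior points are considered; plateaus resolve to their
    left edge. Simplified stand-in for SNR-peak-based selection."""
    i = np.arange(1, len(snr) - 1)
    peaks = i[(snr[i] > snr[i - 1]) & (snr[i] >= snr[i + 1])]
    strongest = peaks[np.argsort(snr[peaks])[::-1][:n_select]]
    return np.sort(strongest)

snr = np.array([0.0, 1.0, 0.2, 3.0, 0.1, 2.0, 0.0])
chosen = select_by_snr_peaks(snr, 2)   # peaks at 1, 3, 5; keeps 3 and 5
```

Plain SNR-ranked selection would cluster picks around the single strongest region; peak-based selection spreads them across distinct spectral features, which is the behaviour the paper credits for preserving system-matrix information.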
Kishore Rajendran et al 2024 Phys. Med. Biol. 69 115029
Objective. Photon-counting detector (PCD) CT enables routine virtual-monoenergetic image (VMI) reconstruction. We evaluated the performance of an automatic VMI energy level (keV) selection tool on a clinical PCD-CT system in comparison to an automatic tube potential (kV) selection tool from an energy-integrating-detector (EID) CT system from the same manufacturer. Approach. Four torso-shaped phantoms (20–50 cm width) containing iodine (2, 5, and 10 mg cc−1) and calcium (100 mg cc−1) were scanned on PCD-CT and EID-CT. Dose optimization techniques, task-based VMI energy level and tube-potential selection on PCD-CT (CARE keV) and task-based tube potential selection on EID-CT (CARE kV), were enabled. CT numbers, image noise, and dose-normalized contrast-to-noise ratio (CNRd) were compared. Main results. PCD-CT produced task-specific VMIs at 70, 65, 60, and 55 keV for non-contrast, bone, soft tissue with contrast, and vascular settings, respectively. A 120 kV tube potential was automatically selected on PCD-CT for all scans. In comparison, EID-CT used x-ray tube potentials from 80 to 150 kV based on imaging task and phantom size. PCD-CT achieved consistent dose reduction at 9%, 21% and 39% for bone, soft tissue with contrast, and vascular tasks relative to the non-contrast task, independent of phantom size. On EID-CT, dose reduction factor for contrast tasks relative to the non-contrast task ranged from a 65% decrease (vascular task, 70 kV, 20 cm phantom) to a 21% increase (soft tissue with contrast task, 150 kV, 50 cm phantom) due to size-specific tube potential adaptation. PCD-CT CNRd was equivalent to or higher than those of EID-CT for all tasks and phantom sizes, except for the vascular task with 20 cm phantom, where 70 kV EID-CT CNRd outperformed 55 keV PCD-CT images. Significance. PCD-CT produced more consistent CT numbers compared to EID-CT due to standardized VMI output, which greatly benefits standardization efforts and facilitates radiation dose reduction.
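The dose-normalized CNR compared above is typically computed as contrast over noise, divided by the square root of dose; a sketch of one common convention (exact definitions vary between groups):

```python
import numpy as np

def cnr_dose_normalized(roi_signal, roi_bg, dose_mgy):
    """CNRd = |mean(signal) - mean(background)| / noise / sqrt(dose),
    with noise taken as the background standard deviation.
    One common convention; definitions vary between papers."""
    contrast = abs(np.mean(roi_signal) - np.mean(roi_bg))
    noise = np.std(roi_bg)
    return contrast / noise / np.sqrt(dose_mgy)
```

The sqrt(dose) normalization reflects that quantum noise scales as dose^(-1/2), so CNRd is (to first order) independent of the delivered dose and isolates detector and spectrum effects.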
Ye Yuan et al 2024 Phys. Med. Biol. 69 115027
Objective. Automatic and accurate airway segmentation is necessary for lung disease diagnosis. The complex tree-like structure leads to gaps between the different generations of the airway tree, and thus airway segmentation is also considered a multi-scale problem. In recent years, convolutional neural networks have facilitated the development of medical image segmentation. In particular, 2D CNNs and 3D CNNs can extract features at different scales. Hence, we propose a two-stage, 2D + 3D framework for multi-scale airway tree segmentation. Approach. In stage 1, we use a 2D full airway SegNet (2D FA-SegNet) to segment the complete airway tree. Multi-scale atrous spatial pyramid and atrous residual skip connection modules are inserted to extract features at different scales. We designed a hard sample selection strategy to increase the proportion of intrapulmonary airway samples in stage 2. A 3D airway RefineNet (3D ARNet), as stage 2, takes the results of stage 1 as a priori information. Spatial information extracted by the 3D convolutional kernels compensates for the spatial information lost in 2D FA-SegNet. Furthermore, we added false positive and false negative losses to improve the segmentation performance of airway branches within the lungs. Main results. We performed data augmentation on the publicly available dataset of ISICDM 2020 Challenge 3, on which we evaluated our method. Comprehensive experiments show that the proposed method has the highest dice similarity coefficient (DSC) of 0.931 and IoU of 0.871 for the whole airway tree, and a DSC of 0.699 and IoU of 0.543 for the intrapulmonary bronchial tree. In addition, the 3D ARNet proposed in this paper, cascaded with other state-of-the-art methods, increased the detected tree length rate by up to 46.33% and the detected tree branch rate by up to 42.97%. Significance. The quantitative and qualitative evaluation results show that our proposed method performs well in segmenting the airway at different scales.
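The DSC and IoU figures quoted above are algebraically linked for binary masks, which is a useful sanity check when comparing papers that report only one of the two; a short sketch:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union (Jaccard index) of two binary masks."""
    p, t = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union

def dsc_from_iou(j):
    """For binary masks the two metrics are monotonically related:
    DSC = 2*IoU / (1 + IoU), equivalently IoU = DSC / (2 - DSC)."""
    return 2.0 * j / (1.0 + j)
```

For masks [1,1,0,0] and [1,0,1,0]: intersection 1, union 3, so IoU = 1/3 and DSC = 0.5, consistent with the conversion formula.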
Robert P Johnson 2024 Phys. Med. Biol. 69 11TR02
Six decades after its conception, proton computed tomography (pCT) and proton radiography have yet to be used in medical clinics. However, good progress has been made on relevant detector technologies in the past two decades, and a few prototype pCT systems now exist that approach the performance needed for a clinical device. The tracking and energy-measurement technologies in common use are described, as are the few pCT scanners that are in routine operation at this time. Most of these devices still look like detector R&D efforts as opposed to medical devices, are difficult to use, are at least a factor of five slower than desired for clinical use, and are too small to image many parts of the human body. Recommendations are made for what to consider when engineering a pre-clinical pCT scanner that is designed to meet clinical needs in terms of performance, cost, and ease of use.
Shiman Li et al 2024 Phys. Med. Biol. 69 11TR01
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially in radiotherapy treatment planning. Thus, it is of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and witnessed remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized and fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such scarce annotation limits the development of high-performance multi-organ segmentation models but promotes many annotation-efficient learning paradigms. Among these, studies on transfer learning leveraging external datasets, semi-supervised learning including unannotated datasets, and partially-supervised learning integrating partially-labeled datasets have led the dominant way to break such dilemmas in multi-organ segmentation. We first review the fully supervised method, then present a comprehensive and systematic elaboration of the three abovementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
Christian P Karger et al 2024 Phys. Med. Biol. 69 06TR01
Modern radiotherapy delivers highly conformal dose distributions to irregularly shaped target volumes while sparing the surrounding normal tissue. Due to the complex planning and delivery techniques, dose verification and validation of the whole treatment workflow by end-to-end tests became much more important and polymer gel dosimeters are one of the few possibilities to capture the delivered dose distribution in 3D. The basic principles and formulations of gel dosimetry and its evaluation methods are described and the available studies validating device-specific geometrical parameters as well as the dose delivery by advanced radiotherapy techniques, such as 3D-CRT/IMRT and stereotactic radiosurgery treatments, the treatment of moving targets, online-adaptive magnetic resonance-guided radiotherapy as well as proton and ion beam treatments, are reviewed. The present status and limitations as well as future challenges of polymer gel dosimetry for the validation of complex radiotherapy techniques are discussed.
Xu et al
Focused ultrasound spinal cord neuromodulation studies have demonstrated the capacity for neuromodulation of the spinal cord in small animals. The safe and efficacious translation of these approaches to human scale requires an understanding of ultrasound propagation and heat deposition within the human spine. To address this, combined acoustic and thermal modelling was used to assess the pressure and heat distributions produced by a 500 kHz source focused to the C5/C6 level of the cervical spine via two approaches: a) the posterior acoustic window between the vertebral posterior arches, or b) the lateral intervertebral foramen from which the C6 spinal nerve exits. Pulse trains of 150 pulses of 0.1 s duration, with a pulse repetition frequency of 0.33 Hz and a free-field spatial peak pulse-averaged intensity of 10 W/cm², were simulated for the CT volumes of four subjects and for ±10 mm translational and ±10° rotational source positioning errors. Target pressures ranged between 20% and 70% of free-field spatial peak pressures with the posterior approach, and between 20% and 100% with the lateral approach. When the source was optimally positioned with the posterior approach, peak spine heating values were below 1°C, but source mis-positioning resulted in bone heating of up to 4°C. Heating with the lateral approach did not exceed 2°C within the mis-positioning range. There were substantial inter-subject differences in target pressures and peak heating values. Target pressure varied three- to four-fold between subjects, depending on approach, while peak heating varied approximately two-fold between subjects. This results in a nearly ten-fold range in the target pressure achieved per degree of maximum heating between subjects. This study highlights the importance of developing trans-spine ultrasound simulation software for the assurance of subject-specific safety and efficacy of focused ultrasound spinal cord therapies.
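The time-averaged exposure implied by the simulated pulsing scheme follows from a simple duty-cycle calculation (standard I_spta bookkeeping applied to the stated protocol, not a number quoted in the abstract):

```python
def spatial_peak_time_average(isppa_w_cm2, pulse_s, prf_hz):
    """I_spta = I_sppa * pulse duration * PRF, i.e. the pulse-averaged
    intensity scaled by the duty cycle of the pulse train."""
    duty = pulse_s * prf_hz
    return isppa_w_cm2 * duty

# Protocol from the abstract: 0.1 s pulses at 0.33 Hz, I_sppa = 10 W/cm2
ispta = spatial_peak_time_average(10.0, 0.1, 0.33)   # 0.33 W/cm2
```

The low duty cycle (3.3%) is why the well-positioned posterior-approach simulations stay under 1°C despite a 10 W/cm² pulse-averaged intensity.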
Bottauscio et al
Objective. Numerical simulations are largely adopted to estimate dosimetric quantities, e.g. specific absorption rate (SAR) and temperature increase, in tissues to assess patient exposure to the radiofrequency field generated during magnetic resonance imaging (MRI). Simulations rely on reference anatomical human models and tabulated data of electromagnetic and thermal properties of biological tissues. However, concerns may arise about the applicability of the computed results to any phenotype, introducing a significant degree of freedom in the simulation input data. In addition, simulation input data can be affected by uncertainty in the relative positioning of the anatomical model with respect to the radiofrequency coil. The objective of this work is to estimate the variability of SAR and temperature increase at 3 T head MRI due to different sources of variability in input data, with the final aim of associating a global uncertainty with the dosimetric outcomes. Approach. A stochastic approach based on arbitrary Polynomial Chaos Expansion is used to evaluate the effects of several input variabilities (anatomy, tissue properties, body position) on dosimetric outputs, referring to head imaging with a 3 T MRI scanner. Main results. It is found that head anatomy is the prevailing source of variability for the considered dosimetric quantities, rather than the variability due to tissue properties and head positioning. From knowledge of the variability of the dosimetric quantities, an uncertainty can be attributed to the results obtained using a generic anatomical head model when SAR and temperature increase values are compared with safety exposure limits. Significance. This work associates a global uncertainty with SAR and temperature increase predictions, to be considered when comparing the numerically evaluated dosimetric quantities with reference exposure limits. The adopted methodology can be extended to other exposure scenarios for MRI safety purposes.
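The polynomial chaos machinery can be sketched in one dimension with an ordinary least-squares fit of Hermite coefficients. This is a toy stand-in for the paper's arbitrary PCE over several (possibly correlated) inputs; the model function and sample count are assumptions for illustration:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(1)
x = rng.standard_normal(400)       # samples of a standard-normal input
y = np.exp(0.3 * x)                # stand-in for a dosimetric model output

degree = 4
A = He.hermevander(x, degree)      # He_0..He_4 evaluated at the samples
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Orthogonality of probabilists' Hermite polynomials under the standard
# normal measure gives the output statistics directly from coefficients:
# E[y] ~ c_0 and Var[y] ~ sum_k k! * c_k^2.
mean_pce = coef[0]
var_pce = sum(math.factorial(k) * coef[k] ** 2 for k in range(1, degree + 1))
```

Once the coefficients are known, mean, variance, and sensitivity indices come for free from the expansion, which is what makes PCE attractive for propagating input variability through expensive electromagnetic and thermal simulations.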
Zhang et al
Objective: Thermoacoustic tomography (TAT) is a promising imaging technique used for early cancer diagnosis, tumor therapy, animal studies and brain imaging. Although it is widely known that the TAT frequency response depends on the pulse width of the source and the size of the object, a thorough understanding of the quantitative frequency modulation in TAT, and of the mechanism governing the shift of the thermoacoustic pressure spectrum towards lower frequencies relative to the excitation source, is still lacking. This study aims to explain why the acoustic pressure spectrum and the final voltage signals shift towards lower frequencies in TAT. Approach: We employed a linear time-invariant (LTI) model in which the applied current thermoacoustic imaging (ACTAI) process is divided into a thermoacoustic stage and an acoustoelectric stage, characterized by the thermoacoustic transfer function and the transducer transfer function, respectively. We confirmed the validity of the model through both simulations and experiments. Main results: Simulation results indicate that the thermoacoustic transfer function behaves as a low-pass filter. This inherent low-pass behavior shifts the acoustic pressure spectrum towards lower frequencies. Experiments further confirm this behavior, demonstrating that the final electrical voltage also shifts towards lower frequencies. Notably, with the proposed model, the main frequency bands of the synthesized and measured final voltage spectra are remarkably consistent. Significance: The proposed model explains how the thermoacoustic transfer function causes the shifts towards lower frequencies in both the acoustic pressure spectrum and the final voltage spectrum in TAT. These insights support optimization of TAT systems in the frequency domain, including filter design and transducer selection. Furthermore, this finding may be significant for medical applications, particularly in the context of cancer diagnosis.
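The two-stage low-pass picture above can be illustrated numerically. The sketch below is a toy version of the idea, not the paper's model: it assumes a Gaussian excitation pulse and a first-order low-pass magnitude response standing in for the thermoacoustic transfer function, and shows that the filtered pressure spectrum has a lower spectral centroid than the source spectrum.

```python
import numpy as np

# Illustrative two-stage LTI view of TAT (assumed pulse and cutoff, not the
# paper's exact transfer functions): a broadband excitation spectrum is
# multiplied by a low-pass thermoacoustic transfer function, which pulls the
# acoustic pressure spectrum towards lower frequencies.
fs = 100e6                       # sampling rate, Hz (assumed)
t = np.arange(0, 10e-6, 1 / fs)
pulse = np.exp(-((t - 1e-6) ** 2) / (2 * (0.1e-6) ** 2))  # 0.1 us Gaussian pulse

f = np.fft.rfftfreq(t.size, 1 / fs)
source_spec = np.abs(np.fft.rfft(pulse))

fc = 2e6                         # assumed low-pass cutoff of the thermoacoustic stage
h_ta = 1 / np.sqrt(1 + (f / fc) ** 2)    # first-order low-pass magnitude
pressure_spec = source_spec * h_ta

def centroid(spec, f):
    """Spectral centroid: amplitude-weighted mean frequency."""
    return np.sum(f * spec) / np.sum(spec)

print(centroid(source_spec, f) > centroid(pressure_spec, f))  # True: shift to lower f
```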
Ma et al
Objective. In-beam Positron Emission Tomography (PET) is a promising technology for real-time monitoring of proton therapy. Random coincidences between prompt radiation events and positron annihilation photon pairs can deteriorate imaging quality during beam-on operation. This study aimed to improve PET image quality by filtering out the prompt radiation events. Approach. We investigated a prompt radiation event filtering method based on the accelerator radio frequency (RF) phase and assessed its performance using various prompt gamma energy thresholds. An in-beam PET prototype was used to acquire data while a 70 MeV proton beam irradiated a water phantom and a mouse. A signal-to-background ratio indicator was used to evaluate the quality of the reconstructed PET images. Main results. The choice of prompt gamma energy threshold affects the quality of the reconstructed image. Using the optimal energy threshold of 580 keV improved the signal-to-background ratio by factors of 1.6 for the water phantom experiment and 2.0 for the mouse experiment, compared to reconstruction without background removal. Significance. Our results show that using this optimal threshold can reduce the prompt radiation events, enhancing the signal-to-background ratio of the reconstructed image. This advancement contributes to more accurate real-time range verification in subsequent steps.
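The RF-phase filtering idea can be sketched with synthetic data. Everything below is an assumption for illustration (the RF period, the phase gate and the event model are invented, not taken from the prototype): annihilation events arrive uniformly in RF phase, prompt-radiation background is locked to the beam pulse, and rejecting events inside the prompt phase window raises the signal-to-background ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
rf_period = 100.0                        # ns, assumed accelerator RF period
n_signal, n_bg = 7000, 3000

# Annihilation pairs: uniform in time, hence uniform in RF phase.
t_signal = rng.random(n_signal) * 1e6
# Prompt-radiation events: locked to the beam pulse within each RF cycle.
cycle = np.floor(rng.random(n_bg) * 1e4) * rf_period
t_bg = cycle + rng.normal(10.0, 2.0, n_bg)   # burst ~10 ns into each cycle

def rf_phase(t):
    """Phase of each event within the RF cycle, in [0, 1)."""
    return (t % rf_period) / rf_period

events = np.concatenate([t_signal, t_bg])
labels = np.concatenate([np.ones(n_signal, bool), np.zeros(n_bg, bool)])

gate = (0.05, 0.15)                      # assumed phase window of the prompt burst
phase = rf_phase(events)
kept = ~((gate[0] < phase) & (phase < gate[1]))   # reject prompt-like events

sbr_before = labels.sum() / (~labels).sum()
sbr_after = labels[kept].sum() / max((~labels)[kept].sum(), 1)
print(sbr_before, sbr_after)             # SBR improves after phase gating
```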
Trending on Altmetric
Rui Xu et al 2024 Phys. Med. Biol.
Focused ultrasound spinal cord neuromodulation studies have demonstrated the capacity for neuromodulation of the spinal cord in small animals. The safe and efficacious translation of these approaches to human scale requires an understanding of ultrasound propagation and heat deposition within the human spine. To address this, combined acoustic and thermal modelling was used to assess the pressure and heat distributions produced by a 500 kHz source focused to the C5/C6 level of the cervical spine via two approaches: (a) the posterior acoustic window between the vertebral posterior arches, or (b) the lateral intervertebral foramen from which the C6 spinal nerve exits. Pulse trains of 150 pulses of 0.1 s duration, with a pulse repetition frequency of 0.33 Hz and a free-field spatial-peak pulse-averaged intensity of 10 W/cm², were simulated for the CT volumes of four subjects and for ±10 mm translational and ±10° rotational source positioning errors. Target pressures ranged between 20% and 70% of free-field spatial peak pressures with the posterior approach, and between 20% and 100% with the lateral approach. When the source was optimally positioned with the posterior approach, peak spine heating values were below 1°C, but source mis-positioning resulted in bone heating of up to 4°C. Heating with the lateral approach did not exceed 2°C within the mis-positioning range. There were substantial inter-subject differences in target pressures and peak heating values. Target pressure varied three- to four-fold between subjects, depending on approach, while peak heating varied approximately two-fold between subjects. This results in a nearly ten-fold range between subjects in the target pressure achieved per degree of maximum heating. This study highlights the importance of developing trans-spine ultrasound simulation software for the assurance of subject-specific safety and efficacy of focused ultrasound spinal cord therapies.
Oriano Bottauscio et al 2024 Phys. Med. Biol.
Objective. Numerical simulations are widely used to estimate dosimetric quantities, e.g. the specific absorption rate (SAR) and temperature increase, in tissues to assess patient exposure to the radiofrequency field generated during magnetic resonance imaging (MRI). Simulations rely on reference anatomical human models and tabulated data of the electromagnetic and thermal properties of biological tissues. However, concerns may arise about the applicability of the computed results to any phenotype, introducing a significant degree of freedom in the simulation input data. In addition, simulation input data can be affected by uncertainty in the relative positioning of the anatomical model with respect to the radiofrequency coil. The objective of this work is to estimate the variability of SAR and temperature increase at 3 T head MRI due to different sources of variability in the input data, with the final aim of associating a global uncertainty with the dosimetric outcomes. Approach. A stochastic approach based on arbitrary Polynomial Chaos Expansion is used to evaluate the effects of several input variabilities (anatomy, tissue properties, body position) on dosimetric outputs, referring to head imaging with a 3 T MRI scanner. Main results. Head anatomy is found to be the prevailing source of variability for the considered dosimetric quantities, rather than the variability due to tissue properties and head positioning. From knowledge of the variability of the dosimetric quantities, an uncertainty can be attributed to the results obtained using a generic anatomical head model when SAR and temperature increase values are compared with safety exposure limits. Significance. This work associates a global uncertainty with SAR and temperature increase predictions, to be considered when comparing the numerically evaluated dosimetric quantities with reference exposure limits. The adopted methodology can be extended to other exposure scenarios for MRI safety purposes.
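A full arbitrary Polynomial Chaos Expansion is beyond a short example, but the underlying surrogate-plus-propagation idea can be sketched with an ordinary polynomial fit. The toy response and input distribution below are invented for illustration; the point is only that a cheap surrogate lets one propagate input variability into an output mean and uncertainty.

```python
import numpy as np

# Minimal surrogate-modelling sketch in the spirit of (but far simpler than)
# arbitrary Polynomial Chaos Expansion: fit a polynomial response surface to a
# hypothetical "SAR vs. tissue conductivity" relationship, then propagate the
# assumed input distribution through the surrogate.
rng = np.random.default_rng(1)

def toy_sar(sigma):
    """Hypothetical SAR response to tissue conductivity (illustrative only)."""
    return 2.0 + 1.5 * sigma + 0.4 * sigma**2

# Training designs drawn from the assumed input distribution.
sigma_train = rng.normal(0.5, 0.1, 50)
y_train = toy_sar(sigma_train)

coeffs = np.polyfit(sigma_train, y_train, deg=2)   # quadratic surrogate

# Cheap uncertainty propagation: evaluate the surrogate on many input samples.
sigma_mc = rng.normal(0.5, 0.1, 100_000)
y_mc = np.polyval(coeffs, sigma_mc)
print(y_mc.mean(), y_mc.std())   # surrogate-based mean and spread of SAR
```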
Simon Waid et al 2024 Phys. Med. Biol.
One challenge on the path to delivering FLASH-compatible beams with a synchrotron is achieving accurate dose control at the required ultra-high dose rates. We propose the use of pulsed RFKO extraction instead of continuous beam delivery as a way to control the dose delivered per voxel. In a first feasibility test, in-pulse dose rates of up to 600 Gy/s were observed, while the granularity at which the dose was delivered is expected to be well below 0.5 Gy.
Eve S Shalom et al 2024 Phys. Med. Biol. 69 115034
Objective. Standard models for perfusion quantification in DCE-MRI produce a bias by treating voxels as isolated systems. Spatiotemporal models can remove this bias, but it is unknown whether they are fundamentally identifiable. The aim of this study is to investigate this question in silico using one-dimensional toy systems with a one-compartment blood flow model and a two-compartment perfusion model. Approach. For each of the two models, identifiability is explored theoretically and in silico for three systems. Concentrations over space and time are simulated by forward propagation. Different levels of noise and temporal undersampling are added to investigate sensitivity to measurement error. Model parameters are fitted using a standard gradient descent algorithm, applied iteratively with a stepwise increasing time window. Model fitting is repeated with different initial values to probe the uniqueness of the solution. Reconstruction accuracy is quantified for each parameter by comparison to the ground truth. Main results. Theoretical analysis shows that flows and volume fractions are only identifiable up to a constant, and that this degeneracy can be removed by a proper choice of parameters. Simulations show that in all cases the tissue concentrations can be reconstructed accurately. The one-compartment model shows accurate reconstruction of blood velocities and arterial input functions, independent of the initial values and robust to measurement error. The two-compartment perfusion model was not fully identifiable: it showed good reconstruction of arterial velocities and input functions, but multiple valid solutions for the perfusion parameters and venous velocities, and a strong sensitivity to measurement error in these parameters. Significance. These results support the use of one-compartment spatiotemporal flow models, but two-compartment perfusion models were not sufficiently identifiable. Future studies should investigate whether this degeneracy is resolved in more realistic 2D and 3D systems, by adding physically justified constraints, or by optimizing experimental parameters such as injection duration or temporal resolution.
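Forward propagation for the one-compartment flow case reduces to advecting an arterial input concentration along a voxel chain. The sketch below uses a first-order upwind scheme with invented parameters (velocity, input function, grid), not the study's implementation, and shows the expected downstream delay of the bolus peak.

```python
import numpy as np

# Toy forward propagation for a 1D one-compartment flow model: an arterial
# input concentration advected through a chain of voxels at constant velocity
# (first-order upwind scheme; all parameters are illustrative assumptions).
nx, nt = 50, 400
dx, dt = 1.0, 0.1           # mm, s
v = 2.0                     # mm/s blood velocity (assumed)
assert v * dt / dx <= 1.0   # CFL condition for stability of the upwind scheme

t = np.arange(nt) * dt
aif = np.exp(-((t - 10) ** 2) / 8.0)     # assumed arterial input function

c = np.zeros((nt, nx))
for k in range(1, nt):
    c[k, 0] = aif[k]                      # inflow boundary: arterial input
    c[k, 1:] = c[k-1, 1:] - v * dt / dx * (c[k-1, 1:] - c[k-1, :-1])

# The bolus peak arrives later at downstream voxels, as expected for pure flow.
print(np.argmax(c[:, 5]) < np.argmax(c[:, 40]))   # True
```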
Andrew Chacon et al 2024 Phys. Med. Biol.
Purpose: To compare the accuracy with which different hadronic inelastic physics models across ten Geant4 Monte Carlo simulation toolkit versions can predict positron-emitting fragments produced along the beam path during carbon and oxygen ion therapy.

Materials and Methods: Phantoms of polyethylene, gelatin or poly(methyl methacrylate) were irradiated with monoenergetic carbon and oxygen ion beams. Post-irradiation, 4D PET images were acquired, and the parent 11C, 10C and 15O radionuclide contributions in each voxel were determined from the extracted time activity curves. Next, the experimental configurations were simulated in Geant4 Monte Carlo versions 10.0 to 11.1 with three different fragmentation models (binary ion cascade (BIC), quantum molecular dynamics (QMD) and the Liege intranuclear cascade (INCL++)), for a total of 30 model-version combinations. Total positron annihilation and parent isotope production yields predicted by each simulation were compared with the experimental values using the normalised mean squared error and the Pearson cross-correlation coefficient. Finally, we compared the depth of maximum positron annihilation yield and the distal point at which positron yield decreases to 50% of its peak between each model and the experimental results.

Results: Performance varied considerably across versions and models, with no one version/model combination providing the best prediction of all positron-emitting fragments in all evaluated target materials and irradiation conditions. BIC in Geant4 10.2 provided the best overall agreement with experimental results in the largest number of test cases. QMD consistently provided the best estimates of both the depth of peak positron yield (versions 10.4 and 10.6) and the distal 50%-of-peak point (version 10.2), while BIC also performed well and INCL generally performed the worst across most Geant4 versions.

Conclusions: The best spatial prediction of annihilation yield and positron-emitting fragment production during carbon and oxygen ion therapy was obtained with Geant4 version 10.2.p03 using either BIC or QMD. These version/model combinations are recommended for future heavy ion therapy research.
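The agreement metrics used in this comparison are standard and easy to state in code. The sketch below uses synthetic depth-yield profiles and one common convention for the normalised mean squared error, which may differ in detail from the paper's definition; it also includes a simple distal 50%-of-peak estimator of the kind described.

```python
import numpy as np

# Comparison metrics for simulated vs. measured depth-yield profiles
# (synthetic profiles; NMSE normalisation is one common convention).
def nmse(sim, meas):
    """Mean squared error normalised by the measured signal power."""
    return np.mean((sim - meas) ** 2) / np.mean(meas ** 2)

def pearson(sim, meas):
    """Pearson cross-correlation coefficient."""
    return np.corrcoef(sim, meas)[0, 1]

def distal_50(depth, y):
    """Deepest point beyond the peak where yield falls to 50% of maximum."""
    peak = y.argmax()
    below = np.where(y[peak:] <= 0.5 * y.max())[0]
    return depth[peak + below[0]]

depth = np.linspace(0, 100, 200)                        # mm
measured = np.exp(-((depth - 60) ** 2) / 50.0)          # toy positron-yield peak
simulated = 0.95 * np.exp(-((depth - 61) ** 2) / 50.0)  # slightly shifted model

print(nmse(simulated, measured), pearson(simulated, measured))
print(distal_50(depth, measured))
```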
Andrew Bertinetti et al 2024 Phys. Med. Biol.
OBJECTIVE: In this work, we present and evaluate a technique for performing interface measurements of beta particle-emitting radiopharmaceutical therapy agents in solution.
APPROACH: Unlaminated EBT3 film was calibrated for absorbed dose to water using a NIST-matched x-ray beam. Custom acrylic source phantoms were constructed and placed above interfaces composed of bone, lung, and water equivalent materials. The film was placed perpendicular to these interfaces, and absorbed dose to water measurements using solutions of 90Y and 177Lu were performed and compared to Monte Carlo absorbed dose to water estimates simulated with EGSnrc. Surface and depth dose profile measurements were also performed.
MAIN RESULTS: Surface absorbed dose to water measurements agreed with predicted results within 3.6% for 177Lu and 2.2% for 90Y. The agreement between predicted and measured absorbed dose to water was better for 90Y than for 177Lu for depth dose and interface profiles. In general, agreement within k = 1 uncertainty bounds was observed for both radionuclides and all interfaces. An exception was the bone to water interface for 177Lu, due to the increased sensitivity of the measurements to imperfections in the material surfaces.
SIGNIFICANCE: This work demonstrates the feasibility of using radiochromic film to perform absorbed dose to water measurements of beta-emitting radiopharmaceutical therapy agents across material interfaces.
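The k = 1 agreement criterion mentioned above amounts to checking whether measurement and prediction differ by less than their combined standard uncertainty. The numbers below are illustrative, not the paper's data.

```python
import numpy as np

# Sketch of a k = 1 agreement test: measurement and Monte Carlo prediction
# agree if their difference is within the combined standard uncertainty
# (uncertainties added in quadrature; all values are hypothetical).
def agree_k1(measured, u_meas, predicted, u_pred):
    u_combined = np.hypot(u_meas, u_pred)   # quadrature sum, coverage factor k = 1
    return abs(measured - predicted) <= u_combined

# Hypothetical surface-dose comparison for a 90Y solution:
print(agree_k1(measured=1.022, u_meas=0.03, predicted=1.000, u_pred=0.02))  # True
```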
David Stocker et al 2024 Phys. Med. Biol. 69 115026
Optimizing complex imaging procedures within computed tomography (CT), considering both dose and image quality, presents significant challenges amidst rapid technological advancements and the adoption of machine learning (ML) methods. A crucial metric in this context is the Difference-Detailed Curve (DDC), which relies on human observer studies. However, these studies are labor-intensive and prone to both inter- and intra-observer variability. To tackle these issues, an ML-based model observer utilizing the U-Net architecture and a Bayesian methodology is proposed. In order to train a model observer unaffected by the spatial arrangement of low-contrast objects, the image preprocessing incorporates a Gaussian Process-based noise model. Additionally, gradient-weighted class activation mapping is utilized to gain insights into the model observer's decision-making process. By training on data from a diverse group of observers, well-calibrated probabilistic predictions that quantify observer variability are achieved. Leveraging the principles of Beta regression, the Bayesian methodology is used to derive a model observer performance metric, effectively gauging the model observer's strength in terms of an 'effective number of observers'. Ultimately, this framework enables prediction of the DDC distribution by applying thresholds to the inferred probabilities. (Part of this work has been presented at: Stocker D, Sommer C, Gueng S, Stäuble J, Özden I, Griessinger J, Weyland M S, Lutters G, Scheidegger S (2023). Probabilistic U-Net Model Observer for the DDC Method in CT Scan Protocol Optimization. The 56th SSRMP Annual Meeting 2023, November 30 - December 1, 2023, Luzern, Switzerland.)
Hossein Jafarzadeh et al 2024 Phys. Med. Biol. 69 115024
Objective. Treatment plan optimization in high dose rate brachytherapy often requires manual fine-tuning of penalty weights for each objective, which can be time-consuming and dependent on the planner's experience. To automate this process, this study used a multi-criteria approach called multi-objective Bayesian optimization with q-noisy expected hypervolume improvement as its acquisition function (MOBO-qNEHVI). Approach. The treatment plans of 13 prostate cancer patients were retrospectively imported into a research treatment planning system, RapidBrachyMTPS, where fast mixed integer optimization (FMIO) performs dwell time optimization given a set of penalty weights to deliver 15 Gy to the target volume. MOBO-qNEHVI was used to find patient-specific Pareto optimal penalty weight vectors that yield clinically acceptable dose volume histogram metrics. The relationship between the number of MOBO-qNEHVI iterations and the number of clinically acceptable plans per patient (acceptance rate) was investigated. The performance time was obtained for various parameter configurations. Main results. MOBO-qNEHVI found clinically acceptable treatment plans for all patients. As the number of MOBO-qNEHVI iterations increased, the acceptance rate grew logarithmically while the performance time grew exponentially. Fixing the penalty weight of the tumour volume to its maximum value, adding the target dose as a parameter, initiating MOBO-qNEHVI with 25 parallel samplings of FMIO, and running 6 MOBO-qNEHVI iterations found solutions that delivered 15 Gy to the hottest 95% of the clinical target volume while respecting the dose constraints to the organs at risk. The average acceptance rate for each patient was 89.74% ± 8.11%, and the performance time was 66.6 ± 12.6 s. The initiation took 22.47 ± 7.57 s, and each iteration took 7.35 ± 2.45 s to find one Pareto solution. Significance. MOBO-qNEHVI combined with FMIO can automatically explore the trade-offs between treatment plan objectives in a patient-specific manner within a minute. This approach can reduce the dependency of plan quality on the planner's experience and reduce dose to the organs at risk.
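Behind MOBO-qNEHVI sits the basic multi-criteria notion of Pareto optimality among candidate plans. The sketch below is only that core idea with toy objective values (standing in for, say, two organ-at-risk dose metrics to be minimised), not the Bayesian optimization loop itself.

```python
import numpy as np

# Identify the Pareto-optimal (non-dominated) set among candidate plans
# scored on two objectives to be minimised. Toy values, for illustration only.
def pareto_mask(points):
    """Boolean mask of non-dominated rows (all objectives minimised)."""
    n = points.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # A point dominates row i if it is <= in every objective and < in at least one.
        dominated = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

plans = np.array([[1.0, 5.0],
                  [2.0, 3.0],
                  [3.0, 3.5],   # dominated by [2.0, 3.0]
                  [4.0, 1.0]])
print(pareto_mask(plans))   # -> [ True  True False  True]
```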
Mingwei Wen et al 2024 Phys. Med. Biol. 69 115023
Objective. Automated biopsy needle segmentation in 3D ultrasound images can be used for biopsy navigation, but it is quite challenging due to the low ultrasound image resolution and interference with an appearance similar to the needle. For 3D medical image segmentation, deep learning networks such as convolutional neural networks and transformers have been investigated. However, these segmentation methods require large amounts of labeled training data, have difficulty meeting real-time segmentation requirements, and involve high memory consumption. Approach. In this paper, we propose a temporal information-based semi-supervised training framework for fast and accurate needle segmentation. First, a novel circle transformer module based on static and dynamic features is designed after the encoders to extract and fuse temporal information. Then, consistency constraints between the outputs before and after combining temporal information are proposed to provide semi-supervision for the unlabeled volumes. Finally, the model is trained using a loss function that combines cross-entropy and Dice similarity coefficient (DSC)-based segmentation losses with a mean square error-based consistency loss. The trained model, with a single ultrasound volume as input, is applied to needle segmentation in ultrasound volumes. Main results. Experimental results on three needle ultrasound datasets acquired during beagle biopsies show that our approach is superior to the most competitive mainstream temporal segmentation model and semi-supervised method, providing a higher DSC (77.1% versus 76.5%) and smaller needle tip position (1.28 mm versus 1.87 mm) and length (1.78 mm versus 2.19 mm) errors on the kidney dataset, as well as a higher DSC (78.5% versus 76.9%) and smaller needle tip position (0.86 mm versus 1.12 mm) and length (1.01 mm versus 1.26 mm) errors on the prostate dataset. Significance. The proposed method can significantly enhance needle segmentation accuracy by training with sequential images at no additional labeling cost. This enhancement may further improve the effectiveness of biopsy navigation systems.
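The loss described above combines a supervised segmentation term with a temporal-consistency term. The sketch below writes the components for flat probability arrays, with assumed weights; the network and training loop are omitted.

```python
import numpy as np

# Scalar sketch of the combined loss: cross-entropy plus Dice for the
# supervised term, mean squared error between outputs with and without
# temporal information for the semi-supervised consistency term.
# The weights w_dice and w_cons are assumptions, not the paper's values.
def dice(pred, target, eps=1e-6):
    inter = np.sum(pred * target)
    return (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy(prob, target, eps=1e-7):
    prob = np.clip(prob, eps, 1 - eps)
    return -np.mean(target * np.log(prob) + (1 - target) * np.log(1 - prob))

def combined_loss(prob, target, prob_temporal, w_dice=1.0, w_cons=0.5):
    seg = cross_entropy(prob, target) + w_dice * (1 - dice(prob, target))
    consistency = np.mean((prob - prob_temporal) ** 2)   # semi-supervised term
    return seg + w_cons * consistency

target = np.array([0, 0, 1, 1, 1, 0], float)             # toy needle mask
prob = np.array([0.1, 0.2, 0.9, 0.8, 0.7, 0.1])          # output, single volume
prob_t = np.array([0.1, 0.1, 0.8, 0.9, 0.8, 0.2])        # output with temporal info
print(combined_loss(prob, target, prob_t))
```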
Kuan Zhang et al 2024 Phys. Med. Biol. 69 115022
Objective. Conventional computed tomography (CT) imaging does not provide quantitative information on local thermal changes during percutaneous ablative therapy of cancerous and benign tumors, aside from a few qualitative, visual cues. In this study, we investigated changes in CT signal across a wide range of temperatures, and across two physical phases, for each of two different tissue-mimicking materials. Approach. A series of experiments was conducted using an anthropomorphic phantom filled with water-based gel and olive oil, respectively. Multiple clinically used ablation devices were applied to locally cool or heat the phantom material and were arranged in a configuration that produced thermal changes in regions with inconsequential amounts of metal artifact. Eight fiber optic thermal sensors were positioned in the region absent of metal artifact and were used to record local temperatures throughout the experiments. A spectral CT scanner was used to periodically acquire and generate electron density weighted images. Average electron density weighted values in 1 mm3 volumes of interest near the temperature sensors were computed, and these data were then used to calculate thermal volumetric expansion coefficients for each material and phase. Main results. The experimentally determined expansion coefficients matched existing published values and their variations with temperature well, differing by at most 5% of the known value. As a proof of concept, a CT-generated temperature map was produced at a heating time point of the water-based gel phantom, demonstrating the capability to map changes in electron density weighted signal to temperature. Significance. This study has demonstrated that spectral CT can be used to estimate local temperature changes for different materials and phases across temperature ranges produced by thermal ablations.
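The expansion-coefficient estimate rests on an approximately linear relation, ED(T) ≈ ED0 * (1 - beta * (T - T0)), so beta can be recovered from a linear fit of electron density against temperature. The data and ground-truth beta below are synthetic, chosen only to demonstrate the fit.

```python
import numpy as np

# Recover a thermal volumetric expansion coefficient from electron density
# weighted CT values vs. sensor temperatures (synthetic data; beta_true is an
# assumed ground truth, not a measurement from the study).
rng = np.random.default_rng(2)
beta_true = 2.1e-4          # 1/degC, illustrative expansion coefficient
T0, ED0 = 20.0, 1000.0      # reference temperature and ED-weighted value

T = np.linspace(5, 80, 16)  # sensor temperatures, degC
ED = ED0 * (1 - beta_true * (T - T0)) + rng.normal(0, 0.01, T.size)

# Linear model ED = intercept + slope * (T - T0), with beta = -slope/intercept.
slope, intercept = np.polyfit(T - T0, ED, 1)
beta_est = -slope / intercept
print(beta_est)             # close to beta_true
```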