BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: 24th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering

Characterizing Water Diffusion In Fixed Baboon Brain
In the Biomedical Magnetic Resonance Laboratory in St. Louis, Missouri, there is an ongoing project to characterize water diffusion in fixed baboon brain using diffusion‐weighted magnetic resonance imaging as a means of monitoring development throughout gestation. Magnetic resonance images can be made sensitive to diffusion by applying magnetic field gradients during the pulse sequence. Results from the analysis of diffusion‐weighted magnetic resonance images using a full diffusion tensor model do not fit the data well: the estimated standard deviation of the noise exhibits structure corresponding to known baboon brain anatomy. However, the diffusion tensor plus a constant model overfits the data: the residuals in the brain are smaller than in regions where there is no signal. Consequently, the full diffusion tensor plus a constant model has too many parameters and needs to be simplified. This model can be simplified by imposing axial symmetry on the diffusion tensor. There are three axially symmetric diffusion tensor models (prolate, oblate, and isotropic) and two other models (no signal and full diffusion tensor) that could characterize the diffusion‐weighted images. Each of these five models may or may not have a constant offset, giving 10 models in total that potentially describe the diffusion process. In this paper the Bayesian calculations needed to select which of the 10 models best characterizes the diffusion data are presented. The various outputs from the analysis are illustrated using one of our baboon brain data sets.
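The model-selection step described above can be illustrated with a toy version: comparing two nested signal models by an approximate log evidence. The sketch below uses the BIC approximation and entirely synthetic data; the model names, b-value range, and noise level are illustrative assumptions, and the paper itself computes full Bayesian evidences for its ten diffusion models.

```python
import numpy as np

def log_evidence_bic(y, X):
    """Approximate log model evidence via the BIC (Schwarz) criterion
    for a linear-Gaussian model y = X @ beta + noise."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return log_lik - 0.5 * k * np.log(n)  # each extra parameter costs 0.5*log(n)

rng = np.random.default_rng(0)
b = np.linspace(0.0, 1.0, 200)                      # diffusion weightings (b-values)
log_signal = 0.1 - 2.0 * b + 0.05 * rng.standard_normal(b.size)

# Candidate models: "constant only" vs "constant + diffusion decay"
X_const = np.ones((b.size, 1))
X_decay = np.column_stack([np.ones_like(b), -b])

ev_const = log_evidence_bic(log_signal, X_const)
ev_decay = log_evidence_bic(log_signal, X_decay)
best = "constant + decay" if ev_decay > ev_const else "constant only"
```

Because the synthetic data really do contain a decay term far above the noise level, the approximate evidence strongly favors the richer model here; with a nearly flat signal the extra-parameter penalty would instead favor the constant model.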

Bayesian Wavelet Domain Segmentation
We have recently demonstrated that fully unsupervised segmentation of still images and 2D+T sequences is possible by Bayesian methods, on the basis of a Hidden Markov Model (HMM) and a Potts‐Markov Random Field (PMRF), in the pixel domain. The large number of iterations needed to reach convergence when the number of segments, or class labels, is large makes the algorithm rather slow for processing large quantities of data, as in image sequences. We have more recently worked out a new version of this algorithm that operates in the wavelet transform domain rather than in the direct domain. Doing so, we take advantage of the local decay property, or “peaky” distribution, of the wavelet coefficients in an orthogonal decomposition. This decomposition is a fast pyramidal O(N^{2}) decomposition, so the Bayesian segmentation is performed only once on the first coarse image and then on all sub‐bands up to the highest resolution level. Moreover, we have improved our Potts‐Markov model to take into account the three main orientations of the wavelet band‐pass, or so‐called detail, sub‐bands. The main advantage of such an algorithm, in comparison with direct‐domain Bayesian segmentation, is that the high‐frequency coefficients, i.e. the coefficients of all sub‐bands except the coarsest, are segmented into only two classes: one for the weak‐energy coefficients and one for the few, most representative, high‐energy coefficients, thus speeding up the convergence of the segmentation.
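As a minimal illustration of why the detail sub-bands are cheap to segment, the sketch below computes one level of a 2D Haar decomposition: for a piecewise-constant image the oriented detail coefficients vanish everywhere except along edges, so a two-class (weak/strong energy) labeling is natural. The test image and the Haar filter are illustrative assumptions, not the paper's data or wavelet.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar wavelet transform: returns the coarse
    approximation and the three oriented detail sub-bands (H, V, D)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    coarse     = (a + b + c + d) / 4.0
    horizontal = (a + b - c - d) / 4.0   # responds to horizontal edges
    vertical   = (a - b + c - d) / 4.0   # responds to vertical edges
    diagonal   = (a - b - c + d) / 4.0
    return coarse, horizontal, vertical, diagonal

# A piecewise-constant image with a single vertical edge: only V has
# (a few) strong coefficients, all other detail coefficients are zero.
img = np.zeros((8, 8))
img[:, 3:] = 1.0
coarse, H, V, D = haar2d_level(img)
```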

Multigrid Priors for fMRI time series analysis
We deal with the problem of constructing priors for data analysis in order to assess brain activity in functional Magnetic Resonance Imaging (fMRI). Our method is an example of how a prior distribution can incorporate what could be termed conventional prior information, as well as other information, such as that stemming from knowledge of what constitutes a reasonable likelihood.
Brain activity during a cognitive, sensorial or motor task presents a certain level of localization, with spatial correlations at the different scales involved in the problem. This suggests a multiscale iterative procedure to construct the prior. Grids of different scales are constructed over the image. Spatially coarse-grained data variables are defined for each scale, until a single voxel time series is obtained. The process consists of iterating back to finer scales, determining for each coarse scale a set of posterior probabilities. The posterior on a coarse scale is used as the prior for activity at the next finer scale. We have applied our method both to real and to synthetic data from block experiments. A linear model and a standard hemodynamic response function are used to construct the likelihood. ROC curves are used to compare the results with other Bayesian and orthodox methods. By systematically deleting images in each period or by corrupting the signal with noise, we can study the robustness of the method under information loss.
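The coarse-to-fine propagation can be sketched as follows, assuming (purely for illustration) a per-voxel log-likelihood-ratio map and a simple 2x2 block-averaging pyramid; each coarse posterior, upsampled, serves as the prior for activity at the next finer scale. None of these modeling choices are the paper's actual likelihood or grids.

```python
import numpy as np

def coarsen(x):
    """One step of the multigrid pyramid: average 2x2 blocks."""
    return 0.25 * (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2])

def bayes_update(prior, llr):
    """Per-voxel posterior P(active | data) from a prior map and a
    log-likelihood-ratio map llr = log p(data | active) / p(data | inactive)."""
    odds = prior / (1.0 - prior) * np.exp(llr)
    return odds / (1.0 + odds)

rng = np.random.default_rng(1)
llr = rng.normal(0.0, 1.0, size=(8, 8))
llr[2:4, 2:4] += 3.0                      # a genuinely "active" patch

# Coarsen the evidence down to a single value ...
pyramid = [llr]
while pyramid[-1].shape[0] > 1:
    pyramid.append(coarsen(pyramid[-1]))

# ... then iterate back to finer scales: each coarse posterior,
# upsampled, becomes the prior at the next finer scale.
posterior = np.full((1, 1), 0.5)
for level in reversed(pyramid):
    factor = level.shape[0] // posterior.shape[0]
    prior = np.kron(posterior, np.ones((factor, factor)))
    posterior = bayes_update(prior, level)
```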

Model Fitting and Model Evidence for Multiscale Image Texture Analysis
This paper gives an overview of the two levels of Bayesian inference, model fitting and model selection, and shows how they can be used for image texture analysis. The applied models are the Gauss‐Markov and Gibbs auto‐binomial Random Fields. In the second part, the article introduces a linear model for the image wavelet coefficients that gives a full description of the spatial, inter‐scale and inter‐band behavior of a multi‐resolution decomposed image. The model parameters, model variance and evidence are used to characterize the image texture.

Integrated Approaches in Fusion Data Analysis
The concept of integrated data analysis in nuclear fusion requires the linkage of data and physical information. Summarizing the key steps for the analysis of transport in the core plasma, the benefits of probabilistic modelling of single diagnostics are discussed. Concepts for full diagnostic models, consisting of several diagnostic modules linked through mapping procedures, are given as Bayesian graphical models. Coupling to theory codes is demonstrated by the error estimation of a neoclassical transport analysis, allowing quantitative physical model validation. As an inverted use of the integrated data analysis approach, goals for the design of diagnostics and sets of diagnostics (meta‐diagnostics) are outlined.

Bayesian Data analysis for ERDA measurements
Elastic recoil detection analysis (ERDA) is an important ion beam analysis (IBA) method for the analysis of thin films. It does, however, suffer from broadening of the energy spectra due to multiple and plural scattering and surface roughness, with loss of depth resolution as a result. We present a method based on Bayesian probability theory to improve the depth resolution, utilising a simulation code to model the ERDA measurement process. The method is demonstrated on a simulated measurement of a W_{ x }C_{ y }N_{1−x−y }/SiO_{2}/Si sample, for which multiple and plural scattering is a serious problem for the traditional data analysis methods used with ERDA, due to the heavy mass of tungsten.

Relative Entropy Credibility Theory
Consider a portfolio of personal motor insurance policies in which, for each policyholder in the portfolio, we want to assign a credibility factor at the end of each policy period that reflects the claim experience of the policyholder compared with the claim experience of the entire portfolio. In this paper we present the calculation of credibility factors based on the concept of relative entropy between the claim size distribution of the entire portfolio and the claim size distribution of the policyholder.
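A minimal numerical sketch of the idea, with an invented three-band claim-size distribution; the exponential mapping from relative entropy to a factor in (0, 1] is an illustrative choice for this sketch, not the paper's formula.

```python
import math

def relative_entropy(p, q):
    """KL divergence D(p || q) between two discrete claim-size distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def credibility_factor(policyholder, portfolio):
    """Map divergence to a factor in (0, 1]: identical claim experience
    gives 1; increasingly divergent experience gives a smaller factor."""
    return math.exp(-relative_entropy(policyholder, portfolio))

portfolio    = [0.70, 0.20, 0.10]   # P(claim falls in small/medium/large band)
policyholder = [0.55, 0.25, 0.20]   # this policyholder's claim experience
z = credibility_factor(policyholder, portfolio)
```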

Reconstruction of piecewise homogeneous images from partial knowledge of their Fourier Transform
The Fourier synthesis (FS) inverse problem consists of reconstructing a multi‐variable function from measured data corresponding to partial and uncertain knowledge of its Fourier Transform (FT). By partial knowledge we mean either partial support and/or knowledge of only the modulus, and by uncertain we mean both model uncertainty and noisy data. This inverse problem arises in many applications, such as optical imaging, radio astronomy, magnetic resonance imaging (MRI) and diffraction scattering (ultrasound or microwave imaging).
Most classical inversion methods are based on interpolation of the data followed by a fast inverse FT. But when the data do not fill the Fourier domain uniformly, or when the phase of the signal is lacking, as in optical interferometry, the results obtained by such methods are not satisfactory, because these inverse problems are ill‐posed. The Bayesian estimation approach, via an appropriate modeling of the unknown functions, offers the possibility of compensating for the lack of information in the data, thus giving satisfactory results.
In this paper we study the case where the observations are part of the FT modulus of objects composed of a small number of homogeneous materials. To model such objects we use a hierarchical Hidden Markov Model (HMM) and propose a Bayesian inversion method using appropriate Markov Chain Monte Carlo (MCMC) algorithms.

Bayesian Experimental Design — Studies for Fusion Diagnostics
The design of fusion diagnostics is essential for the physics program of future fusion devices. The goal is to maximize the information gain of a future experiment with respect to various constraints. A measure of information gain is the mutual information between the posterior and the prior distribution. The Kullback‐Leibler distance is used as a utility function to calculate the expected information gain marginalizing over data and parameter space. The expected utility function is maximized with respect to the design parameters of the experiment. The method will be applied to the design of a Thomson scattering experiment.
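For a Gaussian prior and a single Gaussian measurement, the expected Kullback-Leibler gain (the mutual information between parameter and data) has a closed form, which makes the design trade-off easy to sketch; the numbers below are illustrative and unrelated to any actual Thomson scattering design.

```python
import math

def kl_gaussian(mu1, s1, mu0, s0):
    """Kullback-Leibler distance D( N(mu1, s1^2) || N(mu0, s0^2) ):
    the information gained when the prior N(mu0, s0^2) becomes the
    posterior N(mu1, s1^2)."""
    return math.log(s0 / s1) + (s1**2 + (mu1 - mu0)**2) / (2 * s0**2) - 0.5

def expected_gain(prior_sigma, noise_sigma):
    """Expected (over data) information gain for a Gaussian prior and one
    Gaussian measurement; this is the mutual information
    0.5 * log(prior_var / posterior_var)."""
    post_var = 1.0 / (1.0 / prior_sigma**2 + 1.0 / noise_sigma**2)
    return 0.5 * math.log(prior_sigma**2 / post_var)

# "Designing the experiment" here amounts to choosing the noise level:
gains = {s: expected_gain(1.0, s) for s in (0.1, 0.5, 2.0)}
```

A lower-noise design yields a strictly larger expected information gain, which is the quantity maximized over design parameters in the abstract.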

Bayesian estimation methods in metrology
In metrology, the science of measurement, a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization published the Guide to the Expression of Uncertainty in Measurement in 1995, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons, in which an artefact is measured by a number of laboratories and their measurement results are compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons have been undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches, and the evaluation of key comparison data using Bayesian estimation methods.

Sound Decay Analysis in Acoustically Coupled Spaces Using a Re‐Parameterized Decay Model
The sound energy decay characteristics of coupled spaces are of increasing interest to architectural acousticians. Coupled spaces occur naturally in concert halls and theaters due to the presence, for example, of balconies, orchestra pits, and stage houses. In addition, many new halls have incorporated hard chambers coupled to the primary space to achieve flexible variation of the acoustics. Under certain conditions, sound energy in these coupled spaces decays with two or more distinct exponential rate constants. The presence of multiple decay rates can have a distinct impact on a hall’s perceived acoustical quality. In previous papers we described our initial work applying Bayesian inferential methods to the problem of determining multiple decay times in coupled spaces, using Schroeder energy decay functions derived from measured room impulse responses. In that work, relatively little prior information about the parameters of the decay model was incorporated in the inference calculations, in spite of the fact that much is known about both the possible range of the parameters and the relationships between them. In this paper we describe our recent efforts to incorporate this prior information into the inference calculations by re‐parameterizing the multi‐exponential decay model.
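The Schroeder energy decay function itself is simple to compute by backward integration of the squared impulse response; the sketch below applies it to a synthetic double-slope response loosely mimicking two coupled rooms. All constants are illustrative, and no Bayesian decay-time inference is attempted here.

```python
import math

def schroeder_decay_db(impulse_response):
    """Schroeder backward integration: energy decay curve in dB,
    E[k] = 10*log10( sum_{j>=k} h[j]^2 / total energy )."""
    tail = 0.0
    backward = []
    for h in reversed(impulse_response):
        tail += h * h
        backward.append(tail)
    backward.reverse()
    total = backward[0]
    return [10.0 * math.log10(b / total) for b in backward]

# A double-slope response, as in two coupled rooms: a fast early decay
# plus a weaker, slower decay fed in from the coupled volume.
n = 1000
h = [math.exp(-0.01 * k) + 0.05 * math.exp(-0.001 * k) for k in range(n)]
edc = schroeder_decay_db(h)
```

Plotted in dB, such a curve shows the characteristic "knee" where the slow decay takes over, which is what the multi-exponential model parameterizes.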

Calibrative Densities for Radiation Dosimeters
We derive calibrative densities p(d_{f}|D) for the unknown “future” dose d_{f} to a dosimeter for two sets of calibration data, D_{c}. The two sets of calibration data are for film and thermoluminescent dosimeters (TLDs). The p(d_{f}|D) describe the remaining uncertainty about d_{f} conditional on D = {D_{c}, y_{f}}. The calibration data D_{c} for the film dosimeters consist of pairs {d_{i}, Δy_{i}}, where d_{i} is a radiation dose and Δy_{i} is an optical density interval. In the case of the TLDs the pairs are {d_{i}, y_{i}}, where y_{i} is the measured light output. The calibration data together with the response y_{f} of a new dosimeter form the data D on which inference about the unknown dose d_{f} corresponding to y_{f} is based. The general form of the calibrative density for a controlled calibration experiment is p(d_{f}|D) ∝ p(d_{f}) ∫ p(y_{f}|d_{f}, θ) p(θ|D_{c}) dθ, where all densities are labeled by their arguments, θ stands for the model parameters and p(d_{f}) is the prior density for the unknown dose, which contains ancillary information about d_{f}. The integral represents the predictive density based on the posterior density p(θ|D_{c}). We present plots of calibrative densities for both calibration data sets and different y_{f}. Brief remarks about calibration for integer response variables Y conclude the paper.

Mixture Modeling for Background and Sources Separation in x‐ray Astronomical Images
A probabilistic technique for the joint estimation of background and sources in high‐energy astrophysics is described. Bayesian probability theory is applied to gain insight into the coexistence of background and sources through a probabilistic two‐component mixture model, which provides consistent uncertainties for background and sources. The present analysis is applied to ROSAT PSPC data (0.1–2.4 keV) in Survey Mode. A background map is modelled using a Thin‐Plate spline. Source probability maps are obtained for each pixel (45 arcsec) independently and for larger correlation lengths, revealing faint and extended sources. We demonstrate that the described probabilistic method improves the detection of faint, extended celestial sources compared to the Standard Analysis Software System (SASS) used for the production of the ROSAT All‐Sky Survey (RASS) catalogues.
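A stripped-down, single-pixel version of a two-component mixture can be sketched with Poisson likelihoods for "background only" versus "background plus source"; the rates and prior below are invented for illustration and do not reflect the ROSAT analysis, which models the background map and pixel correlations jointly.

```python
import math

def poisson_pmf(n, lam):
    """P(N = n) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**n / math.factorial(n)

def source_probability(counts, bg_rate, src_rate, prior_source=0.1):
    """Posterior probability that a pixel contains a source, under a
    two-component mixture: background-only vs background + source."""
    p_bg  = poisson_pmf(counts, bg_rate)
    p_src = poisson_pmf(counts, bg_rate + src_rate)
    num = prior_source * p_src
    return num / (num + (1.0 - prior_source) * p_bg)

# Background of 2 counts/pixel; a candidate source adds 5 counts on average.
probs = {n: source_probability(n, bg_rate=2.0, src_rate=5.0) for n in (1, 4, 9)}
```

The posterior source probability rises sharply with the observed counts, which is the per-pixel quantity mapped over the image in the abstract.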

Estimation of proton configurations from NOESY spectra
Nuclear Overhauser effect spectroscopy (NOESY) data contain information about the geometry of a system of magnetically interacting nuclear spins. They thus allow one to determine the positions of atoms in space, and even suffice to infer the entire structure of a biomolecule. Yet the interpretation of NOESY spectra is often still qualitative, mostly because of the complexity of the theories that describe the data. We outline a Bayesian algorithm that estimates the configuration of protons by analysing unassigned NOESY volumes, which may stem from spectra recorded at different mixing times. The method relies on the calculation of peak volumes with a relaxation matrix model and thereby incorporates spin‐diffusion effects. Proton coordinates and nuisance parameters, such as the scales and errors of the spectra, are estimated using Markov Chain Monte Carlo sampling.

Bayesian Learning on Graphs for Reasoning on Image Time‐Series
Satellite image time‐series (SITS) are multidimensional signals of high complexity. Their main characteristics are the spatio‐temporal patterns which describe the scene dynamics. The information contained in SITS is coded using Bayesian methods, resulting in a graph representation.
This paper further presents a concept of interactive learning for the semantic labeling of spatio‐temporal patterns present in SITS. It enables the recognition and the probabilistic retrieval of similar events. Graphs are attached to statistical models for spatio‐temporal processes, which in turn describe physical changes in the observed scene. User‐specific semantics attached to spatio‐temporal events are therefore modeled using combinations of parameters of a distance model between sub‐graphs. Thus, the learning step is performed by the incremental definition of a spatio‐temporal event type via user‐provided positive and negative sub‐graph examples. From these examples we infer the probabilities of a Bayesian network, based on a Dirichlet model, that links user interest to a specific similarity measurement. According to the current state of learning, sub‐graph posterior probabilities are estimated. Experiments performed on a multitemporal SPOT image time‐series demonstrate the presented reasoning concept.
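In the simplest two-label case, the Dirichlet-model update from positive and negative user examples reduces to adding pseudo-counts; a minimal sketch with invented counts (the actual network in the paper links many such parameters to a sub-graph similarity measure):

```python
def dirichlet_posterior_mean(alpha, counts):
    """Posterior mean of the category probabilities under a Dirichlet prior
    with pseudo-counts `alpha`, after observing `counts` labeled examples."""
    total = sum(alpha) + sum(counts)
    return [(a + c) / total for a, c in zip(alpha, counts)]

# Two labels: "relevant" vs "not relevant" sub-graph examples from the user.
alpha = [1.0, 1.0]            # uniform prior over the two labels
counts = [7, 3]               # 7 positive, 3 negative examples so far
mean = dirichlet_posterior_mean(alpha, counts)
```

Each new example shifts the posterior mean incrementally, which is what makes this style of learning naturally interactive.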

Discovering Planetary Nebula Geometries: Explorations with a Hierarchy of Models
Astronomical objects known as planetary nebulae (PNe) consist of a shell of gas expelled by an aging star. In cases where the gas shell can be assumed to be ellipsoidal, the PN can be easily modeled in three spatial dimensions. We utilize a model that joins the physics of PNe to this geometry and generates simulated nebular images. Hubble Space Telescope images of actual PNe provide data with which the model images may be compared. We employ Bayesian model estimation and search the parameter space for values that generate a match between observed and model images. The forward model is characterized by thirteen parameters; consequently model estimation requires the search of a 13‐dimensional parameter space. The ‘curse of dimensionality,’ compounded by a computationally intense forward problem, makes forward searches extremely time‐consuming and frequently causes them to become trapped in a local solution. We find that both the speed and quality of the search can be improved by reducing the dimensionality of the search space.
Our basic approach utilizes a hierarchy of models of increasing complexity. Earlier studies establish that a hierarchical sequence converges more quickly, and to a better solution, than a search relying only on the most complex model. Here we report results for a hierarchy of five models. The first three models treat the nebula as a 2D image, estimating its position, angular size, orientation and rim thickness. The last two models explore its characteristics as a 3D object and enable us to characterize the physics of the nebula. This five‐model hierarchy is applied to real ellipsoidal PNe to estimate their geometric properties and gas density profiles.

Bayesian Vision for Shape Recovery
We present a new Bayesian vision technique that aims at recovering a shape from two or more noisy observations taken under similar lighting conditions. The shape is parametrized by a piecewise linear height field, textured by a piecewise linear irradiance field, and we assume Gaussian Markovian priors for both shape vertices and irradiance variables. The observation process, equivalent to rendering, is modeled by a non‐affine projection (e.g. perspective projection) followed by a convolution with a piecewise linear point spread function, and contamination by additive Gaussian noise. We assume that the observation parameters are calibrated beforehand.
The major novelty of the proposed method consists of marginalizing out the irradiances, considered as nuisance parameters, which is achieved by a hierarchy of approximations. This reduces the inference to minimizing an energy that depends only on the shape vertices, and therefore allows an efficient Iterated Conditional Modes (ICM) optimization scheme to be implemented. A Gaussian approximation of the posterior shape density is computed, thus providing estimates of both the geometry and its uncertainty. We illustrate the effectiveness of the new method with shape reconstruction results in a 2D case. A 3D version is currently under development and aims at recovering a surface from multiple images, reconstructing the topography by marginalizing out both albedo and shading.

Bayesian‐Based Motion Estimation with Flat Priors
This paper demonstrates that in a certain class of motion estimation problems, the Bayesian technique of integrating out the “nuisance parameters” yields stable solutions even if a non‐informative (“flat”) prior on the motion parameters is used. The advantage of the suggested method is more noticeable when the domain points approach a degenerate configuration, and/or when the noise is relatively large with respect to the size of the point configuration.

Application of the Evidence Procedure to Linear Problems in Signal Processing
The presented work addresses the application of the evidence procedure to the field of signal processing, where ill‐posed estimation problems are frequently encountered. We base our analysis on the Relevance Vector Machine (RVM) technique originally proposed by M. Tipping, which effectively performs a local maximization of the evidence integral for linear kernel‐based models. We extend the RVM technique by considering correlated additive Gaussian observation noise and complex‐valued signals. We also show that grouping the model parameters w, such that a single hyperparameter α_{ k } controls the kth cluster, can be very effective in practice. In particular, it allows one to cluster the parameters according to their potential relevance, which in turn leads to highly improved generalization performance of the models so parametrized.
The developed scheme is then applied, for illustration, to the problem of nonlinear system identification based on a discrete‐time Volterra model. Similar ideas are used to analyze wireless channels from channel measurement data. Results for synthetic as well as real‐world data are presented.
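The per-weight hyperparameter updates of the evidence procedure can be sketched with MacKay-style fixed-point iterations for a small real-valued linear model. The basis functions, noise level, and the ceiling on α below are illustrative assumptions; the paper's extensions to correlated noise, complex-valued signals, and grouped hyperparameters are not reproduced here.

```python
import numpy as np

def evidence_update(Phi, t, alpha, beta, n_iter=50):
    """MacKay-style fixed-point updates of per-weight hyperparameters
    alpha_k for a linear model t = Phi @ w + noise (noise precision beta)."""
    for _ in range(n_iter):
        S = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))  # posterior cov
        mu = beta * S @ Phi.T @ t                               # posterior mean
        gamma = 1.0 - alpha * np.diag(S)    # how "well-determined" each weight is
        alpha = np.minimum(gamma / np.maximum(mu**2, 1e-12), 1e12)
    return alpha, mu

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 100)
Phi = np.column_stack([x, x**2, x**3])            # three candidate regressors
t = 2.0 * x + 0.01 * rng.standard_normal(100)     # only the first is relevant

alpha, mu = evidence_update(Phi, t, alpha=np.ones(3), beta=1.0 / 0.01**2)
# A small alpha_k marks a relevant weight; irrelevant weights are driven
# toward the alpha ceiling and effectively pruned.
```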

Stereochemical rules for connecting disjoint protein fragments
In the process of assembling a protein model, the electron density is fitted with fragments consisting of several peptide units. In low‐quality regions of electron density maps, one runs into problems arising either from too many substantially different candidate fragments or from a missing peptide bond. It is then crucial to reduce the set of hypotheses by disqualifying inconsistent ones.
To this end, we characterize the local shape of a main‐chain segment by the angles between vectors connecting four consecutive C‐alpha atoms. The local conformation is described by three quasi‐conformation angles: two planar angles and one dihedral angle. We investigate the probability distribution of these angles and find that their conformational space is highly restricted. This allows one to connect disjoint fragments more efficiently. Since the procedure of matching hypotheses is computationally expensive, one needs a simple description of the quasi‐conformation angle space. We present such a convenient description and compute its parameters using data from the PDB. A complete set of parameters is provided.
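The three quasi-conformation angles are straightforward to compute from four consecutive C-alpha positions; a minimal sketch, with invented coordinates and an unsigned-dihedral convention (for a planar zigzag chain this gives planar angles of 90 degrees and a dihedral of 180 degrees):

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def angle_deg(u, v):
    """Unsigned angle (degrees) between two vectors."""
    c = dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def quasi_conformation(ca1, ca2, ca3, ca4):
    """Two planar angles and one (unsigned) dihedral angle from four
    consecutive C-alpha positions."""
    v1, v2, v3 = sub(ca2, ca1), sub(ca3, ca2), sub(ca4, ca3)
    theta1 = 180.0 - angle_deg(v1, v2)             # planar angle at ca2
    theta2 = 180.0 - angle_deg(v2, v3)             # planar angle at ca3
    tau = angle_deg(cross(v1, v2), cross(v2, v3))  # dihedral about ca2-ca3
    return theta1, theta2, tau

# A planar zigzag chain of four pseudo-atoms:
t1, t2, tau = quasi_conformation((0, 0, 0), (1, 1, 0), (2, 0, 0), (3, 1, 0))
```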