While great progress has been made, only 10% of the nearly 1,000 integral, α-helical, multi-span membrane protein families are represented by at least one experimentally determined structure in the PDB. Previously, we developed the algorithm BCL::MP-Fold, which samples the large conformational space of membrane proteins de novo by assembling predicted secondary structure elements guided by knowledge-based potentials. Here, we present a case study of rhodopsin fold determination by integrating sparse and/or low-resolution restraints from multiple experimental techniques including electron microscopy, electron paramagnetic resonance spectroscopy, and nuclear magnetic resonance spectroscopy. Simultaneous incorporation of orthogonal experimental restraints not only significantly improved the sampling accuracy but also allowed identification of the correct fold, which is demonstrated by a protein size-normalized transmembrane root-mean-square deviation as low as 1.2 Å. The protocol developed in this case study can be used for the determination of unknown membrane protein folds when limited experimental restraints are available.
Copyright © 2018 Elsevier Ltd. All rights reserved.
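The quality metric reported above builds on the root-mean-square deviation between predicted and experimental coordinates. As a reminder of the underlying quantity, here is a minimal sketch of plain coordinate RMSD, assuming pre-aligned structures; the protein size normalization and restriction to transmembrane residues used in the study are not shown:

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two pre-aligned
    coordinate sets of shape (n_atoms, 3)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Two toy 3-atom structures differing by 1 Å along x for every atom:
ref = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
mov = [[1, 0, 0], [2, 0, 0], [3, 0, 0]]
print(rmsd(ref, mov))  # → 1.0
```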
The emergence of microscale thermophoresis (MST) as a technique for determining the dissociation constants of bimolecular interactions has enabled these quantities to be measured in systems that were previously difficult or impracticable. However, most models for analyzing these data assume a simple 1:1 binding interaction, and the only model widely used for multiple binding sites has been the Hill equation. Here, we describe two new MST analytic models that assume a 1:2 binding scheme: the first features two microscopic binding constants (K1 and K2), while the other assumes symmetry in the bivalent molecule, culminating in a model with a single macroscopic dissociation constant (K) and a single factor (α) that accounts for apparent cooperativity in the binding. We also discuss the general applicability of the Hill equation to MST data. The performance of the algorithms on both real and simulated data is assessed, and implementation of the algorithms in the MST analysis program PALMIST is discussed.
Copyright © 2017 Elsevier Inc. All rights reserved.
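The 1:2 scheme with microscopic constants can be illustrated with a binding polynomial. The sketch below assumes two independent sites and free ligand approximately equal to total ligand; it is illustrative only and is not PALMIST's fitting model:

```python
import numpy as np

def fraction_bound(L, K1, K2):
    """Fraction of bivalent targets with at least one site occupied,
    for a 1:2 scheme with microscopic dissociation constants K1, K2
    (independent sites; free ligand approximated by total ligand)."""
    z = 1 + L / K1 + L / K2 + (L * L) / (K1 * K2)  # binding polynomial
    return (L / K1 + L / K2 + (L * L) / (K1 * K2)) / z

L = np.logspace(-9, -3, 7)  # illustrative ligand titration, M
print(fraction_bound(L, 1e-6, 1e-6))
```

At L = K1 = K2 the polynomial terms are all equal, so three of the four states carry ligand and the bound fraction is 0.75.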
Summary - Biological models contain many parameters whose values are difficult to measure directly via experimentation and therefore require calibration against experimental data. Markov chain Monte Carlo (MCMC) methods are suitable to estimate multivariate posterior model parameter distributions, but these methods may exhibit slow or premature convergence in high-dimensional search spaces. Here, we present PyDREAM, a Python implementation of the (Multiple-Try) Differential Evolution Adaptive Metropolis [DREAM(ZS)] algorithm developed by Vrugt and ter Braak (2008) and Laloy and Vrugt (2012). PyDREAM achieves excellent performance for complex, parameter-rich models and takes full advantage of distributed computing resources, facilitating parameter inference and uncertainty estimation of CPU-intensive biological models.
Availability and implementation - PyDREAM is freely available under the GNU GPLv3 license from the Lopez lab GitHub repository at http://github.com/LoLab-VU/PyDREAM.
Contact - email@example.com.
Supplementary information - Supplementary data are available at Bioinformatics online.
© The Author(s) 2017. Published by Oxford University Press.
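The differential-evolution proposal at the heart of DREAM-type samplers can be sketched as follows. This is a minimal illustration of the plain DE-MC jump (with the standard gamma = 2.38/sqrt(2d) choice); PyDREAM's archive-based sampling, crossover adaptation, and multi-try machinery are omitted, and the function names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_proposal(chains, i, gamma=None, eps=1e-6):
    """Propose a new state for chain i by jumping along the
    difference of two other randomly chosen chains."""
    n, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)  # standard DE-MC scaling
    r1, r2 = rng.choice([j for j in range(n) if j != i],
                        size=2, replace=False)
    return chains[i] + gamma * (chains[r1] - chains[r2]) \
           + eps * rng.standard_normal(d)

pop = rng.standard_normal((5, 3))  # 5 chains in a 3-parameter space
print(de_proposal(pop, 0))
```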
We consider the optimal design of pharmacokinetic studies in patients who receive intermittent hemodialysis and intravenous antibiotics. Hemodialysis perturbs the pharmacokinetic system, providing an additional opportunity for study. Designs that allocate measurements to occur exclusively during hemodialysis are shown to be viable alternatives to conventional designs, where all measurements occur outside of hemodialysis. Furthermore, hybrid designs with both conventional and intradialytic measurements have nearly double the efficiency of conventional designs. Convex optimal design and Monte Carlo techniques were used to simultaneously optimize hemodialysis event characteristics and sampling times, accounting for population pharmacokinetic heterogeneity. We also present several related methodological innovations.
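Optimal selection of sampling times can be illustrated on a toy one-compartment IV-bolus model using the D-optimality criterion (maximize the log-determinant of the Fisher information). The dose, parameter values, and brute-force search below are assumptions for illustration; the paper's convex and Monte Carlo methods, dialysis-event optimization, and population heterogeneity are not represented:

```python
import numpy as np
from itertools import combinations

# Assumed toy model: C(t) = (D/V) * exp(-k*t)
D, V, k = 1000.0, 30.0, 0.2  # dose (mg), volume (L), elim. rate (1/h)

def fisher(times):
    """Fisher information for (V, k) under additive unit-variance noise."""
    t = np.asarray(times, float)
    c = (D / V) * np.exp(-k * t)
    grads = np.stack([-c / V, -c * t], axis=1)  # dC/dV, dC/dk
    return grads.T @ grads

candidates = [0.5, 1, 2, 4, 8, 12, 24]  # candidate sampling times (h)
best = max(combinations(candidates, 3),
           key=lambda ts: np.linalg.slogdet(fisher(ts))[1])
print(best)  # D-optimal 3-sample schedule among the candidates
```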
The authors have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate their simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air with a standard 16-cm acrylic head phantom and with a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of the Monte Carlo simulations. It was found that simulated and measured CTDI values were within an overall average of 6% of each other.
We conducted simulations to compare the potential imaging performance for breast cancer detection of High-Purity Germanium (HPGe) and Cadmium Zinc Telluride (CZT) systems with 1% and 3.8% energy resolution at 140 keV, respectively. Using the Monte Carlo N-Particle (MCNP5) simulation package, we modelled both 5 mm-thick CZT and 10 mm-thick HPGe detectors with the same parallel-hole collimator for the imaging of a breast/torso phantom. Simulated energy spectra were generated, and planar images were created for various energy windows around the 140 keV photopeak. Relative sensitivity, scatter fraction, and torso fraction were calculated, along with tumour contrast and signal-to-noise ratio (SNR). Simulations showed that utilizing a ±1.25% energy window with an HPGe system better suppressed torso background and small-angle scattered photons than a comparable CZT system using a -5%/+10% energy window. Both systems provided statistically similar contrast and SNR, with HPGe providing higher relative sensitivity. Lowering the counts of HPGe images to match the CZT count density still yielded equivalent contrast between HPGe and CZT. Thus, an HPGe system may provide equivalent breast imaging capability at lower injected radioactivity levels when acquiring for equal imaging time.
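The two photopeak energy windows compared above reduce to simple arithmetic around the 140 keV photopeak:

```python
peak = 140.0  # keV photopeak

# HPGe: symmetric ±1.25% window; CZT: asymmetric -5%/+10% window.
hpge_window = (peak * (1 - 0.0125), peak * (1 + 0.0125))
czt_window = (peak * (1 - 0.05), peak * (1 + 0.10))

print(hpge_window)  # roughly (138.25, 141.75) keV, 3.5 keV wide
print(czt_window)   # roughly (133.0, 154.0) keV, 21 keV wide
```

The HPGe window is about six times narrower, which is what lets it reject more small-angle scatter and torso background.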
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally - fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. The proposed approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, the primary contributions of this manuscript are: (1) we provide a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) confusion matrices for each rater, (2) we highlight the amenability of the proposed hierarchical formulation to many of the state-of-the-art advancements to the statistical fusion framework, and (3) we demonstrate statistically significant improvement on both simulated and empirical data. Specifically, both theoretically and empirically, we show that the proposed hierarchical performance model provides substantial and significant accuracy benefits when applied to two disparate multi-atlas segmentation tasks: (1) 133 label whole-brain anatomy on structural MR, and (2) orbital anatomy on CT.
Copyright © 2014 Elsevier B.V. All rights reserved.
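For contrast with the hierarchical model, the "treat all labels equally" baseline that the statistical fusion framework improves upon can be sketched as flat majority voting over atlases. This is a hedged illustration only (function names are ours), not the proposed hierarchical performance model:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse a stack of integer label maps, shape (n_raters, n_voxels),
    by per-voxel majority vote; ties break toward the lower label id."""
    votes = np.asarray(label_maps)
    n_labels = votes.max() + 1
    counts = np.stack([(votes == lab).sum(axis=0)
                       for lab in range(n_labels)])
    return counts.argmax(axis=0)

raters = [[0, 1, 2, 2],
          [0, 1, 1, 2],
          [0, 2, 1, 2]]
print(majority_vote(raters))  # → [0 1 1 2]
```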
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. 
Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
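The network-based simulation route mentioned above can be illustrated with a minimal Gillespie stochastic simulation of a tiny, fully enumerated network (A → B at rate k1, B → A at rate k2). This is a generic sketch of the algorithm, not BioNetGen or NFsim:

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie(n_a, n_b, k1, k2, t_end):
    """Exact stochastic simulation of A <-> B up to time t_end."""
    t = 0.0
    while True:
        props = np.array([k1 * n_a, k2 * n_b])  # reaction propensities
        total = props.sum()
        if total == 0:
            return n_a, n_b
        t += rng.exponential(1.0 / total)       # time to next reaction
        if t > t_end:
            return n_a, n_b
        if rng.random() < props[0] / total:     # pick which reaction fires
            n_a, n_b = n_a - 1, n_b + 1
        else:
            n_a, n_b = n_a + 1, n_b - 1

a, b = gillespie(100, 0, 1.0, 1.0, 10.0)
print(a, b)  # counts fluctuate around 50/50 at equilibrium
```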
Modern statistical inference techniques may be able to improve the sensitivity and specificity of resting state functional magnetic resonance imaging (rs-fMRI) connectivity analysis through more realistic assumptions. In simulation, the advantages of such methods are readily demonstrable. However, quantitative empirical validation remains elusive in vivo as the true connectivity patterns are unknown and noise distributions are challenging to characterize, especially in ultra-high field (e.g., 7T fMRI). Though the physiological characteristics of the fMRI signal are difficult to replicate in controlled phantom studies, it is critical that the performance of statistical techniques be evaluated. The SIMulation EXtrapolation (SIMEX) method has enabled estimation of bias with asymptotically consistent estimators on empirical finite sample data by adding simulated noise. To avoid the requirement of accurate estimation of noise structure, the proposed quantitative evaluation approach leverages the theoretical core of SIMEX to study the properties of inference methods in the face of diminishing data (in contrast to increasing noise). The performance of ordinary and robust inference methods in simulation and empirical rs-fMRI are compared using the proposed quantitative evaluation approach. This study provides a simple, but powerful method for comparing a proxy for inference accuracy using empirical data.
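The diminishing-data idea can be sketched by evaluating an estimator on progressively smaller random subsamples and extrapolating the trend back toward the full sample. The correlation estimator, subsample fractions, and quadratic extrapolant below are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def diminishing_data_curve(x, y, fractions, n_rep=50):
    """Mean estimate of corr(x, y) at each retained-data fraction."""
    n = len(x)
    means = []
    for f in fractions:
        k = max(3, int(f * n))
        est = [np.corrcoef(x[idx], y[idx])[0, 1]
               for idx in (rng.choice(n, k, replace=False)
                           for _ in range(n_rep))]
        means.append(np.mean(est))
    return np.array(means)

x = rng.standard_normal(200)
y = 0.5 * x + rng.standard_normal(200)     # synthetic correlated pair
fracs = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
curve = diminishing_data_curve(x, y, fracs)
# Extrapolate the trend to the full-data point with a quadratic fit:
extrapolated = np.polyval(np.polyfit(fracs, curve, 2), 1.0)
print(curve, extrapolated)
```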
Decision-making is explained by psychologists through stochastic accumulator models and by neurophysiologists through the activity of neurons believed to instantiate these models. We investigated an overlooked scaling problem: How does a response time (RT) that can be explained by a single model accumulator arise from numerous, redundant accumulator neurons, each of which individually appears to explain the variability of RT? We explored this scaling problem by developing a unique ensemble model of RT, called e pluribus unum, which embodies the well-known dictum "out of many, one." We used the e pluribus unum model to analyze the RTs produced by ensembles of redundant, idiosyncratic stochastic accumulators under various termination mechanisms and accumulation rate correlations in computer simulations of ensembles of varying size. We found that predicted RT distributions are largely invariant to ensemble size if the accumulators share at least modestly correlated accumulation rates and RT is not governed by the most extreme accumulators. Under these regimes, the termination times of individual accumulators were predictive of ensemble RT. We also found that the threshold measured on individual accumulators, corresponding to the firing rate of neurons measured at RT, can be invariant with RT but is equivalent to the specified model threshold only when the rate correlation is very high.
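A toy version of such an ensemble can be simulated by giving many accumulators correlated drift rates and taking a quantile of their individual termination times as the ensemble RT. The rate model, threshold, and quantile rule below are assumptions for illustration, not the paper's specific termination mechanisms:

```python
import numpy as np

rng = np.random.default_rng(3)

def ensemble_rt(n_acc=50, rate_corr=0.5, threshold=30.0,
                quantile=0.5, dt=1.0, max_steps=5000):
    """RT of an ensemble of noisy accumulators with correlated rates."""
    shared = rng.standard_normal()  # common component of the drift rate
    rates = 1.0 + 0.2 * (np.sqrt(rate_corr) * shared
                         + np.sqrt(1 - rate_corr)
                         * rng.standard_normal(n_acc))
    x = np.zeros(n_acc)
    done = np.full(n_acc, np.inf)   # termination time of each accumulator
    for step in range(1, max_steps + 1):
        x += rates * dt + rng.standard_normal(n_acc)  # noisy accumulation
        newly = (x >= threshold) & np.isinf(done)
        done[newly] = step * dt
        if np.isfinite(done).all():
            break
    done[np.isinf(done)] = max_steps * dt  # censor any non-terminators
    return float(np.quantile(done, quantile))  # ensemble termination rule

print(ensemble_rt())
```

Sweeping `rate_corr` and `quantile` in this sketch is one way to probe how ensemble size and termination rule shape the predicted RT distribution.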