This chapter provides a broad overview of ion mobility-mass spectrometry (IM-MS) and its use in separation science, with a focus on pharmaceutical applications. A general overview of fundamental ion mobility (IM) theory is provided, with descriptions of several commercially available contemporary instrument platforms (e.g., drift tube and traveling wave IM). Recent applications of IM-MS to the evaluation of structural isomers are highlighted and placed in the context of both separation and characterization perspectives. We conclude this chapter with a guided reference protocol for obtaining routine IM-MS spectra on a commercially available uniform-field IM-MS instrument.
Scaffold proteins tether and orient components of a signaling cascade to facilitate signaling. Although much is known about how scaffolds colocalize signaling proteins, it is unclear whether scaffolds promote signal amplification. Here, we used arrestin-3, a scaffold of the ASK1-MKK4/7-JNK3 cascade, as a model to understand signal amplification by a scaffold protein. We found that arrestin-3 exhibited >15-fold higher affinity for inactive JNK3 than for active JNK3, and this change involved a shift in the binding site following JNK3 activation. We used systems biochemistry modeling and Bayesian inference to evaluate how the activation of upstream kinases contributed to JNK3 phosphorylation. Our combined experimental and computational approach suggested that the catalytic phosphorylation rate of JNK3 at Thr-221 by MKK7 is two orders of magnitude faster than the corresponding phosphorylation of Tyr-223 by MKK4, with or without arrestin-3. Finally, we showed that the release of activated JNK3 was critical for signal amplification. Collectively, our data suggest a "conveyor belt" mechanism for signal amplification by scaffold proteins. This mechanism addresses the long-standing question of how a few upstream kinase molecules can activate numerous downstream kinases to amplify signaling.
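The consequence of the rate difference reported above can be illustrated with a minimal kinetic sketch. This is not the authors' model: the rate constants below are hypothetical, chosen only so that the Thr-221 step is two orders of magnitude faster than the Tyr-223 step, and the cascade is reduced to a linear two-step phosphorylation integrated by forward Euler.

```python
# Minimal kinetic sketch (not the authors' model): sequential dual
# phosphorylation of JNK3, with the Thr-221 step (MKK7) assumed two
# orders of magnitude faster than the Tyr-223 step (MKK4).
# All rate constants are hypothetical, chosen only for illustration.

def simulate(k_thr=1.0, k_tyr=0.01, t_end=200.0, dt=0.01):
    """Euler integration of U -> pT -> pTpY (fractions of total JNK3)."""
    u, pt, ptpy = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d_u = -k_thr * u
        d_pt = k_thr * u - k_tyr * pt
        d_ptpy = k_tyr * pt
        u += d_u * dt
        pt += d_pt * dt
        ptpy += d_ptpy * dt
    return u, pt, ptpy

u, pt, ptpy = simulate()
# With k_thr >> k_tyr, unphosphorylated JNK3 is depleted almost
# immediately, and the slow Tyr-223 step limits dual phosphorylation.
```

Under these assumed rates, mono-phosphorylated JNK3 accumulates as a reservoir, consistent with the slow step being rate-limiting regardless of scaffold occupancy.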
For two decades, diffusion fiber tractography has been used to probe both the spatial extent of white matter pathways and the region-to-region connectivity of the brain. In both cases, the anatomical accuracy of tractography is critical for sound scientific conclusions. Here we assess and validate the algorithms and tractography implementations that have been most widely used, often because of ease of use, algorithmic simplicity, or availability in open-source software. Comparing forty tractography results to a ground truth defined by histological tracers in the primary motor cortex of the same squirrel monkey brains, we assess tract fidelity at the scale of voxels as well as over larger spatial domains or regional connectivity. No algorithm succeeds on all metrics; in fact, some implementations fail to reconstruct large portions of pathways or to identify major points of connectivity. Accuracy depends most strongly on the reconstruction method and tracking algorithm, as well as on the seed region and how that region is utilized. We also note tremendous variability in the results, even though the same MR images serve as inputs to all algorithms. In addition, anatomical accuracy decreases significantly at greater distances from the seed. An analysis of the spatial errors in tractography reveals that many techniques have trouble properly leaving the gray matter, and many only reveal connectivity to adjacent regions of interest. These results show that the most commonly implemented algorithms have several shortcomings and limitations, and that implementation choices lead to very different results. This study should provide guidance for algorithm choices based on study requirements for sensitivity, specificity, or the need to identify particular connections, and should serve as a heuristic for future developments in tractography.
Copyright © 2018 Elsevier Inc. All rights reserved.
Arrays of radiofrequency coils are widely used in magnetic resonance imaging to achieve high signal-to-noise ratios and flexible volume coverage, to accelerate scans using parallel reception, and to mitigate field non-uniformity using parallel transmission. However, conventional coil arrays require complex decoupling technologies to reduce electromagnetic coupling between coil elements, which would otherwise amplify noise and limit transmitted power. Here we report a novel self-decoupled RF coil design with a simple structure that requires only an intentional redistribution of electrical impedances around the length of the coil loop. We show that self-decoupled coils achieve high inter-coil isolation between adjacent and non-adjacent elements of loop arrays and mixed arrays of loops and dipoles. Self-decoupled coils are also robust to coil separation, making them attractive for size-adjustable and flexible coil arrays.
State-of-the-art proteomics strategies are unable to rapidly interrogate complex peptide mixtures in an untargeted manner while maintaining sensitive peptide and protein identification rates. We describe a data-independent acquisition (DIA) approach, microDIA (μDIA), that applies a novel tandem mass spectrometry (MS/MS) mass spectral deconvolution method to increase the specificity of tandem mass spectra acquired during proteomics experiments. Using the μDIA approach with a 10 min liquid chromatography gradient allowed detection of 3.1-fold more HeLa proteins than the results obtained from data-dependent acquisition (DDA) of the same samples. Additionally, we found the μDIA MS/MS deconvolution procedure is critical for resolving modified peptides with relatively small precursor mass shifts that cause the same peptide sequence in modified and unmodified forms to theoretically cofragment in the same raw MS/MS spectra. The μDIA workflow is implemented in the PROTALIZER software tool, which fully automates tandem mass spectral deconvolution, queries every peptide with a library-free search algorithm against a user-defined protein database, and confidently identifies multiple peptides in a single tandem mass spectrum. We also benchmarked μDIA against DDA using a 90 min gradient analysis of HeLa and Escherichia coli peptides that were mixed in predefined quantitative ratios, and our results showed μDIA provided 24% more true positives at the same false positive rate.
OBJECTIVE - Changes in microvascular perfusion have been reported in many diseases, yet the functional significance of altered perfusion is often difficult to determine. This is partly because commonly used techniques for perfusion measurement often rely on either indirect or by-hand approaches.
METHODS - We developed and validated a fully automated software technique to measure microvascular perfusion in videos acquired by fluorescence microscopy in the mouse gastrocnemius. Acute perfusion responses were recorded following intravenous injections with phenylephrine, sodium nitroprusside (SNP), or saline.
RESULTS - Software-measured capillary flow velocity closely correlated with by-hand measured flow velocity (R = 0.91, P < 0.0001). Software estimates of capillary hematocrit also generally agreed with by-hand measurements (R = 0.64, P < 0.0001). Detection limits range from 0 to 2000 μm/s, as compared to an average flow velocity of 326 ± 102 μm/s (mean ± SD) at rest. SNP injection transiently increased capillary flow velocity and hematocrit and made capillary perfusion more steady and homogeneous. Phenylephrine injection had the opposite effect in all metrics. Saline injection transiently decreased capillary flow velocity and hematocrit without influencing flow distribution or stability. All perfusion metrics were temporally stable without intervention.
CONCLUSIONS - These results demonstrate a novel and sensitive technique for reproducible, user-independent quantification of microvascular perfusion.
© 2018 John Wiley & Sons Ltd.
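The software-versus-by-hand agreement reported above is a Pearson correlation. As a minimal sketch of that validation step, the snippet below computes R on synthetic stand-in data (the velocity values are invented; the study's actual measurements are not reproduced here).

```python
import random
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic stand-in for the validation data: by-hand capillary
# velocities (um/s) and software estimates that track them plus noise.
random.seed(0)
by_hand = [random.uniform(100, 600) for _ in range(50)]
software = [v + random.gauss(0, 40) for v in by_hand]
r = pearson_r(by_hand, software)
```

A high R here means only that the two methods rank and scale velocities consistently; the study additionally reports P-values and agreement for hematocrit.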
Typically, 13C flux analysis relies on assumptions of both metabolic and isotopic steady state. If metabolism is steady but isotope labeling is not allowed to fully equilibrate, isotopically nonstationary metabolic flux analysis (INST-MFA) can be used to estimate fluxes. This requires the solution of differential equations that describe the time-dependent labeling of network metabolites, while iteratively adjusting the flux and pool size parameters to match the transient labeling measurements. INST-MFA holds a number of unique advantages over approaches that rely solely upon steady-state isotope enrichments. First, INST-MFA can be applied to estimate fluxes in autotrophic systems, which consume only single-carbon substrates. Second, INST-MFA is ideally suited to systems that label slowly due to the presence of large intermediate pools or pathway bottlenecks. Finally, INST-MFA provides increased measurement sensitivity to estimate reversible exchange fluxes and metabolite pool sizes, which represents a potential framework for integrating metabolomic analysis with 13C flux analysis. This review highlights the unique capabilities of INST-MFA, describes newly available software tools that automate INST-MFA calculations, presents several practical examples of recent INST-MFA applications, and discusses the technical challenges that lie ahead.
Copyright © 2018 Elsevier Ltd. All rights reserved.
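The fitting loop described above (integrate the labeling ODEs, adjust parameters to match transient measurements) can be sketched on a toy one-pool system. This is an illustration only, not a real network model: a single metabolite pool fed by fully labeled substrate, with a hypothetical turnover rate recovered by grid search in place of the iterative solvers used by INST-MFA software.

```python
from math import exp

# Toy INST-MFA-style fit: a single pool fed by fully labeled substrate
# follows dx/dt = (v/P) * (1 - x), x(0) = 0, so the enrichment is
# x(t) = 1 - exp(-(v/P) * t). We "measure" transient labeling at a few
# time points and recover the turnover rate v/P by least squares.

def enrichment(k, t):
    return 1.0 - exp(-k * t)

true_k = 0.30                      # hypothetical turnover v/P (1/min)
times = [1, 2, 5, 10, 20]          # sampling times (min)
data = [enrichment(true_k, t) for t in times]

def sse(k):
    """Sum of squared residuals between model and measured labeling."""
    return sum((enrichment(k, t) - y) ** 2 for t, y in zip(times, data))

# Coarse grid search stands in for the gradient-based optimizers that
# real INST-MFA tools use to adjust fluxes and pool sizes.
candidates = [i * 0.001 for i in range(1, 1000)]
fit_k = min(candidates, key=sse)
```

In a real INST-MFA problem the state is a vector of mass isotopomer distributions for every network metabolite, and flux and pool-size parameters are fit jointly, but the simulate-then-adjust structure is the same.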
Computational protein design has been successful in modeling fixed-backbone proteins in a single conformation. However, when modeling large ensembles of flexible proteins, current methods in protein design have been insufficient. Large barriers in the energy landscape are difficult to traverse while redesigning a protein sequence, and as a result current design methods sample only a fraction of the available sequence space. We propose a new computational approach that combines traditional structure-based modeling using the Rosetta software suite with machine learning and integer linear programming to overcome limitations in the Rosetta sampling methods. We demonstrate the effectiveness of this method, which we call BROAD, by benchmarking its performance on increasing the predicted breadth of anti-HIV antibodies. We use this novel method to increase the predicted breadth of the naturally occurring antibody VRC23 against a panel of 180 divergent HIV viral strains and achieve 100% predicted binding against the panel. In addition, we compare the performance of this method to state-of-the-art multistate design in Rosetta and show that our approach significantly outperforms the existing method. We further demonstrate that sequences recovered by this method recover known binding motifs of broadly neutralizing anti-HIV antibodies. Finally, our approach is general and can be extended easily to other protein systems. Although our modeled antibodies were not tested in vitro, we predict that these variants would have greatly increased breadth compared to the wild-type antibody.
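The breadth objective being optimized can be made concrete with a toy sketch. This is not BROAD itself: the positions, amino acid choices, and energies below are invented, and the selection is solved by enumeration rather than the integer linear programming and learned energy functions the method actually uses.

```python
from itertools import product

# Conceptual sketch of the breadth objective: pick one amino acid per
# designable position to maximize the number of viral strains predicted
# to be bound. All values here are hypothetical and for illustration.

positions = ["pos1", "pos2"]
choices = {"pos1": ["Y", "W"], "pos2": ["D", "S"]}

# Predicted binding energy of each strain for each (pos1, pos2) design;
# negative = predicted to bind (numbers invented for illustration).
energy = {
    ("Y", "D"): {"strainA": -1.2, "strainB": +0.4, "strainC": -0.1},
    ("Y", "S"): {"strainA": -0.8, "strainB": -0.3, "strainC": +0.2},
    ("W", "D"): {"strainA": -1.5, "strainB": -0.6, "strainC": -0.4},
    ("W", "S"): {"strainA": +0.3, "strainB": -0.9, "strainC": -0.2},
}

def breadth(design):
    """Number of strains predicted bound by a candidate design."""
    return sum(1 for e in energy[design].values() if e < 0.0)

best = max(product(*(choices[p] for p in positions)), key=breadth)
```

Enumeration is exponential in the number of designable positions, which is why the paper formulates the selection as an integer linear program instead.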
Biophysical models designed to predict the growth and response of tumors to treatment have the potential to become a valuable tool for clinicians in the care of cancer patients. Specifically, individualized tumor forecasts could be used to predict response or resistance early in the course of treatment, thereby providing an opportunity for treatment selection or adaptation. This chapter discusses an experimental and modeling framework in which noninvasive imaging data are used to initialize and parameterize a subject-specific model of tumor growth. This modeling approach is applied to an analysis of murine models of glioma growth.
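A minimal example in the spirit of such image-driven forecasting is logistic growth, a model commonly used in this literature (the specific model used in the chapter is not reproduced here, and all parameter values below are hypothetical).

```python
# Logistic growth sketch: dN/dt = k * N * (1 - N / theta), where N is
# tumor cell number estimated from imaging, theta the carrying
# capacity, and k a subject-specific proliferation rate that would be
# calibrated from early scans. Values below are hypothetical.

def forecast(n0, k, theta, t_end, dt=0.01):
    """Forward-Euler forecast of tumor burden from an initial scan."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += k * n * (1.0 - n / theta) * dt
    return n

# Initialize from a (hypothetical) day-0 scan, forecast day 30.
n0, k, theta = 1e6, 0.2, 1e8
n30 = forecast(n0, k, theta, t_end=30.0)
```

In practice k and theta are estimated per subject from serial images, and the forecast is compared against later scans to assess predicted response or resistance.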
The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this gap, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs' relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses.
© The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
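Comparisons of "tree topology" like those above are typically quantified by counting clades present in one tree but not the other (the Robinson-Foulds idea). As a minimal sketch, assuming small rooted trees encoded as nested tuples of taxon names (the benchmark itself uses much larger unrooted trees):

```python
# Sketch of a Robinson-Foulds-style topology comparison on small
# rooted trees given as nested tuples of leaf names. Real benchmarks
# use unrooted trees and exclude trivial splits; this version simply
# compares the sets of clades of two rooted trees.

def clades(tree):
    """Return the set of clades (frozensets of leaves) of a rooted tree."""
    out = set()
    def walk(node):
        if isinstance(node, str):
            return frozenset([node])
        leaves = frozenset().union(*(walk(child) for child in node))
        out.add(leaves)
        return leaves
    walk(tree)
    return out

def rf_distance(t1, t2):
    """Number of clades found in exactly one of the two trees."""
    return len(clades(t1) ^ clades(t2))

# Two trees that disagree only on the human/chimp/gorilla grouping.
t_a = ((("human", "chimp"), "gorilla"), ("mouse", "rat"))
t_b = ((("human", "gorilla"), "chimp"), ("mouse", "rat"))
d = rf_distance(t_a, t_b)
```

Here the two trees differ in exactly one clade each ({human, chimp} versus {human, gorilla}), so the distance is 2; identical trees score 0.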