We present an application of mechanistic modeling and nonlinear longitudinal regression in the context of biomedical response-to-challenge experiments, a field where these methods are underutilized. In this type of experiment, a system is studied by imposing an experimental challenge, and then observing its response. The combination of mechanistic modeling and nonlinear longitudinal regression has brought new insight, and revealed an unexpected opportunity for optimal design. Specifically, the mechanistic aspect of our approach enables the optimal design of experimental challenge characteristics (e.g., intensity, duration). This article lays some groundwork for this approach. We consider a series of experiments wherein an isolated rabbit heart is challenged with intermittent anoxia. The heart responds to the challenge onset, and recovers when the challenge ends. The mean response is modeled by a system of differential equations that describe a candidate mechanism for cardiac response to anoxia challenge. The cardiac system behaves more variably when challenged than when at rest. Hence, observations arising from this experiment exhibit complex heteroscedasticity and sharp changes in central tendency. We present evidence that an asymptotic statistical inference strategy may fail to adequately account for statistical uncertainty. Two alternative methods are critiqued qualitatively (i.e., for utility in the current context), and quantitatively using an innovative Monte-Carlo method. We conclude with a discussion of the exciting opportunities in optimal design of response-to-challenge experiments.
© 2013, The International Biometric Society.
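The challenge-onset and recovery dynamics described above can be caricatured with a toy one-state differential equation integrated by forward Euler. This is an illustrative sketch only, not the authors' cardiac model: the challenge window, set points, and rate constants below are all hypothetical.

```python
def challenge(t):
    # Hypothetical intermittent challenge: "on" during [10, 30), else "off"
    return 1.0 if 10.0 <= t < 30.0 else 0.0

def simulate(t_end=60.0, dt=0.01, k_on=0.5, k_off=0.2):
    # Forward-Euler integration of a toy one-state ODE whose set point and
    # rate constant switch with the challenge (all parameter values hypothetical)
    t, y, traj = 0.0, 0.0, []
    while t < t_end:
        c = challenge(t)
        target, rate = (1.0, k_on) if c else (0.0, k_off)
        y += dt * rate * (target - y)
        t += dt
        traj.append((t, y))
    return traj

traj = simulate()
peak = max(y for _, y in traj)   # response approaches the challenged set point
final = traj[-1][1]              # and relaxes back toward baseline afterward
```

The sharp switch in `challenge(t)` is what produces the abrupt changes in central tendency that the abstract highlights; a mechanistic model replaces this toy kinetics with equations for the candidate physiological mechanism.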
The analysis of longitudinal trajectories usually focuses on evaluation of explanatory factors that are either associated with rates of change, or with overall mean levels of a continuous outcome variable. In this article, we introduce valid design and analysis methods that permit outcome dependent sampling of longitudinal data for scenarios where all outcome data currently exist, but a targeted substudy is being planned in order to collect additional key exposure information on a limited number of subjects. We propose stratified sampling based on specific summaries of individual longitudinal trajectories, and we detail an ascertainment corrected maximum likelihood approach for estimation using the resulting biased sample of subjects. In addition, we demonstrate that the efficiency of an outcome-based sampling design relative to use of a simple random sample depends strongly on the choice of outcome summary statistic used to direct sampling, and we show a natural link between the goals of the longitudinal regression model and corresponding desirable designs. Using data from the Childhood Asthma Management Program, where genetic information required retrospective ascertainment, we study a range of designs that examine lung function profiles over 4 years of follow-up for children classified according to their genotype for the IL-13 cytokine.
© 2013, The International Biometric Society.
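The outcome-dependent design described above can be sketched as follows, assuming a hypothetical simulated cohort and using the per-subject least-squares slope as the trajectory summary that directs stratified sampling; the stratum cutoffs and per-stratum sample sizes are illustrative choices, not the paper's.

```python
import random
random.seed(0)

def ols_slope(times, ys):
    # Per-subject least-squares slope: the trajectory summary used to stratify
    n = len(times)
    tbar, ybar = sum(times) / n, sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Hypothetical cohort: each subject has a latent slope, observed at 5 visits
subjects = {}
for i in range(300):
    b = random.gauss(0.0, 1.0)
    times = [0, 1, 2, 3, 4]
    subjects[i] = (times, [b * t + random.gauss(0.0, 0.2) for t in times])

slopes = {i: ols_slope(t, y) for i, (t, y) in subjects.items()}
ranked = sorted(slopes, key=slopes.get)
k = len(ranked) // 3
strata = {"low": ranked[:k], "mid": ranked[k:-k], "high": ranked[-k:]}

# Extreme-slope design: oversample the tails, undersample the center;
# downstream estimation must then correct for this biased ascertainment
sample = (random.sample(strata["low"], 40)
          + random.sample(strata["mid"], 20)
          + random.sample(strata["high"], 40))
```

Choosing the slope (rather than, say, the subject mean) as the summary statistic is what links the design to a regression model whose target is the rate of change.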
Pharmaceutical safety has received substantial attention in the recent past; however, longitudinal clinical laboratory data routinely collected during clinical trials to derive safety profiles are often used ineffectively. For example, these data are frequently summarized by comparing proportions (between treatment arms) of participants who cross pre-specified threshold values at some time during follow-up. This research is intended, in part, to encourage more effective utilization of these data by avoiding unnecessary dichotomization of continuous data, acknowledging and making use of the longitudinal follow-up, and combining data from multiple clinical trials. However, appropriate analyses require careful consideration of a number of challenges (e.g., selection and comparability of study populations). We discuss estimation strategies based on estimating equations and maximum likelihood for analyses in the presence of three response history-dependent selection mechanisms: dropout, follow-up frequency, and treatment discontinuation. In addition, because clinical trial participants usually represent non-random samples from target populations, we describe two sensitivity analysis approaches. All discussions are motivated by an analysis that aims to characterize the dynamic relationship between concentrations of a liver enzyme (alanine aminotransferase) and three distinct doses (no drug, low dose, and high dose) of an NK-1 antagonist across four Phase II clinical trials.
Marginalized models (Heagerty, 1999, Biometrics 55, 688-698) permit likelihood-based inference when interest lies in marginal regression models for longitudinal binary response data. Two such models are the marginalized transition and marginalized latent variable models. The former captures within-subject serial dependence among repeated measurements with transition model terms while the latter assumes exchangeable or nondiminishing response dependence using random intercepts. In this article, we extend the class of marginalized models by proposing a single unifying model that describes both serial and long-range dependence. This model will be particularly useful in longitudinal analyses with a moderate to large number of repeated measurements per subject, where both serial and exchangeable forms of response correlation can be identified. We describe maximum likelihood and Bayesian approaches toward parameter estimation and inference, and we study the large sample operating characteristics under two types of dependence model misspecification. Data from the Madras Longitudinal Schizophrenia Study (Thara et al., 1994, Acta Psychiatrica Scandinavica 90, 329-336) are analyzed.
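The structure of a marginalized model that carries both serial and long-range dependence can be written schematically as follows; the notation is simplified relative to the article and should be read as a sketch of the idea, not its exact specification.

```latex
% Marginal mean model: the regression of scientific interest
\[
  \operatorname{logit}\,\mu_{it}^{m} = x_{it}^{\top}\beta ,
\]
% Conditional mean model: serial dependence enters through a transition
% term in the previous response, and long-range (exchangeable) dependence
% through a subject-specific random intercept
\[
  \operatorname{logit}\,\mu_{it}^{c}
    = \Delta_{it} + \gamma_{it}\, Y_{i,t-1} + b_i ,
  \qquad b_i \sim N(0, \sigma^2),
\]
% where \Delta_{it} is determined (numerically) by the constraint that the
% conditional model averages back to the marginal mean \mu_{it}^{m}.
```

Setting \(\gamma_{it} \equiv 0\) recovers a marginalized latent variable model, while \(\sigma^2 = 0\) recovers a marginalized transition model, which is why the combined form unifies the two.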
To date, there has been no genome-wide estimation of the mutational spectrum in humans. In this study, we systematically examined the directionality of point mutations and the maintenance of GC content in the human genome using approximately 1.8 million high-quality human single nucleotide polymorphisms and their ancestral sequences in chimpanzees. The frequency of C-->T (G-->A) changes was the highest among all mutation types, and the frequency of each type of transition was approximately fourfold that of each type of transversion. In intergenic regions, when the GC content increased, the frequency of changes from G or C increased. In exons, the frequency of G:C-->A:T changes was the highest among the genomic categories, driven mainly by frequent mutations at CpG sites. In contrast, mutations at the CpG sites, or CpG-->TpG/CpA mutations, occurred less frequently in CpG islands relative to intergenic regions with similar GC content. Our results suggest that the GC content is overall not in equilibrium in the human genome, with a trend toward shifting the human genome to be AT rich and shifting the GC content of a region to approach the genome average. Our results, which differ from previous estimates based on limited loci or on the rodent lineage, provide the first representative and reliable mutational spectrum in the recent human genome and categorized genomic regions.
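The transition/transversion tabulation underlying a mutational spectrum can be sketched in a few lines. The SNP list here is a toy stand-in for the ancestral-to-derived base pairs inferred from the chimpanzee outgroup; any real analysis would stream millions of such pairs.

```python
# A transition swaps within the purines (A, G) or within the pyrimidines
# (C, T); any purine<->pyrimidine change is a transversion
PURINES = {"A", "G"}

def mutation_class(ancestral, derived):
    # Same chemical class on both sides -> transition, otherwise transversion
    if (ancestral in PURINES) == (derived in PURINES):
        return "transition"
    return "transversion"

# Toy ancestral->derived pairs (hypothetical data, not from the study)
snps = [("C", "T"), ("G", "A"), ("C", "T"), ("A", "C"), ("G", "T"), ("C", "T")]

counts = {}
for anc, der in snps:
    key = (f"{anc}->{der}", mutation_class(anc, der))
    counts[key] = counts.get(key, 0) + 1
```

Aggregating such counts within genomic categories (exons, intergenic regions, CpG islands) is what allows the per-category frequency comparisons the abstract reports.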
A method for estimating the size of a heavily exploited animal population from catch data and relative-harvest-effort data is presented. The method assumes a competing-risk model of adult deaths and captures that is similar to the hazard-regression model of Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220). This model avoids making any assumptions about birth rates or juvenile mortality rates, and allows the user to incorporate an arbitrary number of time-dependent covariates into the natural and catch hazard functions. Estimates of the population's size, together with asymptotic error bounds and predictions of subsequent catches, are derived from maximum likelihood estimates of the parameters of the model. A simulation study is presented which indicates that this method is far more accurate than previously available catch-effort techniques. The method is illustrated with some fisheries data. A series of models is fitted to the data with the objective of improving the goodness of fit while maintaining biologic plausibility of the model. In this example a 68% reduction in the mean sum of squares for error is obtained and the accuracy of future catch predictions is greatly improved. This method is particularly appropriate for estimating the sizes of commercially exploited aquatic populations whose sizes are too large to make mark-recapture techniques feasible, and which are not amenable to line-transect techniques.
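The competing-risks bookkeeping of natural deaths and captures can be illustrated with a discrete-interval expected-catch calculation. This is a simplification of the likelihood-based method described above; the hazard values and initial population size are hypothetical.

```python
import math

def expected_catches(N0, catch_hazards, natural_hazards, dt=1.0):
    # Within each interval the total exit hazard is the sum of the two
    # competing hazards; the probability of being caught is the catch
    # hazard's share of the total exit probability
    N, catches = N0, []
    for h_c, h_n in zip(catch_hazards, natural_hazards):
        total = h_c + h_n
        p_exit = 1.0 - math.exp(-total * dt)
        catches.append(N * (h_c / total) * p_exit)
        N *= math.exp(-total * dt)  # survivors carried to the next interval
    return catches

# Hypothetical constant hazards over three harvest intervals
c = expected_catches(10_000, [0.3, 0.3, 0.3], [0.1, 0.1, 0.1])
```

In the full method the hazards are parametric functions of effort and other time-dependent covariates, and the population size is estimated by maximizing the likelihood of the observed catches rather than assumed as above.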
Simulated multigenerational pedigrees were analyzed using the programs GENPED and POINTER to examine 1) the limits of segregation analysis for detecting single locus, two-allele transmission of a dichotomous trait and 2) the accuracy of the parameter estimates. Ten data sets of 30 pedigrees each (approximately 25 persons per pedigree) were simulated. The genotypic penetrance values were varied but the population prevalence of the trait was kept constant at 2%. For some data sets a linked marker locus was also simulated. Previous results had shown that a single major locus could be easily detected when the heterozygote penetrance (f1) was high or midway between the two homozygote penetrances. In this study, we found a single major locus could not be consistently detected by either method of segregation analysis when f1 was "low" to "intermediate." Accuracy of the parameter estimates depended on assumptions about the population prevalence. In those cases where the major locus could not be detected by segregation analysis, linkage to a marker locus could be detected as long as the marker was closely linked and there were no phenocopies in the population. Owing to the limited number of simulations in this study, we cannot generalize these findings. However, they provide a basis for further testing of methods of segregation analysis when factors such as the parameter values, family structure, and ascertainment scheme are varied.
Although stuttering is known to be a familial disorder, no clear evidence regarding the precise mode of transmission has emerged from previous research. In this report segregation analysis is applied to data on 386 stuttering probands and their first-degree relatives in an effort to discriminate among possible genetic models for the transmission of stuttering. Two different segregation analysis programs, PAP and POINTER, gave comparable results with respect to both hypothesis testing and parameter estimation. Specifically, the transmission of stuttering observed in these families cannot be adequately explained by a Mendelian major locus. The hypothesis of no polygenic component in the transmission of stuttering can, however, be rejected. Existence in these data of potential heterogeneity and possible violations of assumptions concerning ascertainment are considered in interpreting the results.