Poster Session and Reception

Thursday, September 14, 2017 - 5:30pm - 6:30pm
Lind 400
  • A mathematical model of non-human primate's circadian rhythm for developing a new drug
    DaeWook Kim (Korea Advanced Institute of Science and Technology (KAIST))
    Mammalian circadian rhythms synchronize to various external environments to maintain a period of about 24 hours. These rhythms can be entrained by the light-dark (LD) cycle and can also be controlled pharmacologically, for instance with the CK1δ/ε inhibitor PF-670462. The majority of previous studies described the actions of PF-670462 only in a nocturnal species. Here, we extend these studies to a diurnal species, the cynomolgus monkey. We find that the dosing exposure of the non-human primate (NHP) to PF-670462 is much higher than that of the mouse. To account for this difference, we develop an NHP pharmaco-systems model by modifying the original mouse suprachiasmatic nucleus (SCN) pharmaco-systems model. Using the new model and experimental data, we predict and validate that the effect of dosing differs greatly depending on dosing time. We also find that the counteracting effect of light on dosing is stronger in the NHP than in the mouse. This work indicates that, even in a diurnal animal, dosing time and environmental factors must be carefully considered to achieve the desired manipulation of circadian phase.

    With Jae Kyoung Kim (KAIST), Cheng Chang (Pfizer), Xian Chen (Pfizer), and George J. DeMarco (UMass Medical School)
  • Optimal Association Tests for Finding Weak Genetic Effects
    Zheyang Wu (Worcester Polytechnic Institute)
    Optimal tests for detecting weak and sparse signals in big data, such as the Higher Criticism (HC) test, the Berk-Jones (B-J) test, and the $\phi$-divergence tests, have been shown to provide extra statistical power for detecting novel disease genes in genome-wide sequencing association studies. Genetic markers are often in linkage disequilibrium (LD), and thus the genotype data are correlated. However, because p-value calculation for these tests is difficult under dependence, current applications rely either on de-correlation of the input tests or on permutation. Here we demonstrate that de-correlation is not an appropriate strategy; properly incorporated LD information can help improve statistical power. We provide a solution for calculating the p-values under a broad range of correlation structures. Under stronger correlations our method is more accurate than the recently proposed generalized Higher Criticism (GHC) method, which also targets the correlated-data problem. Moreover, our method applies to a wider family of goodness-of-fit (GOF) tests. This family covers the above-mentioned optimal tests, some of which are more powerful than GHC under various correlations. (The standard form of the HC statistic is sketched after this list.)
  • CK-SKAT: Composite Kernel Machine Association Test for Biomarker Discovery in Pharmacogenetics Studies
    Hong Zhang (Worcester Polytechnic Institute)
    In pharmacogenetics (PGx) studies, we are interested in detecting interactions between drug response and genetic biomarkers. For common genetic variants, single-variant tests are powerful enough to identify the association. However, if the variants are of low frequency or even rare, more sophisticated methods are needed to increase power given the limited number of samples. We develop a novel gene-based kernel machine omnibus test of the genetic main effect and the gene-treatment interaction effect. In simulation studies, we show that our proposed method has better power than other currently available methods across different scenarios, especially for small sample sizes. The method also controls the type I error much better than current methods in the PGx setting of small sample sizes and small minor allele frequencies. We also apply our method to whole-exome sequencing data from a clinical trial. The results show that our method controls p-value inflation better than existing methods and successfully detects a potentially associated gene. (A schematic composite-kernel score statistic is sketched after this list.)
  • Predicting 30-day Re-admissions in Patients with Heart Failure Using Machine Learning
    Sujay Kakarmath (Harvard Medical School)
    Background: Congestive heart failure (CHF) is the leading cause of hospitalizations in patients aged ≥65 years, with costs exceeding $17 billion per year and over 50% of patients readmitted within 6 months of discharge. Transition-of-care interventions can reduce readmission rates but are resource-intensive. Predictive tools can be used to prioritize the patients who should receive these interventions.

    Methods: We used longitudinal electronic medical record data from heart failure patients admitted within the Partners HealthCare system between 2014 and 2015. Feature vectors were derived from structured demographic, utilization, and clinical data, as well as from unstructured data such as clinician-authored notes. Prediction models for 30-day readmission were built using logistic regression, gradient boosting, maxout networks, and neural networks. The models were validated with 10-fold cross-validation, and overall performance was assessed using the area under the ROC curve (AUROC). (A schematic sketch of this evaluation setup appears after this list.)

    Results: Data from 11,510 patients with 27,334 admissions and 6,369 30-day readmissions were used to train the models. After data processing, the final algorithm included 3,512 variables. The neural network model achieved the best 10-fold cross-validation result, with an area under the ROC curve of 0.705 and an accuracy of 76.4% at the classification threshold corresponding to maximum cost saving.

    Conclusions: Predictive algorithms can be used to identify high-risk patients and to increase the efficiency of transition-of-care interventions.
  • Quantile-Optimal Treatment Regimes with Censored Data
    Yu Zhou (University of Minnesota, Twin Cities)
    An active area of statistical research is the individualized treatment regime, which maps the information available about a patient up to the time of the decision to a recommended treatment, thereby incorporating the heterogeneity in the need for treatment across individuals. So far, most existing methods have been based on the assumption that the medical outcome is completely observed in the sample. However, in clinical practice, many conventional benefit endpoints that directly measure the efficacy of treatment, e.g., progression-free survival time, can be censored in the observed data. We propose to estimate optimal treatment regimes for censored data under the quantile-optimal treatment regime framework of Wang et al., as it makes minimal assumptions on the clinical endpoint and handles benefit/risk assessment at a granular level. We prove a cube-root asymptotic theory for the estimator and design simulation studies to verify its asymptotic properties. (The quantile-optimal criterion is sketched after this list.)
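
For the Higher Criticism (HC) test referenced in the poster "Optimal Association Tests for Finding Weak Genetic Effects", the standard Donoho-Jin form of the statistic, computed from the ordered p-values $p_{(1)} \le \dots \le p_{(n)}$ of $n$ input tests, is shown below. This is the textbook definition under independence, given only for orientation; it is not the authors' correlation-adjusted p-value calculation.

$$\mathrm{HC}_n = \max_{1 \le i \le \alpha_0 n} \sqrt{n}\,\frac{i/n - p_{(i)}}{\sqrt{p_{(i)}\,\bigl(1 - p_{(i)}\bigr)}},$$

where $\alpha_0 \in (0, 1)$ (often $\alpha_0 = 1/2$) restricts the maximum to the smallest p-values. Large values of $\mathrm{HC}_n$ indicate that more small p-values are observed than expected under the global null hypothesis.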
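
For the CK-SKAT poster, the sketch below is a minimal NumPy illustration of a SKAT-style variance-component score statistic built on a composite kernel that mixes a genetic main-effect kernel with a gene-treatment interaction kernel. All variable names and the fixed mixing weight rho are illustrative assumptions, the outcome is taken to be continuous, and the p-value calibration (e.g., via a mixture of chi-square distributions) used in practice is omitted.

    import numpy as np

    def composite_kernel_score(y, X, G, trt, rho=0.5):
        """SKAT-style quadratic-form score statistic with a composite kernel.

        y   : (n,) continuous drug-response outcome
        X   : (n, p) covariates, including an intercept column
        G   : (n, m) genotype matrix for the gene's variants
        trt : (n,) treatment indicator (0/1)
        rho : weight mixing the main-effect and interaction kernels
        """
        # Residuals from the covariates-only null model (ordinary least squares).
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta

        # Linear kernels for the genetic main effect and the gene-treatment interaction.
        K_main = G @ G.T
        G_int = G * trt[:, None]
        K_int = G_int @ G_int.T

        # Composite kernel and quadratic-form statistic.
        K = rho * K_main + (1.0 - rho) * K_int
        return r @ K @ r

    # Toy usage with simulated low-frequency variants.
    rng = np.random.default_rng(0)
    n, m = 200, 10
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    G = rng.binomial(2, 0.05, size=(n, m)).astype(float)
    trt = rng.integers(0, 2, size=n).astype(float)
    y = rng.normal(size=n)
    print(composite_kernel_score(y, X, G, trt))

In an omnibus test the mixing weight would be searched over a grid rather than fixed, and the statistic would be calibrated against its null distribution; the fixed rho here is only for illustration.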
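
For the readmission-prediction poster, the sketch below shows, in scikit-learn, the kind of 10-fold cross-validation scored by AUROC that the Methods paragraph describes, for a logistic-regression baseline and a gradient-boosting model. The feature matrix and labels are synthetic placeholders rather than the EMR-derived features described in the abstract, and the maxout and neural-network models are omitted.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for the EMR-derived feature vectors and 30-day
    # readmission labels (roughly 23% positives, as in the abstract).
    X, y = make_classification(n_samples=2000, n_features=50,
                               weights=[0.77, 0.23], random_state=0)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    models = {
        "logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression(max_iter=1000)),
        "gradient boosting": GradientBoostingClassifier(random_state=0),
    }
    for name, model in models.items():
        auroc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
        print(f"{name}: mean AUROC = {auroc.mean():.3f} (+/- {auroc.std():.3f})")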
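
For the last poster, the quantile-optimal criterion of Wang et al. can be written as follows, where $Y^{*}(d)$ denotes the potential outcome under regime $d$, $\mathcal{D}$ the class of candidate regimes, and $Q_\tau$ the $\tau$-th quantile:

$$d_\tau^{\mathrm{opt}} \in \arg\max_{d \in \mathcal{D}} Q_\tau\bigl(Y^{*}(d)\bigr), \qquad Q_\tau(Y) = \inf\{t : \Pr(Y \le t) \ge \tau\}.$$

With censored endpoints such as progression-free survival, one common device (stated here only as an illustration, not necessarily the authors' exact estimator) is to replace the sample quantile by an inverse-probability-weighted quantile, in which each uncensored observation whose received treatment agrees with $d$ is weighted by the inverse of its treatment propensity multiplied by an estimate of the censoring survival probability at its observed time.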