
Statistical inference for association studies in the presence of binary outcome misclassification. (arXiv:2303.10215v1 [stat.ME]) arxiv.org/abs/2303.10215

In biomedical and public health association studies, binary outcome variables may be subject to misclassification, resulting in substantial bias in effect estimates. The feasibility of addressing binary outcome misclassification in regression models is often hindered by model identifiability issues. In this paper, we characterize the identifiability problems in this class of models as a specific case of "label switching" and leverage a pattern in the resulting parameter estimates to resolve the permutation invariance of the complete data log-likelihood. Our proposed algorithm for binary outcome misclassification models does not require gold standard labels and relies only on the assumption that outcomes are correctly classified at least 50% of the time. A label switching correction is applied within estimation methods to recover unbiased effect estimates and to estimate misclassification rates in cases with one or more sequential observed outcomes. Open source software is provided to implement the proposed methods for single- and two-stage models. We give a detailed simulation study for our proposed methodology and apply these methods to data for single-stage modeling of the Medical Expenditure Panel Survey (MEPS) from 2020 and two-stage modeling of data from the Virginia Department of Criminal Justice Services.
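The estimation idea can be sketched with a generic EM algorithm for logistic regression under outcome misclassification. This is an illustrative analogue, not the paper's released software: the simulation parameters, the `se`/`sp` (sensitivity/specificity) names, the initialization, and the function name are all assumptions. It does show the two ingredients the abstract describes — no gold standard labels, and a label-switching correction justified by the "correct at least 50% of the time" assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulate a logistic model, then corrupt the labels.
n = 4000
beta_true = np.array([1.0, -1.5])
X = rng.normal(size=(n, 2))
y_star = rng.binomial(1, sigmoid(X @ beta_true))   # true (latent) outcomes
se_true, sp_true = 0.85, 0.90                      # sensitivity / specificity
u = rng.random(n)
y = np.where(y_star == 1, (u < se_true).astype(int), (u < 1 - sp_true).astype(int))

def em_misclassification(X, y, n_iter=200):
    """EM for logistic regression with a misclassified outcome, no gold standard."""
    n = len(y)
    se, sp = 0.9, 0.9                              # initialise above 0.5
    beta = np.zeros(X.shape[1])
    clf = LogisticRegression(C=1e6, fit_intercept=False)
    for _ in range(n_iter):
        # E-step: posterior probability that the true outcome is 1
        p = sigmoid(X @ beta)
        num = p * np.where(y == 1, se, 1 - se)
        den = num + (1 - p) * np.where(y == 1, 1 - sp, sp)
        w = num / den
        # M-step: weighted logistic fit via row duplication with weights w, 1-w
        clf.fit(np.vstack([X, X]),
                np.r_[np.ones(n), np.zeros(n)],
                sample_weight=np.r_[w, 1 - w])
        beta = clf.coef_[0]
        se = np.sum(w * y) / np.sum(w)             # P(observed 1 | true 1)
        sp = np.sum((1 - w) * (1 - y)) / np.sum(1 - w)
    # Label-switching correction: the likelihood is invariant under
    # (beta, se, sp) -> (-beta, 1 - sp, 1 - se); keep the branch with
    # se + sp > 1, i.e. labels right more than half the time.
    if se + sp < 1:
        beta, se, sp = -beta, 1 - sp, 1 - se
    return beta, se, sp

beta_hat, se_hat, sp_hat = em_misclassification(X, y)
print(beta_hat, se_hat, sp_hat)
```

Without the final correction, the EM fixed point reached from a bad start can be the mirror-image solution with negated coefficients; the 50% assumption is exactly what makes one branch identifiable.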

A statistical framework for GWAS of high dimensional phenotypes using summary statistics, with application to metabolite GWAS. (arXiv:2303.10221v1 [stat.ME]) arxiv.org/abs/2303.10221

The recent explosion of genetic and high dimensional biobank and 'omic' data has provided researchers with the opportunity to investigate the shared genetic origin (pleiotropy) of hundreds to thousands of related phenotypes. However, existing methods for multi-phenotype genome-wide association studies (GWAS) do not model pleiotropy, are only applicable to a small number of phenotypes, or provide no way to perform inference. To add further complication, raw genetic and phenotype data are rarely observed, meaning analyses must be performed on GWAS summary statistics whose statistical properties in high dimensions are poorly understood. We therefore developed a novel model, theoretical framework, and set of methods to perform Bayesian inference in GWAS of high dimensional phenotypes using summary statistics that explicitly model pleiotropy, beget fast computation, and facilitate the use of biologically informed priors. We demonstrate the utility of our procedure by applying it to metabolite GWAS, where we develop new nonparametric priors for genetic effects on metabolite levels that use known metabolic pathway information and foster interpretable inference at the pathway level.
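The paper's hierarchical model is not reproduced here, but the basic payoff of Bayesian modeling on summary statistics can be illustrated with a toy empirical-Bayes shrinkage of simulated GWAS effect estimates (all variable names and parameter values below are illustrative assumptions, not the authors' framework):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated summary statistics: for each variant j we observe only
# beta_hat_j ~ N(beta_j, se_j^2), never the raw genotype/phenotype data.
m = 5000
tau = 0.3                                  # sd of true effects
beta = rng.normal(0.0, tau, size=m)
se = np.full(m, 0.2)                       # reported standard errors
beta_hat = beta + rng.normal(0.0, se)

# Empirical Bayes with a N(0, tau^2) prior: estimate tau^2 by moments,
# then shrink each estimate toward zero by its posterior-mean factor.
tau2_hat = max(beta_hat.var() - (se**2).mean(), 1e-12)
shrink = tau2_hat / (tau2_hat + se**2)
post_mean = shrink * beta_hat

mse_raw = np.mean((beta_hat - beta) ** 2)
mse_eb = np.mean((post_mean - beta) ** 2)
print(mse_raw, mse_eb)
```

The shrunken estimates have uniformly lower mean squared error than the raw summary statistics; the paper's contribution is, roughly, doing this jointly across thousands of correlated phenotypes with biologically informed (e.g. pathway-structured) priors instead of a single normal prior.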

Point process simulation of generalised hyperbolic Lévy processes. (arXiv:2303.10292v1 [stat.ME]) arxiv.org/abs/2303.10292

Generalised hyperbolic (GH) processes are a class of stochastic processes that are used to model the dynamics of a wide range of complex systems that exhibit heavy-tailed behavior, including systems in finance, economics, biology, and physics. In this paper, we present novel simulation methods based on subordination with a generalised inverse Gaussian (GIG) process and using a generalised shot-noise representation that involves random thinning of infinite series of decreasing jump sizes. Compared with our previous work on GIG processes, we provide tighter bounds for the construction of rejection sampling ratios, leading to improved acceptance probabilities in simulation. Furthermore, we derive methods for the adaptive determination of the number of points required in the associated random series using concentration inequalities. Residual small jumps are then approximated using an appropriately scaled Brownian motion term with drift. Finally the rejection sampling steps are made significantly more computationally efficient through the use of squeezing functions based on lower and upper bounds on the Lévy density. Experimental results are presented illustrating the strong performance under various parameter settings and comparing the marginal distribution of the GH paths with exact simulations of GH random variates. The new simulation methodology is made available to researchers through the publication of a Python code repository.
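The shot-noise series construction itself is involved, but the subordination identity it builds on — a GH variate is a normal variance-mean mixture whose mixing law is GIG — gives an exact marginal sampler of the kind the abstract uses as a comparison baseline. A minimal sketch (function name and parameter values are illustrative; the mapping onto SciPy's `geninvgauss` parameterisation is spelled out in comments):

```python
import numpy as np
from scipy import stats
from scipy.special import kv   # modified Bessel function K_nu

rng = np.random.default_rng(1)

def gh_rvs(lam, chi, psi, mu, beta, size, rng):
    """Exact GH variates: X = mu + beta*W + sqrt(W)*Z with W ~ GIG(lam, chi, psi).

    GIG(lam, chi, psi) has density proportional to
    x^{lam-1} exp(-(chi/x + psi*x)/2). SciPy's geninvgauss(p, b, scale=s) has
    density proportional to (x/s)^{p-1} exp(-b*(x/s + s/x)/2), so matching the
    exponents gives p = lam, b = sqrt(chi*psi), s = sqrt(chi/psi)."""
    b = np.sqrt(chi * psi)
    scale = np.sqrt(chi / psi)
    w = stats.geninvgauss.rvs(lam, b, scale=scale, size=size, random_state=rng)
    return mu + beta * w + np.sqrt(w) * rng.standard_normal(size)

lam, chi, psi, mu, beta = -0.5, 1.0, 2.0, 0.3, 0.8   # lam=-0.5: the NIG subfamily
x = gh_rvs(lam, chi, psi, mu, beta, 200_000, rng)

# Moment check: E[W] = sqrt(chi/psi) * K_{lam+1}(eta) / K_lam(eta), eta = sqrt(chi*psi),
# so E[X] = mu + beta * E[W].
eta = np.sqrt(chi * psi)
ew = np.sqrt(chi / psi) * kv(lam + 1, eta) / kv(lam, eta)
print(x.mean(), mu + beta * ew)
```

The paper's point-process methods simulate whole GH *paths* (jumps over time), which this marginal sampler cannot do — but it is exactly the kind of exact-variate reference the experiments compare marginal distributions against.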

Cross-validatory Z-Residual for Diagnosing Shared Frailty Models. (arXiv:2303.09616v1 [stat.ME]) arxiv.org/abs/2303.09616

Residual diagnostic methods play a critical role in assessing model assumptions and detecting outliers in statistical modelling. In the context of survival models with censored observations, Li et al. (2021) introduced the Z-residual, which follows an approximately normal distribution under the true model. This property makes it possible to use Z-residuals for diagnosing survival models in a way similar to how Pearson residuals are used in normal regression. However, computing residuals based on the full dataset can result in a conservative bias that reduces the power of detecting model mis-specification, as the same dataset is used for both model fitting and validation. Although cross-validation is a potential solution to this problem, it has not been commonly used in residual diagnostics due to computational challenges. In this paper, we propose a cross-validation approach for computing Z-residuals in the context of shared frailty models. Specifically, we develop a general function that calculates cross-validatory Z-residuals using the output from the coxph function in the survival package in R. Our simulation studies demonstrate that, for goodness-of-fit tests and outlier detection, cross-validatory Z-residuals are significantly more powerful and more discriminative than Z-residuals without cross-validation. We also compare the performance of Z-residuals with and without cross-validation in identifying outliers in a real application that models the recurrence time of kidney infection patients. Our findings suggest that cross-validatory Z-residuals can identify outliers that are missed by Z-residuals without cross-validation.
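The conservative-bias argument transfers to ordinary regression, where it is easy to see numerically: residuals computed on the fitting data have deflated spread because the fit absorbs part of the noise, while cross-validated residuals recover an honest out-of-sample scale. A toy analogue in linear regression (not the authors' Z-residual implementation, and the Pearson analogy here follows the abstract's own framing):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)       # true noise sd = 1

model = LinearRegression()
resid_in = y - model.fit(X, y).predict(X)             # same data fits and validates
resid_cv = y - cross_val_predict(model, X, y, cv=10)  # each point predicted held-out

# In-sample residual spread is below 1 (expected sd ~ sqrt((n-p-1)/n)): the
# conservative bias. Cross-validated residual spread sits at or slightly above 1.
print(resid_in.std(), resid_cv.std())
```

A goodness-of-fit test calibrated against the in-sample spread is therefore under-powered; that is precisely the gap the cross-validatory Z-residual is designed to close for shared frailty models.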

Delayed and Indirect Impacts of Link Recommendations. (arXiv:2303.09700v1 [cs.SI]) arxiv.org/abs/2303.09700

The impacts of link recommendations on social networks are challenging to evaluate, and so far they have been studied in limited settings. Observational studies are restricted in the kinds of causal questions they can answer and naive A/B tests often lead to biased evaluations due to unaccounted network interference. Furthermore, evaluations in simulation settings are often limited to static network models that do not take into account the potential feedback loops between link recommendation and organic network evolution. To this end, we study the impacts of recommendations on social networks in dynamic settings. Adopting a simulation-based approach, we consider an explicit dynamic formation model -- an extension of the celebrated Jackson-Rogers model -- and investigate how link recommendations affect network evolution over time. Empirically, we find that link recommendations have surprising delayed and indirect effects on the structural properties of networks. Specifically, we find that link recommendations can exhibit considerably different impacts in the immediate term and in the long term. For instance, we observe that friend-of-friend recommendations can have an immediate effect in decreasing degree inequality, but in the long term, they can make the degree distribution substantially more unequal. Moreover, we show that the effects of recommendations can persist in networks, in part due to their indirect impacts on natural dynamics even after recommendations are turned off. We show that, in counterfactual simulations, removing the indirect effects of link recommendations can make the network trend faster toward what it would have been under natural growth dynamics.
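A minimal growth simulation in the spirit of the Jackson-Rogers model — each newcomer links to a few uniformly random nodes ("random meetings") and to neighbours of those nodes (network-based meetings, i.e. friend-of-friend introductions) — shows the kind of dynamics being studied. This is a hedged sketch: the seed graph, parameter names, and the Gini summary of degree inequality are illustrative choices, not the paper's exact model or metrics.

```python
import random

random.seed(0)

def jackson_rogers(n, m_r=2, m_s=2):
    """Grow a network: each newcomer links to m_r uniformly random nodes and to
    m_s neighbours of those nodes (friend-of-friend introductions)."""
    adj = {i: set(range(5)) - {i} for i in range(5)}        # seed clique of 5
    for v in range(5, n):
        parents = random.sample(list(adj), m_r)             # random meetings
        fof = set().union(*(adj[p] for p in parents)) - set(parents)
        friends = random.sample(sorted(fof), min(m_s, len(fof)))
        adj[v] = set()
        for t in set(parents) | set(friends):               # add undirected edges
            adj[v].add(t)
            adj[t].add(v)
    return adj

def gini(xs):
    """Gini coefficient of a degree sequence (0 = equal, ->1 = maximally unequal)."""
    xs = sorted(xs)
    n, s = len(xs), sum(xs)
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * s)

g = jackson_rogers(500)
print(gini([len(nb) for nb in g.values()]))
```

In the paper's setting one would run such a model with and without an added recommender term, track structural summaries like this over time, and then keep evolving the network after recommendations are switched off — which is how the delayed and persistent effects described above become visible.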

A New Covariate Selection Strategy for High Dimensional Data in Causal Effect Estimation with Multivariate Treatments. (arXiv:2303.09766v1 [stat.ME]) arxiv.org/abs/2303.09766

Selection of covariates is crucial in the estimation of average treatment effects given observational data with high or even ultra-high dimensional pretreatment variables. Existing methods for this problem typically assume sparse linear models for both outcome and univariate treatment, and cannot handle situations with ultra-high dimensional covariates. In this paper, we propose a new covariate selection strategy called double screening prior adaptive lasso (DSPAL) to select confounders and predictors of the outcome for multivariate treatments, which combines the adaptive lasso method with the marginal conditional (in)dependence prior information to select target covariates, in order to eliminate confounding bias and improve statistical efficiency. The distinctive features of our proposal are that it can be applied to high-dimensional or even ultra-high dimensional covariates for multivariate treatments, and can deal with the cases of both parametric and nonparametric outcome models, which makes it more robust compared to other methods. Our theoretical analyses show that the proposed procedure enjoys the sure screening property, the ranking consistency property and the variable selection consistency. Through a simulation study, we demonstrate that the proposed approach selects all confounders and predictors consistently and estimates the multivariate treatment effects with smaller bias and mean squared error compared to several alternatives under various scenarios. In real data analysis, the method is applied to estimate the causal effect of a three-dimensional continuous environmental treatment on cholesterol level and enlightening results are obtained.
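The generic two-step pattern described here — marginal screening down to a moderate dimension, then an adaptive lasso whose penalty weights come from a pilot fit — can be sketched as follows. This is an illustrative stand-in, not the authors' DSPAL procedure: the screening size, the ridge pilot, and the penalty level are all assumptions, and the prior-information and multivariate-treatment components are omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 2000                                  # high dimension: p >> n
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -3.0, 3.0, -3.0, 3.0]            # the only active covariates
y = X @ beta + rng.normal(size=n)

# Step 1: marginal screening -- keep the d covariates whose marginal
# correlation with the outcome is largest (sure-screening style).
d = int(n / np.log(n))
score = np.abs(X.T @ (y - y.mean())) / n
keep = np.argsort(score)[-d:]

# Step 2: adaptive lasso on the screened set. Weights are inverse pilot
# coefficients from a ridge fit; rescaling columns by 1/w lets a plain
# lasso apply the per-coefficient weights.
Xs = X[:, keep]
w = 1.0 / (np.abs(Ridge(alpha=1.0).fit(Xs, y).coef_) + 1e-8)
fit = Lasso(alpha=0.1, max_iter=50_000).fit(Xs / w, y)
coef = np.zeros(p)
coef[keep] = fit.coef_ / w                        # undo the rescaling
print(sorted(np.nonzero(coef)[0].tolist())[:10])
```

The screening step is what makes the ultra-high-dimensional case tractable, and the data-driven weights are what give the adaptive lasso its selection-consistency advantage over a plain lasso; DSPAL additionally folds marginal conditional (in)dependence prior information into these weights.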
