Benchmarking Estimators for Natural Experiments: A Novel Dataset and a Doubly Robust Algorithm arxiv.org/abs/2409.04500 .ML .ME .LG

Estimating the effect of treatments from natural experiments, where treatments are pre-assigned, is an important and well-studied problem. We introduce a novel natural experiment dataset obtained from an early childhood literacy nonprofit. Surprisingly, applying over 20 established estimators to the dataset produces inconsistent results in evaluating the nonprofit's efficacy. To address this, we create a benchmark to evaluate estimator accuracy using synthetic outcomes, whose design was guided by domain experts. The benchmark extensively explores performance as real-world conditions like sample size, treatment correlation, and propensity score accuracy vary. Based on our benchmark, we observe that the class of doubly robust treatment effect estimators, which are based on simple and intuitive regression adjustment, generally outperforms other, more complicated estimators by orders of magnitude. To better support our theoretical understanding of doubly robust estimators, we derive a closed-form expression for the variance of any such estimator that uses dataset splitting to obtain an unbiased estimate. This expression motivates the design of a new doubly robust estimator that uses a novel loss function when fitting functions for regression adjustment. We release the dataset and benchmark in a Python package; the package is built in a modular way to facilitate new datasets and estimators.
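
The doubly robust estimators favored by the benchmark are not specified in the abstract, but for orientation, a standard cross-fitted AIPW (augmented inverse-propensity-weighted) estimator of the average treatment effect looks roughly like the following sketch; the synthetic data, model choices, and function names are illustrative assumptions, not the released package's API.

```python
# Minimal sketch of a cross-fitted doubly robust (AIPW) ATE estimator.
# All model choices and variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import KFold

def aipw_ate(X, t, y, n_splits=2, seed=0):
    """Cross-fitted AIPW estimate of E[Y(1) - Y(0)] and its standard error."""
    psi = np.zeros(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisance 1: propensity score e(x) = P(T=1 | X=x)
        e = LogisticRegression(max_iter=1000).fit(X[train], t[train])
        e_hat = np.clip(e.predict_proba(X[test])[:, 1], 0.01, 0.99)
        # Nuisance 2: outcome regressions m_1(x), m_0(x)
        m1 = LinearRegression().fit(X[train][t[train] == 1], y[train][t[train] == 1])
        m0 = LinearRegression().fit(X[train][t[train] == 0], y[train][t[train] == 0])
        m1_hat, m0_hat = m1.predict(X[test]), m0.predict(X[test])
        # AIPW influence-function contribution on the held-out fold
        psi[test] = (m1_hat - m0_hat
                     + t[test] * (y[test] - m1_hat) / e_hat
                     - (1 - t[test]) * (y[test] - m0_hat) / (1 - e_hat))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))

# Example on synthetic data with a true effect of 2.0
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2.0 * t + X @ np.array([1, 0.5, 0, 0, -1]) + rng.normal(size=2000)
print(aipw_ate(X, t, y))  # estimate should be close to the true effect of 2.0
```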

Enhancing Electrocardiography Data Classification Confidence: A Robust Gaussian Process Approach (MuyGPs) arxiv.org/abs/2409.04642 .AP

Analyzing electrocardiography (ECG) data is essential for diagnosing and monitoring various heart diseases. The clinical adoption of automated methods requires accurate confidence measurements, which are largely absent from existing classification methods. In this paper, we present a robust Gaussian process classification hyperparameter training model (MuyGPs) for discerning normal heartbeat signals from signals affected by different arrhythmias and myocardial infarction. We compare the performance of MuyGPs with a traditional Gaussian process classifier as well as conventional machine learning models such as Random Forest, Extra Trees, k-Nearest Neighbors, and Convolutional Neural Networks. Comparing these models reveals MuyGPs as the most performant model for making confident predictions on individual patient ECGs. Furthermore, we explore the posterior distribution obtained from the Gaussian process to interpret the predictions and quantify uncertainty. In addition, we provide a guideline for obtaining the prediction confidence of the machine learning models and quantitatively compare the uncertainty measures of these models. In particular, we identify a class of less-accurate (ambiguous) signals for further diagnosis by an expert.
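
For orientation only, the kind of probabilistic output being compared can be reproduced with an off-the-shelf Gaussian process classifier: the class probability's distance from 0.5 serves as a confidence score, and low-confidence ("ambiguous") signals can be flagged for expert review. The sketch below uses scikit-learn and synthetic placeholder features, not the MuyGPs implementation or real ECG data.

```python
# Generic GP classification with a simple confidence threshold.
# Synthetic features stand in for ECG-derived inputs; this is not the MuyGPs API.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                      # placeholder "ECG features"
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=300) > 0).astype(int)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X[:200], y[:200])
proba = gpc.predict_proba(X[200:])[:, 1]           # P(class 1) for held-out signals

confidence = np.abs(proba - 0.5) * 2               # 0 = fully ambiguous, 1 = certain
ambiguous = confidence < 0.3                       # illustrative cutoff
print(f"{ambiguous.sum()} of {len(proba)} test signals flagged for expert review")
```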

A Multi-objective Economic Statistical Design of the CUSUM chart: NSGA II Approach arxiv.org/abs/2409.04673 .AP

This paper presents an approach for the economic statistical design of the Cumulative Sum (CUSUM) control chart in a multi-objective optimization framework. The proposed methodology integrates economic considerations with statistical aspects to optimize the design parameters of the CUSUM chart, namely the sample size ($n$), sampling interval ($h$), and decision interval ($H$). The Non-dominated Sorting Genetic Algorithm II (NSGA II) is employed to solve the multi-objective optimization problem, aiming to minimize both the average cost per cycle ($C_E$) and the out-of-control Average Run Length ($ARL_\delta$) simultaneously. The effectiveness of the proposed approach is demonstrated through a numerical example in which the optimized CUSUM chart parameters are determined using NSGA II. Additionally, a sensitivity analysis is conducted to assess the impact of variations in the input parameters. The results indicate that the proposed methodology significantly reduces the expected cost per cycle, by about 43%, compared to the findings of M. Lee (2011). A more extensive comparison with respect to both $C_E$ and $ARL_\delta$ is also provided to justify the proposed methodology. This highlights the practical relevance of the study for the correct application of CUSUM charts to process control in industry.
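
For readers unfamiliar with the chart being designed, the underlying tabular CUSUM statistic is simple to compute; a minimal sketch for detecting an upward mean shift follows, with illustrative (not optimized) values of the in-control mean, the reference value $k$, and the decision interval $H$.

```python
# Minimal tabular CUSUM for detecting an upward mean shift.
# mu0, k, and H are illustrative values, not the optimized design of the paper.
import numpy as np

def cusum_upper(x, mu0, k, H):
    """Upper tabular CUSUM path and the index of the first out-of-control signal."""
    c = np.zeros(len(x))
    prev = 0.0
    for i, xi in enumerate(x):
        prev = max(0.0, prev + (xi - mu0 - k))   # accumulate deviations above mu0 + k
        c[i] = prev
    above = np.nonzero(c > H)[0]
    return c, (int(above[0]) if above.size else None)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 50)])  # mean shift at t=50
path, t_signal = cusum_upper(x, mu0=0.0, k=0.5, H=5.0)
print("first out-of-control signal at sample index:", t_signal)
```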

Establishing the Parallels and Differences Between Right-Censored and Missing Covariates arxiv.org/abs/2409.04684 .ME .AP

While right-censored time-to-event outcomes have been studied for decades, handling time-to-event covariates, also known as right-censored covariates, is now of growing interest. So far, the literature has treated right-censored covariates as distinct from missing covariates, overlooking the potential applicability of estimators to both scenarios. We bridge this gap by establishing connections between right-censored and missing covariates under various assumptions about censoring and missingness, allowing us to identify parallels and differences that determine when estimators can be used in both contexts. These connections reveal how to adapt five estimators for right-censored covariates to the unexplored setting of informative covariate right-censoring, where the event time depends on the censoring time, and how to formulate a new estimator for this setting. We establish the asymptotic properties of the six estimators, evaluate their robustness under incorrect distributional assumptions, and establish their comparative efficiency. We conducted a simulation study to confirm our theoretical results and then applied all estimators to a Huntington disease observational study to analyze cognitive impairments as a function of time to clinical diagnosis.
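
To make the missing-data side of the parallel concrete, the classical inverse-probability-weighted complete-case estimator reweights records with an observed covariate by the inverse probability of observation; a minimal sketch under a missing-at-random assumption is below. The data, the logistic weight model, and all variable names are illustrative, and the paper's estimators for right-censored covariates are not reproduced here.

```python
# IPW complete-case estimate of a regression slope when a covariate is
# sometimes unobserved. Purely illustrative; not the estimators of the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                       # always-observed covariate
x = 0.5 * z + rng.normal(size=n)             # covariate subject to missingness
y = 1.0 + 2.0 * x + 0.5 * z + rng.normal(size=n)
observed = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * z)))).astype(bool)

# Weight model: P(observed | Z), fit on everyone
pi = LogisticRegression().fit(z.reshape(-1, 1), observed).predict_proba(z.reshape(-1, 1))[:, 1]
w = 1.0 / pi[observed]

# Weighted least squares on the complete cases only
X = np.column_stack([np.ones(observed.sum()), x[observed], z[observed]])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y[observed]))
print("IPW complete-case slope for x:", beta[1])   # close to the true value 2.0
```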

Privacy enhanced collaborative inference in the Cox proportional hazards model for distributed data arxiv.org/abs/2409.04716 .AP .ST .TH

Data-sharing barriers are paramount challenges arising from multicenter clinical studies, where multiple data sources are stored in a distributed fashion at different local study sites. Particularly in time-to-event analysis, where global risk sets are needed for the Cox proportional hazards model, access to a centralized database is typically necessary. Merging such data sources into a common data storage for a centralized statistical analysis requires a data use agreement, which is often time-consuming. Furthermore, the construction and distribution of risk sets to participating clinical centers for subsequent calculations may pose a risk of revealing individual-level information. We propose a new collaborative Cox model that eliminates the need to access a centralized database and construct global risk sets, requiring only the sharing of summary statistics of significantly smaller dimension than risk sets. Thus, the proposed collaborative inference enjoys maximal protection of data privacy. We show theoretically and numerically that the new distributed proportional hazards approach incurs little loss of statistical power compared to the centralized method that requires merging the entire dataset. We present a renewable sieve method to establish large-sample properties of the proposed method. We illustrate its performance through simulation experiments and a real-world example from the Organ Procurement and Transplantation Network (OPTN), studying the factors associated with 5-year death-censored graft failure (DCGF) among patients who underwent kidney transplantation in the US.
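
For comparison purposes, the centralized analysis that the collaborative method avoids is an ordinary Cox partial-likelihood fit on the pooled data. A minimal sketch, assuming the lifelines package and synthetic placeholder data, is shown below; the summary-statistic-sharing procedure itself is not reproduced.

```python
# Centralized Cox fit on pooled data, i.e. the baseline the collaborative
# method approximates without ever pooling individual-level records.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 1000
x1, x2 = rng.normal(size=n), rng.binomial(1, 0.4, size=n)
hazard = np.exp(0.7 * x1 - 0.5 * x2)
time = rng.exponential(1.0 / hazard)         # event times
cens = rng.exponential(2.0, size=n)          # censoring times
df = pd.DataFrame({
    "T": np.minimum(time, cens),             # observed time
    "E": (time <= cens).astype(int),         # event indicator
    "x1": x1, "x2": x2,
})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.params_)   # log hazard ratios, roughly (0.7, -0.5)
```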

A response-adaptive multi-arm design for continuous endpoints based on a weighted information measure arxiv.org/abs/2409.04970 .ME .IT .AP

Multi-arm trials are gaining interest in practice given the statistical and logistical advantages that they can offer. The standard approach is to use a fixed (throughout the trial) allocation ratio, but there is a call for making it adaptive and skewing the allocation of patients towards better-performing arms. However, among other challenges, it is well known that these approaches might suffer from lower statistical power. We present a response-adaptive design for continuous endpoints which explicitly allows one to control the trade-off between the number of patients allocated to the 'optimal' arm and the statistical power. Such a balance is achieved through the calibration of a tuning parameter, and we explore various strategies to effectively select it. The proposed criterion is based on a context-dependent information measure which gives greater weight to those treatment arms whose characteristics are close to a pre-specified clinical target. We also introduce a simulation-based hypothesis testing procedure which focuses on selecting the target arm, discussing strategies to effectively control the type-I error rate. The potential advantage of the proposed criterion over currently used alternatives is evaluated in simulations, and its practical implementation is illustrated in the context of early Phase IIa proof-of-concept oncology clinical trials.
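
The weighted information measure is defined in the paper; purely to fix ideas, a generic response-adaptive simulation in which a tuning parameter gamma controls how sharply allocation is skewed toward the arm with the best running estimate (gamma = 0 recovering equal allocation) might look like the sketch below. All numbers and the softmax-style skew are assumptions for illustration, not the paper's criterion.

```python
# Generic response-adaptive allocation for a 3-arm trial with continuous outcomes.
# gamma controls the skew toward better-performing arms; this is an illustration,
# not the weighted-information criterion of the paper.
import numpy as np

def simulate_trial(true_means, n_patients=300, burn_in=30, gamma=2.0, seed=5):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    outcomes = [[] for _ in range(k)]
    for i in range(n_patients):
        if i < burn_in:                              # equal allocation to start
            arm = i % k
        else:
            est = np.array([np.mean(o) for o in outcomes])
            w = np.exp(gamma * (est - est.max()))    # softmax-style skew
            arm = rng.choice(k, p=w / w.sum())
        outcomes[arm].append(rng.normal(true_means[arm], 1.0))
    return [len(o) for o in outcomes]                # patients per arm

# Arm 3 has the best (largest) true mean; larger gamma -> more patients on it
for gamma in (0.0, 1.0, 3.0):
    print(gamma, simulate_trial([0.0, 0.3, 0.8], gamma=gamma))
```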

Resultant: Incremental Effectiveness on Likelihood for Unsupervised Out-of-Distribution Detection arxiv.org/abs/2409.03801 .ML .LG

Unsupervised out-of-distribution (U-OOD) detection aims to identify OOD data samples with a detector trained solely on unlabeled in-distribution (ID) data. The likelihood function estimated by a deep generative model (DGM) could be a natural detector, but its performance is limited on some popular "hard" benchmarks, such as FashionMNIST (ID) vs. MNIST (OOD). Recent studies have developed various detectors based on DGMs to move beyond likelihood. However, despite their success on "hard" benchmarks, most of them struggle to consistently surpass or match the performance of likelihood on some "non-hard" cases, such as SVHN (ID) vs. CIFAR10 (OOD), where likelihood can be a nearly perfect detector. Therefore, we call for more attention to incremental effectiveness on likelihood, i.e., whether a method can always surpass or at least match the performance of likelihood in U-OOD detection. We first investigate the likelihood of variational DGMs and find that its detection performance can be improved in two directions: i) alleviating latent distribution mismatch, and ii) calibrating the dataset entropy-mutual integration. We then apply two techniques, one for each direction: a post-hoc prior and dataset entropy-mutual calibration. The final method, named Resultant, combines these two directions for better incremental effectiveness than either technique alone. Experimental results demonstrate that Resultant can be a new state-of-the-art U-OOD detector while maintaining incremental effectiveness on likelihood across a wide range of tasks.
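
To anchor the terminology, the baseline "likelihood detector" scores test points by their negative log-likelihood under a model fit only on in-distribution data and is judged by how well that score separates ID from OOD. The sketch below uses a Gaussian mixture as a stand-in for a deep generative model on synthetic data; it illustrates the baseline only, not the Resultant method.

```python
# Likelihood-based U-OOD detection baseline: score = -log p(x) under a model
# fit on in-distribution data only. A GaussianMixture stands in for a DGM here.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
id_train = rng.normal(0, 1, size=(5000, 10))          # "in-distribution" data
id_test = rng.normal(0, 1, size=(1000, 10))
ood_test = rng.normal(1.5, 1.2, size=(1000, 10))      # shifted "OOD" data

dgm = GaussianMixture(n_components=5, random_state=0).fit(id_train)
scores = -dgm.score_samples(np.vstack([id_test, ood_test]))   # higher = more OOD
labels = np.r_[np.zeros(len(id_test)), np.ones(len(ood_test))]
print("AUROC of the likelihood detector:", roc_auc_score(labels, scores))
```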

Active Sampling of Interpolation Points to Identify Dominant Subspaces for Model Reduction arxiv.org/abs/2409.03892 .ML .DS .NA .LG

Model reduction is an active research field that constructs low-dimensional, high-fidelity surrogate models to accelerate engineering design cycles. In this work, we investigate model reduction for linear structured systems using dominant reachable and observable subspaces. When the training set (containing all possible interpolation points) is large, these subspaces can be determined by solving many large-scale linear systems. However, for high-fidelity models, this easily becomes computationally intractable. To circumvent this issue, we propose an active sampling strategy that samples only a few points from the given training set, allowing us to estimate those subspaces accurately. To this end, we formulate the identification of the subspaces as the solution of generalized Sylvester equations, which guides us in selecting the most relevant samples from the training set. Consequently, we construct solutions of the matrix equations in low-rank form, which encode the subspace information. We extensively discuss computational aspects and the efficient use of the low-rank factors when obtaining reduced-order models. We illustrate the proposed active sampling scheme by obtaining reduced-order models via dominant reachable and observable subspaces and compare it with the method in which all points from the training set are taken into account. We show that the active sampling strategy can provide a $17\times$ speed-up without any noticeable loss of accuracy.
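
For orientation, the Sylvester-equation view can be exercised directly on a toy problem: solve $AX + XB = C$ and take the dominant left singular vectors of the solution as a projection basis. The sketch below uses scipy on tiny random matrices; the paper's active sampling of interpolation points is not reproduced, and the matrices are illustrative assumptions.

```python
# Toy illustration: solve a Sylvester equation A X + X B = C and extract a
# dominant subspace from its solution as a reduction basis.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(7)
n, m, r = 50, 8, 5
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # stable-ish system matrix
B = np.diag(rng.uniform(0.5, 2.0, size=m))       # placeholder interpolation data
C = rng.normal(size=(n, m))

X = solve_sylvester(A, B, C)                     # solves A X + X B = C
U, s, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]                                     # dominant subspace basis
A_red = V.T @ A @ V                              # reduced-order system matrix
print("singular-value energy captured by the basis:", s[:r].sum() / s.sum())
```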

Average Causal Effect Estimation in DAGs with Hidden Variables: Extensions of Back-Door and Front-Door Criteria arxiv.org/abs/2409.03962 .ME .ML .LG

The identification theory for causal effects in directed acyclic graphs (DAGs) with hidden variables is well-developed, but methods for estimating and inferring functionals beyond the g-formula remain limited. Previous studies have proposed semiparametric estimators for identifiable functionals in a broad class of DAGs with hidden variables. While demonstrating double robustness in some models, existing estimators face challenges, particularly with density estimation and numerical integration for continuous variables, and their estimates may fall outside the parameter space of the target estimand. Their asymptotic properties are also underexplored, especially when using flexible statistical and machine learning models for nuisance estimation. This study addresses these challenges by introducing novel one-step corrected plug-in and targeted minimum loss-based estimators of causal effects for a class of DAGs that extend the classical back-door and front-door criteria (known as the treatment primal fixability criterion in prior literature). These estimators leverage machine learning to minimize modeling assumptions while ensuring key statistical properties such as asymptotic linearity, double robustness, efficiency, and staying within the bounds of the target parameter space. We establish conditions on the nuisance functional estimates, in terms of $L_2(P)$-norms, to achieve root-$n$ consistent causal effect estimates. To facilitate practical application, we have developed the flexCausal package in R.
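
The classical back-door adjustment that these criteria extend has a simple plug-in form: fit an outcome regression over the treatment and the adjustment set, then average its predictions with the treatment set to each level. The sketch below shows this generic g-formula plug-in on synthetic data; it is not the one-step corrected or TMLE estimators of the paper, nor the flexCausal API.

```python
# Plug-in back-door (g-formula) estimate of E[Y(a=1)] - E[Y(a=0)].
# A generic regression-adjustment illustration; not the paper's estimators.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(8)
n = 4000
z = rng.normal(size=(n, 3))                              # back-door adjustment set
a = rng.binomial(1, 1 / (1 + np.exp(-z[:, 0])))          # treatment
y = 1.5 * a + z @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)

model = GradientBoostingRegressor().fit(np.column_stack([a, z]), y)
mu1 = model.predict(np.column_stack([np.ones(n), z]))    # set A=1 for everyone
mu0 = model.predict(np.column_stack([np.zeros(n), z]))   # set A=0 for everyone
print("plug-in ACE:", (mu1 - mu0).mean())                # roughly the true 1.5
```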

Entry-Specific Matrix Estimation under Arbitrary Sampling Patterns through the Lens of Network Flows arxiv.org/abs/2409.03980 .ML .LG

Matrix completion tackles the task of predicting missing values in a low-rank matrix based on a sparse set of observed entries. It is often assumed that the observation pattern is generated uniformly at random or has a very specific structure tuned to a given algorithm. There is still a gap in our understanding when it comes to arbitrary sampling patterns. Given an arbitrary sampling pattern, we introduce a matrix completion algorithm based on network flows in the bipartite graph induced by the observation pattern. For additive matrices, the particular flow we use is the electrical flow, and we establish error upper bounds customized to each entry as a function of the observation set, along with matching minimax lower bounds. Our results show that the minimax squared error for recovery of a particular entry in the matrix is proportional to the effective resistance of the corresponding edge in the graph. Furthermore, we show that our estimator is equivalent to the least squares estimator. We apply our estimator to the two-way fixed effects model and show that it enables us to accurately infer individual causal effects and the unit-specific and time-specific confounders. For rank-$1$ matrices, we use edge-disjoint paths to form an estimator that achieves minimax optimal estimation when the sampling is sufficiently dense. Our approach introduces a new family of estimators parametrized by network flows, which provide a fine-grained and intuitive understanding of how a given sampling pattern affects the relative difficulty of estimation at an entry-specific level. This graph-based approach allows us to quantify the inherent complexity of matrix completion for individual entries, rather than relying solely on global measures of performance.
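
To make the additive case concrete: with entries M_ij = u_i + v_j observed on a sparse pattern, the least-squares estimate solves a linear system over the bipartite observation graph, and the effective resistance of an observed edge (the quantity the entry-wise error is proportional to) can be read off the graph Laplacian's pseudoinverse. The numpy sketch below is illustrative only; the sampling pattern and noise level are assumptions, and the flow-based estimator itself is not reproduced.

```python
# Least-squares recovery of an additive matrix M_ij = u_i + v_j from a sparse
# observation pattern, plus the effective resistance of one observed edge in the
# induced bipartite graph.
import numpy as np

rng = np.random.default_rng(9)
n_rows, n_cols, n_obs = 30, 40, 400
u, v = rng.normal(size=n_rows), rng.normal(size=n_cols)
rows = rng.integers(0, n_rows, n_obs)
cols = rng.integers(0, n_cols, n_obs)
obs = u[rows] + v[cols] + 0.1 * rng.normal(size=n_obs)     # noisy observed entries

# Design matrix: each observation picks out u_i and v_j
D = np.zeros((n_obs, n_rows + n_cols))
D[np.arange(n_obs), rows] = 1.0
D[np.arange(n_obs), n_rows + cols] = 1.0
theta, *_ = np.linalg.lstsq(D, obs, rcond=None)            # least-squares (u, v)

# Effective resistance of the first observed edge (i, j) in the bipartite graph
B = np.zeros((n_obs, n_rows + n_cols))                     # signed incidence matrix
B[np.arange(n_obs), rows] = 1.0
B[np.arange(n_obs), n_rows + cols] = -1.0
L_pinv = np.linalg.pinv(B.T @ B)                           # Laplacian pseudoinverse
i, j = rows[0], cols[0]
e = np.zeros(n_rows + n_cols)
e[i], e[n_rows + j] = 1.0, -1.0
r_eff = float(e @ L_pinv @ e)

print("true entry:", u[i] + v[j])
print("least-squares estimate:", theta[i] + theta[n_rows + j])
print("effective resistance of that edge:", r_eff)
```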
