Outlier-Bias Removal with Alpha Divergence: A Robust Non-Convex Estimator for Linear Regression arxiv.org/abs/2412.19183 .ST .TH

Shifted Composition III: Local Error Framework for KL Divergence arxiv.org/abs/2412.17997 .ST .NA .ML .TH .DS .LG .NA

Coupling arguments are a central tool for bounding the deviation between two stochastic processes, but traditionally have been limited to Wasserstein metrics. In this paper, we apply the shifted composition rule--an information-theoretic principle introduced in our earlier work--in order to adapt coupling arguments to the Kullback-Leibler (KL) divergence. Our framework combines the strengths of two previously disparate approaches: local error analysis and Girsanov's theorem. Akin to the former, it yields tight bounds by incorporating the so-called weak error, and is user-friendly in that it only requires easily verified local assumptions; and akin to the latter, it yields KL divergence guarantees and applies beyond Wasserstein contractivity. We apply this framework to the problem of sampling from a target distribution $π$. Here, the two stochastic processes are the Langevin diffusion and an algorithmic discretization thereof. Our framework provides a unified analysis when $π$ is assumed to be strongly log-concave (SLC), weakly log-concave (WLC), or to satisfy a log-Sobolev inequality (LSI). Among other results, this yields KL guarantees for the randomized midpoint discretization of the Langevin diffusion. Notably, our result: (1) yields the optimal $\tilde O(\sqrt d/ε)$ rate in the SLC and LSI settings; (2) is the first result to hold beyond the 2-Wasserstein metric in the SLC setting; and (3) is the first result to hold in \emph{any} metric in the WLC and LSI settings.
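
To make the sampler concrete: below is a minimal sketch of the randomized midpoint discretization of the overdamped Langevin diffusion $dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dB_t$ targeting $π \propto e^{-f}$. It only illustrates the algorithm the abstract refers to, not the paper's KL analysis; the function names, step size, and toy target are assumptions made here for illustration.

```python
import numpy as np

def randomized_midpoint_langevin(grad_f, x0, step, n_steps, rng=None):
    """Sketch of the randomized midpoint discretization of the overdamped
    Langevin diffusion dX_t = -grad f(X_t) dt + sqrt(2) dB_t."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.shape[0]
    for _ in range(n_steps):
        h = step
        alpha = rng.uniform()                                 # random midpoint in (0, 1)
        w1 = rng.normal(size=d) * np.sqrt(alpha * h)          # Brownian increment on [0, alpha*h]
        w2 = rng.normal(size=d) * np.sqrt((1.0 - alpha) * h)  # increment on [alpha*h, h]
        y = x - alpha * h * grad_f(x) + np.sqrt(2.0) * w1     # state at the random midpoint
        x = x - h * grad_f(y) + np.sqrt(2.0) * (w1 + w2)      # full step with midpoint gradient
    return x

# Toy usage: standard Gaussian target, f(x) = ||x||^2 / 2, so grad f(x) = x.
sample = randomized_midpoint_langevin(lambda x: x, x0=np.zeros(2), step=0.1, n_steps=1000)
```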

An information theoretic limit to data amplification arxiv.org/abs/2412.18041 .data-an .ML -ex .LG

In recent years, generative artificial intelligence has been used to create data to support science analysis. For example, Generative Adversarial Networks (GANs) have been trained using Monte Carlo simulated input and then used to generate data for the same problem. This has the advantage that a GAN creates data in a significantly reduced computing time. Training a GAN on N events can result in GN generated events, with the gain factor G being more than one. This appears to violate the principle that one cannot get information for free. GANs are not the only way to amplify data, so this process will be referred to as data amplification, which is studied here using information-theoretic concepts. It is shown that a gain of greater than one is possible whilst keeping the information content of the data unchanged. This leads to a mathematical bound which only depends on the number of generated and training events. This study determines conditions on both the underlying and reconstructed probability distributions to ensure this bound. In particular, the resolution of variables in amplified data is not improved by the process, but the increase in sample size can still improve statistical significance. The bound is confirmed using computer simulation and analysis of GAN-generated data from the literature.
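
As a heavily simplified illustration of the claim (this sketch is not from the paper; a Gaussian kernel density estimate stands in for the GAN, and all numbers are arbitrary), amplifying N training events to GN generated events does not reduce the uncertainty of a summary statistic below what the original N events carry:

```python
import numpy as np

rng = np.random.default_rng(0)
N, G, reps = 200, 10, 500                   # training events, gain factor, repetitions

means_train, means_amp = [], []
for _ in range(reps):
    train = rng.normal(0.0, 1.0, size=N)    # "Monte Carlo" training events, true mean 0
    # Stand-in generative model: Gaussian KDE (resample a training event, add kernel noise).
    bw = 1.06 * train.std() * N ** (-1 / 5)
    generated = rng.choice(train, size=G * N) + rng.normal(0.0, bw, size=G * N)
    means_train.append(train.mean())        # mean estimated from the N training events
    means_amp.append(generated.mean())      # mean estimated from the G*N generated events

print("spread of mean estimate from N events   :", np.std(means_train))
print("spread of mean estimate from G*N events :", np.std(means_amp))
# The amplified sample is G times larger, yet its estimate is not ~sqrt(G) times more
# precise: its accuracy is limited by the information in the original N events.
```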

Heterogeneous transfer learning for high dimensional regression with feature mismatch arxiv.org/abs/2412.18081 .ML .LG

We consider the problem of transferring knowledge from a source, or proxy, domain to a new target domain for learning a high-dimensional regression model with possibly different features. Recently, the statistical properties of homogeneous transfer learning have been investigated. However, most homogeneous transfer and multi-task learning methods assume that the target and proxy domains have the same feature space, limiting their practical applicability. In applications, target and proxy feature spaces are frequently inherently different, for example, due to the inability to measure some variables in data-poor target environments. Conversely, existing heterogeneous transfer learning methods do not provide statistical error guarantees, limiting their utility for scientific discovery. We propose a two-stage method that involves learning the relationship between the missing and observed features through a projection step in the proxy data and then solving a joint penalized regression optimization problem in the target data. We develop an upper bound on the method's parameter estimation risk and prediction risk, assuming that the proxy and the target domain parameters are sparsely different. Our results elucidate how estimation and prediction error depend on the complexity of the model, the sample size, the extent of overlap, and the correlation between matched and mismatched features.
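
A minimal sketch of the two-stage idea, under assumptions made here for illustration (a plain linear projection in stage one and a single Lasso fit in stage two; the paper's joint penalty on the sparse difference between proxy and target parameters is not reproduced):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def two_stage_transfer(X_proxy_obs, X_proxy_miss, X_target_obs, y_target, alpha=0.1):
    """Stage 1: learn, on the proxy data, a projection of the features that are
    missing in the target onto the shared (observed) features.
    Stage 2: impute those features for the target and fit a penalized regression
    on the concatenation [observed features, imputed features]."""
    proj = LinearRegression().fit(X_proxy_obs, X_proxy_miss)   # projection step
    X_imputed = proj.predict(X_target_obs)                     # fill in mismatched features
    X_joint = np.hstack([X_target_obs, X_imputed])
    model = Lasso(alpha=alpha).fit(X_joint, y_target)          # sparse target-domain fit
    return proj, model
```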

Supervised centrality via sparse network influence regression: an application to the 2021 Henan floods' social network arxiv.org/abs/2412.18145 .soc-ph .ME .SI

The social characteristics of players in a social network are closely associated with their network positions and relational importance. Identifying those influential players in a network is of great importance as it helps to understand how ties are formed and how information is propagated, and, in turn, can guide the dissemination of new information. Motivated by a Sina Weibo social network analysis of the 2021 Henan Floods, where response variables for each Sina Weibo user are available, we propose a new notion of supervised centrality that emphasizes the task-specific nature of a player's centrality. To estimate the supervised centrality and identify important players, we develop a novel sparse network influence regression by introducing individual heterogeneity for each user. To overcome the computational difficulties in fitting the model for large social networks, we further develop a forward-addition algorithm and show that it can consistently identify a superset of the influential Sina Weibo users. We apply our method to analyze three responses in the Henan Floods data, the numbers of comments, reposts, and likes, and obtain meaningful results. A further simulation study corroborates the developed method.
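
The forward-addition step can be pictured as a greedy search. The sketch below is a generic version written for illustration (plain least squares with residual-sum-of-squares screening), not the authors' implementation for the network influence model:

```python
import numpy as np

def forward_addition(X, y, max_size, tol=1e-6):
    """Generic forward-addition (greedy) selection: at each step, add the column
    of X (e.g., a candidate influential user's effect) that most reduces the
    residual sum of squares of a least-squares fit."""
    n, p = X.shape
    selected, rss_prev = [], float(np.sum((y - y.mean()) ** 2))
    while len(selected) < max_size:
        best_j, best_rss = None, rss_prev
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        if best_j is None or rss_prev - best_rss < tol:
            break                      # no candidate improves the fit enough
        selected.append(best_j)
        rss_prev = best_rss
    return selected
```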

Asymptotic efficiency of inferential models and a possibilistic Bernstein--von Mises theorem arxiv.org/abs/2412.15243 .ST .TH

The inferential model (IM) framework offers an alternative to the classical probabilistic (e.g., Bayesian and fiducial) uncertainty quantification in statistical inference. A key distinction is that classical uncertainty quantification takes the form of precise probabilities and offers only limited large-sample validity guarantees, whereas the IM's uncertainty quantification is imprecise in such a way that exact, finite-sample valid inference is possible. But are the IM's imprecision and finite-sample validity compatible with statistical efficiency? That is, can IMs be both finite-sample valid and asymptotically efficient? This paper gives an affirmative answer to this question via a new possibilistic Bernstein--von Mises theorem that parallels a fundamental Bayesian result. Among other things, our result shows that the IM solution is efficient in the sense that, asymptotically, its credal set is the smallest that contains the Gaussian distribution with variance equal to the Cramér--Rao lower bound. Moreover, a corresponding version of this new Bernstein--von Mises theorem is presented for problems that involve the elimination of nuisance parameters, which settles an open question concerning the relative efficiency of profiling-based versus extension-based marginalization strategies.
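
For orientation, the Gaussian possibility measure appearing in the efficiency statement has a simple contour. Written out (a standard form, stated here as an illustration rather than quoted from the paper), the limit corresponds to the map

$$ \theta \;\mapsto\; 1 - F_{\chi^2_d}\!\big( (\theta - \hat\theta_n)^\top \big[ n I(\hat\theta_n) \big] (\theta - \hat\theta_n) \big), $$

where $\hat\theta_n$ is the maximum likelihood estimator, $I(\cdot)$ the Fisher information, $d$ the parameter dimension, and $F_{\chi^2_d}$ the chi-square distribution function with $d$ degrees of freedom; the covariance $[n I(\hat\theta_n)]^{-1}$ is the Cramér--Rao lower bound mentioned in the abstract.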

Quantile Mediation Analytics arxiv.org/abs/2412.15401 .ME

Mediation analytics help examine if and how an intermediate variable mediates the influence of an exposure variable on an outcome of interest. Quantiles, rather than the mean, of an outcome are scientifically relevant to the comparison among specific subgroups in practical studies. Although some empirical studies are available in the literature, a thorough theoretical investigation of quantile-based mediation analysis is lacking, which hinders practitioners from using such methods to answer important scientific questions. To address this significant technical gap, in this paper we develop a quantile mediation analysis methodology to facilitate the identification, estimation, and testing of quantile mediation effects under a hypothesized directed acyclic graph. We establish two key estimands, the quantile natural direct effect (qNDE) and the quantile natural indirect effect (qNIE), in the counterfactual framework, both of which have closed-form expressions. To overcome the issue that the null hypothesis of no mediation effect is composite, we establish a powerful adaptive bootstrap method that is shown theoretically and numerically to achieve proper type I error control. We illustrate the proposed quantile mediation analysis methodology through both extensive simulation experiments and a real-world dataset in which we investigate the mediation effect of lipidomic biomarkers for the influence of exposure to phthalates on early childhood obesity, clinically diagnosed by the 95th percentile of body mass index.
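
For orientation, one natural way to write such estimands (a hedged paraphrase of the usual counterfactual quantile contrasts, not quoted from the paper) uses potential outcomes $Y(a, M(a'))$ under exposure level $a$ with the mediator drawn as under level $a'$: writing $Q_{Y(a,M(a'))}(\tau)$ for the $\tau$-th quantile,

$$ \mathrm{qNDE}(\tau) = Q_{Y(1,M(0))}(\tau) - Q_{Y(0,M(0))}(\tau), \qquad \mathrm{qNIE}(\tau) = Q_{Y(1,M(1))}(\tau) - Q_{Y(1,M(0))}(\tau), $$

so that the two effects add up to the total quantile effect $Q_{Y(1,M(1))}(\tau) - Q_{Y(0,M(0))}(\tau)$.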

High-dimensional sliced inverse regression with endogeneity arxiv.org/abs/2412.15530 .ME

Sliced inverse regression (SIR) is a popular sufficient dimension reduction method that identifies a few linear transformations of the covariates without losing regression information about the response. In high-dimensional settings, SIR can be combined with sparsity penalties to achieve sufficient dimension reduction and variable selection simultaneously. Nevertheless, both classical and sparse estimators assume the covariates are exogenous. However, endogeneity can arise in a variety of situations, such as when variables are omitted or are measured with error. In this article, we show that such endogeneity invalidates SIR estimators, leading to inconsistent estimation of the true central subspace. To address this challenge, we propose a two-stage Lasso SIR estimator, which first constructs a sparse high-dimensional instrumental variables model to obtain fitted values of the covariates spanned by the instruments, and then applies SIR augmented with a Lasso penalty on these fitted values. We establish theoretical bounds for the estimation and selection consistency of the true central subspace for the proposed estimators, allowing the number of covariates and instruments to grow exponentially with the sample size. Simulation studies and applications to two real-world datasets in nutrition and genetics illustrate the superior empirical performance of the two-stage Lasso SIR estimator compared with existing methods that disregard endogeneity and/or nonlinearity in the outcome model.
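
A compact sketch of the two-stage construction, under simplifying assumptions made here (a multi-task Lasso first stage and classical, unpenalized SIR in the second stage; the paper's Lasso-penalized SIR step is not reproduced):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import MultiTaskLasso

def two_stage_sir(Z, X, y, n_slices=10, n_dirs=1, alpha=0.1):
    """Stage 1: sparse instrumental-variables fit of the covariates X on the
    instruments Z; keep the fitted values spanned by the instruments.
    Stage 2: sliced inverse regression on those fitted values."""
    X_hat = MultiTaskLasso(alpha=alpha).fit(Z, X).predict(Z)
    Xc = X_hat - X_hat.mean(axis=0)
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)                      # slice the response
    means = np.vstack([Xc[idx].mean(axis=0) for idx in slices])   # per-slice covariate means
    w = np.array([len(idx) / len(y) for idx in slices])
    M = means.T @ (means * w[:, None])                            # between-slice covariance
    Sigma = np.cov(Xc, rowvar=False) + 1e-8 * np.eye(Xc.shape[1])
    evals, evecs = eigh(M, Sigma)                                 # generalized eigenproblem
    return evecs[:, ::-1][:, :n_dirs]                             # leading SIR directions
```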

Protocol for an Observational Study on the Effects of Paternal Alcohol Use Disorder on Children's Later Life Outcomes arxiv.org/abs/2412.15535 .AP .ME

The harmful effects of growing up with a parent with an alcohol use disorder have been closely examined in children and adolescents, and are reported to include mental and physical health problems, interpersonal difficulties, and a worsened risk of future substance use disorders. However, few studies have investigated how these impacts evolve into later adulthood, leaving the ensuing long-term effects of interest. In this article, we provide the protocol for our observational study of the long-term consequences of growing up with a father who had an alcohol use disorder. We will use data from the Wisconsin Longitudinal Study to examine impacts on long-term economic success, interpersonal relationships, and physical and mental health. To reinforce our findings, we will conduct this investigation on two discrete subpopulations of individuals in our study, allowing us to analyze the replicability of our conclusions. We introduce a novel statistical design, called data turnover, to carry out this analysis. Data turnover allows a single group of statisticians and domain experts to work together to assess the strength of evidence gathered across multiple data splits, while incorporating both qualitative and quantitative findings from data exploration. We delineate our analysis plan using this new method and conclude with a brief discussion of some additional considerations for our study.
