Community Detection and Classification Guarantees Using Embeddings Learned by Node2Vec. (arXiv:2310.17712v1 [stat.ML]) arxiv.org/abs/2310.17712

Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms. (arXiv:2310.17716v1 [quant-ph]) arxiv.org/abs/2310.17716

Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization. (arXiv:2310.17759v1 [cs.LG]) arxiv.org/abs/2310.17759

Novel Models for Multiple Dependent Heteroskedastic Time Series. (arXiv:2310.17760v1 [stat.ME]) arxiv.org/abs/2310.17760

Minibatch Markov chain Monte Carlo Algorithms for Fitting Gaussian Processes. (arXiv:2310.17766v1 [stat.CO]) arxiv.org/abs/2310.17766

Learning Optimal Classification Trees Robust to Distribution Shifts. (arXiv:2310.17772v1 [cs.LG]) arxiv.org/abs/2310.17772

Maximum entropy-based modeling of community-level hazard responses for civil infrastructures. (arXiv:2310.17798v1 [stat.AP]) arxiv.org/abs/2310.17798

Probabilistic Multi-product Trading in Sequential Intraday and Frequency-Regulation Markets. (arXiv:2310.17799v1 [eess.SY]) arxiv.org/abs/2310.17799

Transporting treatment effects from difference-in-differences studies. (arXiv:2310.17806v1 [stat.ME]) arxiv.org/abs/2310.17806

Dual-Class Stocks: Can They Serve as Effective Predictors? (arXiv:2310.16845v1 [q-fin.ST]) arxiv.org/abs/2310.16845

Covariance Operator Estimation: Sparsity, Lengthscale, and Ensemble Kalman Filters. (arXiv:2310.16933v1 [math.ST]) arxiv.org/abs/2310.16933

Causal Q-Aggregation for CATE Model Selection. (arXiv:2310.16945v1 [stat.ML]) arxiv.org/abs/2310.16945

Efficient Neural Network Approaches for Conditional Optimal Transport with Applications in Bayesian Inference. (arXiv:2310.16975v1 [stat.ML]) arxiv.org/abs/2310.16975

Randomization Inference When N Equals One. (arXiv:2310.16989v1 [stat.ME]) arxiv.org/abs/2310.16989

On the Identifiability and Interpretability of Gaussian Process Models. (arXiv:2310.17023v1 [stat.ML]) arxiv.org/abs/2310.17023

Benign Oscillation of Stochastic Gradient Descent with Large Learning Rates. (arXiv:2310.17074v1 [cs.LG]) arxiv.org/abs/2310.17074

Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult. (arXiv:2310.17087v1 [cs.LG]) arxiv.org/abs/2310.17087

A Sparse Bayesian Learning for Diagnosis of Nonstationary and Spatially Correlated Faults with Application to Multistation Assembly Systems. (arXiv:2310.16058v1 [cs.LG]) arxiv.org/abs/2310.16058
