Likelihood-based solution to the Monty Hall puzzle and a related 3-prisoner paradox. (arXiv:2010.02211v1 [stat.OT]) arxiv.org/abs/2010.02211

A new Framework for Causal Discovery. (arXiv:2010.02247v1 [stat.ME]) arxiv.org/abs/2010.02247

Forecasting COVID-19 daily cases using phone call data. (arXiv:2010.02252v1 [stat.AP]) arxiv.org/abs/2010.02252

Temporal Difference Uncertainties as a Signal for Exploration. (arXiv:2010.02255v1 [cs.AI]) arxiv.org/abs/2010.02255

Subspace Embeddings Under Nonlinear Transformations. (arXiv:2010.02264v1 [cs.LG]) arxiv.org/abs/2010.02264

Detecting approximate replicate components of a high-dimensional random vector with latent structure. (arXiv:2010.02288v1 [stat.ME]) arxiv.org/abs/2010.02288

Latent World Models For Intrinsically Motivated Exploration. (arXiv:2010.02302v1 [cs.LG]) arxiv.org/abs/2010.02302

A Power Analysis of the Conditional Randomization Test and Knockoffs. (arXiv:2010.02304v1 [math.ST]) arxiv.org/abs/2010.02304

Deep Anomaly Detection by Residual Adaptation. (arXiv:2010.02310v1 [cs.LG]) arxiv.org/abs/2010.02310

Evaluating Progress on Machine Learning for Longitudinal Electronic Healthcare Data. (arXiv:2010.01149v1 [cs.LG]) arxiv.org/abs/2010.01149

Representational aspects of depth and conditioning in normalizing flows. (arXiv:2010.01155v1 [cs.LG]) arxiv.org/abs/2010.01155

Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty. (arXiv:2010.01171v1 [cs.LG]) arxiv.org/abs/2010.01171

The Surprising Power of Graph Neural Networks with Random Node Initialization. (arXiv:2010.01179v1 [cs.LG]) arxiv.org/abs/2010.01179

Deep FPF: Gain function approximation in high-dimensional setting. (arXiv:2010.01183v1 [cs.LG]) arxiv.org/abs/2010.01183

Covariate Shift Adaptation in High-Dimensional and Divergent Distributions. (arXiv:2010.01184v1 [stat.ML]) arxiv.org/abs/2010.01184

Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding. (arXiv:2010.01185v1 [cs.IT]) arxiv.org/abs/2010.01185

Neighbourhood Distillation: On the benefits of non end-to-end distillation. (arXiv:2010.01189v1 [cs.LG]) arxiv.org/abs/2010.01189

Stock2Vec: A Hybrid Deep Learning Framework for Stock Market Prediction with Representation Learning and Temporal Convolutional Network. (arXiv:2010.01197v1 [q-fin.ST]) arxiv.org/abs/2010.01197

$f$-GAIL: Learning $f$-Divergence for Generative Adversarial Imitation Learning. (arXiv:2010.01207v1 [cs.LG]) arxiv.org/abs/2010.01207

Universal consistency and rates of convergence of multiclass prototype algorithms in metric spaces. (arXiv:2010.00636v1 [cs.LG]) arxiv.org/abs/2010.00636
