These are public posts tagged with #hyperparameters.
IRIS Insights | Nico Formanek: Are hyperparameters vibes?
April 24, 2025, 2:00 p.m. (CEST)
Our second IRIS Insights talk will take place with Nico Formanek.
This talk will discuss the role of hyperparameters in optimization methods for model selection (currently often called ML) from a philosophy of science point of view. Special consideration is given to the question of whether there can be principled ways to fix hyperparameters in a maximally agnostic setting.
This is a Webex talk to which everyone interested is cordially invited. It will be held in English and moderated by our IRIS speaker, Jun.-Prof. Dr. Maria Wirzberger. Following Nico Formanek's presentation, there will be an opportunity to ask questions. We look forward to active participation.
Please join this Webex talk using the following link:
https://lnkd.in/eJNiUQKV
#Hyperparameters #ModelSelection #Optimization #MLMethods #PhilosophyOfScience #ScientificMethod #AgnosticLearning #MachineLearning #InterdisciplinaryResearch #AIandPhilosophy #EthicsInAI #ResponsibleAI #AITheory #WebTalk #OnlineLecture #ResearchTalk #ScienceEvents #OpenInvitation #AICommunity #LinkedInScience #TechPhilosophy #AIConversations
'Empirical Design in Reinforcement Learning', by Andrew Patterson, Samuel Neumann, Martha White, Adam White.
http://jmlr.org/papers/v25/23-0183.html
#reinforcement #experiments #hyperparameters
#CausalML update - I am now fitting my first #CausalForest on real data!
Does anyone have advice on the most important #hyperparameters (after the number of trees and tree depth)?
I'm working on large imbalanced data sets and a large number of treatment variables, so it's not like anything you see in the economics literature. #ML #AI #causal
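For illustration only (the poster's library and data aren't stated), here is a minimal Python sketch using econml's CausalForestDML that exposes the hyperparameters typically tuned beyond tree count and depth; all names and values are hypothetical.

import numpy as np
from econml.dml import CausalForestDML

# Toy stand-in for a large, imbalanced dataset (hypothetical; replace with real X, T, Y).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))            # covariates
T = rng.binomial(1, 0.1, size=5000)        # imbalanced binary treatment
Y = X[:, 0] * T + rng.normal(size=5000)    # outcome

est = CausalForestDML(
    discrete_treatment=True,
    n_estimators=2000,      # number of trees
    max_depth=None,         # tree depth
    min_samples_leaf=10,    # leaf size, another commonly tuned knob
    max_samples=0.45,       # subsample fraction drawn per tree
    honest=True,            # honest splitting, a core causal-forest setting
)
est.fit(Y, T, X=X)
print(est.effect(X[:5]))    # estimated treatment effects for the first few rows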
'On the Hyperparameters in Stochastic Gradient Descent with Momentum', by Bin Shi.
http://jmlr.org/papers/v25/22-1189.html
#sgd #hyperparameters #stochastic
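For context (a textbook statement, not taken from the paper): the hyperparameters at issue are the step size \eta and the momentum coefficient \mu in the momentum update

v_{t+1} = \mu v_t - \eta \nabla f(x_t), \qquad x_{t+1} = x_t + v_{t+1}.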
'Pre-trained Gaussian Processes for Bayesian Optimization', by Zi Wang et al.
http://jmlr.org/papers/v25/23-0269.html
#priors #prior #hyperparameters
'An Algorithmic Framework for the Optimization of Deep Neural Networks Architectures and Hyperparameters', by Julie Keisler, El-Ghazali Talbi, Sandra Claudel, Gilles Cabriel.
http://jmlr.org/papers/v25/23-0166.html
#forecasting #algorithmic #hyperparameters
'Low-rank Variational Bayes correction to the Laplace method', by Janet van Niekerk, Haavard Rue.
http://jmlr.org/papers/v25/21-1405.html
#variational #hyperparameters #approximations
'Beyond the Golden Ratio for Variational Inequality Algorithms', by Ahmet Alacaoglu, Axel Böhm, Yura Malitsky.
http://jmlr.org/papers/v24/22-1488.html
#ascent #constrained #hyperparameters
Computationally-efficient initialisation of GPs: The generalised variogram method
Felipe Tobar, Elsa Cazelles, Taco de Wolff
Action editor: Cédric Archambeau.
'Prior Specification for Bayesian Matrix Factorization via Prior Predictive Matching', by Eliezer de Souza da Silva, Tomasz Kuśmierczyk, Marcelo Hartmann, Arto Klami.
http://jmlr.org/papers/v24/21-0623.html
#factorization #hyperparameters #priors
No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL
Han Wang, Archit Sakhadeo, Adam M White et al.
Is there a #julialang equivalent of https://github.com/google/gin-config ?
I found using .gin files a really simple but useful way to store #hyperparameters during #deeplearning
If you do #machinelearning with #python and haven't heard of it, check it out!
Gin provides a lightweight configuration framework…
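For readers who haven't used it, a minimal sketch of how gin binds hyperparameters in Python (the function name and values are illustrative):

import gin

@gin.configurable
def train(learning_rate=1e-3, batch_size=32, n_epochs=10):
    # Defaults are overridden by whatever bindings gin has parsed.
    print(f"lr={learning_rate}, batch={batch_size}, epochs={n_epochs}")

# These bindings would normally live in a .gin file loaded via gin.parse_config_file("config.gin").
gin.parse_config("""
train.learning_rate = 0.01
train.batch_size = 64
""")

train()  # runs with lr=0.01, batch_size=64, n_epochs=10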
The first one is the dual #benchmark: comparing all models with both default and tuned #hyperparameters.
Sure, it doesn't make much difference for production deployment of the model, but good defaults are very convenient during #EDA and early experiments.
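A minimal sketch of that kind of default-vs-tuned comparison (the dataset, model, and grid here are placeholders, not the poster's setup):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Score with default hyperparameters.
default_score = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

# Score with a small tuned grid.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
grid.fit(X, y)

print(f"default CV accuracy: {default_score:.3f}   tuned CV accuracy: {grid.best_score_:.3f}")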