Uni Stuttgart IRIS

IRIS Insights | Nico Formanek: Are hyperparameters vibes?
April 24, 2025, 2:00 p.m. (CEST)
Our second IRIS Insights talk will be given by Nico Formanek.
🟦
This talk will discuss the role of hyperparameters in optimization methods for model selection (currently often called ML) from a philosophy of science point of view. Special consideration is given to the question of whether there can be principled ways to fix hyperparameters in a maximally agnostic setting.
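For readers who want a concrete anchor (this example is ours, not the speaker's): in practice hyperparameters are typically "fixed" by searching a hand-picked grid and scoring by cross-validation, a procedure whose own choices (grid, metric, folds) are exactly the kind of thing the talk questions. A minimal scikit-learn sketch:

```python
# Minimal illustration (not from the talk): the standard, arguably
# unprincipled way hyperparameters get "fixed" in practice is a grid
# search scored by cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# C (inverse regularization strength) is the hyperparameter being "fixed";
# the grid itself is a further, rarely justified choice.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```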
🟦
This is a Webex talk to which everyone interested is cordially invited. The talk will be held in English and moderated by our IRIS speaker, Jun.-Prof. Dr. Maria Wirzberger. Following Nico Formanek's presentation, there will be an opportunity to ask questions. We look forward to your active participation.
🟦
Please join this Webex talk using the following link:
lnkd.in/eJNiUQKV
🟦
#Hyperparameters #ModelSelection #Optimization #MLMethods #PhilosophyOfScience #ScientificMethod #AgnosticLearning #MachineLearning #InterdisciplinaryResearch #AIandPhilosophy #EthicsInAI #ResponsibleAI #AITheory #WebTalk #OnlineLecture #ResearchTalk #ScienceEvents #OpenInvitation #AICommunity #LinkedInScience #TechPhilosophy #AIConversations

JMLR

'Empirical Design in Reinforcement Learning', by Andrew Patterson, Samuel Neumann, Martha White, Adam White.

jmlr.org/papers/v25/23-0183.ht

#reinforcement #experiments #hyperparameters

Carl Gold, PhD

#CausalML update - I am now fitting my first #CausalForest on real data!

Does anyone have advice on the most important #hyperparameters (after the number of trees and tree depth)?

I'm working on large imbalanced data sets and a large number of treatment variables, so it's not like anything you see in the economics literature. 🤔 #ML #AI #causal
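Not an answer from the thread, but for orientation: in the grf-style causal forest literature, the knobs usually flagged as next most influential after tree count and depth are the leaf size, the per-tree subsample fraction, and honest splitting. A hedged sketch assuming econml's CausalForestDML (parameter names per that library; the data here is synthetic and only stands in for the real data described in the post):

```python
# Hedged sketch, assuming econml's CausalForestDML; synthetic stand-in data.
import numpy as np
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 10))            # effect modifiers
W = rng.normal(size=(4000, 5))             # confounders
T = rng.binomial(1, 0.1, size=4000)        # imbalanced binary treatment
Y = T * X[:, 0] + rng.normal(size=4000)    # effect heterogeneous in X[:, 0]

est = CausalForestDML(
    discrete_treatment=True,
    n_estimators=1000,      # number of trees
    max_depth=None,         # tree depth
    min_samples_leaf=50,    # leaf size: often the next knob worth sweeping
    max_samples=0.45,       # per-tree subsample fraction (honest splitting)
    random_state=0,
)
est.fit(Y, T, X=X, W=W)
print(est.effect(X[:5]))    # estimated per-row treatment effects
```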

JMLR

'On the Hyperparameters in Stochastic Gradient Descent with Momentum', by Bin Shi.

jmlr.org/papers/v25/22-1189.ht

#sgd #hyperparameters #stochastic
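The announcement is just a link, but for context: the hyperparameters in question are the two in the classical heavy-ball update, the learning rate and the momentum coefficient (the paper may use a different parameterization). A minimal NumPy sketch:

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One heavy-ball momentum update; lr and mu are the two hyperparameters."""
    v = mu * v - lr * grad(w)   # velocity accumulates past gradients
    w = w + v
    return w, v

# Toy quadratic f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w, v = np.ones(3), np.zeros(3)
for _ in range(100):
    w, v = sgd_momentum_step(w, v, grad=lambda w: w)
print(w)  # approaches the minimizer at 0
```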

JMLR

'Pre-trained Gaussian Processes for Bayesian Optimization', by Zi Wang et al.

jmlr.org/papers/v25/23-0269.ht

#priors #prior #hyperparameters

JMLR

'An Algorithmic Framework for the Optimization of Deep Neural Networks Architectures and Hyperparameters', by Julie Keisler, El-Ghazali Talbi, Sandra Claudel, Gilles Cabriel.

jmlr.org/papers/v25/23-0166.ht

#forecasting #algorithmic #hyperparameters

JMLR

'Low-rank Variational Bayes correction to the Laplace method', by Janet van Niekerk, Håvard Rue.

jmlr.org/papers/v25/21-1405.ht

#variational #hyperparameters #approximations

JMLR

'Beyond the Golden Ratio for Variational Inequality Algorithms', by Ahmet Alacaoglu, Axel Böhm, Yura Malitsky.

jmlr.org/papers/v24/22-1488.ht

#ascent #constrained #hyperparameters

Published papers at TMLR

Computationally-efficient initialisation of GPs: The generalised variogram method

Felipe Tobar, Elsa Cazelles, Taco de Wolff

Action editor: Cédric Archambeau.

openreview.net/forum?id=slsAQH

#gps #geostatistics #hyperparameters
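Why initialisation matters here (a generic scikit-learn illustration, not the paper's variogram method): GP kernel hyperparameters are fit by maximizing a non-convex log marginal likelihood, so the result depends on the starting values, and the usual remedy of random restarts multiplies the cost that a cheap initialiser could avoid.

```python
# Generic illustration (not the generalised variogram method): GP kernel
# hyperparameters (lengthscale, signal/noise variance) are fit by maximizing
# the non-convex log marginal likelihood, so starting values matter.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

# length_scale=1.0 and noise_level=0.1 are *initial* guesses;
# n_restarts_optimizer pays for extra random initialisations.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gp.fit(X, y)
print(gp.kernel_)  # optimized hyperparameters
```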

JMLR

'Prior Specification for Bayesian Matrix Factorization via Prior Predictive Matching', by Eliezer de Souza da Silva, Tomasz Kuśmierczyk, Marcelo Hartmann, Arto Klami.

jmlr.org/papers/v24/21-0623.ht

#factorization #hyperparameters #priors

Published papers at TMLR

No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL

Han Wang, Archit Sakhadeo, Adam M White et al.

openreview.net/forum?id=AiOUi3

#hyperparameters #hyperparameter #learns

wwydmanski

The first one is the dual #benchmark: comparing all models with both default and tuned #hyperparameters.
Sure, it doesn't make much difference for production deployment, but good defaults are very convenient during #EDA and early experiments.
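A minimal sketch of what such a dual benchmark can look like (our illustration; the poster's actual setup isn't shown): score each model once with library defaults and once after tuning, and report both numbers side by side.

```python
# Hypothetical "dual benchmark": evaluate a model with default
# hyperparameters and again after tuning, reporting both scores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

default_score = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5
).mean()

tuned = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [None, 5, 10], "min_samples_leaf": [1, 5, 20]},
    cv=5,
).fit(X, y)

print(f"default: {default_score:.3f}  tuned: {tuned.best_score_:.3f}")
```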