arXiv Statistics @arxiv_stats@qoto.org

Meta-Principled Family of Hyperparameter Scaling Strategies. (arXiv:2210.04909v1 [cs.LG]) http://arxiv.org/abs/2210.04909


In this note, we first derive a one-parameter family of hyperparameter scaling strategies that interpolates between neural-tangent scaling and mean-field/maximal-update scaling. We then calculate the scalings of dynamical observables -- network outputs, neural tangent kernels, and differentials of neural tangent kernels -- for wide and deep neural networks. These calculations in turn reveal a proper way to scale depth with width so that the resulting large-scale models retain their representation-learning ability. Finally, we observe that the various infinite-width limits examined in the literature correspond to distinct corners of the interconnected web spanned by effective theories for finite-width neural networks, with training dynamics ranging from weakly coupled to strongly coupled.
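
To make the interpolation concrete, here is a minimal NumPy sketch of how such a one-parameter family can be organized. It is an illustration under assumed notation, not the paper's prescription: the interpolation parameter s, the exponent choices, and the function name are all assumptions introduced here. The two corner cases are the known ones: a 1/sqrt(n) readout with an O(1) learning rate (neural-tangent scaling) and a 1/n readout with a learning rate that grows with width (mean-field/maximal-update scaling).

    import numpy as np

    def scaled_hyperparameters(n, s, eta0=1.0):
        """Illustrative one-parameter family of width-n scalings.

        s = 0 recovers neural-tangent (NTK) scaling; s = 1 recovers
        mean-field/maximal-update scaling. The symbol s and the exact
        exponents are assumptions for illustration only.
        """
        return {
            # Hidden-layer weights: variance 1/fan_in in both corner regimes.
            "hidden_init_std": n ** -0.5,
            # Readout multiplier: n^{-1/2} at s = 0 (NTK), n^{-1} at s = 1 (mean-field).
            "readout_multiplier": n ** (-(1.0 + s) / 2.0),
            # SGD learning rate grows with width to compensate for the shrinking
            # readout scale: O(1) at s = 0, O(n) at s = 1.
            "learning_rate": eta0 * n ** s,
        }

    # Example: sweep the interpolation at width n = 1024.
    for s in (0.0, 0.5, 1.0):
        print(s, scaled_hyperparameters(1024, s))

In this toy picture, s = 0 is the weakly-coupled neural-tangent corner, while s = 1 suppresses the readout by a full factor of 1/n and scales the learning rate up by n, the mean-field corner where representation learning survives at large width.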

October 12, 2022 at 3:20 AM · feed2toot