How should NNs adjust their connectivity structure to get an appropriate inductive bias for a particular problem?

Continuous parameterisations enable the use of gradients to answer this question.

Thanks to @tychovdo and David Romero for a great collaboration across the North Sea.

See qoto.org/web/statuses/10934700 for more.

🌟New work!🌟 Weight symmetries (e.g. equivariances) of NNs are typically fixed and cannot be adjusted. We construct flexible symmetry constraints to efficiently interpolate between a linear map, equivariance, or invariance. w/ co-authors @davidromero and @markvanderwilk. 🧵👇 1/11
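To give a flavour (a minimal, hypothetical sketch, not the construction from the paper): make the degree of symmetry itself a differentiable quantity, e.g. a learnable gate that interpolates between an unconstrained linear map and its group-averaged, equivariant version. The example group, the sigmoid gating, and all names below are illustrative assumptions.

```python
# Hypothetical sketch of a "soft" symmetry constraint (not the paper's construction).
# A learnable gate alpha in [0, 1] interpolates between an unconstrained linear map
# (alpha = 0) and a C_n-equivariant one (alpha = 1); both the weight and the gate
# receive gradients from the task loss.
import torch
import torch.nn as nn


def cyclic_shift_matrices(n):
    """Permutation matrices of the cyclic shift group C_n (an assumed example group)."""
    eye = torch.eye(n)
    return [torch.roll(eye, shifts=k, dims=0) for k in range(n)]


class SoftEquivariantLinear(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n, n) / n ** 0.5)
        self._gate = nn.Parameter(torch.zeros(1))  # alpha = sigmoid(_gate)
        self.group = cyclic_shift_matrices(n)

    def forward(self, x):
        # Group-averaged weight (Reynolds operator): exactly C_n-equivariant.
        w_eq = sum(g.T @ self.weight @ g for g in self.group) / len(self.group)
        alpha = torch.sigmoid(self._gate)
        w = (1 - alpha) * self.weight + alpha * w_eq  # continuous interpolation
        return x @ w.T


layer = SoftEquivariantLinear(n=8)
y = layer(torch.randn(4, 8))  # gradients flow into both the weight and the gate
```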

The longer I spend in the AI/ML community, the more I become convinced that the major issues facing our world are political/organisational, rather than technical.

We have had enough food to feed the world for decades, yet somehow agricultural robotics is seen as a solution to "feed the world".

It's interesting to see a book on AI give a perspective on this, even though there may still be many different analyses about why this is happening, and differing opinions on what to do about it.

qoto.org/web/statuses/10929992

I am dead, I can’t believe this AI intro text I just got. If you thumb through it, there are all the usual suspects like breadth-first search and probability.

Then you open the first chapter and it goes HARD on the current state of things.

Then it’s back to mathematical notation like nothing happened.

#introduction
Hello, Mastodon world.

Postgraduate researcher (PhD) in Machine Learning at Imperial College London, supervised by @markvanderwilk. My main research interests are learning inductive biases and the generalisation of neural networks.

Currently, I am working on deep neural networks in which the architectural (e.g. symmetry) structures themselves can be learned with gradients from training data.

Interested? Follow or send a direct message to chat or meet up at NeurIPS 2022.

@nbranchini This is indeed exactly what I was looking for!

This provides a very compelling story (a small numerical sketch follows the list):
- Deterministic quadrature rules relying on uniform grids see their convergence rate degrade as the dimension increases.
- The *rate* of MC does not slow down with dimension.
- However, a particular problem can have a single-sample variance that grows with the dimension D.
- Importance sampling / MCMC is a way to reduce this effect.
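A toy numerical comparison of the first two points (my own sketch; the integrand, budget, and dimensions are chosen arbitrarily): integrate f(x) = exp(-||x||^2) over [0,1]^d with a fixed budget of N evaluations, using a midpoint grid rule versus plain Monte Carlo. The grid rule degrades as d grows, while the MC error stays roughly O(N^{-1/2}).

```python
# Toy comparison: midpoint grid quadrature vs plain Monte Carlo under a fixed budget.
import math
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-np.sum(x ** 2, axis=-1))
N = 4096  # total number of function evaluations for both methods

for d in (1, 2, 4, 6):
    # True value: the integral factorises, and int_0^1 exp(-x^2) dx = (sqrt(pi)/2) * erf(1).
    truth = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** d

    # Midpoint grid rule: only n = N^(1/d) points per axis are affordable.
    n = max(int(round(N ** (1 / d))), 2)
    axis = (np.arange(n) + 0.5) / n
    grid = np.stack(np.meshgrid(*([axis] * d), indexing="ij"), axis=-1).reshape(-1, d)
    grid_err = abs(f(grid).mean() - truth)

    # Plain Monte Carlo with the same total budget.
    mc_err = abs(f(rng.random((N, d))).mean() - truth)

    print(f"d={d}: grid error {grid_err:.2e}, MC error {mc_err:.2e}")
```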

@nbranchini I have wondered about exactly this when teaching about MC. Ideally, I would make a clear comparison to the alternative: grid quadrature.

Does anyone know a simple reference that shows the curse of dimensionality for grid quadrature? Ideally one that states its rate of convergence.
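For context, the back-of-the-envelope rate I have in mind (the standard tensor-product argument, stated from memory rather than from a reference): a one-dimensional rule with error $O(n^{-k})$ (e.g. $k = 2$ for the midpoint or trapezoid rule), applied as a product rule on a $d$-dimensional grid, uses $N = n^d$ points and achieves

$$\text{error} = O\!\left(n^{-k}\right) = O\!\left(N^{-k/d}\right),$$

whereas plain Monte Carlo retains its $O\!\left(N^{-1/2}\right)$ rate in any dimension.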

We are looking for postdocs! (1) To study how brainwide neuronal activity supports diverse behaviors (w Kenneth Harris); (2) To relate the activity of a neuron to its pre- and postsynaptic neurons across cortex (w Alipasha Vaziri and Federico Rossi). tinyurl.com/CortexlabPostdoc

I've seen some instances with many scientists so far, like fediscience.org, mathstodon.xyz, and qoto.org. Are there any instances specifically for machine learning, artificial intelligence, and statistics folks?

@jwvdm I agree this would be great!

The ability to separate and curate feeds seems to be a real feature of Masto that is a genuine improvement over Twitter.

Imperial College London is offering 30 Schmidt Futures Postdoctoral Fellowships. These offer an opportunity to do great work, together with others in the Imperial community, myself included.

imperial.ac.uk/news/240936/sch
