@jmw150
what remains of our definition of intelligence when you remove it from an anthropic context?
@skells The simple way that I think of it is this:
Everything exists in a multidimensional space. Each move in a chess game, the pixels that make up a picture of a cat, the price and location at which to sell a product: they all fit this format.
Some subset of that space is defined as optimal behavior according to some set criteria. Finding that subset is learning; being deeper within it is more intelligent behavior.
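Here is a rough sketch of that framing in plain numpy: a criterion scores points in the space, and "learning" is just stepping toward the low-loss region. The quadratic loss is an invented stand-in for whatever criterion you set.

```python
import numpy as np

# Toy version of the framing above: points in a 2-D space, a
# criterion that scores them, and "learning" as moving toward the
# low-loss (optimal) region. The target and loss are made up.
TARGET = np.array([3.0, -1.0])

def loss(x):
    return np.sum((x - TARGET) ** 2)

def grad(x):
    return 2 * (x - TARGET)

x = np.zeros(2)          # start anywhere in the space
for _ in range(100):
    x -= 0.1 * grad(x)   # each step moves deeper into the optimal region

print(x, loss(x))        # ~[3, -1], loss ~0
```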
Framed this way, it becomes a kind of problem that can be tackled without thinking about intelligence at all. But borrowing from how humans and animals tackle it can be really helpful too; neural networks are an example. Neural networks are really easy to compute in parallel, but it took a feat of mathematics to see how backpropagation allows training several layers of neurons. And it is also a mathematical fact that single-layer networks are limited in what they can learn.
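Both of those mathematical facts fit in a few lines of numpy. This is just an illustrative sketch, not any particular library's API: a one-hidden-layer net trained by backpropagation learns XOR, the classic function no single-layer network can represent.

```python
import numpy as np

# XOR: a single-layer network provably cannot represent it, but one
# hidden layer trained with backpropagation can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backpropagation: the chain
    d_h = (d_out @ W2.T) * h * (1 - h)   # rule applied layer by layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
# (a net with no hidden layer cannot get all four right no matter how
# long it trains; if this particular run stalls, try another seed)
```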
@jmw150 "deeper into the space is more intelligent behaviour," I like this, so the only limitation is on having a big enough computer to crunch the numbers, enough neuron layers to parse the problem space?
@skells Whether that is possible is kind of a metaphysical question.
You can have problems that are naturally framed in infinitely many dimensions, or in an incomplete space. Or it can be like balancing on the head of a pin: there is no good set of states that transitions smoothly between balanced and unbalanced.
This last one, if taken to the weirder infinite spaces, has the consequence that there could be a lot of really intelligent behavior that is simply not learnable.
https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
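To make the pin-balancing point concrete, here is a toy sketch: an inverted pendulum linearized at the upright position, where any perturbation grows exponentially. The constants are arbitrary; only the sign of the feedback matters.

```python
import numpy as np

# "Balancing on the head of a pin": an unstable equilibrium.
# Linearized inverted pendulum: theta'' = (g/l) * theta, so any
# nonzero perturbation grows exponentially. There is no neighborhood
# of near-balanced states to learn smooth transitions within.
g_over_l = 9.8                        # arbitrary positive constant
theta, omega, dt = 1e-6, 0.0, 0.01    # start a hair off balance

for _ in range(1000):
    omega += g_over_l * theta * dt
    theta += omega * dt

print(theta)  # a microscopic error has blown up by orders of magnitude
# (the linearization stops being physical long before this, but the
# divergence is the point)
```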
@skells
1a) It could always happen randomly; plenty of inventions came from accidental discoveries.
2a) Being able to change one's own reward mechanisms on the fly is a research area. But most systems do not have that ability, because it is like short-circuiting the whole bot. I think it mostly converges to a simpler form, like the rodents that will press a button to get dopamine and then do nothing else, not even eat food (sketched below).
3a) Yes. That is often the ideal: toy universes and toy goals. For example, classifying a given image as a cat or a dog, with no possible actions or goals outside of that space.
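Here is a toy sketch of that collapse from 2a, the "dopamine button": a two-action bandit where one action stimulates the reward signal directly. The reward magnitudes are invented; any big enough gap produces the same behavior.

```python
import numpy as np

# Toy "dopamine button": two actions, one of which directly
# stimulates the reward signal. A simple epsilon-greedy value
# learner collapses onto the button and stops choosing "eat".
rewards = {"eat": 1.0, "press_button": 10.0}   # invented magnitudes
Q = {"eat": 0.0, "press_button": 0.0}
rng = np.random.default_rng(0)

for _ in range(1000):
    if rng.random() < 0.1:                 # occasional exploration
        action = rng.choice(list(Q))
    else:                                  # otherwise exploit
        action = max(Q, key=Q.get)
    Q[action] += 0.1 * (rewards[action] - Q[action])  # running average

print(Q)  # press_button dominates; the policy does nothing else
```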
1b) Not always.
2b) Sometimes. NEAT and other methods from before the deep learning craze commonly used that to their advantage.
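For flavor, a bare-bones sketch of the neuroevolution idea behind NEAT-style methods: mutate a genome, keep the fitter one. Real NEAT also evolves the network topology and protects new structure with speciation, which this omits entirely.

```python
import numpy as np

# Minimal (1+1) evolution in the NEAT spirit: mutate, keep the
# fitter genome. Here the "genome" is just a weight vector and the
# fitness function is an invented stand-in task.
rng = np.random.default_rng(0)
TARGET = np.array([0.5, -2.0, 1.5])   # hidden target (invented)

def fitness(w):
    return -np.sum((w - TARGET) ** 2)

genome = rng.normal(size=3)
for _ in range(500):
    child = genome + 0.1 * rng.normal(size=3)   # mutation
    if fitness(child) > fitness(genome):        # selection
        genome = child

print(genome)  # drifts toward [0.5, -2.0, 1.5]
```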