@jmw150
what remains of our definition of intelligence when you remove it from an anthropic context?
@jmw150 nice article, didn't follow the equations but the images are very intuitive.
if we accept the hypothesis that this is ~what intelligent agents (human or otherwise) do, how is the reward mechanism implemented?
as members of a biological species we have instincts founded upon aeons of test data filtered by "nature" (neither "physics" nor "biology" seemed to cut the mustard there).
When one optimises the quality of one's life there is already a representation in place - if that optimisation demands some sort of religious, social, philosophical or scientific revolution, then that demands another layer of representation placed on top.
perhaps this new layer is ~analogous to a new neuron layer... this leads to two questions:
1) Is there any reason to suppose that any intelligence we can build won't be couched in our previous layers of abstraction?
I agree that using mathematical methods may help us to strip out our prejudices - the question is, are our mathematical methods advanced enough to avoid stripping intelligence out of the system? The Turing test is for us to fail.
2) How is an artificial intelligence going to ratchet itself out of whatever abstract space we plunk it in without bootstrapping reward mechanisms from human culture?
maybe we're chasing a ghost, intelligence is necessarily a cybernetic, relational thing, and AI was born with the internet.
Can an agent be intelligent in isolation?
@jmw150 2 further questions:
1) Is adding another neuron *always* beneficial?
2) Does adding a neuronal layer in real time change the behaviour of the system?
@skells
1a) It could always happen randomly; plenty of inventions came from accidental discoveries.
2a) Being able to change one's own reward mechanisms on the fly is a research area. But most systems do not have that ability, because it is like short-circuiting the whole bot. I think it mostly converges down to a simpler form, like the rodents that will press a button to get dopamine and then do nothing else, not even eat food (rough sketch of that collapse below).
3a) Yes. That is often ideal: toy universes and toy goals. For example, classifying cat and dog pictures when given an image, but having no possible actions or goals outside of that space.
1b) Not always.
2b) Sometimes; NEAT and other methods from before the deep learning craze commonly used that to their advantage (sketch below).
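On 2a, here is a rough toy sketch (not from the article, all names and numbers made up) of how a learner that is allowed to pay its own reward channel directly collapses onto that behaviour, like the dopamine-button rodents:

```python
import random

# Two options the agent can take each step; names are made up for the sketch.
ACTIONS = ["eat_food", "press_button"]

def reward(action):
    # Pressing the button stimulates the reward channel directly,
    # so it pays out more than actually interacting with the environment.
    return 1.0 if action == "eat_food" else 10.0

# One value estimate per action, learned with a plain epsilon-greedy bandit.
q = {a: 0.0 for a in ACTIONS}
epsilon, alpha = 0.1, 0.1

for _ in range(1000):
    action = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    q[action] += alpha * (reward(action) - q[action])

print(q)  # the estimate for "press_button" dominates and the policy stops eating
```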
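And on 2b, a minimal sketch of the NEAT-style "add node" mutation, where an existing connection is split and a new neuron is inserted while the network is evolving. The identifiers here are illustrative, not the real NEAT implementation:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Connection:
    src: int
    dst: int
    weight: float
    enabled: bool = True

@dataclass
class Genome:
    connections: list = field(default_factory=list)
    next_node_id: int = 2   # node 0 = input, node 1 = output already exist

def add_node_mutation(genome):
    """Split a random enabled connection src->dst into src->new and new->dst."""
    enabled = [c for c in genome.connections if c.enabled]
    if not enabled:
        return
    conn = random.choice(enabled)
    conn.enabled = False                       # disable the old direct link
    new_id = genome.next_node_id
    genome.next_node_id += 1
    # Weight 1.0 into the new neuron and the old weight out of it keeps the
    # network's behaviour close to what it was right before the mutation.
    genome.connections.append(Connection(conn.src, new_id, 1.0))
    genome.connections.append(Connection(new_id, conn.dst, conn.weight))

g = Genome(connections=[Connection(0, 1, 0.7)])
add_node_mutation(g)
print(g.connections)
```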
@skells It is kind of a metaphysical question whether that is possible.
You can have problems that are naturally framed in infinite dimensions or an incomplete space. Or it can be like balancing on the head of a pin: there is no good set of states that transitions from balanced to unbalanced (rough sketch below).
This last one, if taken to the weirder infinite spaces, has the consequence that there could be a lot of really intelligent behavior that is not learnable.
https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
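A rough numerical sketch of the head-of-a-pin point, using a simplified undamped inverted pendulum (the dynamics and constants are my own assumptions, not from the linked post): almost every tiny perturbation of the balanced state eventually falls over, so "balanced" is a single unstable point rather than a learnable region of state space.

```python
import math
import random

def falls_over(theta0, steps=20000, dt=0.001, g=9.81, length=1.0):
    """Euler-integrate theta'' = (g / length) * sin(theta); True if it tips past 90 degrees."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega += (g / length) * math.sin(theta) * dt
        theta += omega * dt
        if abs(theta) > math.pi / 2:
            return True
    return False

# Sample tiny random perturbations of the perfectly balanced state theta = 0.
perturbations = [random.uniform(-1e-6, 1e-6) for _ in range(100)]
fallen = sum(falls_over(t) for t in perturbations)
print(f"{fallen} of {len(perturbations)} tiny perturbations fell over")
```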