I'm pretty anti-intelligence for someone who works in artificial intelligence.

Intelligence is not a false concept. It's just that so much of the idea of it is an anthropomorphism.

@jmw150
what remains of our definition of intelligence when you remove it from an anthropic context?

@skells The simple way that I think of it is this:

Everything exists in a multidimensional space. Each move in a chess game, the pixels that make up a picture of a cat, the value and location at which to sell a product: all follow this format.

Some subset of that space is defined as optimal behavior, based on some set of criteria. Finding that subset is learning; being deeper within it is more intelligent behavior.
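A rough sketch of that framing in Python, assuming numpy (the score function here is an invented stand-in for whatever the real criteria are): points in a space get scored, and "learning" is just moving toward higher-scoring points.

```python
# Minimal sketch of "learning as moving deeper into a region of a space":
# states are points in R^n, a score function says how "optimal" a point is,
# and learning is hill-climbing toward higher-scoring points.
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Hypothetical criterion: closer to the origin is "more optimal".
    return -np.sum(x ** 2)

x = rng.normal(size=5)          # a random point in a 5-dimensional space
for _ in range(1000):
    candidate = x + rng.normal(scale=0.1, size=5)   # try a nearby point
    if score(candidate) > score(x):                 # keep it if it is "deeper"
        x = candidate

print(score(x))   # approaches 0, the optimum
```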

This ends up being the kind of problem that can be tackled without thinking about intelligence at all. But thinking about how humans and animals tackle this problem can be really helpful too; neural networks are an example. Neural networks are really easy to compute in parallel, but it was a feat of mathematics to understand how applying backpropagation would allow for training several layers of neurons. And the fact that single-layer networks are limited in what they can learn is also a mathematical fact.
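Both of those mathematical facts show up in the classic XOR example: XOR is not linearly separable, so a single layer of neurons cannot represent it, but backpropagation through one hidden layer learns it. A minimal numpy sketch, assuming sigmoid units and squared error:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(a):
    # Append a constant-1 column so each layer gets a bias weight.
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 4))   # (2 inputs + bias) -> 4 hidden units
W2 = rng.normal(size=(5, 1))   # (4 hidden + bias) -> 1 output

for _ in range(20000):
    h = sigmoid(with_bias(X) @ W1)      # forward pass
    out = sigmoid(with_bias(h) @ W2)
    # Backward pass: push the error back through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)
    W2 -= 0.5 * with_bias(h).T @ d_out
    W1 -= 0.5 * with_bias(X).T @ d_h

print(out.round(2))   # typically converges toward [0, 1, 1, 0]
```

Drop the hidden layer and no amount of training gets you past the linear-separability wall; that limitation is a theorem, not an engineering problem.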

@jmw150 "deeper into the space is more intelligent behaviour," I like this, so the only limitation is on having a big enough computer to crunch the numbers, enough neuron layers to parse the problem space?

@skells Whether that is possible is kind of a metaphysical question.

You can have problems that are naturally framed in infinite dimensions, or in an incomplete space. Or it can be like balancing on the head of a pin: there is no good set of states that transitions smoothly from balanced to unbalanced.

This last case, if taken to the weirder infinite spaces, has the consequence that there could be a lot of really intelligent behavior that is simply not learnable.

colah.github.io/posts/2014-03-
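A toy version of the balancing-on-a-pin case: if the reward is nonzero only on a vanishingly small region, nearby states carry no signal, so search never finds its way in. (The tolerance and search method here are invented purely for illustration.)

```python
# The behavior exists (be exactly upright), but nothing nearby rewards
# getting closer, so hill-climbing search cannot learn it.
import numpy as np

rng = np.random.default_rng(2)

def reward(x):
    # "Balanced" only within a tiny tolerance of exactly upright.
    return 1.0 if abs(x) < 1e-9 else 0.0

x = rng.uniform(-1, 1)
for _ in range(100_000):
    candidate = x + rng.normal(scale=0.01)
    # Every candidate scores 0, so there is no gradient to climb.
    if reward(candidate) > reward(x):
        x = candidate

print(reward(x))   # almost surely still 0.0
```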

@jmw150 nice article, didn't follow the equations but the images are very intuitive.

if we accept the hypothesis that this is ~what intelligent agents (human or otherwise) do, how is the reward mechanism implemented?

as a member of a biological species we have instincts founded upon aeons of test data filtered by "nature" (neither "physics" nor "biology" seemed to cut the mustard there).

When one optimises the quality of one's life there is already a representation in place - if optimising the quality of one's life demands some sort of religious, social, philosophical or scientific revolution, then this demands another layer of representation placed on top.

perhaps this new layer is ~analogous to a new neuron layer... this leads to two questions:

1) Is there any reason to suppose that any intelligence we can build won't be couched in our previous layers of abstraction?

I agree that using mathematical methods may help us to strip out our prejudices - the question is, are our mathematical methods advanced enough to avoid stripping intelligence out of the system? The Turing test is for us to fail.

2) How is an artificial intelligence going to ratchet itself out of whatever abstract space we plunk it in without bootstrapping reward mechanisms from human culture?

maybe we're chasing a ghost: intelligence is necessarily a cybernetic, relational thing, and AI was born with the internet.

Can an agent be intelligent in isolation?


@jmw150 2 further questions:

1) Is adding another neuron *always* beneficial?

2) does adding a neuronal layer in real time change the behaviour of the system?


@skells
1a) It could always happen randomly; plenty of inventions came from accidental discoveries.

2a) Being able to change one's own reward mechanisms on the fly is a research area. But most systems do not have that ability, because it is like short-circuiting the whole bot. I think it mostly converges down to a simpler form, like the rodents that will press a button to get dopamine and then do nothing else, not even eat food.

3a) Yes. That is often ideal: toy universes and toy goals. For example, classifying cat and dog pictures when given an image, but having no possible actions or goals outside of that space.

1b) not always

2b) sometimes. NEAT and other methods from before the deep learning craze commonly used that to their advantage.
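For context, NEAT's signature move is structural mutation: a genome can gain a neuron mid-evolution by splitting an existing connection. A hedged sketch of just that mutation (the genome representation here is a simplification for illustration, not the actual NEAT library's API):

```python
import random

def add_node_mutation(genome):
    """Split one connection (src, dst, weight) into two, through a new node."""
    src, dst, weight = random.choice(genome["connections"])
    new_node = max(genome["nodes"]) + 1
    genome["nodes"].append(new_node)
    genome["connections"].remove((src, dst, weight))
    # NEAT convention: the incoming link gets weight 1.0 and the outgoing link
    # keeps the old weight, so the network's behavior barely changes at first.
    genome["connections"].append((src, new_node, 1.0))
    genome["connections"].append((new_node, dst, weight))
    return genome

genome = {"nodes": [0, 1], "connections": [(0, 1, 0.7)]}
print(add_node_mutation(genome))
```

The point of the convention in the comment is that structure changes in real time without abruptly changing behavior; selection then decides whether the new neuron earns its keep.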
