I have found that #AI #researchers are an extremely heterogeneous group of people. People from all around the world, from all kinds of backgrounds. People who disagree with each other strongly across a wide range of topics.

I, for example, have always been a proponent of #NeuralNetworks, specifically asynchronous and sparse ones. I know many AI researchers who think the truth will be found in traditional machine learning algorithms such as gradient boosting and Bayesian models, starting from solid statistical theory and building onwards from there. Some researchers believe mammalian brain chemistry must be simulated exactly; others believe the brain merely implements some simpler underlying algorithm that could run more efficiently in another substrate.

I am a #panpsychist who believes there can be no computation in this universe which isn't fundamentally experiential. I know many AI researchers who think that's crazy, and I think they are crazy for not coming to the same conclusion.

The blogosphere is still on fire with the great ongoing battles between people who believe in #symbolic AI and people who believe in #subsymbolic AI. Each side considers the other crazy.

I think this diversity of opinion has contributed greatly to the speed of progress, and I hope we don't lose it — even though it is very difficult to build teams with coherent visions out of people who couldn't disagree with each other more.

This difference of opinion is a great driver of progress because people are strongly motivated to prove themselves right. It's a competition in which the contestants compete, through hard work, to be right.

If you're asking yourself: "But who is right?", you might not be asking the correct question. There are probably many routes to AGI, and what matters is drive and capacity to execute.


@tero I don't know: I guess it depends a lot on the objectives of what you're working on.
For example, working in the chemistry field, I'm seeing a boom in the use of Neural Networks right now. I guess that's due to the increase in data availability.
While that's very cool, works extremely well, and produces desirable results, traditional machine learning still has one big advantage:
you can actually learn something from how the model works.
This matters because discovering a new formula or fundamental relationship is much more useful than having a high-performing model that only does well on one particular task.
In chemistry we're applying AI to speed things up and to exploit relationships we're not yet aware of; if we could do the same thing while understanding the underlying relationships, that would be much better.
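The interpretability point can be made concrete with a toy sketch (the data and the yield-vs-concentration relationship here are entirely made up for illustration): a transparent model like an ordinary least-squares fit hands you its coefficients, which you can read as a candidate chemical relationship, whereas a black-box model only hands you predictions.

```python
# Toy illustration with hypothetical data: a transparent model lets you
# read off the underlying relationship, not just make predictions.

def least_squares(xs, ys):
    """Closed-form simple linear regression: y ~= slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical measurements: reaction yield (%) vs. catalyst concentration.
concentration = [0.1, 0.2, 0.3, 0.4, 0.5]
yield_pct = [12.1, 13.9, 16.2, 17.8, 20.1]

slope, intercept = least_squares(concentration, yield_pct)
# The fitted slope IS the "discovered" relationship (~20 points of yield
# per unit of concentration) -- something a black-box model of the same
# data would not surface, even if it predicted just as well.
print(f"yield ~= {slope:.1f} * concentration + {intercept:.1f}")
```

The same idea scales up: with interpretable models you walk away with a formula a chemist can inspect; with an opaque one you walk away only with a predictor.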

Now, would an AGI that understands chemistry be useful? Definitely — it would help us a lot and surely make our jobs much easier.
But I feel it's very far away, and for the time being I'd argue that developing more complete and simpler theories should be one of the main goals when applying these methodologies.
