@jmw150 i'd be in favour of teaching the low-level stuff rather more than less. look at the state of how programming is actually done today: it's bad abstractions stacked on top of each other, and the usual fix is to throw more hardware at it.
secondly, while there are valid applications of neural nets, you end up with a statistical black box. this isn't desirable for the majority of things where you'd rather have a provable solution, which only a classic algorithm can give you (be it parallel or not).
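to make the contrast concrete, here's a minimal sketch (my toy example, nothing from the links): a classic algorithm like binary search has a correctness property you can state and check exhaustively, which is exactly what a trained net can't offer.

```python
# a classic algorithm: binary search. its correctness follows from a
# loop invariant (if the target is present, it lies in arr[lo:hi]) --
# a property you can state, prove, and mechanically check.
def binary_search(arr, target):
    lo, hi = 0, len(arr)
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(arr) and arr[lo] == target else -1

# exhaustive check against the specification on small inputs:
# found iff present, and the returned index really holds the target.
for n in range(5):
    arr = list(range(0, 2 * n, 2))
    for t in range(-1, 2 * n + 1):
        idx = binary_search(arr, t)
        assert (idx == -1) == (t not in arr)
        if idx != -1:
            assert arr[idx] == t
```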
imho, parallel algorithms are a really interesting topic, but one shadowed by "throw a neural net at it!". neural nets are just evolutionary algorithms parallelized, which is kind of boring in a sense.
NB: i tend to be overly critical about everything ;)
i'll just answer both of your replies here.
i have no problem with probability based logics, but NNs are a different kind of beast imho.
it's one thing to have a sound logic system which accounts for probabilities, and another to just let your evolutionary algorithm tune itself to match a dataset.
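what i mean by "tune itself to match a dataset", as a toy sketch (illustrative code, not from either link): random mutations are kept whenever they fit the data better, and you end up with parameters that match the dataset without any proof about behaviour outside it.

```python
import random

# toy "evolutionary" tuning: mutate the parameters, keep the mutant
# whenever it fits the dataset better. the result matches the data,
# but the process itself proves nothing about unseen inputs.
random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # ground truth: y = 2x + 1

def loss(params):
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in data)

params = [0.0, 0.0]
best = loss(params)
for _ in range(20000):
    mutant = [p + random.gauss(0, 0.1) for p in params]
    l = loss(mutant)
    if l < best:
        params, best = mutant, l

print(params, best)  # converges near a=2, b=1
```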
to quote the quote from your link:
> “By quantifying uncertainty, we’re getting closer to designing the novel transparent systems that can function in high-stakes environments. Our goal here is to create a principled approach for robust machine learning – in theory and practice.”
it sounds to me like "yeah, the neural nets do things we don't understand, let's just hedge the bet". i strongly suspect there can be inputs to those systems which give you falsely high certainty values.
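a toy illustration of such falsely high certainty (hypothetical example, not from the linked projects): a softmax only compares logits against each other, so an input far from anything in the training data can still score near 100% confidence.

```python
import math

# a linear classifier with softmax "confidence". softmax normalizes
# relative logits only, so it can't tell "confidently class A" apart
# from "far away from all the training data".
W = [[1.0, 0.0], [0.0, 1.0]]  # toy weights for a 2-class model

def confidence(x):
    logits = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in W]
    exps = [math.exp(l - max(logits)) for l in logits]  # stable softmax
    return max(exps) / sum(exps)

print(confidence([1.0, 0.0]))     # plausible input: ~0.73 confidence
print(confidence([1000.0, 0.0]))  # garbage input: confidence ~1.0
```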
i still stand by my opinion that neural nets are a local optimum of parallel algorithms. they are hyped, so they see much research. if efficiency were the main interest, there would be more research on _how to design_ good parallel algorithms instead of on how to fix the shortcomings of neural nets.
@bonifartius For the last part: I think as we exhaust the easier problems, the sophistication of neural net research will grow. Explanations of these things can get pretty fancy already.
@bonifartius On the second part, there is research being done on making neural nets safer to work with. Probability-based logics exist as well. So while I am not certain the research will pay off, I am betting it will be pretty successful.
https://trac.csail.mit.edu/
https://www.cs.purdue.edu/homes/roopsha/purform/projects.html