Some of it is actual science: theorems about convergence criteria, or the approximation power of these kinds of functions. There are also computer security people trying to make these systems more robust to adversarial examples, and that forces practitioners to understand the systems better.
Most of it is not worth much. For pretty much all of the ML papers at this session, an approximate function was found that could perform the desired behavior given just the right inputs, and that was it.
@jmw150 Of course I know (perhaps more than I wish; I have some academic titles in the broader field). The theory is very exciting indeed; what I am referring to are the applications and the hype the field has been surfing for the last few years. That in itself is not special, or wrong on its own. What is wrong is that laymen have started to believe ML techniques can magically solve all of their problems without them doing the hard work of learning the nitty-gritty details of their "domains of inexpertise". 99.999% of the time it does not work beyond the first attempt, which is deemed "promising". But yes, for image recognition and now natural language processing and similar things it worked marvels - which, in defense of the "magical thinking", would not have happened if people had not simply thrown ML at everything to see what sticks. But even when it does work, the nitty-gritty details of the domain are still needed. There is some inherent complexity to problems that actually matter.