Just got back from a research poster-board session. About 90% of the research, even in general engineering, seems to involve machine learning right now.

Sounds unimaginative and unoriginal. Nothing with initial promise has much chance if it's paired with the same ilk of thinking that keeps arriving at the same conclusion.

How unfortunate, hopefully something beneficial came out of it.

@jmw150 The wonders of magical thinking and intellectual laziness: I do not need to analyse this any more. Just throw in some data, somebody builds an ML setup, and voilà, a solution. Dissertation completed, mission accomplished. Except most of the time it's just a minor scratch of the surface, oversold as "first steps towards a _potential_ breakthrough" - with occasional/rare exceptions.

I am not complaining; we (as humankind) need to dig up this mine until we realise that there really isn't much in there, and then we'll move on - at least knowing that we did our best and failed. But watching the process from the sidelines still hurts a bit.


Some of it is actual science: theorems about convergence criteria, or the approximation ability of these kinds of functions. There are also computer-security people who try to make these systems more robust to adversarial examples, and that forces practitioners to understand the system better.
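To make the adversarial-robustness point concrete, here's a toy sketch (all names and numbers are made up for illustration, not from any paper at the session): the fast-gradient-sign trick applied to a fixed logistic model, showing how a small input perturbation along the loss gradient can collapse the model's confidence.

```python
import numpy as np

# Toy "model": logistic regression with hand-picked fixed weights
# (a hypothetical example, just to show the mechanics).
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    # Probability of class 1 under the logistic model.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    # For logistic loss, the gradient w.r.t. the *input* is (p - y) * w.
    p = predict(x)
    grad_x = (p - y) * w
    # Step in the sign direction that increases the loss.
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.0])            # clean input, true label 1
x_adv = fgsm(x, y=1.0, eps=0.9)     # perturbed input

print(predict(x))      # ~0.92: confidently class 1
print(predict(x_adv))  # ~0.12: confidence collapses
```

Probing a model this way is exactly what forces you to look inside the black box: the perturbation is only findable because you understand where the decision boundary sits.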

Most of it is not worth much. For pretty much all of the ML papers at this session, the result was an approximated function that could perform the desired behavior, given just the right inputs, and that was it.
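A caricature of that complaint, as a sketch (entirely made up, not any paper's actual method): a "model" that is effectively a lookup over the inputs it was fitted to. It performs the desired behavior flawlessly - given exactly the right inputs - while having no notion of the underlying problem.

```python
# Training data as (input, desired output) pairs.
train = {0.0: 0.0, 0.5: 1.0, 1.0: 0.0}

def model(x):
    # 1-nearest-neighbor lookup: the closest stored input wins.
    nearest = min(train, key=lambda t: abs(t - x))
    return train[nearest]

print([model(x) for x in train])  # flawless on the training inputs
print(model(0.3))                 # off the training grid, the answer is
                                  # whatever the nearest memorized point says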

@jmw150 Of course I know (perhaps more than I wish; I have some academic titles in the broader field). The theory is very exciting indeed; what I am referring to are the applications and the hype the field has been riding over the last few years. That in itself is not special, or wrong on its own. What is wrong is that laymen have started to believe ML techniques can magically solve all of their problems without doing the hard work of learning the nitty-gritty details of their "domains of inexpertise". 99.999% of the time it does not work beyond the first attempt, which is deemed "promising". But yes, for image recognition, and now natural language processing and similar things, it worked marvels - which, in defense of the "magical thinking", would not have been possible if people had simply not tried throwing ML at everything to see what sticks. But even when it actually does work, the nitty-gritty details of the domain are still needed. There's some inherent complexity to problems which actually matter.

Qoto Mastodon
