I'm starting to think that machine learning shouldn't be used for anything other than hard science: use ML to generate scientific results (think protein folding or mathematical theorem proving) that can then be checked by first-principles-based algorithms.
@maltimore I don't see why that should be the case. What's wrong with artistic uses, recommendation engines, or any number of other applications that also have human review?
@maltimore You didn't say "large scale" previously, and autonomous vehicles don't depend only on machine learning. In any case, I would say we already tolerate errors in human driving. We aren't yet at the level of human performance with artificial drivers, but I don't see any reason to think we won't get there. Once we do, dealing with the errors becomes a question of public safety and insurance rather than an engineering question.
The cases I suggested in my previous post are distinguished by the fact that actuation isn't directly tied to machine prediction. Naturally, that only works at a scale where we *could* have a human in the loop.