@maltimore I don't see why that should be. What's wrong with artistic uses, or recommendation engines, or any number of other applications that also have human review?
@maltimore You didn't say "large scale" previously, and autonomous vehicles don't depend only on machine learning. In any case, I would say we already tolerate errors in human driving. We aren't yet at the level of human performance with artificial drivers, but I see no reason to think we won't get there. Once we do, dealing with the errors becomes a question of public safety and insurance rather than an engineering question.
The cases I suggested in my previous post are distinguished by the fact that actuation isn't directly tied to machine prediction. Naturally, they operate at a scale where we *could* have a human in the loop.
@2ck OK, artistic uses are fine too, I forgot about those!
I don't think human review works at all here. In large-scale use, an ML system generates billions of predictions, and humans will only ever review a tiny fraction of them. Take automated driving: 99.999% of the time the ML system will make the right decision, but we care deeply about the remaining 0.001%, and human review will never catch those cases. It's similar for recommender systems, where inappropriate content can still slip through and get recommended.
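To put rough numbers on it, here's a back-of-the-envelope sketch. All the values below are illustrative assumptions (not measurements of any real system), and it assumes reviewers sample predictions uniformly at random:

```python
# Back-of-the-envelope: how many errors does random human review catch?
# Every number here is an illustrative assumption.

n_predictions = 1_000_000_000  # predictions per year (assumed)
error_rate = 1e-5              # 99.999% accuracy -> 0.001% errors
review_fraction = 1e-4         # humans review 0.01% of predictions (assumed)

n_errors = n_predictions * error_rate        # ~10,000 bad predictions
errors_caught = n_errors * review_fraction   # expected catches under uniform sampling

print(f"total errors:  {n_errors:,.0f}")     # 10,000
print(f"errors caught: {errors_caught:.1f}") # ~1 of 10,000
```

Under these assumptions, review catches roughly one error in ten thousand. You could do better by sampling the cases the model is least confident about, but the dangerous failures are often exactly the ones the model is confidently wrong on.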