@r2qo Machine learning is far too broad a field to ask a question like that about in general. You'd have to ask about a specific class of algorithms, like neural networks or Bayesian networks, to get a coherent answer.
@freemo
Thanks for the reply. I am pretty surprised😂. I am just starting to learn about this field.
Gradient-based approaches involve a lot of floating point arithmetic, which will certainly run into floating point error on a computer. There is also propagation of uncertainty. However, I don't see people worry about it, which confuses me.
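For example, here's a rough NumPy sketch of the kind of error I mean (the numbers are just illustrative):

```python
import numpy as np

# Relative rounding error of a single floating point operation (machine epsilon)
print(np.finfo(np.float32).eps)   # about 1.2e-07
print(np.finfo(np.float64).eps)   # about 2.2e-16

# 0.1 has no exact binary representation, so even simple sums are slightly off
print(0.1 + 0.2 == 0.3)           # False
print(abs((0.1 + 0.2) - 0.3))     # about 5.6e-17 -- tiny, but nonzero

# Repeating an operation many times lets the round-off accumulate
x = np.full(1_000_000, 0.1, dtype=np.float32)
total = np.float32(0.0)
for v in x:
    total += v                    # naive running sum in float32
print(total)                      # drifts noticeably from the exact value 100000
```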
@freemo
Thanks for the part about propagation. I will look into my model and dataset more carefully and choose the gradient accordingly.
Floating point error is not just a fixed value that gets propagated, but a structural problem that a model faces as it evolves. It could make parts of the computation nonsensical and the result unreliable. Moreover, since we accept that the model gives wrong answers at some 'acceptable' rate, we might miss our chance to fix the real problem.
@r2qo As a general rule, outside of the one case I mentioned, floating point error will never produce nonsensical output, because its effect on the result is only as large as the error itself, which is minuscule. So it might affect your error by 0.000000001% or something silly.
It's best to think of gradient descent with many parameters as simply a multi-dimensional topography; then it becomes clear that the error has no appreciable effect on descending towards a local minimum (or ascending towards a maximum).
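To make that concrete, here's a toy sketch (my own made-up NumPy example, not from any particular library): plain gradient descent on a small least-squares problem, run once with exact gradients and once with a relative perturbation of roughly float32 size injected into every gradient. Both runs land at essentially the same minimum:

```python
import numpy as np

# Hypothetical toy problem: minimise ||A w - b||^2 with plain gradient descent,
# once with exact gradients and once with every gradient perturbed by a
# relative error on the order of float32 round-off (~1e-7).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)

def descend(noise_scale, steps=2000, lr=1e-3, seed=1):
    noise_rng = np.random.default_rng(seed)
    w = np.zeros(10)
    for _ in range(steps):
        grad = 2 * A.T @ (A @ w - b)        # exact gradient of ||A w - b||^2
        grad = grad * (1 + noise_scale * noise_rng.standard_normal(grad.shape))
        w -= lr * grad
    return w

w_exact = descend(0.0)
w_noisy = descend(1e-7)
w_best = np.linalg.lstsq(A, b, rcond=None)[0]   # closed-form optimum for reference

print(np.linalg.norm(w_exact - w_noisy))        # tiny: the two runs are nearly identical
print(np.linalg.norm(w_exact - w_best))         # and both sit right at the optimum
```

Compare that perturbation to the noise already introduced by the learning rate or by mini-batch sampling, which is many orders of magnitude larger.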