@freemo
Thanks for the part about propagation. I will look into my model and dataset more carefully and choose the gradient method accordingly.
Floating point error is not a fixed, known quantity that simply propagates; it is a structural problem a model faces as it evolves. It can turn parts of the computation into nonsense and make the result unreliable. Moreover, since we accept the model giving wrong answers at an 'acceptable' rate, we may miss our chance to fix the real problem.
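As a minimal sketch of that compounding (my own illustration, not from the thread; the ten-million-addition count is an arbitrary choice), naively accumulating 0.1 drifts measurably away from the correctly rounded sum, because every addition rounds and those rounding errors propagate:

```python
import math

# Illustrative sketch: 0.1 has no exact binary representation, and every
# += rounds again, so the error grows with the number of operations.

xs = [0.1] * 10_000_000

naive = 0.0
for v in xs:
    naive += v              # each addition introduces a rounding error

exact = math.fsum(xs)       # correctly rounded sum, for comparison

print(naive)                # noticeably off from 1,000,000.0
print(abs(naive - exact))   # accumulated drift, roughly on the order of 1e-4
```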
@FBICatgirl
That's reasonable, but sadly it does not sound scientific to me. It sounds more like a web engineer lol
@freemo
Thanks for the reply. I am pretty surprised😂. I am just starting to learn about this field.
Gradient-based approaches involve lots of floating point arithmetic, and that will certainly run into floating point error on a computer. There is also propagation of uncertainty. However, I don't see people worry about it, and that confuses me.
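To make that concrete, here is a minimal sketch (illustrative only; the function f, the point x, and the step sizes h are my assumptions, not from the thread). Estimating the derivative of f(x) = x**2 at x = 1 with a forward difference shows the error shrinking and then blowing up once the step gets too small for the floating point format:

```python
# Minimal sketch: forward-difference gradient of f(x) = x**2 at x = 1,
# whose true derivative is exactly 2.0.

def f(x):
    return x * x

x = 1.0
for h in (1e-4, 1e-8, 1e-12):
    fd = (f(x + h) - f(x)) / h          # forward difference
    print(f"h={h:.0e}  estimate={fd!r}  error={abs(fd - 2.0):.1e}")

# The error first shrinks with h (truncation error ~ h), then grows again
# once rounding error (~ machine epsilon / h) dominates: the subtraction
# f(x + h) - f(x) cancels away almost all of the significant digits.
```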
Confirmed, We Live in a Simulation
https://www.scientificamerican.com/article/confirmed-we-live-in-a-simulation/
(submitted by sandebert)
Ask HN: What tech job would let me get away with the least real work possible?
https://news.ycombinator.com/item?id=26721951