Philosopher: "For all your science your beleifs are no better than a flat disc on the back of elephants resting on a turtle's back"
Astronomer: "Even without science that makes no sense, what is the turtle resting on top of!"
Philosopher: "Another turtle, its turtles all the way down!"
Astronomer: "See, that is just absurd, how could anyone believe that!"
Philosopher: "What does your science say keeps the earth and the moon fixed to each other?"
Astronomer: "Gravity, the moon orbits around the earth, or rather, the center of mass between the earth and the moon, its just an orbit"
Philosopher: "Then what does your earth orbit?"
Astronomer: "The center of the solar system."
Philosopher: "And the solar system?"
Astronomer: "The center of the galaxy!"
Philosopher: "And where does it end, what is at the end of this logic"
Astronomer with a defeated look on his face: "Its orbits, just orbits... It.. is... orbits, all the way down."
::Philosopher smiles::
Yea, the point here isn't that gravity is as valid as a world turtle. The point, in my eyes, is more that the ideas of old may seem absurd and get summarily dismissed for that absurdity, yet the real world contains notions that appear just as absurd (albeit with much stronger evidence).
@freemo @Diptchip @Science
Love the parable.
I read something similar the other day.
https://lukesmith.xyz/articles/chess.html
@torresjrjr @freemo @Diptchip @Science
Hmm... I don't get the parable. Models' accuracies are evaluated not by comparing their average prediction with the average result, but by something like the "leftover surprise" the model leaves us with (the KL divergence of the probability distribution of the world with respect to the probability distribution the model predicts). On that count, the model that captures the intricacies of chess is obviously more accurate, and the question of whether the increased complexity is "worth it" is not obviously answered in the negative.
Consider the following example: let's say we have a clock with a dot that blinks, so that it's on during every even second and off during every odd second. The article, if I extrapolate correctly, would say that the model "at every point in time the probability that the light is on is 1/2" is just as _accurate_ (ignoring the question of its complexity) as the model "at every point in time the light is on iff it was off a second ago". I don't think any model evaluation method that would claim that is useful. Thus, I see that article as, in large part, a strawman against a model evaluation method that isn't actually used and isn't intuitive. Is there something I'm missing?
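A rough sketch of what I mean, in Python (the code and numbers are my own illustration, not from the article): the coin-flip model leaves one bit of surprise per observation, while the alternating model leaves none, so no reasonable evaluation method should call them equally accurate.

```python
import math

def surprise(p):
    """Surprise, in bits, for an outcome the model assigned probability p to."""
    return math.log2(1.0 / p)

# The world: the dot is on during even seconds, off during odd seconds.
states = [t % 2 == 0 for t in range(11)]

total_coin = total_alt = 0.0
for prev, curr in zip(states, states[1:]):
    # Coin-flip model: P(on) = 1/2 at every second, regardless of history.
    total_coin += surprise(0.5)
    # Alternating model: "on iff it was off a second ago" -- it assigns
    # probability 1 to the state that actually occurs, so zero surprise.
    total_alt += surprise(1.0 if curr == (not prev) else 1e-9)

n = len(states) - 1
print(f"coin-flip model:   {total_coin / n:.2f} bits per observation")  # ~1.00
print(f"alternating model: {total_alt / n:.2f} bits per observation")   # ~0.00
```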
@freemo @torresjrjr @Diptchip @Science
> Presuming that is, in fact, what you meant, the failure here is that you can only determine how "true" a model is if you are omniscient with regards to the problem, you already know the outcome. So in any practical sense it wouldn't be a valuable way to interpret real world models.
You can approximate KL divergence by sampling from the "world" distribution (the easiest way to see how: it's essentially the expected value of log(1/p), where p is the probability the model assigned to the outcome we've sampled). That makes KL divergence estimable (with the small exception of models that assign probability 0 to some outcome) when comparing models against the real world (insofar as any estimates can be made against the real world).
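A minimal sketch of that estimator (the helper names and the 55% figure below are illustrative assumptions on my part): sample outcomes from the world and average log(1/p). That average is the cross-entropy, which differs from the KL divergence only by the world's own entropy, a constant shared by every model, so it ranks models identically.

```python
import math
import random

def estimate_surprise(sample_world, model_prob, n_samples=10_000):
    """Average log2(1/p) over outcomes drawn from the world distribution."""
    total = 0.0
    for _ in range(n_samples):
        outcome = sample_world()
        p = model_prob(outcome)
        if p <= 0.0:
            # A model that rules out an observed outcome is infinitely surprised.
            return math.inf
        total += math.log2(1.0 / p)
    return total / n_samples

# Toy "world": suppose white wins 55% of games (an assumed number, for illustration).
world = lambda: "white" if random.random() < 0.55 else "black"
coin_flip_model = lambda outcome: 0.5
better_model = lambda outcome: 0.55 if outcome == "white" else 0.45

print(estimate_surprise(world, coin_flip_model))  # ~1.000 bits
print(estimate_surprise(world, better_model))     # ~0.993 bits
```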
> So to answer the question of if the coinflip model is a good one… It's an amazing model, if you are trying to model the chance one of two randomly selected people might win, without any other knowledge.
> It is a very poor model at predicting other things however, (...)
I agree completely. I would phrase it as: it is the best model of the system in which the outcome (white/black wins) is the only random variable being modeled.
> So while it is clearly the better model, this is not due to it being a closer representation of the underlying system; it is merely better because it predicts the outcome more accurately.
I don't understand the distinction, or perhaps I should say that I don't understand what "closer representation" means. Is it something that can be evaluated (even by an omniscient evaluator)?
@robryk
By "closer representation" I simply mean that it isn't naive to the underlying mechanics. That is, it makes assumptions that chess is a game, with pieces, a board, and opening moves, all of which require some level of understanding of the nature of the game; our earlier coin-flip model is completely naive to any internal workings.
@torresjrjr @Diptchip @Science