Due to the apparent inadequacy of the usual ways of combining probabilities from multiple agents (averaging, elementwise multiplication plus renormalization, etc. are all unsatisfying in some cases), I've been looking at a betting model of predictions. In this model, multiple agents' estimates are taken into account explicitly: instead of a list of probabilities, one probability per outcome, you have a matrix of real-valued bets and a vector of payout multipliers.

So I've been wondering: how is this useful when you only have *one* bettor? Simple: that one bettor consists of multiple sub-bettors, each of whom bets differently. In the case where there is one sub-bettor per outcome, each sub-bettor bets on exactly one outcome, and no two sub-bettors bet on the same outcome, there is a simple correspondence between the expected payout for an outcome and the probability of that outcome.

The really interesting version is when there are fewer sub-bettors than outcomes and the sub-bettors' bets can overlap. In that case, each sub-bettor corresponds to a model of the outcome space, and a mixture-of-experts (MoE) structure falls out. That is: it seems natural to combine multiple models when making predictions, even for a single predictor, at least when using this betting model for prediction.
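To make the correspondence concrete, here is a minimal sketch. It assumes one particular pooling rule, which is my own choice for illustration and not the only option: sum all sub-bettors' wagers per outcome and normalize the totals into a probability vector. The function name `pooled_probabilities` and the specific bet matrices are hypothetical.

```python
import numpy as np

def pooled_probabilities(bets):
    """Combine a (sub-bettors x outcomes) bet matrix into one
    probability vector by pooling wagers and normalizing.
    This pooling rule is an assumption, not the only possibility."""
    totals = bets.sum(axis=0)      # total wagered on each outcome
    return totals / totals.sum()   # normalize into probabilities

# Diagonal case: one sub-bettor per outcome, no overlapping bets.
# Each outcome's share of the pool is directly its probability.
diag_bets = np.diag([2.0, 3.0, 5.0])
print(pooled_probabilities(diag_bets))      # proportional to 2 : 3 : 5

# Overlapping case: two sub-bettors ("models") over three outcomes.
# Each row behaves like a model's distribution scaled by its bankroll,
# so pooling acts as a bankroll-weighted mixture -- the MoE-like picture.
overlap_bets = np.array([
    [1.0, 1.0, 0.0],   # model A: mass on outcomes 0 and 1
    [0.0, 2.0, 2.0],   # model B: mass on outcomes 1 and 2
])
print(pooled_probabilities(overlap_bets))   # proportional to 1 : 3 : 2
```

Note that the payout-multiplier vector is left out here; under this pooling rule it would be set so each outcome's multiplier is the reciprocal of its pooled probability, making the book fair.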