I think that article [2] treats two similar but distinct situations as if they were the same.
If we are in a situation where the wagers are fractions of the person's wealth, then all those conclusions follow and I see no caveats.
However, if the wagers are of fixed absolute size, and much smaller than initial wealth, then:
a) eventually the same thing will happen,
b) but on an impractically long timescale.
In that case, the obvious reason the same outcome occurs is that a symmetric random walk on the real line visits every point with probability 1, so eventually someone goes bankrupt. Repeat often enough, and only one person with money is left.
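This is just gambler's ruin. A minimal Python sketch (not the Octave script linked below; function and parameter names are mine) of the fixed-stake, two-player case — the wealth difference performs a symmetric random walk, which hits the bankruptcy boundary with probability 1:

```python
import random

def rounds_until_bankrupt(wealth_a=100, wealth_b=100, stake=1, rng=None):
    """Two players wager a fixed absolute stake on fair coin flips
    until one is bankrupt; returns the number of rounds played."""
    rng = rng or random.Random(0)
    rounds = 0
    while wealth_a > 0 and wealth_b > 0:
        if rng.random() < 0.5:
            wealth_a += stake
            wealth_b -= stake
        else:
            wealth_a -= stake
            wealth_b += stake
        rounds += 1
    return rounds

# Classic gambler's-ruin result: starting from equal wealth w with stake s,
# the expected number of rounds is (w/s)**2, so halving the stake quadruples
# the expected time -- hence the impractically long timescale when s << w.
```

The (w/s)² scaling is why (b) above bites: absolute stakes much smaller than wealth still end in ruin, just absurdly slowly.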
However, the same reasoning shows that (in a society where children take the father's surname) eventually only one surname will remain: there's a nonzero probability of any given surname disappearing in the next timestep, and once it has disappeared there's no way back.
I'm not sure what happens to the original setup from the article in the limit of small bets (i.e. the limit where the fraction of wealth wagered tends to 0) if we also scale the number of bets per unit of time inversely. I suspect their conclusion holds but the convergence gets slower; my intuition could be wrong here.
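One could probe that limit numerically. A minimal sketch of the fractional yardsale model (my own parameter names; `frac` is the fraction of the poorer party's wealth wagered), measuring inequality via the Gini coefficient so runs with rescaled `frac` and round counts can be compared:

```python
import random

def yardsale_gini(n_agents=100, frac=0.1, n_rounds=20000, seed=0):
    """Fractional yardsale model: each round, two random agents wager
    `frac` of the poorer one's wealth on a fair coin flip. Returns the
    Gini coefficient of the final wealth distribution."""
    rng = random.Random(seed)
    w = [1.0] * n_agents
    for _ in range(n_rounds):
        i, j = rng.sample(range(n_agents), 2)
        stake = frac * min(w[i], w[j])
        if rng.random() < 0.5:
            w[i] += stake
            w[j] -= stake
        else:
            w[i] -= stake
            w[j] += stake
    # Gini coefficient from the standard sorted-values formula.
    w.sort()
    n, total = len(w), sum(w)
    return sum((2 * k - n + 1) * x for k, x in enumerate(w)) / (n * total)
```

To test the small-bet limit, compare e.g. `frac=0.1, n_rounds=N` against `frac=0.05, n_rounds=4*N`: for small stakes the per-round variance scales like `frac**2`, so `frac**2 * n_rounds` is the natural diffusion-time variable to hold fixed when scaling bets-per-unit-time inversely.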
@rysiek @smallcircles
This is an *incredibly* important model. It needs a Wikipedia article.
Octave script GPL-3+: now on #Codeberg (e.g. @robryk - you have some ideas for variations) [1].
The match with real economies' Lorenz curves for a three-parameter (χ, ζ, κ) model is impressive.
This is a very strong argument in favour of #UniversalBasicIncome.
#AffineWealthModel
#TrickleUpOligarchy
#AnirbanChakraborti
[1] https://codeberg.org/boud/yardsale
[2] https://www.scientificamerican.com/article/is-inequality-inevitable