This article from @dangoodin about "juice jacking" is the kind of article that makes me happy to be an Ars Technica subscriber.

Actual research and nuanced information, while still making it clear that general fearmongering is unwarranted.

I also really appreciate that the article includes the following, which is really key in the discussion:

"The problem with the warnings coming out of the FCC and FBI is that they divert attention away from bigger security threats, such as weak passwords and the failure to install security updates. They create unneeded anxiety and inconvenience that run the risk of people simply giving up trying to be secure."

Most of the people on the Left and Right pushing for changes to Section 230 have absolutely NO clue as to the enormous scale involved in UGC (user generated content) and why their opposing "solutions" are doomed to decimate it and destroy users' choices. Part of the fault lies with the platforms themselves, which have failed to explain the situation to the public in anything even approaching an adequate manner.

Tamar Ziegler and I have just uploaded a short #NumberTheory paper to the #arXiv titled "Infinite partial sumsets in the primes". The main result is that there exist two increasing sequences \(a_1 < a_2 < \dots\) and \(b_1 < b_2 < \dots\) such that \(a_i + b_j\) is prime for all \(i < j\). The argument uses the Maynard sieve and an intersectivity lemma of Bergelson. I discuss this result further on my blog at

It’s very funny to me that the dominant Twentieth Century conception of AI was a slightly awkward nerd with an inhuman mastery of facts and logic, when what we actually got is smooth-talking bullshit artists who can’t do eighth-grade math.

This logic, applied to geographic concentration (e.g. is the auto industry concentrated in Detroit?) is explored in:
and since that 1997 paper, economists tend to call it the dartboard approach.
I had blocked content marked sensitive and apparently Snowy Plovers are sensitive. Sorry about that.

Not sure why, but your photos are invisible for me.

I have since tried to adopt my colleague's habit of tempering pure optimism with a sincere effort to locate counterexamples. Even if the result is true and counterexamples do not exist, these efforts often "map out the negative space" and leave a lot of clues as to how the proof of the positive result has to proceed, often by directly confronting the most dangerous putative counterexample scenarios and suggesting what the right "weapons" are to defeat them. 5/5


@r000t @QOTO @Gargron @arteteco @Sphinx @khird @freemo

There are use cases where the inverse logic is appealing, e.g. wanting to participate but not show posts to an abusive ex. What seems to be the natural compromise is (1) Alice blocking Bob prevents Bob from sending messages or replies to Alice, and (2) if Alice is private (only approved logged-in users can see her posts), Bob is blocked from seeing her content. r000t makes a good point that if the content is public, there is no point in blocking any logged-in user from seeing it. Note that (2) doesn't require any additional action, because (1) prevents Alice from even seeing Bob's request to see Alice's content.

There are important use cases for private groups beyond the abusive ex. In my area (economics), public discussion is immediately attacked by trolls and poorly educated partisans. Having a private, restricted conversation is often necessary for having a good conversation.

And perhaps needless to say, it should be possible for a Mastodon instance to prevent blocking mods. What is beautiful about Mastodon is that if I don't like an instance's policies, I can move to another instance or start my own. This structure allows absolute free speech while allowing individuals to avoid speech they want to avoid through instance policies. Blocking moderators prevents enforcement of instance policies, so undermines the value of Mastodon.

@QOTO @Gargron @arteteco @Sphinx @khird @freemo
Kind of cancels the whole "moderation" thing if mods can be blocked.

Please. I’m begging you. Not every tutorial needs to be a video.

Mathematics can raise concerns with a proposed physical theory by pointing out that it leads to discontinuities. According to popular belief, Galileo debunked Aristotle's thesis that heavier objects fall faster than lighter ones by an actual experiment, but in fact he proposed a continuity argument: if Aristotle were correct, connecting two equal falling masses by a string of negligible weight would double the mass of the object and thus cause it to fall much faster, yet a negligible string cannot change how either mass falls, which is absurd. (1/2)
First, kudos to the author for an engaging presentation. Second, there are several elements that appear misleading.

Caveat: I don't have access to JSTOR and can't seem to find the original paper on the American Scientist website.

In the two-person interaction, I understand the wealth dynamics to work like this. Each person wagers a fraction f of their wealth w1, w2, and the actual stake is f*min{w1, w2}. WLOG we can normalize w1 + w2 = 1, so person 1's wealth follows the process:

w1(t+1) = w1(t) + p*f*min{w1(t), 1 - w1(t)}
where p takes the values -1 and 1 with equal probability. The increment on the right-hand side has zero expected value, so person 1's expected wealth does not change over time. So the statement that "the rich wind up with all the wealth" in this game is inaccurate. What does happen is that *someone* winds up with almost all the wealth with high probability; the probabilities are determined by the initial wealth share w1/(w1+w2), because we know expected wealth is constant over time.
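A quick simulation makes both claims concrete: the mean of the process stays at the initial share, while nearly every individual path ends up near 0 or 1. The parameter values (initial share 0.3, wager fraction 0.5) are arbitrary choices for illustration:

```python
import random

def simulate_wagers(w1=0.3, f=0.5, steps=1000, trials=1000, seed=0):
    """Simulate w1 <- w1 + p*f*min(w1, 1-w1) with p = +/-1 equally likely."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        w = w1
        for _ in range(steps):
            stake = f * min(w, 1.0 - w)
            w += stake if rng.random() < 0.5 else -stake
        finals.append(w)
    mean = sum(finals) / trials                 # should stay near the initial w1
    absorbed = sum(1 for w in finals if w < 0.01 or w > 0.99) / trials
    rich = sum(1 for w in finals if w > 0.99) / trials  # P(person 1 ends up rich)
    return mean, absorbed, rich
```

Because the process is a bounded martingale, the share of paths where person 1 "wins" must roughly equal the initial wealth share, which is what the simulation shows.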

Here I am commenting on the statistical process; as a model of economic activity, what it shows is that even gambling with a zero expected loss is a loser for a risk averse person, as should be obvious. A fair bet still represents a garbling (in the Blackwell sense) of wealth and hence reduces expected utility for concave utility functions.

This is why there is a risk premium: risk averse people should only gamble when there is a positive expected return.
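The Jensen's-inequality point is a one-liner to verify; the wealth and stake figures here are arbitrary:

```python
import math

w = 1.0        # current wealth
stake = 0.5    # a fair bet: gain or lose `stake` with probability 1/2
u = math.sqrt  # any concave (risk-averse) utility function

# Expected utility with and without the zero-expected-value bet
eu_with_bet = 0.5 * u(w + stake) + 0.5 * u(w - stake)
eu_without = u(w)
# Jensen's inequality: the fair bet strictly lowers expected utility
assert eu_with_bet < eu_without
```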

You have not given the correct interpretation. Gift-giving is a money pump. The giver obtained *at least* the purchase price in value (otherwise they would not have bought the gift), while the recipient obtained 2/3rds of the purchase price, producing at least 1 2/3 times the price in value. That is a pretty good social return!
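A minimal back-of-the-envelope version of that accounting, with a hypothetical price:

```python
price = 100.0                    # hypothetical purchase price
giver_value = price              # revealed preference: giver values giving at >= the price paid
recipient_value = 2 / 3 * price  # recipient's valuation, per the 2/3 estimate
total_value = giver_value + recipient_value
# Total value created is at least 1 2/3 of the price spent
assert total_value >= (5 / 3) * price - 1e-9
```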


Well, not the way economists use the term efficiency, but there are multiple ways one might use it, and economists don't own the word.

We also apparently use the term fragile differently; I use it to mean "likely to break," so that planning for most outcomes (optimizing for probable ranges) reduces fragility. Making supply chains very long made them fragile, because dependencies were hidden. This is how we came to have a yeast shortage -- we had a shortage of packaging. Many of the shortages were of the "for the want of a nail, the kingdom was lost" sort. These sorts of fragility are readily planned for and inexpensively thwarted when interest rates are low.

I think the challenges in prediction -- which, as you suggest, are probably growing -- are a red herring regarding the efficiency of resilience. Resilience means engineering for eventualities, not over-engineering for them. Hotels face random demand; appropriately sized, they handle more than the average demand but less than the maximum demand. That is resilient. Investing in good forecasting aids resilience.

There is nice work by Miles Kimball

on when the response to increased uncertainty is to invest more or less in resilience.

I encourage you to mull on the extra water in my hikes -- I cast this as a known risk to make it simple, but all risks look like this: sometimes water will be more valuable than others and I have to make preparations in advance of the realization of uncertainty. Extra spending in this regard, like me carrying more water than I typically need, is exactly what insurance looks like.

BTW, markets also usually help with increased forecast uncertainty, mainly because the person with better predictions can make a lot of money and their trades spread that information. I am not claiming that market forces incent efficient investment in prediction, however.


To bring more than I will *ever* need is not efficient, yes, nor is it insurance; efficient insurance is generally between the maximum need and the minimum need. But that is precisely what builds resilience: preparation for eventualities. And that was my point: resilience is efficient and is indeed a property of competitive markets. Now I am not saying our actual markets produced resilience, for mostly they did not, but the cause was not market forces but something else, e.g. short-run performance focus or a failure to calculate the probabilities.


Insurance is not inefficient. Let me give an example. I hike and usually need one liter of water. But sometimes, say 10% of my hikes, I'm really thirsty and want a second liter very much. So I always carry two. What I've done is insure against my thirst. Provided the extra value -- which is zero 90% of the time -- is greater than the cost of carrying the water, it is efficient to insure.
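The carry-the-extra-liter rule reduces to a one-line expected-value comparison. The 10% figure is from the example above; the dollar values are made-up placeholders:

```python
p_thirsty = 0.10     # share of hikes where the second liter is wanted (from the example)
extra_value = 15.0   # hypothetical value of the second liter on a thirsty hike
carry_cost = 1.0     # hypothetical cost of hauling an unused liter on every hike

expected_benefit = p_thirsty * extra_value
# Self-insure (carry the extra liter) iff expected benefit exceeds the carrying cost
carry_second_liter = expected_benefit > carry_cost
```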

You have to do the net present value calculation.

Obviously, if I could perfectly predict my thirst, I would not need to insure. But we don't live in a world of perfect prediction.
