Why has it been so hard for tech firms to manage harmful behaviors from their recommender systems?
Wonderfully clear and thoughtful article by @randomwalker for @knightcolumbia:
Initial reactions below (1/n)
https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms
In the article on recommender systems, Arvind makes an intellectual move that most of us find ourselves making when writing about these issues:
1) acknowledge that human-algorithm behavior is complex
2) go into detail about the one side of the complex system you understand well (in Arvind's case, algorithm design)
3) cite a few big names from other fields to try to build a bridge, knowing it's insufficient (in Arvind's case, quant scholars Kahneman & Goel, the latter building on Granovetter)
One claim Arvind makes is that "Virality is Unpredictable." This is the most important part of the article, and it deserves more unpacking & discussion.
Arvind points out half of an apparent contradiction in the social sciences: while it's hard to predict what will happen in any individual situation, certain population-level patterns are highly predictable.
While marketers & algorithm designers care about making individual-level predictions, people concerned about the social impact of algorithms are often more interested in population-level predictions.
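To make that contrast concrete, here's a minimal sketch (my own illustration, not from the article): if each person shares a post with some fixed probability, any one person's behavior is close to a coin flip, but the aggregate share rate is predictable to within a fraction of a percent. The probability p and population size n below are made-up numbers.

```python
import random

random.seed(0)
p = 0.3          # assumed per-person probability of sharing (hypothetical)
n = 100_000      # population size (hypothetical)

outcomes = [random.random() < p for _ in range(n)]

# Individual level: the best strategy is to guess the majority class,
# capping accuracy at max(p, 1 - p) = 70% here.
# Population level: the aggregate share rate concentrates tightly around p.
print(f"population share rate: {sum(outcomes) / n:.4f}  (true p = {p})")
```

Same underlying process, wildly different predictability depending on the level of analysis.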
And collective behavior can be frustratingly stable. Consider, for example, the problem of inequality. Universities are highly complex systems, but my colleagues and I were able to predict 99% of the variation in US faculty diversity, on average, with a very simple model.
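The tweet doesn't specify the model, so the sketch below only shows the generic calculation behind a "% of variation predicted" claim: the R² of a one-predictor linear fit. The data here is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)              # hypothetical predictor per field
y = 0.8 * x + rng.normal(0, 0.02, 200)  # hypothetical outcome measure

# The "very simple model": a straight line fit to the data.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

# R^2 = fraction of variation the simple model accounts for.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```

A claim like "predicts 99% of the variation" means R² ≈ 0.99 for whatever simple model the authors fit; it says nothing about predicting any single department or individual.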
So they can predict an individual's product-purchasing behavior with laser precision, but they can't predict that an individual is going to become radicalized and turn into a Nazi? Sounds specious, to say the least. In the end, waving a flag with a swastika is no different from purchasing lite beer. Different products being sold, same techniques to drive the purchase.