Seeing one of Ylva's pals, who slapped together a very sketchy poll, offer up a weak defence of her is quite something. Once again, I'm not giving them clicks.
First off, they assert things have gone well for a decade with one particular algorithm. Except it hasn't. The algorithm, "PhotoDNA" (which was kept semi-secret), was leaked in 2021, and it turned out the child porn "hashes" could be reversed with a trained AI model to recover the original photographs (albeit at much lower quality). This was demonstrated with innocent photographs. I wonder why they wanted to keep this algorithm secret.
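PhotoDNA's internals are not public, so here is only a toy "average hash" (a simple, well-known perceptual hash, not PhotoDNA itself) to illustrate the underlying problem: unlike a cryptographic hash, a perceptual hash deliberately preserves the image's structure, which is exactly the property that makes such hashes partially invertible.

```python
# Toy perceptual hash (aHash). NOT PhotoDNA -- just a minimal sketch of
# why perceptual hashes retain recoverable image structure.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (e.g. an 8x8 downscale)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the average or not.
    return [[1 if p > mean else 0 for p in row] for row in pixels]

# A crude 4x4 "image": a bright square in the top-left corner.
image = [
    [200, 200,  10,  10],
    [200, 200,  10,  10],
    [ 10,  10,  10,  10],
    [ 10,  10,  10,  10],
]

for row in average_hash(image):
    print(row)
# The bright region is visible in the hash bits themselves. A real
# perceptual hash is more sophisticated, but the same leakage is what
# lets a trained model reconstruct an approximation of the photo.
```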
Also, who is to say this algorithm hasn't had significant issues? Do the third-party contractors (perhaps in a country like Kenya) moderating content know enough about which algorithm flagged a particular post, and its technical intricacies and civil rights implications, to "blow the whistle"? Do they even think about these kinds of issues? There could be many false positives, and instances of state harassment, and you might never know.
They also make the point that there is only a "small chance" of being hit by it. At scale, though, that means many messages would be disclosed, almost constantly, to people unknown to their senders. That someone might later archive a report as irrelevant doesn't change the fact that someone's privacy has already been violated. And depending on how "clever" (or resource-stretched) an official at the E.U. is on any particular day, they might just pass all of the reports through to their buddies in the same building (with staff transferring between the two departments; did you read that part of the proposal?).
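A quick back-of-the-envelope calculation shows why "small chance" doesn't mean much at scale. The figures below are my own illustrative assumptions, not numbers from the proposal or from any vendor:

```python
# Illustrative only: both numbers below are assumptions, chosen to show
# how a per-message "small chance" compounds at platform scale.

messages_per_day = 10_000_000_000   # assumed EU-wide daily message volume
false_positive_rate = 0.001         # assumed 0.1% chance per message

false_flags_per_day = messages_per_day * false_positive_rate
print(f"{false_flags_per_day:,.0f} innocent messages disclosed per day")
# Each flagged message gets shown to some human reviewer, so the privacy
# violation has already happened even if the report is later discarded.
```

Even with a false-positive rate ten times lower, you would still be looking at disclosures of innocent messages every single day.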
They play down the implications of the "grooming algorithm" (this is the only paragraph of theirs which covers it; the rest refer to the hash-matching one), ignoring that its accuracy is notoriously low: one news article put it at around 50%, though some vendors claim higher. Being falsely accused of being a child predator looking to abuse kids is not exactly harmless.
While the current model, a U.S.-based non-profit and "voluntary" scanning with this algorithm, is not great, bringing the government into the picture creates a unique risk of the whole thing being overtaken by politics. This has always been a point of concern for me. It's not ideal, but it is "the devil you know".
Another problem is that using this algorithm seems to involve uploading files to Microsoft's servers so they can check whether there is any evil in there. Big Tech can run the algorithm themselves; for a smaller site, this means being surveilled by giants or governments, and it adds a burden to their operations. Microsoft also markets immediately alerting the police as a selling point of this product.
Also, wasn't this one of those groups which got Sweden to pass a brazenly unconstitutional law? It was used to arrest someone who posted art of imaginary "children playing in water by the beach"; the Supreme Court later ruled it unconstitutional. That was a brazen disregard for human rights. When people do things like this, we need to hold them personally to account (and I do mean individual accountability, not letting them hide behind an "org"; has anyone lost their job over something this disgusting?).
They also appear to be anti-porn, which is a problematic position imo, an affront to due process and freedom of expression (https://qoto.org/@olives/111083302650803082).
Wasn't that "non-profit" created by Reagan and pals? I forget the story behind it, but I think it was something like that.