There appears to be another questionable survey, this time from a "think of the children" group (they're not known for carrying out ethical surveys). I'm sure you will see it in the coming days. I don't want to give these kinds of vile and disingenuous people clicks, though.
Some flaws (these are very likely not the only ones):
1) The first informational segment is not neutral or particularly nuanced. It instead frames the situation in a propagandistic manner favorable to the ideology of this group. It also mentions no drawbacks whatsoever.
To understand why presenting something in such a one-sided manner is so problematic, particularly without providing the long history of misleading claims and statements (or, if we're to be less charitable, what we'd refer to as lies), we only need to look at the example of "dihydrogen monoxide"(1).
2) One question conflates minors viewing online porn with abuse. This likely inflates the number of responses saying minors are "more at risk" now. I've already been over why online porn is not a big deal (2).
3) The second informational segment deceives the respondent about what content might be flagged by the algorithm. There is no mention of the heated discussion around false positives either. They also claim that only "a few providers do scanning", yet there appears to be no actual evidence for this claim (and even if it were true, it's arguable providers would still have a right not to scan). They further leave out that the few providers which do scan appear to account for a disproportionate majority of known child abuse photos.
4) A question following this fails to note that most providers are probably already "preventing exploitation", albeit within the bounds of human rights considerations. No evidence is provided that they aren't.
The only "evidence" I've seen in around three years, unrelated to this document, is a Canadian group bringing up a few anecdotes where specific pieces of content didn't appear to be moderated to their liking. This Canadian group is very activist and appears to have zero or little regard for the human rights implications of their actions, they've even been accused of censoring historic stamps which they erroneously identified as "child abuse".
At other times, this Canadian group speaks vaguely in terms like "broad" and "narrow" without actually saying what sort of content they're flagging. This creates room for creative interpretations of "abuse" which don't actually involve abuse, and they refuse to define these terms. One of their advisors (who appears to be very responsive to conservative concerns, even fringe ones, and has often been preoccupied with things like "ritual abuse" in schools) explicitly refers to things which are clearly not abuse as "abuse". They network with organizations which do the same. I'm also aware that the executive director of this organization met with E.U. representatives very recently.
5) There's some nonsense about it "being possible to detect things within E2EE environments". In the real world, companies faced with such a mandate would simply not implement E2EE at all, because that would be the most practical thing to do. It's a red herring argument in more ways than one.
6) Loaded questions intended to make you feel like a bad person for not agreeing with the premise. This inflates responses in line with the group's ideology.
This is not an exhaustive list of all problems with the #chatcontrol spying / censorship proposal (or this survey). I don't want to repeat all of that discourse in this post.
The "loaded answers" are particularly interesting here (it's a multiple choice style survey).
Let's say someone is asking a question about a policy. Normally, you would expect there to be "Yes" and "No" as options.
Here, though, the options are instead something like "Yes, I think children shouldn't suffer" and "No, I think children should suffer" (it's not quite that extreme, but it's pretty close).
So unethical. #chatcontrol