
@greg And you don't see any problem with YouTube being liable for third-party defamation because they included a "next episode" button?

Spam filters may delete the most egregious messages, but for the most part they don't remove anything; they simply categorize mail into folders. The only difference between the folders is which one opens by default when you launch the app... because that's the mail you're being recommended to actually pay attention to. Spam filters are absolutely recommendation algorithms.
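To make that concrete, here's a rough sketch of what a filter actually does (the scoring function and threshold are invented for illustration; real filters use trained models):

```typescript
// Sketch: a spam filter is just a recommender that routes mail to the
// folder you're encouraged to read. (Hypothetical scoring and threshold.)

interface Email {
  sender: string;
  subject: string;
  body: string;
}

// Toy spam score in [0, 1]; a real filter would use a trained classifier.
function spamScore(mail: Email): number {
  const spammyWords = ["winner", "free", "urgent", "crypto"];
  const text = (mail.subject + " " + mail.body).toLowerCase();
  const hits = spammyWords.filter((w) => text.includes(w)).length;
  return Math.min(1, hits / spammyWords.length);
}

// Nothing is deleted; the filter only decides which folder is
// "recommended" by being the one that opens by default.
function route(mail: Email): "inbox" | "spam" {
  return spamScore(mail) > 0.5 ? "spam" : "inbox";
}
```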

How about search bars? Can platforms add functionality to make them more useful, say, by ranking the most-viewed and most-liked results at the top? That's particularly useful for preventing people from following fake accounts of high-profile figures on social media. But would it make them liable for the third-party content of the accounts they're "recommending" just because a user searched for them? (There's a sketch of that kind of ranking after these examples.)

If a small forum has a feed at the top recommending the most active threads, do they become responsible for the third-party content of those threads?

If an online game recommends friends to play with, do they become responsible for what those users say?

Is Yelp, by the very nature of its business, going to be liable for basically everything third parties have ever written on its website?
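On the search-bar point, this is roughly all the "recommending" amounts to (a sketch with invented fields and weights, not any platform's actual ranking):

```typescript
// Sketch: engagement-weighted account search (hypothetical fields/weights).
interface Account {
  handle: string;
  followers: number;
  likes: number;
}

function searchAccounts(query: string, accounts: Account[]): Account[] {
  const q = query.toLowerCase();
  return accounts
    .filter((a) => a.handle.toLowerCase().includes(q))
    // Ranking the popular match first is exactly what surfaces the real
    // high-profile account above its impersonators.
    .sort((a, b) => (b.followers + b.likes) - (a.followers + a.likes));
}
```

If that sort() makes the platform a "publisher" of whatever those accounts post, every search feature becomes radioactive.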

@greg I don't know about that. The proliferation of hashtags into everyday conversation is a pretty good indicator that the public does have some idea of how this works. Obviously, they don't know the intricate details, but I think most social media users understand that the "Recommended" or "Trending" sections feature content that's a mix of what the site owners want you to see and what other users are most talking about (and ads).

But let me get back to your original point. You started this thread by writing "content algorithms [should be] considered content themselves".

So, let's try a hypothetical. Say a white supremacist uploads some awful, and legally defamatory, content to YouTube in two parts. Defamation is difficult to prove and may or may not violate YouTube's rules, but for the sake of the hypothetical, let's say it does break them; it's just still pending moderation (they haven't gotten around to reviewing it yet).

A viewer then watches the entire first part and, naturally, YouTube's algorithm recommends the subsequent episode, as anyone would reasonably expect it to.

Under your desired system, would this not make YouTube just as liable to be sued for the content as the creator of the content?

Another example: Wikipedia has a list of recommended articles on its homepage. If some user goes to those articles and edits one to contain libel, should Wikipedia the company then be liable for that because they "recommended" it?

A third example: Many email services now use spam filters that learn from each user's preferences, so the filtering is personalized. They automatically sort incoming mail, recommending "good" mail to the inbox while hiding the bad in the spam box. Would that make the mail provider liable for the content of every message delivered?
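Conceptually, that personalization is no more sinister than this sketch (the update rule is invented for illustration):

```typescript
// Sketch of per-user personalization (hypothetical update rule): each time
// a user marks mail as spam or not-spam, adjust their personal sender weights.
class PersonalFilter {
  private senderWeight = new Map<string, number>();

  markSpam(sender: string): void {
    this.senderWeight.set(sender, (this.senderWeight.get(sender) ?? 0) - 1);
  }

  markNotSpam(sender: string): void {
    this.senderWeight.set(sender, (this.senderWeight.get(sender) ?? 0) + 1);
  }

  // The "recommendation": inbox for senders this particular user trusts.
  folderFor(sender: string): "inbox" | "spam" {
    return (this.senderWeight.get(sender) ?? 0) >= 0 ? "inbox" : "spam";
  }
}
```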

@greg See, I disagree with that. They can be "regulated" with simple public pressure. Look at how Facebook has massively beefed up its non-English-speaking moderators since the Myanmar thing.

This is all exceedingly new technology. It's not reasonable to punish someone for failing to predict every possible direction the technology can take. Moreover, the law moves too slowly to keep up, and by setting things in stone it could stifle the very innovation that would produce further harm-limiting improvements.

If social media companies cause harm (maliciously or ignorantly), they're already held accountable by public outcry. They can then respond to that criticism and improve, or lose customers and profits.

Which do you think they'll choose?

@brembs You better knock on some wood after "impervious to private take-over".

Email is impervious to private take-over too, but try telling that to Gmail.

@greg No, I absolutely acknowledge your point as valid.

I just primarily disagree with where to place the blame.

Those same algorithms that can cause harm also help marginalized communities find and support each other. They're not inherently good *or* bad, because all they do is repeat and distribute the content of others.

The content itself is what matters.

And it drives me up a wall that so many people are more angry at social media for *carrying* the messages than they are at the authors for *writing* those messages.

To use your own words: "I feel like you're engaging in motivated reasoning to [attack] these companies."

@murshedz It's been a while since I've heard of a plan so likely to backfire...

I remain a supporter of CDA 230 and I sincerely hope it is not gutted after tomorrow’s argument.

I understand why the focus is on whether sites like Facebook get “out of jail,” but for my clients who can’t afford a full-blown content liability fight over a specious content claim — student groups for messages on their listservs, community groups on social media platforms, users sharing content from other users — it is often the only law that we can use to put forth an effective defense.

With all of the horrible things happening in the world, it’s nice to find the wins, especially when it comes to the environment. A group called Coral Guardian is restoring coral reefs, and it’s quite magnificent. More wins like this, please.

coralguardian.org/en/

#Climate #Environment #CoralReef #Restoration

@garyrogers @gregpak Uh, no. We're engineers. We break things for a reason (to learn how to build them so they don't break).

It's not cruelty. It's professionalism.

@gregpak It's also no more "artificial" than it is intelligent. You should see the call centers full of humans helping it try not to screw up.

This is a great article on the subject:
theguardian.com/technology/202

@chancerydaily From what I understand, you generally don't want to be too flowery. Like, if it's text, just transcribe it directly, no preamble. If it's an image, explain the key points of what one is seeing. The idea is to give someone without sight enough context to understand what's being posted without wasting their time on endless descriptors (text-to-speech isn't always the fastest talker).

Mentioning color would be fine, if it's key to what's being posted, but leave it out if it's not relevant.

Alt-text has been around for a long time, so there are many guides if you want more info. This one is a very reputable source for building the web: w3.org/WAI/tutorials/images/
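For what it's worth, here's roughly what that advice looks like in practice (a sketch using the DOM API; the filenames and descriptions are made up):

```typescript
// Sketch: alt text that conveys key points without flowery description.
const chart = document.createElement("img");
chart.src = "q3-results.png"; // hypothetical image
// Describe what matters, not every visual detail:
chart.alt = "Bar chart: Q3 revenue up 12% over Q2, driven by subscriptions.";

const divider = document.createElement("img");
divider.src = "divider.png"; // hypothetical decorative image
divider.alt = ""; // purely decorative images get empty alt text
document.body.append(chart, divider);
```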

@doncruse If that were the case, they wouldn't charge for it (at least not at first) because mass adoption would be far more important than any initial revenues.

@greg as for the Myanmar thing, the below paragraph is from the Amnesty International report you cited.

In addition, they don't seem to have any concrete evidence about, or remedies for, the algorithmic recommendations. It basically boils down to "Facebook should do more human rights due diligence for its algorithms", which is like a teacher responding "think harder" when a kid asks why they got a bad grade.

They couldn't make specific recommendations because their entire paper is built on assumptions that may or may not be true. They *feel* Facebook is reckless with its algorithms, so it must be. :eyeroll:

The bottom line is that these companies are giving people what they want. That so many people seem to want hatred isn't an algorithmic problem; it's a human nature problem.

@greg And how much success have you had applying the contents of that patent in the real world? Because it seems to me that it would only work in very narrowly scoped environments or for very specific types of users, and wouldn't work as well for the general public in billion-user-scale systems.

Of course we could train the learning models to avoid certain types of harmful-but-legal content. But the problem is: other people still want to see that content and actively seek it out.

Like your diet example from before: Sure, the algorithm could be trained to disincentivize diet advice, but then you're pissing off a whole group of people looking to lose weight to try to protect anorexics.
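To put rough numbers on that tradeoff (a hypothetical penalty, not any platform's actual ranking): a global downweight hits everyone interested in the topic, not just the people at risk.

```typescript
// Sketch: a global penalty on one topic (invented numbers). The penalty
// can't tell an anorexia sufferer from someone who wants diet advice.
interface Video {
  title: string;
  topic: string;
  engagement: number;
}

const DIET_PENALTY = 0.5; // hypothetical downweighting factor

function rank(videos: Video[]): Video[] {
  return videos
    .map((v) => ({
      ...v,
      score: v.topic === "diet" ? v.engagement * DIET_PENALTY : v.engagement,
    }))
    .sort((a, b) => b.score - a.score);
}
```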

You can't require the entire internet to wear kid gloves because some people could be harmed by seeing some content.

@greg @joeo10 How are they supposed to know whether presenting a user with a certain bit of content is a "good idea"? Do you want them to collect everyone's medical records to make value judgements about which content is good for them?

Because that sounds dystopian to me.

It really sounds like you simply don't understand the scale of the problem or how these things work at all. The recommendations the algorithm provides do not inherently match the whims or goals of the company that published it. ChatGPT should be about as obvious an example of that as any.

Also, it wasn't Facebook's "pursuit of profit" that led to the Myanmar thing. It was a lack of non-English-speaking moderators. Facebook ran a study and found they made *more* money when they turned off their algorithm (because people scrolled past more content they didn't care about, which meant they were shown more ads in between).

So even the "facts" you're starting from are shaky. Maybe consider that the conclusions built on them aren't sound either.

Question for fellow users:

When you want to boost a post, do you also tend to favorite it or just boost it alone?

Ah, figures: Elon firing most of the workers got rid of everyone fighting the actual SMS fraud, so he got hit with a big bill, and now he thinks paywalling SMS 2FA will fix it (it won't).

threadreaderapp.com/thread/162


@greg @joeo10 Gee, could that possibly be because people suffering from anorexia frequently search for diet-related content? How is it YouTube's fault that it's just giving them more of what they asked for?

It's not big tech's responsibility to manage anyone's addiction for them.

And the reason people object to your rhetoric is that you're ignoring the people actually creating the hurtful content to focus instead on killing the messenger. Your views are misguided and actively harmful to marginalized communities, as was explained in the amicus curiae brief I linked above.
