What would your ideal Mastodon server be like? Interested in others' opinions.

@kristinmbranson

Lazy tech journalists created too much confusion about the importance of selecting an instance, and about the Local/Federated timelines, which don't matter at all for daily use, only for exploration.

My opinion on criteria:

1. Impressum: Entity
2. Operators
3. Moderators

mastodon.social/@teixi/1093645

@teixi I've actually been feeling a bit guilty about telling people to just choose a server, that it doesn't matter as they can change later. I've heard several stories now of people from historically marginalized backgrounds being treated poorly, e.g. talking about racism resulting in them getting censored/banned or tone-policed, people getting harassed for their race. I've had one minorly condescending response myself, but I could be misinterpreting.

@teixi Reading server rules, I feel like I wouldn't quite fit in any of them, unless I'm on a free-speech server which also has its moderation issues. If we start building new servers or changing rules, which surely we must, I'm curious how we should design them.

@kristinmbranson
Don't feel bad at all; everyone tends to start where their social circles are, and that helps get people on board.

The penalty of changing servers is leaving your posts on the first one while moving the rest of your social graph.
Not that big a penalty, but still: don't change on a whim, wait until you understand the ecosystem: economic, operational, legal, etc.

On past issues, like the 2019 drama with G_B, and how it was resolved:

papers.ssrn.com/sol3/papers.cf

On futuribles

mastodon.social/@blaine/109378

@teixi Thanks for the links! Will take a look.
The cost I worry about is being subjected to harassment/censorship. I know for me, when this has happened, it is just such a terrible feeling, can't stop thinking about it for like days, can't sleep, just this hurt anger/fear, depending on what happened. I don't want anyone to feel like that.

@kristinmbranson @teixi

Ugh, I'm really sorry you've had to go through this, Kristin. It's terrible, and the "solutions" are often a different kind of terrible when implemented naively online.

I don't think we (= humanity) know yet how to reliably create a community that is welcoming and provides psychological safety for atypical ideas yet doesn't have a substantial risk for harassment. The same freedom that protects one from being censored can be wielded as a weapon to harass. It's kind of like working on advanced technology but wanting it to have no military applications whatsoever; it seems almost paradoxical.

I don't think Mastodon's "no algorithm" model promises a good solution, either. The reason is that online discourse, without the inputs we get in person, trends towards hostility (as compared to face-to-face interactions) even in the absence of an outrage-selection algorithm. So my hunch is it needs to be algorithm-heavy, with an algorithm that favors not the outrageous and false but the measured and evidence-backed. Wikipedia is an existence proof of this working (culturally, mostly, in their case). Setting moderation as an (abusable) floor below which one can't go doesn't seem nearly as good as naturally rewarding best behavior with increased exposure.

What one can, I think, do reasonably well with Mastodon is have a curated community (maybe only implicitly curated by inviting the right people) within which people are motivated to be pro-social, and outside of which...well...there are dragons and walled kingdoms and so on, and to a large extent that just has to be accepted for what it is. Drawbridge up or down, depending on one's proclivities, but the outside world is going to do its own thing.

@ichoran Good points, thanks for sharing so eloquently! I am with you on the downsides of the reverse-chronological algorithm. It ends up showing whatever there is the most of, and if there are network effects that amplify certain types of behaviors, then that is what we will see. I like the idea of being able to train my own algorithm, but that is me trying to solve all my problems with ML, as usual :). I wonder if then people could train different algorithms and share their algorithms.

@kristinmbranson

I just had an idea--what if people could train their own algorithms, and there was an algorithm-federating algorithm? Then you could subscribe not only to any individual's algorithm (including your own) but also to community wisdom for any community you care to define. (Or maybe a limited predefined set if the federating algorithm required its own training.)

I guess, as usual, a major part of algorithm-training would be defining the training data. Sharing either a training set or a classifier or both could be helpful.

With a little more intricacy, a system could be devised that would appeal to ML geeks and nobody else :)
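For the ML geeks, then: a toy sketch of what the federating step could look like, averaging several people's post-scorers weighted by how much you trust each one. Everything here (the scorer functions, the weights, posts as plain strings) is invented for illustration; none of it is a real Mastodon API.

```python
# Hypothetical "algorithm federation": blend several subscribed scorers
# into one community scorer via a trust-weighted average.

def federate_scorers(scorers, weights):
    """Combine per-person scoring functions into one community scorer."""
    total = sum(weights)
    def community_score(post):
        return sum(w * s(post) for s, w in zip(scorers, weights)) / total
    return community_score

# Toy scorers: each maps a post (string) to a relevance score in [0, 1].
likes_ml = lambda post: 1.0 if "machine learning" in post else 0.2
likes_bio = lambda post: 1.0 if "C. elegans" in post else 0.3

score = federate_scorers([likes_ml, likes_bio], weights=[2.0, 1.0])
print(round(score("new C. elegans imaging paper"), 2))  # → 0.47
```

Subscribing to a community would just mean swapping in a different list of scorers and weights; training the federating layer itself would mean learning the weights.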

@ichoran @kristinmbranson Couldn't this be accomplished, to a pretty good first approximation, with a list that follows some people, with perhaps an 'AND' for 'any of' a set of hashtags?
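In code, the rule-based approximation is about this simple (post structure, field names, and accounts are made up for illustration, not the Mastodon API):

```python
# Keep a post only if its author is on the list AND it carries
# any of a chosen set of hashtags ('AND' of 'any of').

followed = {"@albertcardona", "@kristinmbranson"}
wanted_tags = {"#neuroscience", "#connectomics"}

def keep(post):
    return post["author"] in followed and bool(wanted_tags & set(post["tags"]))

posts = [
    {"author": "@albertcardona",   "tags": ["#connectomics"]},
    {"author": "@someoneelse",     "tags": ["#connectomics"]},
    {"author": "@kristinmbranson", "tags": ["#rollerderby"]},
]
print([p["author"] for p in posts if keep(p)])  # → ['@albertcardona']
```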

@albertcardona @ichoran Probably not? I'm thinking of a text classifier (maybe even an image classifier! a link follower and classifier!), not based on hashtags, though poster and hashtag would be useful features for it. I'd want it to optimize over sequences of posts. Maybe it would intuit when I'm having a bad day and show me more cute animal photos? In general, ML is what you use when you can't think up a good set of rules, and I think this is such a case.
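A minimal sketch of the learned version: a pure-Python naive Bayes over words, trained on posts you've boosted (positive) versus dismissed (negative). The training data is invented, and a real version would of course add poster, hashtag, and sequence features.

```python
# Toy personal post classifier: naive Bayes log-odds with add-one smoothing.
from collections import Counter
import math

def train(labeled_posts):
    counts = {True: Counter(), False: Counter()}
    for text, liked in labeled_posts:
        counts[liked].update(text.lower().split())
    return counts

def score(counts, text):
    """Log-odds that the post is 'liked'; higher means show it."""
    s = 0.0
    vocab = set(counts[True]) | set(counts[False])
    for w in text.lower().split():
        p = (counts[True][w] + 1) / (sum(counts[True].values()) + len(vocab) + 1)
        q = (counts[False][w] + 1) / (sum(counts[False].values()) + len(vocab) + 1)
        s += math.log(p / q)
    return s

data = [
    ("whole brain imaging preprint", True),
    ("cute animal photos", True),
    ("outrage thread episode 9185", False),
]
model = train(data)
print(score(model, "new imaging preprint") > score(model, "outrage thread"))  # → True
```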


@kristinmbranson @albertcardona That's along the lines of what I was thinking too, except that I think poster is probably very important since I would not want to see Persuasively Structured Nonsense, Episode 9185, but I might want to see Resources to Concisely Rebut Persuasively Structured Nonsense (Episodes 8510-9510) which would probably overlap substantially textually with the nonsense itself.

The idea is that some X I always want to see, some Y I never want to see, and some Z I only want to see if it's done really well. For instance: everything on whole brain C. elegans imaging, nothing on roller derby, and only really thoughtful state-of-Twitter-meltdown news or opinion.

Compared to solving a massively difficult text comprehension problem, solving an implicit reputation problem seems appealing. Of course, you then have additional risks of cliques and echo chambers as you trend towards only listening to the "right people", but there are various ways that could be addressed either algorithmically or by using synthetic training data (e.g. right person, wrong content). But maybe I'm wrong and the text itself has sufficiently accessible cues. I guess it makes a difference whether we assume access to something of the level of complexity of GPT-3. (I haven't had a chance to play with content learning myself, so my intuitions aren't very well-formed.)
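The X/Y/Z policy from the previous post can be sketched as an allowlist, a blocklist, and a quality threshold for everything in between. The topic labels and quality scores here are stand-ins for whatever classifier (or reputation system) would actually supply them.

```python
# Three-tier filter: always-show topics, never-show topics,
# and a quality bar ("only if it's done really well") for the rest.

ALWAYS = {"c. elegans imaging"}
NEVER = {"roller derby"}
QUALITY_CUTOFF = 0.8

def show(topic, quality):
    if topic in ALWAYS:
        return True
    if topic in NEVER:
        return False
    return quality >= QUALITY_CUTOFF

print(show("c. elegans imaging", 0.1))  # → True: always-show topic
print(show("roller derby", 0.99))       # → False: never-show topic
print(show("twitter meltdown", 0.9))    # → True: clears the quality bar
```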

@ichoran @albertcardona I agree! I use Spotify for music recommendations, and even there this does not seem to be a solved problem. Last week it tried to make me listen to a Christmas song by Kurt Vile. This was very wrong. There are definitely some posters for whom I would like to read almost everything they write, unless it is Christmas music.

Qoto Mastodon

QOTO: Question Others to Teach Ourselves
An inclusive, Academic Freedom instance.
All cultures welcome.
Hate speech and harassment strictly forbidden.