A sincere question about content warnings, or really trigger warnings.
This post is a contradiction in search of guidance.
For a bit of background on myself: I'm a war refugee who escaped a genocide with his family in early childhood. As I've gotten older and greyer, I've helped my parents and myself get therapy and work through decades of anxiety disorders, and I'm always actively looking to refine how I approach and see the world.
One of the things I do to learn about myself and to establish healthy boundaries is to seek understanding from people who are far more educated than I am in the fields of neuroscience, psychology, and psychiatry.
A tool I've found immensely helpful is the podcast 'Hidden Brain', which brings on people from these fields to discuss their research and what they've derived from it.
One of the recent episodes (https://hiddenbrain.org/podcast/a-better-way-to-worry/) discussed how anxiety can be a force for good, and argued that the inability to cope with anxiety, rather than the anxiety itself, is what should be addressed.
A specific topic in the episode was trigger warnings: the research found "substantial evidence that trigger warnings' previously nonsignificant main effect of increasing anxiety responses to distressing content was genuine, albeit small."
As I'm building a new Mastodon client, I can't help but feel that the content warning field, when used as a trigger warning, _can_ be contradictory in its goals: the linked research suggests that people may experience _more_ anxiety when forewarned about a topic they perceive as potentially harmful.
That makes me want to minimize, in terms of UX, posts that have content warnings as much as possible. Is that a regressive solution?
I do understand this is one researcher with a couple of cited papers, not the end-all-be-all of the problem space, but as someone hoping to build an inclusive and accessible client, I want to make sure I don't misstep and perpetuate the harm social media tools can bring to people who struggle with anxiety disorders.
#contentwarnings #flutter #development
Tagging #neuroscience #psychology as I'm hoping for some feedback.
@LouisIngenthron I guess in that sense a solution might be to allow a user to filter out any posts that have content warnings.
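To make that concrete, here's a rough sketch in Dart of what I mean, not actual code from my client. The `Status` model and `spoilerText` field are my own placeholders, loosely modeled on the Mastodon API's `spoiler_text` field (non-empty when the author set a content warning):

```dart
// Minimal placeholder model; spoilerText mirrors the Mastodon API's
// spoiler_text field, which is an empty string when no CW is set.
class Status {
  final String id;
  final String content;
  final String spoilerText;

  Status({required this.id, required this.content, this.spoilerText = ''});

  bool get hasContentWarning => spoilerText.isNotEmpty;
}

/// Returns the timeline with CW'd posts removed when the user opts in.
List<Status> applyCwFilter(List<Status> timeline, {required bool hideCwPosts}) {
  if (!hideCwPosts) return timeline;
  return timeline.where((s) => !s.hasContentWarning).toList();
}

void main() {
  final timeline = [
    Status(id: '1', content: 'hello fediverse'),
    Status(id: '2', content: 'details inside', spoilerText: 'war news'),
  ];
  // With the opt-in enabled, only the un-warned post remains.
  print(applyCwFilter(timeline, hideCwPosts: true).map((s) => s.id)); // (1)
}
```

The point being it would be strictly opt-in: the default experience keeps content warnings intact, and the filter is just one more boundary a user can choose for themselves.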
@Decad3nce Given the other tools offered on this platform to mute words and hashtags, I think at some point, there are limits to how much harm the speaker can prevent (or is responsible for preventing), and it becomes incumbent on the listener to avoid spaces they know will trigger their issues.
So, I think it comes down to moderation. You do what you can (in this case, content warnings), and the listener chooses whether they want to engage with your speech (muting/blocking).
If you want the broadest possible appeal, you try to go with subjects that don't trigger anything. If you don't care about broad appeal, you can post more about what you want, with appropriate content warnings for discoverability.
That seems to be about as happy a medium as the human condition allows.